diff --git "a/https:/huggingface.co/datasets/iamgroot42/mimir/tree/main/test/temporal_arxiv_2022_01.jsonl" "b/https:/huggingface.co/datasets/iamgroot42/mimir/tree/main/test/temporal_arxiv_2022_01.jsonl" new file mode 100644--- /dev/null +++ "b/https:/huggingface.co/datasets/iamgroot42/mimir/tree/main/test/temporal_arxiv_2022_01.jsonl" @@ -0,0 +1,980 @@ +"---\nabstract: 'Noisy Intermediate-Scale Quantum (NISQ) algorithms, which run on noisy quantum computers should be carefully designed to boost the output state fidelity. While several compilation approaches have been proposed to minimize circuit errors, they often omit the detailed circuit structure information that does not affect the circuit depth or the gate count. In the presence of spatial variation in the error rate of the quantum gates, adjusting the circuit structure can play a major role in mitigating errors. In this paper, we exploit the freedom of gate reordering based on the commutation rules to show the impact of gate error propagation paths on the output state fidelity of the quantum circuit, propose advanced predictive techniques to project the success rate of the circuit, and develop a new compilation phase post-quantum circuit mapping to improve its reliability. Our proposed approaches have been validated using a variety of quantum circuits with different success metrics, which are executed on IBM quantum computers. Our results show that rescheduling quantum gates based on their error propagation paths can significantly improve the fidelity of the quantum circuit in the presence of variable gate error rates.'\nauthor:\n- \nbibliography:\n- 'References.bib'\ntitle: 'Pauli Error Propagation-Based Gate" +"---\nabstract: 'Laser beams carrying orbital angular momentum (OAM) provide an additional degree of freedom and have found wide applications ranging from optical communications and optical manipulation to quantum information. The efficient generation and operation of ultra-intense OAM beams is a big challenge that has to be met, currently setting a limit to the potential applications of ultra-intense OAM beams in high-energy-density physics studies. Here, we theoretically and numerically demonstrate for the first time that a pump beam with a new OAM state is generated by coupling of the seed pulse with OAM Langmuir waves arising from both backward and forward stimulated Raman scattering mechanisms. Advantage is taken of the high energy transfer efficiency from pump to amplified seed beams by operating in the non-linear regime, as this significantly reduces the size of amplification system and promotes access to high-intensity OAM laser beams for scientific and industrial applications.'\nauthor:\n- 'Q. S. Feng'\n- 'R. Aboushelbaya'\n- 'M. W. Mayr'\n- 'W. P. Wang'\n- 'R. M. G. M. Trines'\n- 'B. T. Spiers'\n- 'R. W. Paddock'\n- 'I. Ouatu'\n- 'R. Timmis'\n- 'R. H. W. Wang'\n- 'R. Bingham'\n- 'P. A. Norreys'\nbibliography:\n- 'FSRS\\_OAM.bib'\ntitle:" +"---\nabstract: 'Recently, there has been an increasing interest in modelling and computation of physical systems with neural networks. Hamiltonian systems are an elegant and compact formalism in classical mechanics, where the dynamics is fully determined by one scalar function, the Hamiltonian. The solution trajectories are often constrained to evolve on a submanifold of a linear vector space. In this work, we propose new approaches for the accurate approximation of the Hamiltonian function of constrained mechanical systems given sample data information of their solutions. 
We focus on the importance of the preservation of the constraints in the learning strategy by using both explicit Lie group integrators and other classical schemes.'\naddress: 'Dept. of Mathematical Sciences, Norwegian University of Science and Technology'\nauthor:\n- Elena Celledoni\n- Andrea Leone\n- Davide Murari\n- Brynjulf Owren\nbibliography:\n- 'main.bib'\ntitle: Learning Hamiltonians of constrained mechanical systems\n---\n\nHamiltonian neural networks, Lie group integrators, Homogeneous manifolds, Hamiltonian systems, Constrained mechanical systems\n\nIntroduction {#se:intro}\n============\n\nNeural networks have been proven to be effective in learning patterns from data in many different contexts. Recently there has been an increasing interest in applying neural networks to learn physical models from data, for example models of classical" +"---\nabstract: 'The numbers $f_\\lambda$ of standard tableaux of shape $\\lambda\\vdash n$ satisfy 2 fundamental recursions: $f_\\lambda = \\sum f_{\\lambda^-}$ and $(n + 1)f_\\lambda=\\sum f_{\\lambda^+}$, where $\\lambda^-$ and $\\lambda^+$ run over all shapes obtained from $\\lambda$ by adding or removing a square respectively. The first of these recursions is trivial; the second can be proven algebraically from the first. These recursions together imply algebraically the dimension formula $n! =\\sum f_\\lambda^2$ for the irreducible representations of $S_n$. We show that a combinatorial analysis of this classical algebraic argument produces an infinite family of algorithms, among which are the classical Robinson-Schensted [*row*]{} and [*column*]{} insertion algorithms. Each of our algorithms yields a bijective proof of the dimension formula.'\naddress:\n- 'Author\u2019s address: Department of Mathematics, University of California, San Diego, La Jolla, CA 92093, USA'\n- 'Author\u2019s current address: Department of Mathematics, University of Michigan, Ann Arbor, MI 48105-1003'\nauthor:\n- 'Adriano\u00a0M.\u00a0Garsia'\n- 'Timothy\u00a0J.\u00a0McLarnan'\ndate: 'June 2, 1987'\ntitle: |\n ROBINSON-SCHENSTED ALGORITHMS\\\n obtained from\\\n TABLEAU RECURSIONS\n---\n\nnamedef[subjclassname@2020]{}[1980 Mathematics Classification]{}\n\n[^1]\n\nIntroduction {#introduction .unnumbered}\n============\n\nThe Robinson-Schensted row insertion algorithm (briefly RSA) yields a correspondence (briefly RSC) between permutations and pairs of tableaux of the same shape. It" +"---\nabstract: 'Computational notebook software such as Jupyter Notebook is popular for data science tasks. Numerous computational notebooks are available on the Web and reusable; however, searching for computational notebooks manually is a tedious task, and so far, there are no tools to search for computational notebooks effectively and efficiently. In this paper, we propose a similarity search on computational notebooks and develop a new framework for the similarity search. Given contents (i.e., source codes, tabular data, libraries, and outputs formats) in computational notebooks as a query, the similarity search problem aims to find top-$k$ computational notebooks with the most similar contents. We define two similarity measures; set-based and graph-based similarities. Set-based similarity handles each content independently, while graph-based similarity captures the relationships between contents. Our framework can effectively prune the candidates of computational notebooks that should not be in the top-$k$ results. 
Furthermore, we develop optimization techniques such as caching and indexing to accelerate the search. Experiments using Kaggle notebooks show that our method, in particular graph-based similarity, can achieve high accuracy and high efficiency.'\nauthor:\n- 'Misato Horiuchi[^1]\\'\n- Yuya Sasaki\n- Chuan Xiao\n- Makoto Onizuka\nbibliography:\n- 'bibliography.bib'\ntitle: Similarity Search on Computational Notebooks\n---\n\n[**Keywords:**]{}" +"---\nabstract: 'Multi-view learning is frequently used in data science. The pairwise correlation maximization is a classical approach for exploring the consensus of multiple views. Since the pairwise correlation is inherent for two views, the extensions to more views can be diversified and the intrinsic interconnections among views are generally lost. To address this issue, we propose to maximize higher order correlations. This can be formulated as a low rank approximation problem with the higher order correlation tensor of multi-view data. We use the generating polynomial method to solve the low rank approximation problem. Numerical results on real multi-view data demonstrate that this method consistently outperforms prior existing methods.'\nauthor:\n- Jiawang Nie\n- Li Wang\n- Zequn Zheng\ntitle: 'Higher Order Correlation Analysis for Multi-View Learning'\n---\n\nIntroduction\n============\n\nMulti-view learning is a frequently used paradigm for multi-view data, which has broad applications. Generally, multi-view data contains sets of samples, each of which is depicted by a different characteristic. For instance, an image can be described by different feature descriptors such as color, texture and shape. A web page contains text and images, as well as hyperlinks to other web pages. Due to heterogeneous features extracted from each view," +"---\nauthor:\n- |\n Julian Gerstenberg, Ralph Neininger and Denis Spiegel\\\n Institute for Mathematics\\\n Goethe University Frankfurt\\\n Germany\\\n [{gerstenb,neiningr,spiegel}@math.uni-frankfurt.de]{}\nbibliography:\n- 'on\\_solutions\\_to\\_distributional\\_bellmann\\_equations\\_arxiv.bib'\ntitle: On solutions of the distributional Bellman equation\n---\n\n\\\n\\\n**Keywords:** distributional reinforcement learning; distributional Bellman equation; random difference equation; perpetuity; Markov decision process; regular variation; machine learning\\\n**AMS subject classifications:** Primary: 60E05, 60H25, Secondary: 68T05, 90C40\n\nIntroduction {#sec:introduction}\n============\n\nThe objective in reinforcement learning (RL) is to teach an agent that sequentially interacts with an environment to choose \u2019good\u2019 actions. For each action the agent receives an immediate real-valued reward. The rewards are accumulated over time resulting in the so-called *return*, which describes the overall performance of the agent. Randomness may be involved at all levels of this interaction: in choosing actions, the environment reacting to actions and/or in the rewards received by the agent. Hence, the return is to be considered random as well. In more classical approaches to RL problems the randomness is averaged out and only the expected return is considered when evaluating the performance of an agent. In [@bellemare2017distributional], not only the expectation but the complete distribution of the return was considered, introducing what is now known as *distributional* RL, see [@bdr2022]." 
+"---\nabstract: 'We present detail thermodynamic and muon spin relaxation ($\\mu$SR) studies of quantum spin liquid (QSL) candidate H$_3$LiIr$_2$O$_6$. In agreement with the low temperature thermodynamic evidence (*e.g.* bulk magnetization and heat capacity) for the absence of magnetic transition, zero-field (ZF)-$\\mu$SR measurements indicate the absence of static magnetic ordering or spin freezing down to our lowest temperature of 80\u00a0mK. Both ZF- and longitudinal-field (LF)-$\\mu$SR measurements reveal persistent spin fluctuations at low temperatures. These results provide well-established evidence of a QSL state in H$_3$LiIr$_2$O$_6$. Furthermore, the observation of the time-field scaling behavior of $\\mu$SR spectra $A(t)\\sim A(t/H^{0.46})$, and the low temperature power-law specific heat coefficient $C/T \\sim T^{-0.57}$, indicate the finite density of state in the form of $N(E) \\sim E^{-0.5}$, in a good agreement with the disorder-induced states in the Kitaev spin liquid.'\nauthor:\n- 'Yan-Xing Yang'\n- 'Liang-Long Huang'\n- 'Zi-Hao Zhu'\n- 'Chang-Sheng Chen'\n- Qiong Wu\n- 'Zhao-Feng Ding'\n- Cheng Tan\n- 'Pabi K. Biswas'\n- 'Adrian D. Hillier'\n- 'You-Guo Shi'\n- 'Da-Peng Yu'\n- Cai Liu\n- Le Wang\n- Fei Ye\n- 'Jia-Wei Mei'\n- Lei Shu\ntitle: 'Muon Spin Relaxation Study of Spin Dynamics in Quantum Spin Liquid Candidate H$_3$LiIr$_2$O$_6$'\n---" +"---\nabstract: 'We assume dark matter to be a cosmological self-gravitating Bose\u2013Einstein condensate of non-relativistic ultralight scalar particles with competing gravitational and repulsive contact interactions and investigate the observational implications of such model. The system is unstable to the formation of stationary self-bound structures that minimize the energy functional. These *cosmological superfluid droplets*, which are the smallest possible gravitationally bound dark matter structures, exhibit a universal mass profile and a corresponding universal rotation curve. Assuming a hierarchical structure formation scenario where granular dark matter haloes grow around these primordial stationary droplets, the model predicts cored haloes with rotation curves that obey a single universal equation in the inner region ($r\\,\\lesssim\\,1$kpc). A simultaneous fit to a selection of galaxies from the SPARC database chosen with the sole criterion of being strongly dark matter dominated even within the innermost region, indicates that the observational data are consistent with the presence of a Bose\u2013Einstein condensate of ultralight scalar particles of mass $m \\simeq 2.2 \\times 10^{-22}$eVc$^{-2}$ and repulsive self-interactions characterized by a scattering length $a_s \\simeq 7.8 \\times 10^{-77}$m. Such small self-interactions have profound consequences on cosmological scales. They induce a natural minimum scale length for the size of dark matter structures that" +"---\nabstract: 'The type IIB matrix model has been proposed as a non-perturbative definition of superstring theory since 1996. We study a simplified model that describes the late time behavior of the type IIB matrix model non-perturbatively using Monte Carlo methods, and we use the complex Langevin method to overcome the sign problem. We investigate a scenario where the space\u2013time signature changes dynamically from Euclidean at early times to Lorentzian at late times. 
We discuss the possibility of the emergence of the (3+1)D expanding universe.'\naddress: |\n $^{1)}$ [Theory Center, Institute of Particle and Nuclear Studies,\\\n High Energy Accelerator Research Organization (KEK),\\\n 1-1 Oho, Tsukuba, Ibaraki 305-0801, Japan\\\n E-mail: khat@post.kek.jp, jnishi@post.kek.jp]{}\\\n $^{2)}$ [Physics Department, School of Applied Mathematical and Physical Sciences, National Technical University of Athens, Zografou Campus,\\\n GR-15780 Athens, Greece\\\n E-mail: konstant@mail.ntua.gr, sp10018@central.ntua.gr]{}\\\n $^{3)}$ [Setsunan University,\\\n 17-8 Ikeda Nakamachi, Neyagawa, Osaka, 572-8508, Japan\\\n E-mail: azuma@mpg.setsunan.ac.jp]{}\\\n $^{4)}$ [Sezione di Milano-Bicocca, Istituto Nazionale di Fisica Nucleare (INFN),\\\n Piazza della Scienza 3, I-20126 Milano, Italy\\\n E-mail: Mitsuaki.Hirasawa@mib.infn.it]{}\\\n $^{5)}$ [National Institute of Technology, Tokuyama College,\\\n Gakuendai, Shunan, Yamaguchi 745-8585, Japan\\\n E-mail: y-itou@tokuyama.ac.jp]{}\\\n $^{6)}$ [Department of Particle and Nuclear Physics,\\\n School of High Energy Accelerator Science,\\\n Graduate University for Advanced Studies (SOKENDAI),\\\n 1-1 Oho," +"---\nabstract: 'Let $(R,{\\mathfrak{m}},{\\mathbb{k}})$ be an equicharacteristic one-dimensional complete local domain over an algebraically closed field ${\\mathbb{k}}$ of characteristic $0$. R. Berger conjectured that $R$ is regular if and only if the universally finite module of differentials $\\Omega_R$ is a torsion-free $R$ module. We give new cases of this conjecture by extending works of G\u00fcttes ([@Guttes1990]) and Corti\u00f1as, Geller and Weibel ([@ABC1]). This is obtained by constructing a new subring $S$ of ${\\operatorname{Hom}}_R({\\mathfrak{m}},{\\mathfrak{m}})$ and constructing enough torsion in $\\Omega_S$, enabling us to pull back a nontrivial torsion to $\\Omega_R$.'\naddress:\n- 'Department of Mathematics, University of Virginia, Charlottesville, VA 22904-4135, USA'\n- 'Department of Mathematics, Indian Institute of Technology Delhi, India'\nauthor:\n- Craig Huneke\n- Sarasij Maitra\n- Vivek Mukundan\nbibliography:\n- 'references.bib'\ntitle: 'Torsion in differentials and Berger\u2019s Conjecture'\n---\n\nIntroduction\n============\n\nThis paper gives new cases of a conjecture made by R. Berger in 1963 [@Berger63]. Let ${\\mathbb{k}}$ be an algebraically closed field of characteristic $0$, and let $(R,\\mathfrak{m}_R,{\\mathbb{k}})$ be an equicharacteristic reduced one-dimensional complete local ${\\mathbb{k}}$-algebra. Berger conjectured that the universally finite module of differentials, $\\Omega_R$, is torsion-free if and only if $R$ is regular. The case in which $R$ is regular is easy, since in" +"---\nabstract: 'In this paper, we present a simple analytic proof of Siegel\u2019s theorem that concerns the lower bound of $L(1,\\chi)$ for primitive quadratic $\\chi$. 
Our new method compares an elementary lower bound with an analytic upper bound obtained by the inverse Mellin transform of $\Gamma(s)$.'\nauthor:\n- Zihao Liu\nbibliography:\n- 'refs.bib'\ntitle: 'A Simple Proof of Siegel\u2019s Theorem Using Mellin Transform'\n---\n\n[ **Keywords:** Analytic number theory, Dirichlet L-function, Mellin transform, Siegel\u2019s theorem, Siegel-Walfisz theorem ]{}\n\nIntroduction\n============\n\nIn 1935, Siegel[@Siegel1935] introduces the function $$\label{eqnfs}\n f(s)=\zeta(s)L(s,\chi_1)L(s,\chi_2)L(s,\chi_1\chi_2)$$ where $\chi_1$ and $\chi_2$ are primitive quadratic characters modulo $q_1$ and $q_2$ respectively. By exploring its algebraic properties, he shows that a very strong lower bound can be established for $L(1,\chi)$:\n\n\[thsiegel\] For all $\varepsilon>0$ there exists a constant $C(\varepsilon)>0$ such that $$L(1,\chi)>C(\varepsilon)q^{-\varepsilon}$$ holds for any primitive quadratic $\chi$ modulo $q$.\n\nAlthough the statement of is analytic, it leads to strong conclusions in the distribution of prime numbers in arithmetic progressions. Using this result, Walfisz[@walfisz_zur_1936] improved the zero-free region of $L(s,\chi)$ to obtain the prime number theorem for arithmetic progressions in the following form:\n\n\[thsw\] Let $\pi(x;q,a)$ denote the number of primes $\le x$ that are $\equiv a\pmod q$. Then for all" +"---\nabstract: 'Identifying entanglement-based order parameters characterizing topological systems, in particular topological superconductors and topological insulators, has remained a major challenge for the physics of quantum matter in the last two decades. Here we show that the end-to-end, long-distance, bipartite squashed entanglement between the edges of a many-body system, defined in terms of the edge-to-edge quantum conditional mutual information, is the natural nonlocal order parameter for topological superconductors in one dimension as well as in quasi one-dimensional geometries. For the Kitaev chain in the entire topological phase, the edge squashed entanglement is quantized to $\log(2)/2$, half the maximal Bell-state entanglement, and vanishes in the trivial phase. Such topological squashed entanglement exhibits the correct scaling at the quantum phase transition, is stable in the presence of interactions, and is robust against disorder and local perturbations. Edge quantum conditional mutual information and edge squashed entanglement defined with respect to different multipartitions discriminate topological superconductors from symmetry breaking magnets, as shown by comparing the fermionic Kitaev chain and the spin-1/2 Ising model in transverse field. For systems featuring multiple topological phases with different numbers of edge modes, like the quasi 1D Kitaev ladder, topological squashed entanglement counts the number of Majorana excitations and" +"---\nabstract: 'Mobile apps are indispensable for people\u2019s daily lives. Complementing automated testing, manual testing is the last line of defence for app quality. However, the repeated actions and easily missed functionalities make manual testing time-consuming and inefficient. Inspired by the game Candy Crush, with flashy candies as hint moves for players, we propose an approach named [[`NaviDroid`]{}]{} for navigating testers via highlighted next operations for more effective and efficient testing. 
Within [[`NaviDroid`]{}]{}, we construct an enriched state transition graph with the triggering actions as the edges for two involved states. Based on it, we utilize the dynamic programming algorithm to plan the exploration path, and augment the GUI with visualized hints for testers to quickly explore untested activities and avoid duplicate explorations. The automated experiments demonstrate the high coverage and efficient path planning of [[`NaviDroid`]{}]{} and a user study further confirms its usefulness. It can help us develop more robust software that works in more mission-critical settings, not only by performing more thorough testing with the same effort that has been put in before, but also by integrating these techniques into different parts of the development pipeline.'\nauthor:\n- Zhe Liu\n- Chunyang Chen\n- Junjie Wang\n-" +"---\nabstract: 'This paper demonstrates the systematic use of combinatorial coverage for selecting and characterizing test and training sets for machine learning models. The presented work adapts combinatorial interaction testing, which has been successfully leveraged in identifying faults in software testing, to characterize data used in machine learning. The MNIST hand-written digits data is used to demonstrate that combinatorial coverage can be used to select test sets that stress machine learning model performance, to select training sets that lead to robust model performance, and to select data for fine-tuning models to new domains. Thus, the results posit combinatorial coverage as a holistic approach to training and testing for machine learning. In contrast to prior work which has focused on the use of coverage in regard to the internals of neural networks, this paper considers coverage over simple features derived from inputs and outputs. Thus, this paper addresses the case where the supplier of test and training sets for machine learning models does not have intellectual property rights to the models themselves. Finally, the paper addresses prior criticism of combinatorial coverage and provides a rebuttal which advocates the use of coverage metrics in machine learning applications.'\nauthor:\n- |\n $^a$*National Security" +"---\nabstract:\n- 'We introduce a new family of techniques to post-process (\u201cwrap") a black-box classifier in order to reduce its bias. Our technique builds on the recent analysis of improper loss functions whose optimization can correct any *twist* in prediction, unfairness being treated as a twist. In the post-processing, we learn a wrapper function which we define as an $\alpha$-tree, which modifies the prediction. We provide two generic boosting algorithms to learn $\alpha$-trees. We show that our modification has appealing properties in terms of composition of $\alpha$-trees, generalization, interpretability, and KL divergence between modified and original predictions. We exemplify the use of our technique in three fairness notions: conditional value-at-risk, equality of opportunity, and statistical parity; and provide experiments on several readily available datasets.'\n- 'This is the Supplementary Material to Paper \u201c[Fair Wrapping for Black-box Predictions]{}\u201d. 
To differentiate with the numberings in the main file, the numbering of Theorems is letter-based (A, B, ...).'\nauthor:\n- |\n Alexander Soen^^ Ibrahim Alabdulmohsin^^ Sanmi Koyejo^,^\\\n Yishay Mansour^,\\ $\\diamond$^ Nyalleng Moorosi^^ Richard Nock^,^\\\n Ke Sun^,\\ $\\circ$^ Lexing Xie^^\\\n \\\n Australian National University^^ Google Research^^\\\n Stanford University^^ Tel Aviv University^$\\diamond$^\\\n Data61/CSIRO^$\\circ$^\nbibliography:\n- 'main.bib'\ntitle: '[Fair Wrapping for Black-box Predictions]{}'\n---\n\nIntroduction" +"---\nabstract: 'Encoding logical quantum information in harmonic oscillator modes is a promising and hardware-efficient approach to the realization of a quantum computer. In this work, we propose to encode logical qubits in grid states of an ensemble of harmonic oscillator modes. We first discuss general results about these multimode bosonic codes; how to design them, how to practically implement them in different experimental platforms and how lattice symmetries can be leveraged to perform logical non-Clifford operations. We then introduce in detail two two-mode grid codes based on the hypercubic and $D_4$ lattices, respectively, showing how to perform a universal set of logical operations. We demonstrate numerically that multimode grid codes have, compared to their single-mode counterpart, increased robustness against propagation of errors from ancillas used for error correction. Finally, we highlight some interesting links between multidimensional lattices and single-mode grid codes concatenated with qubit codes.'\nauthor:\n- Baptiste Royer\n- Shraddha Singh\n- 'S.M. Girvin'\ntitle: Encoding qubits in multimode grid states\n---\n\nBy redundantly encoding logical quantum information, quantum error correction (QEC) can protect quantum computations against the effects of decoherence, allowing the realization of quantum algorithms requiring large circuit depths. One promising approach to QEC is to" +"---\nabstract: 'We investigate the phase diagram of the complex cubic unitary ensemble of random matrices with the potential $V(M)=-\\frac{1}{3}M^3+tM$ where $t$ is a complex parameter. As proven in our previous paper [@BlDeaY17], the whole phase space of the model, $t\\in{\\mathbb{C}}$, is partitioned into two phase regions, $O_{\\mathsf{one-cut}}$ and $O_{\\mathsf{two-cut}}$, such that in $O_{\\mathsf{one-cut}}$ the equilibrium measure is supported by one Jordan arc (cut) and in $O_{\\mathsf{two-cut}}$ by two cuts. The regions $O_{\\mathsf{one-cut}}$ and $O_{\\mathsf{two-cut}}$ are separated by critical curves, which can be calculated in terms of critical trajectories of an auxiliary quadratic differential. In [@BlDeaY17] the one-cut phase region was investigated in detail. In the present paper we investigate the two-cut region. We prove that in the two-cut region the endpoints of the cuts are analytic functions of the real and imaginary parts of the parameter $t$, but not of the parameter $t$ itself (so that the Cauchy\u2013Riemann equations are violated for the endpoints). We also obtain the semiclassical asymptotics of the orthogonal polynomials associated with the ensemble of random matrices and their recurrence coefficients. The proofs are based on the Riemann\u2013Hilbert approach to semiclassical asymptotics of the orthogonal polynomials and the theory of $S$-curves and quadratic differentials.'\naddress:" +"---\nabstract: 'Batch Normalization (BN) is an essential layer for training neural network models in various computer vision tasks. 
It has been widely used in continual learning scenarios with little discussion, but we find that BN should be carefully applied, particularly for the exemplar memory based class incremental learning (CIL). We first analyze that the empirical mean and variance obtained for normalization in a BN layer become highly biased toward the current task. To tackle its significant problems in training and test phases, we propose Task-Balanced Batch Normalization (TBBN). Given each mini-batch imbalanced between the current and previous tasks, TBBN first reshapes and repeats the batch, calculating near task-balanced mean and variance. Second, we show that when the affine transformation parameters of BN are learned from a reshaped feature map, they become less-biased toward the current task. Based on our extensive CIL experiments with CIFAR-100 and ImageNet-100 datasets, we demonstrate that our TBBN is easily applicable to most of existing exemplar-based CIL algorithms, improving their performance by decreasing the forgetting on the previous tasks.'\nauthor:\n- |\n Sungmin Cha^1,\\ 2^,\u00a0\u00a0Soonwon Hong^1^,\u00a0\u00a0Moontae Lee^2,3^,\u00a0\u00a0and Taesup Moon^1^\\\n ^1^ Department of Electrical and Computer Engineering, Seoul National University\\\n ^2^ Fundamental Research" +"---\nabstract: 'We report the $^{63}$Cu and $^{65}$Cu nuclear spin-lattice relaxation rate measurements of cuprous oxide Cu$_2$O in a zero field Cu nuclear quadrupole resonance at $T$ = 77$-$325 K. From the detailed isotopic measurements of the relaxation rates, we successfully estimated a finite magnetic relaxation rate $^{63}W_M$ and a predominant nuclear quadrupole relaxation rate $^{63}W_Q$. $^{63}W_Q$ changed as $T^{2.1}$, while $^{63}W_M$ changed as $T^{1.6}$ or $T^{\\beta}\\mathrm{exp}(-\\it\\Delta/T)$ with $\\beta$ = 0.6(3) and $\\it\\Delta$ = 190(62) K. The nuclear spin scattering process due to a non-degenerate Fermi gas was discussed as a possible candidate of the magnetic relaxation.'\nauthor:\n- 'Yutaka Itoh[^1]'\ntitle: 'Nuclear spin-lattice relaxation studies of Cu$_{2}$O$_{}$'\n---\n\nIntroduction\n============\n\nElectron spin, charge, and lattice fluctuations play vital roles in solids, because these fluctuations characterize the microscopic properties of the solids. Experimental efforts have been devoted to elucidate the individual fluctuations.\n\nQuadrupole nuclei can be powerful probes to detect magnetic and lattice fluctuations in solids. Naturally abundant quadrupole nuclei $^{63}$Cu and $^{65}$Cu have nuclear spin $I$ = 3/2 with different nuclear gyromagnetic ratios $^{63, 65}\\gamma_n$ ($^{63}\\gamma_n < {^{65}\\gamma_n}$) and quadrupole moments $^{63, 65}Q$ ($^{63}Q > {^{65}Q}$)\u00a0[@NST]. The nuclear spin-lattice relaxation can be due to magnetic relaxation via local" +"---\nabstract: 'Extreme precipitation wreaks havoc throughout the world, causing billions of dollars in damage and uprooting communities, ecosystems, and economies. Accurate extreme precipitation prediction allows more time for preparation and disaster risk management for such extreme events. In this paper, we focus on short-term extreme precipitation forecasting (up to a 12-hour ahead-of-time prediction) from a sequence of sea level pressure and zonal wind anomalies. Although existing machine learning approaches have shown promising results, the associated model and climate uncertainties may reduce their reliability. 
To address this issue, we propose a self-attention augmented convolution mechanism for extreme precipitation forecasting, systematically combining attention scores with traditional convolutions to enrich feature data and reduce the expected errors of the results. The proposed network architecture is further fused with a highway neural network layer to gain the benefits of unimpeded information flow across several layers. Our experimental results show that the framework outperforms classical convolutional models by 12%. The proposed method increases machine learning as a tool for gaining insights into the physical causes of changing extremes, lowering uncertainty in future forecasts.'\nauthor:\n- |\n Weichen Huang\\\n St. Andrews College\\\n Dublin, A94 XN72 Ireland\\\n `w.huang@students.st-andrews.ie`\\\nbibliography:\n- 'climatenets.bib'\ndate: 'Jan 27, 2022'\ntitle:" +"---\nauthor:\n- 'C.V.\u00a0Vletter[^1]'\n- 'H.L.\u00a0Burger'\n- 'H.\u00a0Alers[^2]'\n- 'N.\u00a0Sourlos[^3]'\n- 'Z.\u00a0Al-Ars[^4]'\nbibliography:\n- 'references.bib'\ndate: \ntitle: Towards an Automatic Diagnosis of Peripheral and Central Palsy Using Machine Learning on Facial Features\n---\n\nAbstract {#abstract .unnumbered}\n========\n\nCentral palsy is a form of facial paralysis that requires urgent medical attention and has to be differentiated from other, similar conditions such as peripheral palsy. To aid in fast and accurate diagnosis of this condition, we propose a machine learning approach to automatically classify peripheral and central facial palsy. The Palda dataset is used, which contains 103 peripheral palsy images, 40 central palsy, and 60 healthy people. Experiments are run on five machine learning algorithms. The best performing algorithms were found to be the SVM (total accuracy of 85.1%) and the Gaussian naive Bayes (80.7%). The lowest false negative rate on central palsy was achieved by the naive Bayes approach (80% compared to 70%). This condition could prove to be the most severe, and thus its sensitivity is another good way to compare algorithms. By extrapolation, a dataset size of 334 total pictures is estimated to achieve a central palsy sensitivity of 95%. All code used for" +"---\nabstract: 'Unsupervised person re-identification (ReID) aims to match a query image of a pedestrian to the images in gallery set without supervision labels. The most popular approaches to tackle unsupervised person ReID are usually performing a clustering algorithm to yield pseudo labels at first and then exploit the pseudo labels to train a deep neural network. However, the pseudo labels are noisy and sensitive to the hyper-parameter(s) in clustering algorithm. In this paper, we propose a Hybrid Contrastive Learning (HCL) approach for unsupervised person ReID, which is based on a hybrid between instance-level and cluster-level contrastive loss functions. Moreover, we present a Multi-Granularity Clustering Ensemble based Hybrid Contrastive Learning (MGCE-HCL) approach, which adopts a multi-granularity clustering ensemble strategy to mine priority information among the pseudo positive sample pairs and defines a priority-weighted hybrid contrastive loss for better tolerating the noises in the pseudo positive samples. We conduct extensive experiments on two benchmark datasets Market-1501 and DukeMTMC-reID. 
Experimental results validate the effectiveness of our proposals.'\nauthor:\n- He Sun\n- Mingkun Li\n- 'Chun-Guang Li'\nbibliography:\n- 'main.bib'\ntitle:\n- 'A Cluster Ensemble Method with Local Mining Metric for Unsupervised Person Re-identification'\n- 'Cluster Ensemble with Local Contrastive Learning for" +"---\nabstract: |\n We introduce a family of Markov growth processes on discrete height functions defined on the 2-dimensional square lattice. Each height function corresponds to a configuration of the six vertex model on the infinite square lattice. We focus on the stochastic six vertex model corresponding to a particular two-parameter family of weights within the ferroelectric regime. It is believed (and partially proven, see Aggarwal [@aggarwal2020nonexistence]) that the stochastic six vertex model displays nontrivial pure (i.e., translation invariant and ergodic) Gibbs states of two types, KPZ and liquid. These phases have very different long-range correlation structure. The Markov processes we construct preserve the KPZ pure states in the full plane. We also show that the same processes put on the torus preserve arbitrary Gibbs measures for generic six vertex weights (not necessarily in the ferroelectric regime).\n\n Our dynamics arise naturally from the Yang\u2013Baxter equation for the six vertex model via its bijectivisation, a technique first used in Bufetov\u2013Petrov [@BufetovPetrovYB2017]. The dynamics we construct are irreversible; in particular, the height function has a nonzero average drift. In each KPZ pure state, we explicitly compute the average drift (also known as the current) as a function of the slope. We use" +"---\nabstract: 'We address the question of the large scale or hydrodynamic behavior of a 2-species generalization of TASEP (2\u2013TASEP), consisting of two kinds of particles, moving in opposite directions and swapping their positions. We compute the rarefaction and shock solutions of the hydrodynamic equations of the model, showing that these equations form a Temple class system. We solve completely the Riemann problem and compare the theoretical prediction to Monte Carlo simulations.'\naddress:\n- 'Laboratoire de Physique Th\u00e9orique et Mod\u00e9lisation (CNRS UMR 8089), CY Cergy Paris Universit\u00e9, F-95302 Cergy-Pontoise, France'\n- 'Laboratoire de Physique Th\u00e9orique et Mod\u00e9lisation (CNRS UMR 8089), CY Cergy Paris Universit\u00e9, F-95302 Cergy-Pontoise, France'\nauthor:\n- Luigi\u00a0Cantini\n- Ali\u00a0Zahra\nbibliography:\n- 'biblio-gen.bib'\ntitle: 'Hydrodynamic behavior of the 2\u2013TASEP'\n---\n\nIntroduction\n============\n\nThe asymmetric simple exclusion process (ASEP) is a minimal model of transport in (quasi) one\u2013dimensional systems. It consists of particles which occupy the sites of a one dimensional lattice with only one particle allowed on each lattice site. These particles hop under the effect of an external driving force which breaks detailed balance and creates a stationary current. This model was introduced independently in the late 60s in biology (to model translation in protein" +"---\nabstract: 'We report on a quantum thermodynamic method to purify a qubit on a quantum processing unit (QPU) equipped with (nearly) identical qubits. Our starting point is a three qubit design that emulates the well known two qubit swap engine. Similar to standard fridges, the method would allow cooling down a qubit at the expense of heating two other qubits. 
A minimal modification thereof leads to a more practical three qubit design that allows for enhanced refrigeration tasks, such as increasing the purity of one qubit at the expense of decreasing the purity of the other two. The method is based on the application of properly designed quantum circuits, and can therefore be run on any gate model quantum computer. We implement it on a publicly available superconducting qubit based QPU, and observe a purification capability down to $200 $ mK. We identify gate noise as the main obstacle towards practical application for quantum computing.'\nauthor:\n- Andrea Solfanelli\n- Alessandro Santini\n- Michele Campisi\ntitle: Quantum thermodynamic methods to purify a qubit on a quantum processing unit\n---\n\nIntroduction\n============\n\nQuantum computing technology is currently developing at a very fast pace. The main obstacle towards scaling up" +"---\nabstract: 'In this paper, high-order numerical integrators on homogeneous spaces will be presented as an application of nonholonomic partitioned Runge-Kutta Munthe-Kaas (RKMK) methods on Lie groups. A homogeneous space $M$ is a manifold where a group $G$ acts transitively. Such a space can be understood as a quotient $M \\cong G/H$, where $H$ a closed Lie subgroup, is the isotropy group of each point of $M$. The Lie algebra of $G$ may be decomposed into $\\mathfrak{g} = \\mathfrak{m} \\oplus \\mathfrak{h}$, where $\\mathfrak{h}$ is the subalgebra that generates $H$ and $\\mathfrak{m}$ is a subspace. Thus, variational problems on $M$ can be treated as nonholonomically constrained problems on $G$, by requiring variations to remain on $\\mathfrak{m}$. Nonholonomic partitioned RKMK integrators are derived as a modification of those obtained by a discrete variational principle on Lie groups, and can be interpreted as obeying a discrete Chetaev principle. These integrators tend to preserve several properties of their purely variational counterparts.'\nauthor:\n- |\n Rodrigo T. Sato Mart\u00edn de Almagro[^1]\\\n [Friedrich-Alexander-Universit\u00e4t Erlangen-N\u00fcrnberg (FAU),]{}\\\n [Institute of Applied Dynamics]{}\\\n [Immerwahrstrasse 1, 91058 Erlangen, Germany]{}\nbibliography:\n- 'references.bib'\ntitle: 'High-order integrators for Lagrangian systems on homogeneous spaces via nonholonomic mechanics'\n---\n\n***Keywords\u2014*** Homogeneous spaces, High-order integrators, Runge-Kutta" +"---\nabstract: |\n Neuromorphic vision-based sensors are gaining popularity in recent years with their ability to capture Spatio-temporal events with low power sensing. These sensors record events or spikes over traditional cameras which helps in preserving the privacy of the subject being recorded. These events are captured as per-pixel brightness changes and the output data stream is encoded with time, location, and pixel intensity change information. This paper proposes and benchmarks the performance of fine-tuned conventional vision models on neuromorphic human action recognition and fall detection datasets. The Spatio-temporal event streams from the Dynamic Vision Sensing cameras are encoded into a standard sequence image frames. These video frames are used for benchmarking conventional deep learning-based architectures. In this proposed approach, we fine-tuned the state-of-the-art vision models for this Dynamic Vision Sensing (DVS) application and named these models as DVS-R2+1D, DVS-CSN, DVS-C2D, DVS-SlowFast, DVS-X3D, and DVS-MViT. 
Upon comparing the performance of these models, we see the current state-of-the-art MViT based architecture DVS-MViT outperforms all the other models with an accuracy of 0.958 and an F-1 score of 0.958. The second best is the DVS-C2D with an accuracy of 0.916 and an F-1 score of 0.916. Third and Fourth are DVS-R2+1D and" +"---\nabstract: 'Unlike classical centrality measures, recently developed community-aware centrality measures use a network\u2019s community structure to identify influential nodes in complex networks. This paper investigates their relationship on a set of fifty real-world networks originating from various domains. Results show that classical and community-aware centrality measures generally exhibit low to medium correlation values. These results are consistent across networks. Transitivity and efficiency are the most influential macroscopic network features driving the correlation variation between classical and community-aware centrality measures. Additionally, the mixing parameter, the modularity, and the Max-ODF are the main mesoscopic topological properties exerting the most substantial effect.'\nauthor:\n- 'Stephany Rajeh\\*'\n- Marinette Savonnet\n- Eric Leclercq\n- Hocine Cherifi\nbibliography:\n- 'biblio.bib'\ntitle: 'How Correlated are Community-aware and Classical Centrality Measures in Complex Networks?'\n---\n\nIntroduction {#introduction .unnumbered}\n============\n\nIdentifying influential nodes is crucial for accelerating or mitigating propagation processes in complex networks. To this end, numerous classical centrality measures relying on various topological properties have been proposed. One can distinguish two main categories: local and global measures [@lu2016vital]. Local metrics use information in the node neighborhood while global ones gather information from the whole network. Note that some works combine local and global information [@ibnoulouafi2018m]." +"---\nabstract: 'In this article, we consider a semilinear pseudo parabolic heat equation with the nonlinearity which is the product of logarithmic and polynomial functions. Here we prove the global existence of solution to the problem for arbitrary dimension $n \\geq 1$ and power index $p>1$. Asymptotic behaviour of the solution has been addressed at different energy levels. Moreover, we prove that the global solution indeed decays with an exponential rate. 
Finally, sufficient conditions are provided under which blow-up of solutions takes place.'\nauthor:\n- 'Joydev Halder[^1]\u00a0'\n- 'Bhargav Kumar Kakumani[^2]'\n- 'Suman Kumar Tumuluri [^3]'\nbibliography:\n- 'references.bib'\ntitle:\n- '[A family of potential wells for a semilinear pseudo-parabolic equation]{}'\n- '[Existence of global solutions to a semilinear pseudo-parabolic equation]{}'\n---\n\n[ ***Keywords\u2014*** Global existence; potential well; decay estimates; finite time blow-up ]{}\\\n[ ***Mathematics Subject Classification (2020) \u2014*** 35A01, 35B40, 35K61, 35S16 ]{}\n\nIntroduction\n============\n\nIn this article, we are interested in studying the global existence and the longtime behaviour of the following semilinear pseudo parabolic heat equation $$\label{main}\n \left\{\n \begin{aligned}\n &v_t-\Delta v_t -\Delta v=v|v|^{p-1}\log|v|, && t\in \mathbb{R}^+,\ x \in {{U}}, \n \\\n &v(x,t)=0, && t\in \mathbb{R}^+,\ x \in \partial {{U}}, \n \\\n &v(x,0)=v_0(x), && x \in {{U}}," +"---\nauthor:\n- |\n Yves Meinard, Alexis Tsouki\u00e0s,\\\n CNRS-LAMSADE, PSL, Universit\u00e9 Paris Dauphine\nbibliography:\n- 'legitimacy.bib'\ntitle: 'What is legitimate decision support?'\n---\n\nJanuary 2022\n\nIntroduction\n============\n\nAlthough the term \u201cdecision\u201d might at first sight seem to refer to a punctual event, in fact most decisions are made through a set of cognitive activities that the decision maker performs: a decision *process*. Decision support is the science and associated practice that consist in providing recommendations to clients (possibly decision makers) facing problems, based on available theoretical knowledge and empirical data. Just like decisions or decision making, decision support is a process, rather than a punctual event. What do we do when, as decision analysts, we engage in such processes [@Tsoukias07aor]? From an analyst\u2019s perspective, the answer is that we *manipulate* information to provide recommendations. To formulate this idea, we purportedly use the ambiguous term \u201cmanipulate\u201d, because it conveniently conveys the idea that this task is double-edged. Indeed, depending on the context, \u201cmanipulate\u201d can be either a neutral term, synonym for \u201ccompute\u201d or \u201chandle\u201d, or be attached with negative connotations, and mean something more akin to \u201cdistort\u201d or \u201cfalsify\u201d." +"---\nabstract: 'Universal fault-tolerant quantum computers will require the use of efficient protocols to implement encoded operations necessary in the execution of algorithms. In this work, we show how solvers for satisfiability modulo theories (SMT solvers) can be used to automate the construction of Clifford circuits with certain fault-tolerance properties and we apply our techniques to a fault-tolerant magic-state-preparation protocol. Part of the protocol requires converting magic states encoded in the color code to magic states encoded in the surface code. 
Since the teleportation step involves decoding a color code merged with a surface code, we develop a decoding algorithm that is applicable to such codes.'\nauthor:\n- Noah Shutty\n- Christopher Chamberland\nbibliography:\n- 'refs.bib'\ntitle: 'Decoding Merged Color-Surface Codes and Finding Fault-Tolerant Clifford Circuits Using Solvers for Satisfiability Modulo Theories'\n---\n\nIntroduction {#sec:Intro}\n============\n\nMany problems in quantum computing require the construction of Clifford circuits with some desired properties. For instance, in topological quantum error correction, multi-qubit gates used to measure the stabilizers of the code must be implemented in a particular order to prevent small errors from propagating to large errors which reduce the effective distance of the code [@Yoder2017surfacecodetwist; @Litinski2018latticesurgery; @chamberland2020triangular; @PrabhuReichardtv1]. In many cases, fault-tolerant" +"---\nabstract: 'It is of paramount importance to uncover influential nodes to control diffusion phenomena in a network. In recent works, there is a growing trend to investigate the role of the community structure to solve this issue. Up to now, the vast majority of the so-called community-aware centrality measures rely on non-overlapping community structure. However, in many real-world networks, such as social networks, the communities overlap. In other words, a node can belong to multiple communities. To overcome this drawback, we propose and investigate the \u201cOverlapping Modularity Vitality\u201d centrality measure. This extension of \u201cModularity Vitality\u201d quantifies the community structure strength variation when removing a node. It allows identifying a node as a hub or a bridge based on its contribution to the overlapping modularity of a network. A comparative analysis with its non-overlapping version using the Susceptible-Infected-Recovered (SIR) epidemic diffusion model has been performed on a set of six real-world networks. Overall, Overlapping Modularity Vitality outperforms its alternative. These results illustrate the importance of incorporating knowledge about the overlapping community structure to identify influential nodes effectively. Moreover, one can use multiple ranking strategies as the two measures are signed. Results show that selecting the nodes with the top positive" +"---\nauthor:\n- 'Veronica\u00a0Biffi[^1]'\n- 'John\u00a0A.\u00a0ZuHone'\n- Tony\u00a0Mroczkowski\n- Esra\u00a0Bulbul\n- William Forman\nbibliography:\n- 'bibl.bib'\ndate: 'Received ; accepted'\ntitle: 'The velocity structure of the intracluster medium during a major merger: simulated microcalorimeter observations'\n---\n\n[Major mergers between galaxy clusters can produce large turbulent and bulk flow velocities in the intra-cluster medium (ICM) and thus imprint useful diagnostic features in X-ray spectral emission lines from heavy ions. As successfully achieved by *Hitomi* in observations of the Perseus cluster, measurements of gas velocities in clusters from high-resolution X-ray spectra will be achievable with upcoming X-ray calorimeters like those on board [*XRISM*]{}, [*Athena*]{}, or a *Lynx* like mission. An interesting application to clusters involves detecting multiple velocity components or velocity gradients from diagnostic observations of specific interesting locations across the cluster. 
To explore this possibility in the case of a major head-on cluster merger, we perform velocity analyses of a cluster-cluster merger from a hydrodynamical simulation by means of X-ray synthetic spectra with spectral resolution of order of a few eV. We observed the system along two extreme line-of-sight directions: 1) perpendicular to the plane of the merger and 2) along the merger axis. In these" +"---\nabstract: |\n We study cycle counts in permutations of $1,\\dots,n$ drawn at random according to the Mallows distribution. Under this distribution, each permutation $\\pi \\in S_n$ is selected with probability proportional to $q^{\\operatorname{inv}(\\pi)}$, where $q>0$ is a parameter and $\\operatorname{inv}(\\pi)$ denotes the number of inversions of $\\pi$. For $\\ell$ fixed, we study the vector $(C_1(\\Pi_n),\\dots,C_\\ell(\\Pi_n))$ where $C_i(\\pi)$ denotes the number of cycles of length $i$ in $\\pi$ and $\\Pi_n$ is sampled according to the Mallows distribution. When $q=1$ the Mallows distribution simply samples a permutation of $1,\\dots,n$ uniformly at random. A classical result going back to Kolchin and Goncharoff states that in this case, the vector of cycle counts tends in distribution to a vector of independent Poisson random variables, with means $1,\\frac12,\\frac13,\\dots,\\frac{1}{\\ell}$.\n\n Here we show that if $01$ there is a striking difference between the behaviour of the even and the odd cycles. The even cycle counts still have linear means, and when properly" +"---\nabstract: |\n In this paper, a novel approach via embedded tensor manifold regularization for 2D+3D facial expression recognition (FERETMR) is proposed. Firstly, 3D tensors are constructed from 2D face images and 3D face shape models to keep the structural information and correlations. To maintain the local structure (geometric information) of 3D tensor samples in the low-dimensional tensors space during the dimensionality reduction, the $\\ell_0$-norm of the core tensors and a tensor manifold regularization scheme embedded on core tensors are adopted via a low-rank truncated Tucker decomposition on the generated tensors. As a result, the obtained factor matrices will be used for facial expression classification prediction. To make the resulting tensor optimization more tractable, $\\ell_1$-norm surrogate is employed to relax $\\ell_0$-norm and hence the resulting tensor optimization problem has a nonsmooth objective function due to the $\\ell_1$-norm and orthogonal constraints from the orthogonal Tucker decomposition. To efficiently tackle this tensor optimization problem, we establish the first-order optimality condition in terms of stationary points, and then design a block coordinate descent (BCD) algorithm with convergence analysis and the computational complexity. Numerical results on BU-3DFE database and Bosphorus databases demonstrate the effectiveness of our proposed approach.\n\n **Key words.** 2D+3D facial expression recognition," +"---\nauthor:\n- 'Jorge Lerendegui-Marco'\n- 'Javier Balibrea-Correa'\n- 'V\u00edctor Babiano-Su\u00e1rez'\n- Ion Ladarescu\n- 'C\u00e9sar Domingo-Pardo'\ntitle: 'Towards machine learning aided real-time range imaging in proton therapy'\n---\n\nIntroduction {#introduction .unnumbered}\n============\n\nProton therapy in comparison to conventional radiation therapy is able to target the tumor thanks to the maximum dose deposition at the end of the trajectory of the protons (Bragg peak) and its finite penetration in matter. 
As the dose deposit beyond this distal edge is very low, proton therapy minimizes the damage to neighbouring tissues compared to photon therapy and is hence especially well-suited for tumors close to sensitive organs and in pediatric cases because the lower dose received by healthy tissues reduces the long-term secondary effects\u00a0[@Knopf:13]. However, inherent range uncertainties associated with anatomical changes, patient setup errors and range errors from uncertainties in particle stopping power, and imaging reconstruction artifacts\u00a0[@Kraan:2015] lead to the application of conservative safety margins. Indeed, up to 1\u00a0cm of margin is considered nowadays for a prescribed range of 30 cm\u00a0[@Paganetti:12], limiting significantly the potential benefits of protons over photons.\n\nIn this context, several experimental methods to verify the proton beam range have been developed in recent years," +"---\nabstract: 'The SEparator for CApture Reactions (SECAR) is a next-generation recoil separator system at the Facility for Rare Isotope Beams (FRIB) designed for the direct measurement of capture reactions on unstable nuclei in inverse kinematics. To maximize the performance of this system, stringent requirements on the beam alignment to the central beam axis and on the ion-optical settings need to be achieved. These can be difficult to attain through manual tuning by human operators without potentially leaving the system in a sub-optimal and irreproducible state. In this work, we present the first development of online Bayesian optimization with a Gaussian process model to tune an ion beam through a nuclear astrophysics recoil separator. We show that this method achieves small incoming angular deviations (<1 mrad) in an efficient and reproducible manner that is at least 3 times faster than standard hand-tuning. Additionally, we present a Bayesian method for experimental optimization of the ion optics, and show that it validates the nominal theoretical ion-optical settings of the device, and improves the mass separation by 32% for some beams.'\nauthor:\n- 'S. A. Miskovich[[](https://orcid.org/0000-0002-3302-838X)]{}'\n- 'F. Montes'\n- 'G. P. A. Berg'\n- 'J. Blackmon'\n- 'K. A. Chipps'\n- 'M." +"---\nabstract: 'Bacterial biodegradation of immiscible oil requires cell-droplet encounters, surface attachment, and hydrocarbon metabolism. Chemical dispersants are applied to oil spills to reduce the mean dispersed droplet size, thereby increasing the available surface area for attachment, in attempts to facilitate bacterial biodegradation. However, their effectiveness remains contentious as studies have shown that dispersants can inhibit, enhance, or have no effect on biodegradation. Therefore, questions remain on whether dispersants affect surface attachment or cell viability. Here, using microfluidics and time-lapse microscopy, we directly observe the attachment and growth of the marine bacterium, *Alcanivorax borkumensis*, on stationary crude oil droplets ($5~\mu$m $< R < 150~\mu$m) in the presence of Corexit 9500. We show that the average colonization time, or the time comprised of encounters, attachment, and growth, is dependent on droplet size and primarily driven by diffusive encounters. 
Our results suggest that dispersants do not inhibit or enhance these biophysical processes.'\nauthor:\n- 'Vincent Hickl$^{a}$'\n- 'Gabriel Juarez$^{b}$'\nbibliography:\n- 'refs2.bib'\ntitle: 'Effect of dispersants on bacterial colonization of oil droplets: a microfluidic approach'\n---\n\n[^1]\n\nIntroduction\n============\n\nOil spills remain a common and extremely dangerous threat to marine ecosystems around the world. Understanding the fate of spilled oil" +"---\nabstract: 'We develop the search strategy for a heavy Majorana neutrino via the lepton number violation signal process $p\, e^- \to \mu^+ jjj$ at future electron-proton colliders. The signal and dominant standard model background events are generated with the fast detector simulation. We apply the pre-selection criteria and perform the multi-variate analysis based on machine learning to reject the background. Distributions of representative kinematic observables are presented for both signal and background processes and effects on final limits are compared by inputting two different sets of observables when performing multi-variate analysis. The 2- and 5-$\sigma$ limits on the mixing parameter $|V_{\ell N}|^2$ are predicted for the heavy neutrino mass $m_N$ in the range of 10$-$1000 GeV. At the LHeC (FCC-eh) with an electron beam energy of 60 GeV, a proton beam energy of 7 (50) TeV and an integrated luminosity of 1 (3) ab$^{-1}$, the mixing parameter $|V_{\ell N}|^2$ can be constrained to be below $\sim 3.0~(1.0) \times 10^{-6}$ for $m_N$ around $\mathcal{O}(100)$ GeV at the 2-$\sigma$ level. The limits are much stronger than the current experiment limits at the LHC for $m_N$ above 30 GeV. The positron signal final state and the effect of long-lived cases of heavy neutrinos are" +"---\nauthor:\n- Anthony Ortiz\n- Dhaval Negandhi\n- Sagar R Mysorekar\n- Joseph Kiesecker\n- Shivaprakash K Nagaraju\n- Caleb Robinson\n- Priyal Bhatia\n- Aditi Khurana\n- Jane Wang\n- Felipe Oviedo\n- Juan Lavista Ferres\nbibliography:\n- 'mybibfile.bib'\ntitle: An Artificial Intelligence Dataset for Solar Energy Locations in India\n---\n\nBackground & Summary {#background-summary .unnumbered}\n====================\n\n![image](figures/pipeline_design.png){width=".9\linewidth"}\n\nIndia is rapidly expanding its deployment of clean energy\u00a0[@climatescope2020]. The dual benefits of climate mitigation potential and lower cost of production make renewable energy cost-competitive compared to coal and other conventional energy sources. Therefore, to achieve the nationally determined contribution (NDC) targets, such as a 40% share of non-fossil fuel cumulative power generation capacity, and to halt greenhouse gas (GHG) emissions from fossil fuels, India has committed to 500 gigawatts (GW) of installed renewable energy capacity by 2030\u00a0[@ceainstallcapacity2021]. India intends to reach 225 GW of renewable power capacity by 2022, exceeding the target of 175 GW pledged during the Paris Agreement. As of 2018, India ranks fifth in installed renewable energy capacity, with the fourth most attractive renewable energy market in the world.\n\nSolar energy is expected to play an increasingly large role in India\u2019s clean energy transition. Of the" +"---\nabstract: 'Real-time estimation of actual object depth is an essential module for various autonomous system tasks such as 3D reconstruction, scene understanding and condition assessment. 
During the last decade, the extensive deployment of deep learning methods in computer vision has yielded approaches that succeed in achieving realistic depth synthesis from a simple RGB modality. Most of these models are based on paired RGB-depth data and/or the availability of video sequences and stereo images. The lack of sequences, stereo data and RGB-depth pairs makes depth estimation a fully unsupervised single-image transfer problem that has barely been explored so far. This study builds on recent advances in the field of generative neural networks in order to establish fully unsupervised single-shot depth estimation. Two generators for RGB-to-depth and depth-to-RGB transfer are implemented and simultaneously optimized using the Wasserstein-1 distance, a novel perceptual reconstruction term and hand-crafted image filters. We comprehensively evaluate the models using industrial surface depth data as well as the Texas 3D Face Recognition Database, the CelebAMask-HQ database of human portraits and the SURREAL dataset that records body depth. For each evaluation dataset, the proposed method shows a significant increase in depth accuracy compared to state-of-the-art" +"---\nabstract: 'Recent advances in the design of convolutional neural networks (CNNs) have yielded significant improvements in the performance of image super-resolution (SR). The boost in performance can be attributed to the presence of residual or dense connections within the intermediate layers of these networks. The efficient combination of such connections can reduce the number of parameters drastically while maintaining the restoration quality. In this paper, we propose a scale recurrent SR architecture built upon units containing series of dense connections within a residual block (Residual Dense Blocks (RDBs)) that allow extraction of abundant local features from the image. Our scale recurrent design delivers competitive performance for higher scale factors while being parametrically more efficient as compared to current state-of-the-art approaches. To further improve the performance of our network, we employ multiple residual connections in intermediate layers (referred to as Multi-Residual Dense Blocks), which improves gradient propagation in existing layers. Recent works have discovered that conventional loss functions can guide a network to produce results which have high PSNRs but are perceptually inferior. We mitigate this issue by utilizing a Generative Adversarial Network (GAN) based framework and deep feature (VGG) losses to train our network. We experimentally demonstrate that different" +"---\nauthor:\n- 'Daqi\u00a0Liu,\u00a0Miroslaw\u00a0Bober,\u00a0Josef\u00a0Kittler'\nbibliography:\n- 'IEEEabrv.bib'\n- 'Semantic.bib'\ntitle: Constrained Structure Learning for Scene Graph Generation\n---\n\nThe scene graph generation (SGG) task involves building a visually-grounded scene graph to explicitly model objects and their relationships in an input image. Its aim is to facilitate downstream vision tasks such as image captioning [@anderson2018bottom], [@yang2019auto] and visual question answering [@teney2017graph], [@shi2019explainable]. As a structured prediction task, SGG is generally NP-hard: the exponential complexity of interactions among the output variables (which are expected to form coherent visual relationships) presents a huge challenge for directly computing the desired statistics, i.e. the underlying posterior or the relevant marginals. 
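The Residual Dense Blocks (RDBs) named in the super-resolution record above combine dense intra-block connectivity with local feature fusion and a residual connection. A minimal PyTorch sketch, with layer count and growth rate chosen arbitrarily:

```python
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    """Minimal RDB: densely connected convs, 1x1 local fusion, and a
    residual connection (layer count and growth rate are assumed)."""
    def __init__(self, channels=64, growth=32, layers=4):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(channels + i * growth, growth, 3, padding=1)
            for i in range(layers))
        self.fuse = nn.Conv2d(channels + layers * growth, channels, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        feats = [x]
        for conv in self.convs:                      # dense connectivity
            feats.append(self.act(conv(torch.cat(feats, dim=1))))
        return x + self.fuse(torch.cat(feats, dim=1))  # local residual learning

y = ResidualDenseBlock()(torch.randn(1, 64, 32, 32))
print(y.shape)  # torch.Size([1, 64, 32, 32])
```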
Currently, only pairwise interactions are considered in the SGG task and they are often formulated as triplet structures, in which each triplet consists of three components: a subject, a predicate and an object.\n\nMore specifically, given an input image $x$, a specific type of approximation strategies - variational Bayesian (VB) [@wainwright2008graphical], [@fox2012tutorial] - is often applied to accomplish the SGG generation task in the current methods. In this approach, the variational inference step aims to infer the optimum interpretation $z^*$ by means of a max aposteriori (MAP) estimation" +"---\nabstract: 'Conventional image compression methods typically aim at pixel-level consistency while ignoring the performance of downstream AI tasks. To solve this problem, this paper proposes a Semantic-Assisted Image Compression method (SAIC), which can maintain semantic-level consistency to enable high performance of downstream AI tasks. To this end, we train the compression network using semantic-level loss function. In particular, semantic-level loss is measured using gradient-based semantic weights mechanism (GSW). GSW directly consider downstream AI tasks\u2019 perceptual results. Then, this paper proposes a semantic-level distortion evaluation metric to quantify the amount of semantic information retained during the compression process. Experimental results show that the proposed SAIC method can retain more semantic-level information and achieve better performance of downstream AI tasks compared to the traditional deep learning-based method and the advanced perceptual method at the same compression ratio.'\naddress: |\n $^{\\ast}$Beijing Laboratory of Advanced Information Networks,\\\n Beijing University of Posts and Telecommunications, Beijing, China 100876\\\n $^{\\dagger}$Beijing Key Laboratory of Network System Architecture and Convergence,\\\n Beijing University of Posts and Telecommunications, Beijing, China 100876\\\n Email: {qizheng\\_sun, guocaili, yangyang01, chenjiujiu}@bupt.edu.cn, xuexj@chinatelecom.cn \nbibliography:\n- 'icme2022template.bib'\ntitle: 'Semantic-Assisted Image Compression'\n---\n\n\u0141[[L]{}]{}\n\nImage compression, semantic-level loss, task performance maintenance\n\nIntroduction\n============\n\nWith the explosion of visual" +"---\nabstract: '*Multilingual* task-oriented dialogue ([ToD]{}) facilitates access to services and information for many (communities of) speakers. Nevertheless, the potential of this technology is not fully realised, as current datasets for multilingual [ToD]{}\u2014both for modular and end-to-end modelling\u2014suffer from severe limitations. **1)** When created from scratch, they are usually small in scale and fail to cover many possible dialogue flows. **2)** Translation-based [ToD]{}datasets might lack naturalness and cultural specificity in the target language. In this work, to tackle these limitations we propose a novel *outline-based* annotation process for multilingual [ToD]{}datasets, where domain-specific abstract schemata of dialogue are mapped into natural language outlines. These in turn guide the target language annotators in writing a dialogue by providing instructions about each turn\u2019s intents and slots. Through this process we annotate a new large-scale dataset for training and evaluation of multilingual and cross-lingual [ToD]{}systems. 
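The gradient-based semantic weights (GSW) mechanism in the image-compression record above is not fully specified there; one plausible reading, weighting per-pixel distortion by the sensitivity of a downstream task loss to each pixel, can be sketched as follows (function and argument names are assumptions):

```python
import torch

def semantic_weighted_loss(x, x_hat, task_net, task_loss_fn, target):
    """One possible GSW-style distortion: weight per-pixel squared error
    by |d(task loss)/d(pixel)|, so task-relevant pixels dominate."""
    xg = x.detach().requires_grad_(True)
    grad, = torch.autograd.grad(task_loss_fn(task_net(xg), target), xg)
    w = grad.abs()
    w = (w / (w.mean() + 1e-8)).detach()        # normalised semantic weights
    return (w * (x.detach() - x_hat) ** 2).mean()
```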
Our **C**ross-lingual **O**utline-based **D**ialogue dataset (termed [cod]{}) enables natural language understanding, dialogue state tracking, and end-to-end dialogue modelling and evaluation in 4 diverse languages: Arabic, Indonesian, Russian, and Kiswahili. Qualitative and quantitative analyses of [cod]{} versus an equivalent translation-based dataset demonstrate improvements in data quality, unlocked by the outline-based approach. Finally, we" +"---\nabstract: 'Sequential decoding, commonly applied to substitution channels, is a sub-optimal alternative to Viterbi decoding with significantly reduced memory costs. In this work, a sequential decoder for convolutional codes over channels that are prone to insertion, deletion, and substitution errors, is described and analyzed. Our decoder expands the code trellis by a new channel-state variable, called drift state, as proposed by Davey and MacKay. A suitable decoding metric on that trellis for sequential decoding is derived, generalizing the original Fano metric. The decoder is also extended to facilitate the simultaneous decoding of multiple received sequences that arise from a single transmitted sequence. Under low-noise environments, our decoding approach reduces the decoding complexity by a couple orders of magnitude in comparison to Viterbi\u2019s algorithm, albeit at slightly higher bit error rates. An analytical method to determine the computational cutoff rate is also suggested. This analysis is supported with numerical evaluations of bit error rates and computational complexity, which are compared with respect to optimal Viterbi decoding.'\nauthor:\n- \nbibliography:\n- 'SequentialDecoding.bib'\ntitle: 'Sequential Decoding of Multiple Sequences for Synchronization Errors [^1] '\n---\n\nIntroduction\n============\n\nMost error-control systems operate under the assumption of perfect synchronization between transmitter and receiver, while" +"---\nabstract: 'We study the set of possible traces of anisotropic least gradient functions. We show that even on the unit disk it changes with the anisotropic norm: for two sufficiently regular strictly convex norms the trace spaces coincide if and only if the norms coincide. The example of a function in exactly one of the trace spaces is given by a characteristic function of a suitably chosen Cantor set.'\naddress: ' W. G\u00f3rny: Faculty of Mathematics, Universit\u00e4t Wien, Oskar-Morgerstern-Platz 1, 1090 Vienna, Austria; Faculty of Mathematics, Informatics and Mechanics, University of Warsaw, Banacha 2, 02-097 Warsaw, Poland '\nauthor:\n- Wojciech G\u00f3rny\ntitle: '**The trace space of anisotropic least gradient functions depends on the anisotropy**'\n---\n\nIntroduction\n============\n\nThe least gradient problem is the following minimisation problem $$\\label{eq:lgpisotropic}\\tag{LGP}\n\\min \\bigg\\{ \\int_{\\Omega} |Du|: \\, u \\in BV(\\Omega), \\, u|_{\\partial\\Omega} = f \\bigg\\},$$ where $f \\in L^1(\\partial\\Omega)$. It was first considered in this form by Sternberg, Williams and Ziemer in [@SWZ], but its roots go back to the works of Miranda [@Mir0; @Mir] and Bombieri, de Giorgi and Giusti [@BGG] on area-minimising sets. It can be also expressed as the Dirichlet problem for the $1$-Laplace operator, see [@MazRoSe]. This problem and" +"---\nabstract: 'Optimum parameter estimation methods require knowledge of a parametric probability density that statistically describes the available observations. 
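The generalized Fano metric in the sequential-decoding record above rewards branches that agree with the received bits and penalizes disagreements relative to the code rate. A toy computation for a plain BSC (no insertions or deletions, hence no drift states) shows why the metric separates the correct path from random ones:

```python
import numpy as np

def fano_bit_metric(match, p, R):
    """Classical Fano metric for a BSC with crossover p and rate R:
    log2 P(y|x) + 1 - R per bit (uniform channel output assumed)."""
    return (np.log2(1 - p) if match else np.log2(p)) + 1 - R

rng = np.random.default_rng(1)
p, R, n = 0.05, 0.5, 200
errors = rng.random(n) < p          # positions flipped by the channel

correct = sum(fano_bit_metric(not e, p, R) for e in errors)
random_path = sum(fano_bit_metric(rng.random() < 0.5, p, R) for _ in range(n))
print(f"correct path metric: {correct:+.1f}, random path: {random_path:+.1f}")
# The correct path drifts upward (~+0.2/bit here), incorrect paths drop fast,
# which is what lets a sequential decoder abandon them early.
```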
In this work we examine Bayesian and non-Bayesian parameter estimation problems under a data-driven formulation where the necessary parametric probability density is replaced by available data. We present various data-driven versions that either result in neural network approximations of the optimum estimators or in well defined optimization problems that can be solved numerically. In particular, for the data-driven equivalent of non-Bayesian estimation we end up with optimization problems similar to the ones encountered for the design of generative networks.'\nauthor:\n- \ntitle: 'Data-Driven Parameter Estimation'\n---\n\nParameter estimation, Neural networks, Data-driven estimation.\n\nIntroduction {#sec:1}\n============\n\ntheory of Detection and Estimation constitutes a major background knowledge in Engineering and Statistics. The corresponding methodologies find application in numerous scientific problems and either provide the actual solution or serve as a starting point for developing techniques that are practically implementable. It is remarkable that with very introductory knowledge of Probability Theory one can derive optimum Detection and Parameter Estimation methods [@poor; @moulin]. In parameter estimation, common denominator in all the optimum approaches is the key assumption that we have a complete statistical description in" +"---\nabstract: 'Safe Policy Improvement (SPI) aims at provable guarantees that a learned policy is at least approximately as good as a given baseline policy. Building on SPI with Soft Baseline Bootstrapping (Soft-SPIBB) by Nadjahi et al., we identify theoretical issues in their approach, provide a corrected theory, and derive a new algorithm that is provably safe on finite Markov Decision Processes (MDP). Additionally, we provide a heuristic algorithm that exhibits the best performance among many state of the art SPI algorithms on two different benchmarks. Furthermore, we introduce a taxonomy of SPI algorithms and empirically show an interesting property of two classes of SPI algorithms: while the mean performance of algorithms that incorporate the uncertainty as a penalty on the action-value is higher, actively restricting the set of policies more consistently produces good policies and is, thus, safer.'\nauthor:\n- |\n Philipp Scholl\\\n Department of Mathematics\\\n Ludwig-Maximilians University\\\n Munich, Germany\\\n `scholl@math.lmu.de`\\\n Felix Dietrich\\\n Department of Computer Science\\\n Technical University of Munich\\\n Munich, Germany\\\n `felix.dietrich@tum.de`\\\n Clemens Otte\\\n Learning Systems\\\n Siemens Technology\\\n Munich, Germany\\\n `clemens.otte@siemens.com`\\\n Steffen Udluft\\\n Learning Systems\\\n Siemens Technology\\\n Munich, Germany\\\n `steffen.udluft@siemens.com`\\\nbibliography:\n- 'main.bib'\ntitle: |\n Safe Policy Improvement Approaches\\\n on Discrete Markov Decision Processes\n---\n\nIntroduction {#sec:introduction}" +"---\nabstract: 'Consider the problem of approximating a given probability distribution on the cube $[0,1]^n$ via the use of a square lattice discretization with mesh-size $1/N$ and the Metropolis algorithm. Here the dimension $n$ is fixed and we focus for the most part on the case $n=2$. In order to understand the speed of convergence of such a procedure, one needs to control the spectral gap, $\\lambda$, of the associated finite Markov chain, and how it depends on the parameter $N$. 
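For small $N$, the spectral gap $\lambda$ of such a Metropolis chain can be computed directly from the transition matrix. An illustrative one-dimensional computation (the record's examples are multidimensional) with a discretized Gaussian target:

```python
import numpy as np

def metropolis_gap(pi):
    """Spectral gap of the Metropolis chain on {0..N-1} with +/-1 proposals
    targeting the probability vector pi (holding probability at the ends)."""
    N = len(pi)
    P = np.zeros((N, N))
    for i in range(N):
        for j in (i - 1, i + 1):
            if 0 <= j < N:
                P[i, j] = 0.5 * min(1.0, pi[j] / pi[i])
        P[i, i] = 1.0 - P[i].sum()
    ev = np.sort(np.linalg.eigvals(P).real)
    return 1.0 - ev[-2]                 # gap = 1 - second largest eigenvalue

for N in (8, 16, 32, 64):
    x = (np.arange(N) + 0.5) / N
    pi = np.exp(-8 * (x - 0.5) ** 2); pi /= pi.sum()   # discretised Gaussian
    print(N, f"gap = {metropolis_gap(pi):.4f}")        # shrinks roughly as N^-2
```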
In this work, we study basic examples for which good upper-bounds and lower-bounds on $\\lambda$ can be obtained via appropriate application of path techniques.'\naddress: |\n Department of Mathematics, Cornell University\\\n Ithaca, NY, USA\nauthor:\n- 'Laurent Saloff-Coste'\n- Sophie Uluatam\nbibliography:\n- 'Sophie.bib'\ntitle: Multidimensional examples of the Metropolis algorithm\n---\n\nIntroduction\n============\n\nIn the article [@What] titled [*What do we know about the Metropolis algorithm?*]{}, Persi Diaconis and the first author reviewed some of the basic ideas behind the celebrated Metropolis algorithm and its quantitative analysis in the case of simple one-dimensional examples. In this sequel, we complement this reference by treating some additional examples including basic multidimensional examples. Throughout, we require the dimension to be fixed" +"---\nabstract: 'Recently, Trepte et al. \\[J. Chem. Phys., vol. 155, 2021\\] pointed out the importance of analyzing dipole moments in the Fermi-L[\u00f6]{}wdin orbital (FLO) self-interaction correction (SIC) for cyclic, planar molecules. In this manuscript, the effect of the molecular and electronic geometries on dipole moments and polarizabilities is discussed for non-cyclic molecules. Computed values are presented for water, formaldehyde, and nitromethane. Continuing the work of Schwalbe et al. \\[J. Chem. Phys. vol. 153, (2020)\\], we reconfirm that systematic numerical parameter studies are essential to obtain consistent results in density functional theory (DFT) and SIC. In agreement with Trepte et al. \\[J. Chem. Phys., vol. 155, 2021\\], DFT agrees well with experiment for dipole moments, while SIC slightly overestimates them. A Linnett double-quartet electronic geometry is found to be energetically preferred for nitromethane.'\nauthor:\n- Simon Liebing\n- Kai Trepte\n- Sebastian Schwalbe\nbibliography:\n- 'main.bib'\ntitle: 'Effect of molecular and electronic geometries on the electronic density in FLO-SIC'\n---\n\nIntroduction/Motivation {#sec:intro}\n=======================\n\nElectronic structure methods have become more important over recent years.\u00a0[@Becke2014_18A301; @verma2020_302] These methods can be used to verify experimental observations.\u00a0[@Forster2012_856; @Pfaff2012_6761; @Seidel2013_601; @Trepte2017_10020; @Trepte2018_25039] However, the role of electronic structure methods has changed significantly over" +"---\nabstract: 'This paper presents the network load balancing problem, a challenging real-world task for multi-agent reinforcement learning (MARL) methods. Conventional heuristic solutions like Weighted-Cost Multi-Path (WCMP) and Local Shortest Queue (LSQ) are less flexible to the changing workload distributions and arrival rates, with a poor balance among multiple load balancers. The cooperative network load balancing task is formulated as a Dec-POMDP problem, which naturally induces the MARL methods. To bridge the reality gap for applying learning-based methods, all models are directly trained and evaluated on a real-world system from moderate- to large-scale setups. Experimental evaluations show that the independent and \u201cselfish\u201d load balancing strategies are not necessarily the globally optimal ones, while the proposed MARL solution has a superior performance over different realistic settings. 
Additionally, the potential difficulties of the application and deployment of MARL methods for network load balancing are analysed, which helps draw the attention of the learning and network communities to such challenges.'\nauthor:\n- Zhiyuan Yao\n- Zihan Ding\n- Thomas Clausen\nbibliography:\n- 'reference.bib'\ntitle: 'Multi-Agent Reinforcement Learning for Network Load Balancing in Data Center'\n---\n\n<ccs2012> <concept> <concept\\_id>10010147.10010257.10010258.10010261.10010275</concept\\_id> <concept\\_desc>Computing methodologies\u00a0Multi-agent reinforcement learning</concept\\_desc> <concept\\_significance>500</concept\\_significance> <concept> <concept\\_id>10003033.10003068.10003073.10003074</concept\\_id> <concept\\_desc>Networks\u00a0Network resources allocation</concept\\_desc> <concept\\_significance>500</concept\\_significance> </concept> </concept>" +"---\nabstract: 'We study the performance of a phase-noise impaired double reconfigurable intelligent surface (RIS)-aided multiuser (MU) multiple-input single-output (MISO) system under spatial correlation at both RISs and base-station (BS). The downlink achievable rate is derived in closed-form under maximum ratio transmission (MRT) precoding. In addition, we obtain the optimal phase-shift design at both RISs in closed-form for the considered channel and phase-noise models. Numerical results validate the analytical expressions, and highlight the effects of different system parameters on the achievable rate. Our analysis shows that phase-noise can severely degrade the performance when users do not have direct links to both RISs, and can only be served via the double-reflection link. Also, we show that high spatial correlation at RISs is essential for high achievable rates.'\nauthor:\n- 'Zaid\u00a0Abdullah,\u00a0 Anastasios\u00a0Papazafeiropoulos,\u00a0 Steven\u00a0Kisseleff,\u00a0 Symeon\u00a0Chatzinotas,\u00a0 and\u00a0Bj$\\ddot{\\text{o}}$rn\u00a0Ottersten,\u00a0 [^1]'\nbibliography:\n- 'Double\\_RIS.bib'\ntitle: 'Impact of Phase-Noise and Spatial Correlation on Double-RIS-Assisted Multiuser MISO Networks'\n---\n\nReconfigurable intelligent surface (RIS), phase-noise, channel correlation, multiuser communication.\n\nIntroduction\n============\n\n, there is no lack of interest among researchers around the globe when it comes to the potential of reconfigurable intelligent surfaces (RISs) [@pan2021reconfigurable]. This does not come as a surprise since the" +"---\nabstract: 'This paper proposes a deep learning based power allocation (DL-PA) and hybrid precoding technique for multi-user massive multiple-input multiple-output (MU-[m]{}MIMO) systems. We first utilize an angular-based hybrid precoding technique for reducing the number of RF chains and channel estimation overhead. [Then]{}, we develop the DL-PA algorithm via a fully-connected deep neural network (DNN). DL-PA has two phases: (i) offline supervised learning with the optimal allocated powers obtained by particle swarm optimization based PA (PSO-PA) algorithm, (ii) online power prediction by the trained DNN. In comparison to the computationally expensive PSO-PA, it is shown that DL-PA greatly reduces the runtime by $98.6\\%$-$99.9\\%$, while closely achieving the optimal sum-rate capacity. 
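The two-phase structure of DL-PA described above (offline supervised learning on PSO-PA labels, then online prediction) can be sketched generically; the feature encoding, network size, and random placeholder labels below are all assumptions standing in for the paper's setup:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
K = 4                                   # number of users (assumed)

# Phase 1 (offline): pairs of channel features and "optimal" powers.
# Random placeholders stand in for PSO-PA-generated labels here.
H = rng.standard_normal((2000, K))      # channel-gain features (assumed)
P_opt = rng.dirichlet(np.ones(K), 2000) # power fractions summing to 1

dnn = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500).fit(H, P_opt)

# Phase 2 (online): a single forward pass replaces a full PSO run,
# which is where the reported runtime reduction comes from.
p = np.clip(dnn.predict(H[:1]), 0, None)
print("predicted power allocation:", p / p.sum())   # re-normalised
```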
It makes DL-PA a promising algorithm for the real-time online applications in MU-[m]{}MIMO systems.'\nauthor:\n- \nbibliography:\n- 'bibAsil\\_2110.bib'\ntitle: 'Deep Learning based Multi-User Power Allocation and Hybrid Precoding in Massive MIMO Systems [^1] '\n---\n\nDeep learning, massive MIMO, hybrid precoding, power allocation, millimeter wave communications, [PSO]{}.\n\nIntroduction\n============\n\nwave (mmWave) has been considered as a promising candidate for the fifth-generation (5G) and beyond [for its large available]{} bandwidth [@Uwaechia2020]. Also, [its]{} shorter wavelengths are appealing for massive multiple-input multiple-output ([m]{}MIMO) technology [since it]{} enables the implementation of large" +"---\nbibliography:\n- 'DM\\_bib.bib'\n---\n\n[**Searching for inelastic dark matter\\\nwith future LHC experiments** ]{}\n\n[ [Enrico Bertuzzo]{},$^a$ [Andre Scaffidi]{},$^{b}$ [Marco Taoso]{}$^{b}$[^1] ]{}\\\n[*$^a$ Instituto de F\u00edsica, Universidade de S\u00e3o Paulo, C.P. 66.318, 05315-970 S\u00e3o Paulo, Brazil*]{}\\\n[*$^b$ I.N.F.N. sezione di Torino, via P. Giuria 1, I-10125 Torino, Italy*]{}\\\n\n**Abstract**\n\n> We consider a dark sector containing a pair of almost degenerate states coupled to the Standard Model through a dark photon mediator. This set-up constitutes a simple realization of the inelastic dark matter scenario. The heaviest dark state is long-lived, in the limit of a small kinetic mixing among the dark photon and the Standard Model hypercharge gauge boson, and/or of a small mass splitting among the dark states. We study the prospects for detection of this scenario at proposed LHC experiments dedicated to search for long-lived particles, namely FASER, MATHUSLA, CODEX-b, AL3X, MAPP, ANUBIS and FACET. We consider both the cases of fermionic and scalar inelastic dark matter. We show that these experimental facilities can probe unexplored regions of the parameter space of this model, and we highlight their complementary roles.\n\nIntroduction {#sec:intro}\n============\n\nIn recent years, a rich experimental program has been put forward to search for" +"---\nabstract: 'Many natural and socio-economic systems are characterized by power-law distributions that make the occurrence of extreme events not negligible. Such events are sometimes referred to as Black Swans, but a quantitative definition of a Black Swan is still lacking. Here, by leveraging on the properties of Zipf-Mandelbrot law, we investigate the relations between such extreme events and the dynamics of the upper cutoff of the inherent distribution. This approach permits a quantification of extreme events and allows to classify them as White, Grey, or Black Swans. Our criterion is in accordance with some previous findings, but also allows us to spot new examples of Black Swans, such as Lionel Messi and the Turkish Airline Flight 981 disaster. 
The systematic and quantitative methodology we developed allows a scientific and immediate categorization of rare events, providing also new insight into the generative mechanism behind Black Swans.'\nauthor:\n- 'Giordano De Marzo$^{1, 4, 5}$'\n- 'Andrea Gabrielli$^{1, 2, 3}$'\n- 'Andrea Zaccaria$^{2}$'\n- 'Luciano Pietronero$^{1, 2, 4}$'\nbibliography:\n- 'bibliography.bib'\ntitle: 'Quantifying the Unexpected: a scientific approach to Black Swans'\n---\n\nIntroduction\n============\n\nDuring the last decade the complexity paradigm [@anderson1972more; @Jaguar; @pietronero2008complexity] has been successfully applied not only to the" +"---\nabstract: 'Legged robot locomotion on a dynamic rigid surface (i.e., a rigid surface moving in the inertial frame) involves complex full-order dynamics that is high-dimensional, nonlinear, and time-varying. Towards deriving an analytically tractable dynamic model, this study theoretically extends the reduced-order linear inverted pendulum (LIP) model from legged locomotion on a stationary surface to locomotion on a dynamic rigid surface (DRS). The resulting model is herein termed as DRS-LIP. Furthermore, this study introduces an approximate analytical solution of the proposed DRS-LIP that is computationally efficient with high accuracy. To illustrate the practical uses of the analytical results, they are used to develop a hierarchical planning framework that efficiently generates physically feasible trajectories for DRS locomotion. The effectiveness of the proposed theoretical results and motion planner is demonstrated both through simulations and experimentally on a Laikago quadrupedal robot that walks on a rocking treadmill.'\nauthor:\n- 'Amir\u00a0Iqbal$^{1}$,\u00a0 Sushant\u00a0Veer$^{2}$, and\u00a0 Yan\u00a0Gu$^{1,\\dagger}$[^1] [^2] [^3]'\nbibliography:\n- 'ReferencesAbbrev.bib'\ntitle: 'DRS-LIP: Linear Inverted Pendulum Model for Legged Locomotion on Dynamic Rigid Surfaces'\n---\n\nLegged locomotion, nonstationary surfaces, dynamic modeling, analytical solution, motion planning.\n\nIntroduction\n============\n\nLegged robots have the potential to traverse various challenging surfaces, including stationary (uneven or discrete) surfaces" +"---\nabstract: 'The cancellations of poles of degenerate Eisenstein series were studied by Hanzer and Mui\u0107. This paper generalized the result to Eisenstein series constructed from inducing two Speh representations $\\Delta(\\tau,m)|\\cdot|^{s_1}\\otimes\\Delta(\\tau,n)|\\cdot|^{s_2}$ for the group $GL(m+n,\\mathbb{A}_\\mathbb{Q})$ for self-dual cuspidal automorphic representation $\\tau$ by describing the combinatorics of the relevant Weyl group coset.'\nauthor:\n- |\n Zhuohui Zhang\\\n Tel Aviv University\nbibliography:\n- 'main.bib'\ntitle: 'Some Combinatorics in the Cancellation of Poles of Eisenstein Series for $GL(n,\\mathbb{A}_\\mathbb{Q})$'\n---\n\nIntroduction\n============\n\nIn [@moeglin1989spectre], M\u0153glin and Waldspurger described the residual automorphic spectrum of $GL(N)$. A residual automorphic representation can always be realized as a *generalized Speh representation* $\\Delta(\\tau, n)$, where $\\tau$ is an irreducible unitary cuspidal automorphic representation of the group $GL(a)$ with $a$ satisfying $N = an$. For convenience, we will assume the cuspidal automorphic representation $\\tau = \\bigotimes_v \\tau_v$ has a unitary central character, and $\\tau_v$ is tempered at each local place $v$. 
The automorphic representation $\\Delta(\\tau,n)$ is a global Langlands quotient of a principal series representation, and can be realized as the automorphic representation generated by the residue of an Eisenstein series. For any partition $\\underline{N} = (N_1,\\ldots,N_r)$ of $N$, denoting the standard parabolic subgroup with Levi subgroup $M_{\\underline{N}}$ isomorphic to" +"---\nabstract: 'In this paper, we continue our study of the motion of spinning test bodies orbiting Kerr black holes. Non-spinning test bodies follow geodesics of the spacetime in which they move. A test body\u2019s spin couples to the curvature of that spacetime, introducing a \u201cspin-curvature force\u201d which pushes the body\u2019s worldline away from a geodesic trajectory. The spin-curvature force is an important example of a post-geodesic effect which must be modeled carefully in order to accurately characterize the motion of bodies orbiting black holes. One motivation for this work is to understand how to include such effects in models of gravitational waves produced from the inspiral of stellar mass bodies into massive black holes. In this paper\u2019s predecessor, we describe a technique for computing bound orbits of spinning bodies around black holes with a frequency-domain description which can be solved very precisely. In that paper, we present an overview of our methods, as well as present results for orbits which are eccentric and nearly equatorial (i.e., the orbit\u2019s motion is no more than $\\mathcal{O}(S)$ out of the equatorial plane). In this paper, we apply this formulation to the fully generic case \u2014 orbits which are inclined and eccentric, with" +"---\nabstract: 'In this paper, we build a compactification by a strictly pseudoconvex CR structure for complete and non-compact K\u00e4hler manifolds whose curvature tensor is asymptotic to that of the complex hyperbolic space.'\naddress:\n- 'Institut Montpelli\u00e9rain Alexander Grothendieck, Universit\u00e9 de Montpellier'\n- 'Unit\u00e9 de Math\u00e9matiques Pures et Appliqu\u00e9es, \u00c9cole Normale Sup\u00e9rieure de Lyon'\nauthor:\n- Alan Pinoy\nbibliography:\n- 'biblio.bib'\ntitle: Asymptotic strictly pseudoconvex CR structure for asymptotically locally complex hyperbolic manifolds\n---\n\nIntroduction {#section:intro .unnumbered}\n============\n\nThe study of the asymptotic geometry of complete non-compact Riemannian manifolds have proven to be fruitful in the understanding of the geometry of complex domains, driven by the following remark. By endowing the interior of a bounded domain with a complete metric, which then sends its boundary to infinity, one can read much information on the geometry of the boundary in the asymptotic development of the metric, see [@fefferman_bergman_1974; @fefferman_monge-ampere_1976; @hirachi_construction_2000]. The induced geometric structure on the boundary leads to geometric invariants of the domain. The Bergman metric and the K\u00e4hler-Einstein metric are examples of such metrics, and have been at the center of complex geometry for decades. 
On the unit ball of ${\\mathbb{C}}^n$, these two latter metrics are equal up to" +"---\nauthor:\n- |\n Cristiano Capone$^\\ast$\\\n INFN, Sezione di Roma, Rome, Italy\\\n \\\n- |\n Cosimo Lupo$^\\ast$\\\n INFN, Sezione di Roma, Rome, Italy\\\n- |\n Paolo Muratore\\\n SISSA, International School for\\\n Advanced Studies, Trieste, Italy\\\n- |\n Pier Stanislao Paolucci\\\n INFN, Sezione di Roma, Rome, Italy\\\nbibliography:\n- 'references.bib'\ntitle: '**Burst-dependent plasticity and dendritic amplification support target-based learning and hierarchical imitation learning**'\n---\n\nIntroduction\n============\n\nThe brain can learn a wide range of tasks very efficiently in terms of energy consumption and required evidences, motivating the search for biologically inspired learning rules for improving the efficiency of artificial intelligence. Most biologically plausible neural networks are composed so far of point neurons. Despite recent outstanding advances in this field [@nicola2017supervised; @bellec2020], biologically plausible neural networks cannot achieve the state-of-art performances of artificial intelligence (e.g. they struggle to solve the credit assignment problem [@payeur2021burst]).\n\nRecent findings on dendritic computational properties [@poirazi2020illuminating] and on the complexity of pyramidal neurons dynamics [@larkum2013cellular] motivated the study of multi-compartment neuron model in the development of new biologically plausible learning rules [@urbanczik2014learning; @guerguiev2017towards; @sacramento2018dendritic; @payeur2021burst].\n\nRecent works have proposed that segregation of dendritic input (neurons receive sensory information and higher-order feedback in segregated compartments) [@guerguiev2017towards] and generation" +"---\nabstract: 'The Hausdorff dimension of general Sierpinski carpets, [@4] and [@20], and the generalization on Lalley-Gatzouras carpets, [@10], are today well known results, the formulas being obtain via the variational principle for the dimension. We call the multidimensional versions of these carpets Sierpinski sponges and self-affine sponges, respectively,. In this paper we show that the Hausdorff dimension of self-affine sponges, defined in $\\mathrm{R}^3$, is a Lipschitz continuous function at Sierpinski sponges.'\naddress: |\n Universidade Federal do Rio de Janeiro, Instituto de Matem\u00e1tica\\\n Rio de Janeiro 21941-909, Brazil\nauthor:\n- Nuno Luzia\ntitle: 'Lipschitz continuity of the Hausdorff dimension of self-affine sponges at Sierpinski sponges'\n---\n\nIntroduction and statements\n===========================\n\nThe dimension theory of $C^{1+\\alpha}$ conformal repellers is well understood by means of the thermodynamic formalism introduced by Sinai-Ruelle-Bowen [@24], [@23], [@5] and the famous Bowen\u2019s equation [@6], [@22]. In particular there is a unique ergodic measure of full dimension which is a Gibbs state.\n\nThe dimension theory of *non-conformal* repellers is still being developed and no such general formalism exists. The computation of Hausdorff dimension of non-conformal fractals began with the fundamental works by Bedford [@4] and McMullen [@20] on the *general Sierpinski carpets*, and their generalization [@10] on" +"---\nabstract: 'As Android malware grows and evolves, deep learning has been introduced into malware detection, resulting in great effectiveness. Recent work is considering hybrid models and multi-view learning. 
However, they use only simple features, limiting the accuracy of these approaches in practice. This paper proposes [DeepCatra]{}, a multi-view learning approach for Android malware detection, whose model consists of a bidirectional LSTM (BiLSTM) and a graph neural network (GNN) as subnets. The two subnets rely on features extracted from statically computed call traces leading to critical APIs derived from public vulnerabilities. For each Android app, [DeepCatra]{} first constructs its call graph and computes call traces reaching critical APIs. Then, temporal opcode features used by the BiLSTM subnet are extracted from the call traces, while flow graph features used by the GNN subnet are constructed from all the call traces and inter-component communications. We evaluate the effectiveness of [DeepCatra]{} by comparing it with several state-of-the-art detection approaches. Experimental results on over 18,000 real-world apps and prevalent malware show that [DeepCatra]{} achieves considerable improvement, e.g., 2.7% to 14.6% on the F1 measure, which demonstrates the feasibility of [DeepCatra]{} in practice.'\nauthor:\n- 'Yafei\u00a0Wu, Jian\u00a0Shi, Peicheng\u00a0Wang, Dongrui\u00a0Zeng, Cong\u00a0Sun[^1]" +"---\nabstract: 'Loosely speaking, the Mpemba effect appears when hotter systems cool sooner or, in a more abstract way, when systems further from equilibrium relax faster. In this paper, we investigate the Mpemba effect in a molecular gas with nonlinear drag, both analytically (by employing the tools of kinetic theory) and numerically (direct simulation Monte Carlo of the kinetic equation and event-driven molecular dynamics). The analysis is carried out via two alternative routes, recently considered in the literature: first, the kinetic or thermal route, in which the Mpemba effect is characterized by the crossing of the evolution curves of the kinetic temperature (average kinetic energy), and, second, the stochastic thermodynamics or entropic route, in which the Mpemba effect is characterized by the crossing of the distance to equilibrium in probability space. In general, a nonmutual correspondence between the thermal and entropic Mpemba effects is found, i.e., there may appear the thermal effect without its entropic counterpart or vice versa. Furthermore, a nontrivial overshoot with respect to equilibrium of the thermal relaxation makes it necessary to revise the usual definition of the thermal Mpemba effect, which is shown to be better described in terms of the relaxation of the local equilibrium" +"---\nabstract: 'A Java parallel streams implementation of the $K$-nearest neighbor descent algorithm is presented using a natural statistical termination criterion. Input data consist of a set $S$ of $n$ objects of type `V`, and a `Function<V, Comparator<V>>`, which enables any $x \in S$ to decide which of $y, z \in S\setminus\{x\}$ is more similar to $x$. Experiments with the Kullback-Leibler divergence `Comparator` support the prediction that the number of rounds of $K$-nearest neighbor updates need not exceed twice the diameter of the undirected version of a random regular out-degree $K$ digraph on $n$ vertices. Overall complexity was $O(n K^2 \log_K(n))$ in the class of examples studied. 
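The nearest-neighbor-descent iteration itself is compact. A simplified Python sketch (the record's implementation is Java parallel streams; its statistical termination criterion is replaced here by a crude no-change test, and reverse-neighbor exploration is omitted):

```python
import random

random.seed(0)

def nn_descent(S, comparator_for, K, rounds):
    """Comparator-based K-NN descent: start from a random K-out digraph and
    let each point adopt better neighbours found among the neighbours of
    its neighbours (a simplified sketch of the algorithm)."""
    nbrs = {x: random.sample([y for y in S if y != x], K) for x in S}
    for _ in range(rounds):
        changed = False
        for x in S:
            key = comparator_for(x)            # ranks candidates by similarity to x
            candidates = {z for y in nbrs[x] for z in nbrs[y]} - {x}
            merged = sorted(set(nbrs[x]) | candidates, key=key)
            if merged[:K] != nbrs[x]:
                nbrs[x], changed = merged[:K], True
        if not changed:                        # crude termination criterion
            break
    return nbrs

# toy usage: points on a line, compared by absolute distance
S = list(range(100))
graph = nn_descent(S, lambda x: (lambda y: abs(x - y)), K=4, rounds=10)
print(graph[50])
```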
When objects are sampled uniformly from a $d$-dimensional simplex, accuracy of the $K$-nearest neighbor approximation is high up to $d = 20$, but declines in higher dimensions, as theory would predict.'\naddress: 'National Security Agency, Fort George G.\u00a0Meade, MD 20755-6844, USA'\nauthor:\n- 'Jacob D.\u00a0Baron R.W.R.\u00a0Darling'\ntitle: |\n Empirical complexity of\\\n comparator-based nearest neighbor descent\n---\n\n[**Keywords:** similarity search, nearest neighbor, ranking system, triplet comparison, comparator, random graph, proximity graph, expander graph\\\n**MSC class:** Primary: 90C35; Secondary: 06A07 ]{}\n\nIntroduction\n============\n\nContext\n-------\n\nBaron and Darling [@bar] provided a theoretical" +"---\nabstract: 'We solve for the light-front wave functions (LFWFs) of the physical photon from the eigenvectors of the light-front quantum electrodynamics (QED) Hamiltonian with the aim to determine its bare photon and electron-positron Fock components. We then employ the resulting LFWFs to compute the transverse momentum dependent parton distributions (TMDs) and the generalized parton distributions (GPDs) of the photon. The TMDs are found to be in excellent agreement with the lowest-order perturbative results calculated using the electron-positron quantum fluctuation of the photon. The GPDs are also consistent with the perturbative calculations.'\naddress:\n- 'Institute for Modern Physics, Chinese Academy of Sciences, Lanzhou-730000, China'\n- 'School of Nuclear Science and Technology, University of Chinese Academy of Sciences, Beijing 100049, China'\n- 'CAS Key Laboratory of High Precision Nuclear Spectroscopy, Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000, China'\n- 'Department of Physics, Indian Institute of Technology Bombay, Powai, Mumbai 400076, India'\n- 'Department of Physics and Astronomy, Iowa State University, Ames, Iowa 50011, USA'\nauthor:\n- Sreeraj Nair\n- Chandan Mondal\n- Xingbo Zhao\n- Asmita Mukherjee\n- 'James P. Vary'\n- |\n \\\n (BLFQ Collaboration)\ntitle: 'Basis light-front quantization approach to photon'\n---\n\nLight-front quantization ,Quantum electrodynamics" +"---\nabstract: 'We report on the internal distribution of star formation efficiency in IRAS\u00a008339+6517 (hereafter IRAS08), using $\\sim$200\u00a0pc resolution CO(2-1) observations from NOEMA. The molecular gas depletion time changes by 2 orders-of-magnitude from disk-like values in the outer parts to less than 10$^8$\u00a0yr inside the half-light radius. This translates to a star formation efficiency per free-fall time that also changes by 2 orders-of-magnitude, reaching 50-100%, different than local spiral galaxies and typical assumption of constant, low star formation efficiencies. Our target is a compact, massive disk galaxy that has SFR 10$\\times$ above the $z=0$ main-sequence; Toomre $Q\\approx0.5-0.7$ and high gas velocity dispersion ($\\sigma_{mol}\\approx 25$\u00a0km\u00a0s$^{-1}$). We find that IRAS08 is similar to other rotating, starburst galaxies from the literature in the resolved $\\Sigma_{SFR}\\propto\\Sigma_{mol}^N$ relation. By combining resolved literature studies we find that distance from the main-sequence is a strong indicator of the Kennicutt-Schmidt powerlaw slope, with slopes of $N\\approx1.6$ for starbursts from 100-10$^4$\u00a0M$_{\\odot}$\u00a0pc$^{-2}$. Our target is consistent with a scenario in which violent disk instabilities drive rapid inflows of gas. 
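The two quantities discussed in the IRAS08 record follow from standard definitions: $t_{\rm dep}=\Sigma_{\rm mol}/\Sigma_{\rm SFR}$ and $\epsilon_{\rm ff}=t_{\rm ff}/t_{\rm dep}$, where estimating $t_{\rm ff}$ from a surface density requires an assumed disc scale height. An illustrative computation (the input values are arbitrary, not the IRAS08 measurements):

```python
import numpy as np

G = 4.30e-3          # gravitational constant in pc (km/s)^2 / Msun

def depletion_time(sigma_mol, sigma_sfr):
    """t_dep in yr for Sigma_mol [Msun/pc^2] and Sigma_SFR [Msun/yr/kpc^2]."""
    return sigma_mol / (sigma_sfr / 1e6)     # 1 kpc^2 = 1e6 pc^2

def eff_per_freefall(sigma_mol, sigma_sfr, h_pc=100.0):
    """epsilon_ff = t_ff / t_dep, with rho = Sigma/(2h) for an assumed
    scale height h (an order-of-magnitude convention, not the paper's)."""
    rho = sigma_mol / (2 * h_pc)                       # Msun / pc^3
    t_ff = np.sqrt(3 * np.pi / (32 * G * rho))         # in pc / (km/s)
    t_ff_yr = t_ff * 0.978e6                           # 1 pc/(km/s) ~ 0.978 Myr
    return t_ff_yr / depletion_time(sigma_mol, sigma_sfr)

print(f"eps_ff = {eff_per_freefall(1e3, 10.0):.3f}")   # toy central-disk values
```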
It has low values of Toomre-$Q$, and also at all radii the inflow timescale of the gas is less than the depletion time, which" +"---\nabstract: 'Bennet-Brassard 1984 (BB84) protocol, we optimize the ratio of the choice of two bases, the bit basis and the phase basis by using the second order expansion for the length of the generation keys under the coherent attack. This optimization addresses the trade-off between the loss of transmitted bits due to the disagreement of their bases and the estimation error of the error rate in the phase basis. Then, we derive the optimum ratio and the optimum length of the generation keys with the second order asymptotics. Surprisingly, the second order has the order $n^{\\frac{3}{4}}$, which is much larger than the second order $n^{\\frac{1}{2}}$ in the conventional setting when $n$ is the number of quantum communication. This fact shows that our setting has much larger importance for the second order analysis than the conventional problem. To illustrate this importance, we numerically plot the effect of the second order correction.'\nauthor:\n- Masahito Hayashi\ntitle: 'Optimum ratio between two bases in Bennett-Brassard 1984 protocol with second order analysis'\n---\n\nIntroduction\n============\n\nBennet-Brassard 1984 (BB84) protocol [@BB84] is a standard protocol for quantum key distribution. The key point of this protocol is the evaluation of the amount of information leakage" +"---\nabstract: 'Quantum communications require efficient implementations of quantum state transportation with high fidelity. Here, we consider the transport of entanglement along a chain of qubits. A series of SWAP operations involving successive pairs of qubits can transport entanglement along the chain. We report that the fidelity of the abovementioned gate has a maximum value corresponding to an optimum value of the drive amplitude in the presence of drive-induced decoherence. To incorporate environmental effect, we use a previously reported fluctuation-regulated quantum master equation \\[A. Chakrabarti and R. Bhattacharyya, Phys. Rev. A 97, 063837 (2018)\\]. The existence of an optimum drive amplitude implies that these series of SWAP operations on open quantum systems would have an optimal transfer speed of the entanglement.'\nauthor:\n- Gourab Das\n- Rangeet Bhattacharyya\nbibliography:\n- 'references.bib'\ntitle: Efficient transfer of entanglement along a qubit chain in the presence of thermal fluctuations\n---\n\nIntroduction\n============\n\nTransfer of coherences and entanglements along a qubit-network is one of the important aspects of the quantum information processing. We investigate the efficiency of the transfer of coherences and entanglements along the spin-chain in a dissipative environment. To this end, we use a fluctuation-regulated quantum master equation (frQME) to include the" +"---\nabstract: 'Due to the black-box nature of deep learning models, there is a recent development of solutions for visual explanations of CNNs. Given the high cost of user studies, metrics are necessary to compare and evaluate these different methods. In this paper, we critically analyze the Deletion Area Under Curve (DAUC) and Insertion Area Under Curve (IAUC) metrics proposed by Petsiuk et al. (2018). These metrics were designed to evaluate the faithfulness of saliency maps generated by generic methods such as Grad-CAM or RISE. First, we show that the actual saliency score values given by the saliency map are ignored as only the ranking of the scores is taken into account. 
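A minimal version of the deletion curve makes the ranking-only point concrete: any order-preserving rescaling of the saliency values leaves the curve, and hence DAUC, unchanged, because only the argsort of the scores is used. A sketch (the model and image are toys, not the cited implementation):

```python
import numpy as np

def deletion_auc(model, x, saliency, steps=50, baseline=0.0):
    """DAUC sketch: remove pixels in decreasing order of saliency and
    integrate the model's class score. Note that only the *ranking*
    of the saliency values enters, which is the issue raised above."""
    order = np.argsort(saliency.ravel())[::-1]
    xs, scores = x.copy().ravel(), []
    chunk = max(1, len(order) // steps)
    for k in range(0, len(order) + 1, chunk):
        xs[order[:k]] = baseline
        scores.append(model(xs.reshape(x.shape)))
    return np.trapz(scores, dx=1.0 / (len(scores) - 1))

# toy usage: the "model" scores the mean of the top-left quadrant
model = lambda im: im[:8, :8].mean()
x = np.random.rand(16, 16)
sal = np.zeros_like(x); sal[:8, :8] = 1.0
print("DAUC:", deletion_auc(model, x, sal))
```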
This shows that these metrics are insufficient by themselves, as the visual appearance of a saliency map can change significantly without the ranking of the scores being modified. Secondly, we argue that during the computation of DAUC and IAUC, the model is presented with images that are out of the training distribution which might lead to unexpected behavior of the model being explained. To complement DAUC/IAUC, we propose new metrics that quantify the sparsity and the calibration of explanation methods, two previously unstudied properties. Finally, we give general" +"---\nabstract: 'We show that the fibrant objects in the minimal model structure on the category of simplicial sets are characterized by a lifting condition with respect to maps which resemble the horn inclusions that define Kan complexes.'\naddress: 'Department of Mathematics, University of Virginia, Charlottesville, VA 22904'\nauthor:\n- Matt Feller\nbibliography:\n- 'Minimal.bib'\ntitle: 'A horn-like characterization of the fibrant objects in the minimal model structure on simplicial sets'\n---\n\n[^1]\n\nIntroduction\n============\n\nModel categories play a crucial role in modern homotopy theory, underpinning much of the current work in higher categories. A model category consists of a category with a chosen model structure, which amounts to a choice of \u201chomotopy theory\u201d for the given category. More precisely, a model structure is a choice of weak equivalences, cofibrations, and fibrations satisfying certain axioms which abstract the behavior of the analogous classes of maps from the ordinary homotopy theory of topological spaces. (See [@Cisinski:Cambridge Def.\u00a02.2.1] for an explicit definition of model categories.)\n\nOften times, one can model different homotopy theories with a single category by constructing multiple model structures on that category. A major example is ${\\mathsf{sSet}}$, the category of simplicial sets, which admits the Kan-Quillen model structure" +"**On the algorithm of best approximation\\\nby low rank matrices in the Chebyshev norm.[^1]$^)$**\n\n[**S. Morozov$^{1*}$, N. Zamarashkin$^{1**}$, E. Tyrtyshnikov$^{1***}$**]{}\\\n\\\n\n[The low-rank matrix approximation problem is ubiquitous in computational mathematics. Traditionally, this problem is solved in spectral or Frobenius norms, where the accuracy of the approximation is related to the rate of decrease of the singular values of the matrix. However, recent results indicate that this requirement is not necessary for other norms. In this paper, we propose a method for solving the low-rank approximation problem in the Chebyshev norm, which is capable of efficiently constructing accurate approximations for matrices, whose singular values do not decrease or decrease slowly.]{}\n\n[**Keywords:**]{} [Low-rank matrix approximation, Remez algorithm, Chebyshev approximation]{}.\n\n[1. INTRODUCTION]{}\n\nLow-rank matrices are ubiquitous in science. They serve as a tool for low-parametric matrix approximation in numerous applications such as computational mathematics [@bebendorf2008means], computational fluid dynamics [@son2014data], recommender systems [@he2016fast], machine learning [@yang2018oboe], and others.\n\nHowever, one typically assumes that the singular values of the matrix that needs to be approximated decay rapidly. 
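A quick numpy experiment illustrates the regime this record targets: for a random sign matrix the spectrum decays slowly, and SVD truncation, although optimal in the Frobenius and spectral norms, leaves the Chebyshev (max-abs) error large. This only demonstrates the problem; the record's Remez-type algorithm is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(3)
A = np.sign(rng.standard_normal((200, 200)))   # +/-1 matrix: slow spectral decay

U, s, Vt = np.linalg.svd(A)
print(f"largest/median singular value: {s[0] / s[100]:.1f}")   # O(1): no fast decay

for r in (5, 20, 80):
    Ar = (U[:, :r] * s[:r]) @ Vt[:r]           # Eckart-Young optimum (Frobenius)
    cheb = np.abs(A - Ar).max()
    frob = np.linalg.norm(A - Ar) / np.linalg.norm(A)
    print(f"rank {r:2d}: Chebyshev error {cheb:.2f}, relative Frobenius {frob:.2f}")
```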
This assumption is made, primarily, because there are efficient algorithms for close-to-optimal low-rank approximation in unitarily invariant norms [@GTZ-1997; @HMT-2011; @OZ-2018].\n\nOn the other hand, in modern" +"---\nabstract: 'Network Intrusion Detection Systems (NIDSs) are widely regarded as efficient tools for securing in-vehicle networks against diverse cyberattacks. However, since cyberattacks are always evolving, signature-based intrusion detection systems are no longer adopted. An alternative solution can be the deployment of deep learning based intrusion detection system which play an important role in detecting unknown attack patterns in network traffic. Hence, in this paper, we compare the performance of different unsupervised deep and machine learning based anomaly detection algorithms, for real-time detection of anomalies on the Audio Video Transport Protocol (AVTP), an application layer protocol implemented in the recent Automotive Ethernet based in-vehicle network. The numerical results, conducted on the recently published \u201cAutomotive Ethernet Intrusion Dataset\u201d, show that deep learning models significantly outperfom other state-of-the art traditional anomaly detection models in machine learning under different experimental settings.'\nauthor:\n- |\n Natasha Alkhatib, Maria Mushtaq, Hadi Ghauch, Jean-Luc Danger\\\n *T\u00e9l\u00e9com Paris, IP Paris, Palaiseau, France*\\\n [ ]{}\ntitle: Unsupervised Network Intrusion Detection System for AVTP in Automotive Ethernet Networks\n---\n\nAVTP , Anomaly Detection, Automotive Ethernet, Neural Network, In-Vehicle Network\n\nIntroduction\n============\n\nSince the advent of powerful electronic components such as sensors and actuators as well as a robust in-vehicle" +"---\nabstract: 'Topological quantum state described by the global invariant has been extensively studied in theory and experiment. In this letter, we investigate the relationship between *Zitterbewegung* and the topology of systems that reflect the properties of the local and whole energy bands, respectively. We generalize the usual two-band effective Hamiltonian to characterize the topological phase transition of the spin-$J$ topological insulator. By studying *Zitterbewegung* dynamics before and after topological phase transition, we find that the direction of quasiparticles\u2019 oscillation can well reflect topological properties. Furthermore, we develop a quantitative calculation formula for the topological invariant in the spin-$J$ Chern insulator and give the selection rule of the corresponding dynamics. Finally, we demonstrate that our theory is valid in different topological systems. The topological invariant can be represented by local dynamical properties of the high-symmetry points in the first Brillouin zone, which provides a new measurement method from the dynamical perspective.'\nauthor:\n- Xin Shen\n- 'Yan-Qing Zhu'\n- Zhi Li\nbibliography:\n- 'ref.bib'\ntitle: 'Link between *Zitterbewegung* and topological phase transition'\n---\n\n*Introduction.*\u2014As a state of matter beyond the conventional symmetry breaking paradigm in terms of the classification of phases, topological quantum state has long been a hot topic" +"---\nabstract: 'Distributed machine learning (DML) over time-varying networks can be an enabler for emerging decentralized ML applications such as autonomous driving and drone fleeting. 
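A common unsupervised baseline of the kind compared in the AVTP record above is an autoencoder thresholded on reconstruction error and trained on benign traffic only. The feature encoding, architecture, and threshold percentile below are assumptions, not the paper's configuration:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)

# Placeholder byte-level features for benign AVTP frames (assumed encoding).
benign = rng.normal(0.0, 1.0, (5000, 58))
model = MLPRegressor(hidden_layer_sizes=(32, 8, 32), max_iter=300)
model.fit(benign, benign)                      # autoencoder: reconstruct input

def score(x):                                  # per-frame reconstruction error
    return np.mean((model.predict(x) - x) ** 2, axis=1)

threshold = np.percentile(score(benign), 99)   # fit on benign traffic only
attack = benign[:100] + rng.normal(0, 3, (100, 58))  # perturbed frames as "attacks"
print("flagged fraction:", (score(attack) > threshold).mean())
```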
However, the commonly used weighted arithmetic mean model aggregation function in existing DML systems can result in high model loss, low model accuracy, and slow convergence speed over time-varying networks. To address this issue, in this paper, we propose a novel non-linear class of model aggregation functions to achieve efficient DML over time-varying networks. Instead of taking a linear aggregation of neighboring models as most existing studies do, our mechanism uses a nonlinear aggregation, a weighted power-$p$ mean (WPM) where $p$ is a positive integer, as the aggregation function of local models from neighbors. The subsequent optimizing steps are taken using mirror descent defined by a Bregman divergence that maintains convergence to optimality. In this paper, we analyze properties of the WPM and rigorously prove convergence properties of our aggregation mechanism. Additionally, through extensive experiments, we show that when $p>1$, our design significantly improves the convergence speed of the model and the scalability of DML under time-varying networks compared with arithmetic mean aggregation functions, with little additional computation overhead.'\nauthor:\n- \n- \n- \n- \n-" +"---\nabstract: 'These notes provide a student-friendly introduction to the theory of gravitational waves in full, non-linear general relativity (GR). We aim for a balance between physical intuition and mathematical rigor and cover topics such as the Newman-Penrose formalism, electromagnetic waves, asymptotically Minkowski spacetimes, the peeling theorem, the universal structure of null infinity, the Bondi-Metzner-Sachs group, and the definition of radiative modes in linear as well as in non-linear GR. Many exercises and some explicitly calculated examples complement the abstract theory and are designed to help students build up their intuition and see the mathematical machinery at work.'\nauthor:\n- 'Fabio D\u2019Ambrosio$^{\\textsf{, }}$'\n- 'Shaun D.\u00a0B. Fell$^{\\textsf{, }}$'\n- 'Lavinia Heisenberg$^{\\textsf{, }}$'\n- 'David Maibach$^{\\textsf{, }}$'\n- 'Stefan Zentarra$^{\\textsf{, }}$'\n- 'Jann Zosso$^{\\textsf{, }}$'\nbibliography:\n- 'Bibliography.bib'\ntitle: ' **Gravitational Waves in Full, Non-Linear General Relativity**'\n---\n\nPreface {#preface .unnumbered}\n=======\n\nThese notes are based on a lecture series by Prof. Abhay Ashtekar, which can be found on the YouTube channel of the Institute for Gravitation and the Cosmos at Penn State\u00a0[@Ashtekar:2019YT].\n\nIn 2021, the authors of these notes founded the *Gravitational Waves Working Group* at ETH Zurich, with the purpose of studying and discussing recent advances in" +"---\nabstract: 'We investigate the potential of fusing human examiner decisions for the task of digital face manipulation detection. To this end, various decision fusion methods are proposed incorporating the examiners\u2019 decision confidence, experience level, and their time to take a decision. Conducted experiments are based on a psychophysical evaluation of digital face image manipulation detection capabilities of humans in which different manipulation techniques were applied, face morphing, face swapping and retouching. The decisions of 223 participants were fused to simulate crowds of up to seven human examiners. 
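The weighted power-$p$ mean aggregation proposed in the DML record above reduces to the usual weighted arithmetic mean at $p=1$. A coordinate-wise sketch for nonnegative inputs (the record additionally couples the aggregation with mirror descent and a Bregman divergence, which is omitted here):

```python
import numpy as np

def weighted_power_mean(models, weights, p=2):
    """WPM_p(x; w) = (sum_i w_i * x_i**p)**(1/p), coordinate-wise.

    p = 1 recovers the weighted arithmetic mean; larger p biases the
    aggregate toward larger coordinate values. Shown for nonnegative
    inputs only (sign handling is beyond this sketch)."""
    X = np.stack(models)                        # (num_neighbors, num_params)
    w = np.asarray(weights, float)[:, None]
    w = w / w.sum()
    return (w * X ** p).sum(axis=0) ** (1.0 / p)

models = [np.array([0.2, 0.9]), np.array([0.4, 0.1])]
print(weighted_power_mean(models, [0.5, 0.5], p=1))   # arithmetic mean
print(weighted_power_mean(models, [0.5, 0.5], p=3))   # biased to larger values
```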
Experimental results reveal that (1) despite the moderate detection performance achieved by single human examiners, a high accuracy can be obtained through decision fusion and (2) a weighted fusion which takes the examiners\u2019 decision confidence into account yields the most competitive detection performance.'\nauthor:\n- \nbibliography:\n- 'references.bib'\ntitle: 'Crowd\u2013powered Face Manipulation Detection: Fusing Human Examiner Decisions'\n---\n\nImage forensics, manipulation detection, information fusion, human examiners\n\nIntroduction\n============\n\nDigital face manipulation [@2021_Book_DigitalFaceManipulation] has rapidly advanced in the recent past and numerous facial alteration methods have been proposed, face swapping or morphing. Digitally manipulated face images can be misused for malicious purposes, document fraud or spreading of misinformation. Hence, harms caused by face" +"---\nabstract: 'In this paper, we consider the numerical approximation for a diffuse interface model of the two-phase incompressible inductionless magnetohydrodynamics problem. This model consists of Cahn-Hilliard equations, Navier-Stokes equations and Poisson equation. We propose a linear and decoupled finite element method to solve this highly nonlinear and multi-physics system. For the time variable, the discretization is a combination of first order Euler semi-implicit scheme, several first order stabilization terms and implicit-explicit treatments for coupling terms. For the space variables, we adopt the finite element discretization, especially, we approximate the current density and electric potential by inf-sup stable face-volume mixed finite element pairs. With these techniques, the scheme only involves a sequence of decoupled linear equations to solve at each time step. We show that the scheme is provably mass-conservative, charge-conservative and unconditionally energy stable. Numerical experiments are performed to illustrate the features, accuracy and efficiency of the proposed scheme.'\naddress:\n- 'LSEC, NCMIS, Academy of Mathematics and Systems Science, Chinese Academy of Sciences; School of Mathematical Science, University of Chinese Academy of Sciences, Beijing 100190, China.'\n- 'Henan Academy of Big Data, Zhengzhou University, Zhengzhou 450001, China.'\nauthor:\n- Xiaorong Wang\n- Xiaodi Zhang\nbibliography:\n- '1203.bib'\ntitle: 'Decoupled," +"---\nabstract: 'We give a general construction of extremal K\u00e4hler metrics on the total space of certain holomorphic submersions, extending results of Dervan-Sektnan, Fine, and Hong. We consider submersions whose fibres admit a degeneration to K\u00e4hler manifolds with constant scalar curvature, in a way that is compatible with the fibration structure. Thus we allow fibres that are K-semistable, rather than K-polystable; this is crucial to moduli theory. On these fibrations we phrase a partial differential equation whose solutions, called *optimal symplectic connections*, represent a canonical choice of a relatively K\u00e4hler metric. We expect this to be the most general construction of a canonical relatively K\u00e4hler metric provided all input is smooth. 
We use the notion of an optimal symplectic connection and the geometry related to it to construct Kähler metrics with constant scalar curvature and extremal metrics on the total space, in adiabatic classes.'\nauthor:\n- Annamaria Ortu\nbibliography:\n- 'OSCbibliography.bib'\ntitle: Optimal symplectic connections and deformations of holomorphic submersions\n---\n\nIntroduction\n============\n\nLet $\pi: (X,H) \to (B,L)$ be a holomorphic submersion of a relatively polarised compact Kähler manifold onto a compact Kähler base. We address the problem of finding conditions under which the total space $X$ admits an extremal" +"---\nabstract: 'We report a doping study directed to intentionally induce disorder in PdTe$_{2}$ by the isoelectronic substitution of Pt. Two single-crystalline batches Pd$_{1-x}$Pt$_x$Te$_2$ have been prepared with nominal doping concentrations $x=0.05$ and $x=0.10$. Sample characterization by energy dispersive x-ray spectroscopy (EDX) revealed Pt did not dissolve homogeneously in the crystals. For the nominal value $x=0.10$ small single crystals cut from the batch appeared to have $x=0.09$, as well as the non stoichiometric composition Pd$_{0.97}$Pt$_{<0.004}$Te$_{2.03}$. Magnetic and heat capacity measurements demonstrate a transition from type-I to type-II superconducting behavior upon increasing disorder. From transport measurements we calculate that a residual resistivity $\rho_0 = 1.4~\mu\Omega$cm suffices to turn PdTe$_{2}$ into a superconductor of the second kind.'\nauthor:\n- 'M. V. Salis'\n- 'J. P. Lorenz'\n- 'Y. K. Huang'\n- 'A. de Visser'\ntitle: 'Disorder induced transition from type-I to type-II superconductivity in the Dirac semimetal PdTe$_{2}$'\n---\n\nintroduction\n============\n\nRecently, interest in transition metal dichalcogenides has increased significantly due to their extraordinary electronic properties. Notably, the opportunity to realize novel quantum states arising from the topologically non-trivial band structure, as found by density functional theory [@Soluyanov2015; @Huang2016; @Yan2017; @Bahramy2018], attracts much attention. The formation of both type-I and type-II bulk Dirac" +"---\nabstract: 'Tensor numerical methods, based on the rank-structured tensor representation of $d$-variate functions and operators discretized on large $n^{\otimes d }$ grids, are designed to provide $O(dn)$ complexity of numerical calculations contrary to $O(n^d)$ scaling by conventional grid-based methods. However, multiple tensor operations may lead to enormous increase in the tensor ranks (curse of ranks) of the target data, making calculation intractable. Therefore one of the most important steps in tensor calculations is the robust and efficient rank reduction procedure which should be performed many times in the course of various tensor transforms in multidimensional operator and function calculus. The rank reduction scheme based on the Reduced Higher Order SVD (RHOSVD) introduced in [@KhKh3:08] played a significant role in the development of tensor numerical methods. Here, we briefly survey the essentials of the RHOSVD method and then focus on some new theoretical and computational aspects of the RHOSVD demonstrating that this rank reduction technique constitutes the basic ingredient in tensor computations for real-life problems. In particular, the stability analysis of RHOSVD is presented. We recall the performance of the RHOSVD in tensor-based calculation of the Hartree potential in computational quantum chemistry. 
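The SVD-based rank reduction surveyed in the RHOSVD abstract above operates on the side matrices of a canonical-format tensor; as a simplified, dense stand-in for that idea, a truncated HOSVD can be sketched as follows (all names are ours):

```python
import numpy as np

def unfold(T, mode):
    # mode-k matricization: rows indexed by mode k, columns by the rest
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd_truncate(T, ranks):
    # Per-mode truncated SVDs give a Tucker approximation of T; RHOSVD
    # applies the same truncation to the side matrices of a tensor given
    # in canonical format, which avoids forming these dense unfoldings.
    factors = [np.linalg.svd(unfold(T, k), full_matrices=False)[0][:, :r]
               for k, r in enumerate(ranks)]
    core = T
    for U in factors:
        core = np.tensordot(core, U, axes=(0, 0))  # contract leading mode
    return core, factors

core, factors = hosvd_truncate(np.random.rand(8, 9, 10), (3, 3, 3))
```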
We introduce the multilinear algebra of tensors represented in" +"---\nabstract: 'The Eridanus II (EriII) ‘ultra-faint’ dwarf has a large ($15\,$pc) and low mass ($4.3\times10^3$M$_\odot$) star cluster (SC) offset from its centre by $23\pm3$pc in projection. Its size and offset are naturally explained if EriII has a central dark matter core, but such a core may be challenging to explain in a $\Lambda$CDM cosmology. In this paper, we revisit the survival and evolution of EriII’s SC, focussing for the first time on its puzzlingly large ellipticity ($0.31^{+0.05}_{-0.06}$). We perform a suite of 960 direct $N$-body simulations of SCs, orbiting within a range of spherical background potentials fit to ultra-faint dwarf (UFD) galaxy simulations. We find only two scenarios that come close to explaining EriII’s SC. In the first, EriII has a low density dark matter core (of size $\sim70\,\text{pc}$ and density $\lesssim2\times10^8\,\text{M}_{\odot}\,\text{kpc}^{-3}$). In this model, the high ellipticity of EriII’s SC is set at birth, with the lack of tidal forces in the core allowing its ellipticity to remain frozen in for long times. In the second, EriII’s SC orbits in a partial core, with its high ellipticity owing to its imminent tidal destruction. However, this latter model struggles to reproduce the large size of EriII’s SC, and it" +"---\nabstract: 'We study the quantum dynamics of a one-dimensional SU($3$)-symmetric system of cold atoms in the presence of two-body losses. We exploit the representation theory of SU($3$), the so-called eightfold way, as a scheme to organize the dark states of the dissipative dynamics in terms of generalized Dicke states and show how they are dynamically approached, both in the weakly- and strongly-interacting and dissipative regimes. Our results are relevant for a wide class of alkaline-earth(-like) gases experiments, paving the way to the dissipative preparation and exploitation of generalized Dicke states.'\nauthor:\n- Lorenzo Rosso\n- Leonardo Mazza\n- Alberto Biella\nbibliography:\n- 'SUNdiss.bib'\ntitle: '[Eightfold way to dark states in SU($3$) cold gases with two-body losses]{}'\n---\n\n**Introduction –** Ultracold atomic gases represent a clean and flexible playground to study quantum many-body physics, at equilibrium or in dynamical settings [@LangenRev_2015; @Gross2017; @Schafer2020]. Cold-atom experiments usually feature a high degree of control over system parameters and allow for an almost perfect decoupling from the surrounding environment. However, despite the tremendous experimental progress, perfect isolation has never been reached, for instance because of particle losses, causing energy relaxation and decoherence phenomena [@Zurek_2003]. On one hand, this fact introduces a" +"---\nabstract: 'Fog computing has emerged as a new paradigm in mobile network communications, aiming to equip the edge of the network with the computing and storing capabilities to deal with the huge amount of data and processing needs generated by the users’ devices and sensors. Optimizing the assignment of users to fogs is, however, still an open issue. In this paper, we formulate the problem of users-fogs association as a matching game with minimum and maximum quota constraints, and propose a Multi-Stage Differed Acceptance (MSDA) in order to balance the use of fogs resources and offer a better response time for users. 
Simulation results show that the proposed model, compared to a baseline matching of users, achieves lower delays for users.'\nauthor:\n- \ntitle: 'Matching-Game for User-Fog Assignment'\n---\n\nFog computing, Computing tasks offloading, Minimum/Maximum quota, Users assignment.\n\nIntroduction\n============\n\nIn facing the challenges associated with huge data processing and storage, cloud computing is now a mature technology that provides interesting features such as fault tolerance and elasticity [@Rachedi; @Azizian1]. A new model has become possible, where resource-limited devices, especially mobile ones, can move computationally-intensive tasks to the cloud, letting devices be merely used as interfaces" +"---\nabstract: 'We study the ground state phases of interacting bosons in the presence of a 2D Aubry-André potential. By using a mean-field percolation analysis, we focus on several superlattice and quasicrystalline regimes of the 2D Aubry-André model, including generalisations that account for a tilting or skewing of the potential. We show that barriers to the onset of macroscopic phases naturally arise from weakly modulated domains in the 2D Aubry-André model. This leads to the formation of mixed phases, in which the macroscopic properties are dominated by a minority of the system. The phase diagrams then exhibit substantially different features when compared against crystalline systems, including a lobe-like or wave-like appearance of the Bose glass, sharp extrusions and extended domains with weak percolation. By studying the 2D Aubry-André model across multiple regimes, we have shown that the unique properties of mixed phases are not distinct to a small set of parameters.'\naddress:\n- '$^1$ SUPA, Institute of Photonics and Quantum Sciences, Heriot-Watt University, Edinburgh, EH14 4AS, UK'\n- '$^2$ Department of Physics, SUPA and University of Strathclyde, Glasgow G4 0NG, United Kingdom'\nauthor:\n- 'Dean Johnstone$^1$, Patrik Öhberg$^1$ and Callum W. Duncan$^{2}$'\ntitle: 'Barriers to Macroscopic Superfluidity and Insulation" +"---\nabstract: 'Parallel applications often rely on work stealing schedulers in combination with fine-grained tasking to achieve high performance and scalability. However, reducing the total energy consumption in the context of work stealing runtimes is still challenging, particularly when using asymmetric architectures with different types of CPU cores. A common approach for energy savings involves dynamic voltage and frequency scaling (DVFS) wherein throttling is carried out based on factors like task parallelism, stealing relations and task criticality. This paper makes the following observations: (i) leveraging DVFS on a per-task basis is impractical when using fine-grained tasking and in environments with cluster/chip-level DVFS; (ii) task moldability, wherein a single task can execute on multiple threads/cores via work-sharing, can help to reduce energy consumption; and (iii) mismatch between tasks and assigned resources (i.e. core type and number of cores) can detrimentally impact energy consumption. In this paper, we propose ERASE (EneRgy Aware SchedulEr), an intra-application task scheduler on top of work stealing runtimes that aims to reduce the total energy consumption of parallel applications. It achieves energy savings by guiding scheduling decisions based on per-task energy consumption predictions of different resource configurations. 
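The deferred-acceptance loop underlying the multi-stage mechanism in the user-fog matching abstract above can be sketched with maximum quotas only (the paper additionally enforces minimum quotas across its stages; all names are illustrative):

```python
def deferred_acceptance(user_prefs, fog_rank, max_quota):
    # user_prefs: {user: [fog, ...]} in decreasing preference order
    # fog_rank:   {fog: {user: rank}} with lower rank = more preferred
    # max_quota:  {fog: capacity}
    matched = {f: [] for f in max_quota}
    nxt = {u: 0 for u in user_prefs}        # next fog each user proposes to
    free = list(user_prefs)
    while free:
        u = free.pop()
        if nxt[u] >= len(user_prefs[u]):
            continue                         # u has exhausted its list
        f = user_prefs[u][nxt[u]]
        nxt[u] += 1
        matched[f].append(u)
        matched[f].sort(key=lambda x: fog_rank[f][x])
        if len(matched[f]) > max_quota[f]:
            free.append(matched[f].pop())    # reject least-preferred user
    return matched
```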
In addition, ERASE is capable of adapting to both given" +"---\nabstract: 'We investigate the early time development of the anisotropic transverse flow and spatial eccentricities of a fireball with various particle-based transport approaches using a fixed initial condition. In numerical simulations ranging from the quasi-collisionless case to the hydrodynamic regime, we find that the onset of $v_n$ and of related measures of anisotropic flow can be described with a simple power-law ansatz, with an exponent that depends on the amount of rescatterings in the system. In the few-rescatterings regime we perform semi-analytical calculations, based on a systematic expansion in powers of time and the cross section, which can reproduce the numerical findings.'\nauthor:\n- Nicolas Borghini\n- Marc Borrell\n- Hendrik Roch\ntitle: Early time behavior of spatial and momentum anisotropies in kinetic theory across different Knudsen numbers\n---\n\nIntroduction {#s:intro}\n============\n\nCollisions of heavy nuclei at high energies create a highly dynamical system, which develops some collective behavior over a timescale of order 10fm$/c$. The emission pattern of particles in the final state appears to be strongly correlated to the initial system geometry determined by the overlap region of the colliding nuclei. In particular, initial asymmetries in the geometry are converted into transverse momentum space anisotropies, referred to" +"---\nabstract: 'These proceedings discuss recent jet measurements by the STAR experiment at RHIC to study jet substructure in [$p$+$p$]{} and jet quenching in Au+Au collisions at [$\sqrt{s_\mathrm{NN}}$]{} = 200 GeV. Furthermore, STAR’s future plans for precision jet measurements with the upcoming data-taking periods in 2023-2025 are presented.'\naddress: |\n Institute of Frontier and Interdisciplinary Science, Shandong University, Qingdao, Shandong, 266237, China\\\n Key Laboratory of Particle Physics and Particle Irradiation, Shandong University, Qingdao, Shandong, 266237, China ,[^1]\\\nauthor:\n- 'Nihar Ranjan Sahoo (for the STAR collaboration)'\ntitle: An overview of recent STAR jet measurements\n---\n\nIntroduction\n============\n\nJets in [$p$+$p$]{} and heavy-ion collisions arise from hard-scattered (high-$Q^{2}$) quarks and gluons of the incoming beams. In vacuum, a highly virtual parton generated in such an interaction comes on-shell by radiating gluons, resulting in a jet shower. Studying jet properties in [$p$+$p$]{} collisions provides the opportunity to explore the perturbative and non-perturbative QCD effects in vacuum. In addition, the comparison between data and different QCD-based Monte Carlo (MC) event generators helps to constrain model parameters. In heavy-ion collisions, a highly energetic parton—while traversing through the Quark-Gluon Plasma (QGP)—interacts with the colored medium and loses its energy via medium-induced gluon radiation. This phenomenon is" +"---\nauthor:\n- 'Håkan Carlsson,  Isaac Skog,  Gustaf Hendeby,  and Joakim Jaldén,  [^1][^2][^3][^4] [^5]'\nbibliography:\n- 'ref.bib'\ntitle: Inertial Navigation Using an Inertial Sensor Array\n---\n\nIntroduction {#sec:introduction}\n============\n\nInertial navigation is the process of estimating the traveled distance and orientation of an object by time-integration of measured velocities and accelerations [@Titterton2004]. 
Due to the integrative nature of inertial navigation, the estimated position and orientation inherently accumulate errors over time. This unbounded accumulation of position and orientation errors can only be limited by including external information from other sensor systems that provide an absolute reference to the environment or by including motion constraints. For instance, such aided inertial navigation has been done using satellite [@Farrell2008], video [@Huang2019], radio [@Angelis2009], LIDAR [@Tang2015], and magnetic field data [@Kok2013]. Otherwise, the position and orientation errors’ growth rate can be reduced by decreasing the measurement errors of the inertial sensors or using additional motion information such as  [@Wahlstroem2021]. Improving the sensor hardware [@King1998] reduces measurement errors, but this typically comes with increasing cost and sensor size. Another approach to reducing the measurement error is to fuse the measurements from a redundant number of inertial sensors to produce a virtual sensor with higher accuracy. In" +"---\nabstract: 'The existence of completely aligned and paired multi-modal neuroimaging data has proved its effectiveness in the diagnosis of brain diseases. However, collecting the full set of well-aligned and paired data is impractical, since the practical difficulties may include high cost, long time acquisition, image corruption, and privacy issues. Previously, the misaligned unpaired neuroimaging data (termed as MUD) are generally treated as noisy labels. However, such noisy label-based methods fail to perform well when the misaligned data are severely distorted. For example, the angle of rotation may differ. In this paper, we propose a novel federated self-supervised learning (FedMed) for brain image synthesis. An affine transform loss (ATL) was formulated to make use of severely distorted images without violating privacy legislation for the hospital. We then introduce a new data augmentation procedure for self-supervised training and feed it into three auxiliary heads, namely auxiliary rotation, auxiliary translation and auxiliary scaling heads. The proposed method demonstrates advanced performance in both the quality of the synthesized results under a severely misaligned and unpaired data setting and better stability than other GAN-based algorithms. The proposed method also reduces the demand for deformable registration while encouraging leveraging of the misaligned and unpaired" +"---\nabstract: 'In personalized Federated Learning, each member of a potentially large set of agents aims to train a model minimizing its loss function averaged over its local data distribution. We study this problem under the lens of stochastic optimization. Specifically, we introduce information-theoretic lower bounds on the number of samples required from all agents to approximately minimize the generalization error of a fixed agent. We then provide strategies matching these lower bounds, in the *all-for-one* and *all-for-all* settings where respectively one or all agents desire to minimize their own local function. 
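For the redundant-sensor fusion mentioned in the inertial navigation introduction above, even the naive average illustrates the accuracy gain of a virtual sensor (white-noise standard deviation drops roughly by the square root of the array size; the numbers below are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
true_rate = 0.1                        # rad/s, constant angular rate
N, T = 32, 10_000                      # gyroscopes in the array, samples
readings = true_rate + rng.normal(0.0, 0.05, size=(N, T))
virtual = readings.mean(axis=0)        # simplest virtual-sensor fusion
print(readings.std(), virtual.std())   # ~0.05 vs ~0.05 / sqrt(32)
```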
Our strategies are based on a *gradient filtering* approach: provided prior knowledge on some notions of distances or discrepancies between local data distributions or functions, a given agent filters and aggregates stochastic gradients received from other agents, in order to achieve an optimal bias-variance trade-off.'\nbibliography:\n- 'biblio.bib'\n---\n\nSample Optimality and *All-for-all* Strategies in Personalized Federated and Collaborative Learning\n\nMathieu Even^1^, Laurent Massouli\u00e9^1,2^ and Kevin Scaman^1^\n\n^1^Inria - D\u00e9partement d\u2019informatique de l\u2019ENS\\\n\n^2^MSR-Inria Joint Centre\\\n\nIntroduction\n============\n\nA central task in Federated Learning [@mcmahan2017fl; @kairouz_advances_2019] is the training of a common model from local data sets held by individual agents. A typical application is when users (*e.g.*" +"---\nabstract: 'Linear non-compact operators are difficult to study because they do not exist in the finite dimensional world. Recently, Math\u00e9 and Hofmann studied the singular values of the compact composition of the non-compact Hausdorff moment operator and the compact integral operator and found credible arguments, but no strict proof, that those singular values fall only slightly faster than those of the integral operator alone. However, the fact that numerically the singular values of the combined operator fall exponentially fast was not mentioned. In this note, we provide the missing numerical results and provide an explanation why the two seemingly contradicting results may both be true.'\nauthor:\n- 'Daniel Gerth[^1]'\ntitle: 'A note on numerical singular values of compositions with non-compact operators'\n---\n\nIntroduction\n============\n\nFor numerical computations one often needs to find a finite dimensional approximation to a real world problem that can be modelled as an infinite dimensional operator equation. A curious case is when a non-compact operator is involved in the modelling. The reason for this is that any linear operator with finite dimensional range (in particular discrete operators used for computations) is necessarily compact. On the other hand, whenever a non-compact operator is paired with a" +"---\nabstract: 'We introduce a model of greenhouse gas emissions due to on-chain activity on Ethereum, focusing on cryptoart. We also estimate the impact of individual transactions on the environment, both before and after the London hard fork. We find that with the current fee mechanism, spending one dollar on transaction fees corresponds to emitting at least the equivalent of 1.305 kilograms of CO~2~. We also describe several techniques to reduce cryptoart emissions, both in the short and long term.'\nauthor:\n- 'Samuele Marro[^1]'\n- 'Luca Donno[^2]'\nbibliography:\n- 'main.bib'\ndate: May 2021\ntitle: 'Green NFTs: A Study on the Environmental Impact of Cryptoart Technologies'\n---\n\n=1\n\nIntroduction\n============\n\nIn the last year, there has been an exponential growth of cryptoart, a new blockchain-based art form. After the publication of several articles that aimed to raise awareness about its environmental impact [@akten2020unreasonable] [@lemercier2021joanie], there has also been a growing interest in developing solutions to reduce greenhouse gas (GHG) emissions due to cryptoart. A range of proposals have been advanced, from reducing gas usage [@pipkin2021here] to switching to Proof of Stake blockchains [@wintermeyer2021climate] to carbon offsets [@kahn2021how]. 
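A bare-bones version of the gradient filtering idea in the personalized-federated-learning abstract above: keep only gradients from agents whose data look close enough to the target agent's, trading bias against variance (the hard threshold rule is a simplification of the paper's weighting, and all names are ours):

```python
import numpy as np

def all_for_one_gradient(grads, discrepancies, radius):
    # grads: stochastic gradients received from all agents, the target
    # agent included with discrepancy 0 so the list is never empty;
    # discrepancies: estimated distribution distances to the target.
    # A larger radius lowers variance at the price of more bias.
    kept = [g for g, d in zip(grads, discrepancies) if d <= radius]
    return np.mean(kept, axis=0)
```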
Others have even asserted that cryptoart has no impact on carbon emissions [@superrare2021no] [@mattei2021should]." +"---\nabstract: 'In recent years, ensemble modeling has been widely employed in space weather to estimate uncertainties in forecasts. We here focus on the ensemble modeling of CME arrival times and arrival velocities using a drag-based model, which is well-suited for this purpose due to its simplicity and low computational cost. Although ensemble techniques have previously been applied to the drag-based model, it is still not clear how to best determine distributions for its input parameters, namely the drag parameter and the solar wind speed. The aim of this work is to evaluate statistical distributions for these model parameters starting from a list of past CME-ICME events. We employ LASCO coronagraph observations to measure initial CME position and speed, and in situ data to associate them with an arrival date and arrival speed. For each event we run a statistical procedure to invert the model equations, producing parameter distributions as output. Our results indicate that the distributions employed in previous works were appropriately selected, even though they were based on restricted samples and heuristic considerations. On the other hand, possible refinements to the current method are also identified, such as the dependence of the drag parameter distribution on the" +"---\nabstract: 'To measure node importance, network scientists employ centrality scores that typically take a microscopic or macroscopic perspective, relying on node features or global network structure. However, traditional centrality measures such as degree centrality, betweenness centrality, or PageRank neglect the community structure found in real-world networks. To study node importance based on network flows from a mesoscopic perspective, we analytically derive a community-aware information-theoretic centrality score based on network flow and the coding principles behind the map equation: map equation centrality. Map equation centrality measures how much further we can compress the network’s modular description by not coding for random walker transitions to the respective node, using an adapted coding scheme and determining node importance from a network flow-based point of view. The information-theoretic centrality measure can be determined from a node’s local network context alone because changes to the coding scheme only affect other nodes in the same module. Map equation centrality is agnostic to the chosen network flow model and allows researchers to select the model that best reflects the dynamics of the process under study. Applied to synthetic networks, we highlight how our approach enables a more fine-grained differentiation between nodes than node-local or network-global measures." +"---\nabstract: |\n Recent research suggests that predictions made by machine-learning models can amplify biases present in the training data. When a model amplifies bias, it makes certain predictions at a higher rate for some groups than expected based on training-data statistics. Mitigating such bias amplification requires a deep understanding of the mechanics in modern machine learning that give rise to that amplification. We perform the first systematic, controlled study into when and how bias amplification occurs. To enable this study, we design a simple image-classification problem in which we can tightly control (synthetic) biases. 
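The drag-based model inverted in the CME abstract above relaxes the CME speed toward the ambient solar wind speed; a forward integration takes a few lines (SI units; the example numbers are assumptions, and the model also admits a closed-form solution):

```python
AU = 1.496e11  # m

def dbm_transit(r0, v0, w, gamma, dt=60.0):
    # Drag-based model: dv/dt = -gamma * (v - w) * |v - w|, with gamma
    # the drag parameter and w the solar wind speed, integrated to 1 au.
    r, v, t = r0, v0, 0.0
    while r < AU:
        v -= gamma * (v - w) * abs(v - w) * dt
        r += v * dt
        t += dt
    return t, v  # transit time (s) and arrival speed (m/s)

# a fast CME: launch at 20 solar radii, 1000 km/s into a 400 km/s wind
t, v = dbm_transit(20 * 6.957e8, 1.0e6, 4.0e5, gamma=2e-10)
print(t / 3600.0, v / 1000.0)  # arrival after a few days, decelerated
```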
Our study of this problem reveals that the strength of bias amplification is correlated with measures such as model accuracy, model capacity, model overconfidence, and amount of training data. We also find that bias amplification can vary greatly during training. Finally, we find that bias amplification may depend on the difficulty of the classification task relative to the difficulty of recognizing group membership: bias amplification appears to occur primarily when it is easier to recognize group membership than class membership. Our results suggest best practices for training machine-learning models that we hope will help pave the way for the development of better mitigation strategies.\n\n Code" +"---\nabstract: 'We consider a stochastic conservation law on the line with solution-dependent diffusivity, a super-linear, sub-quadratic Hamiltonian, and smooth, spatially-homogeneous kick-type random forcing. We show that this Markov process admits a unique ergodic spatially-homogeneous invariant measure for each mean in a non-explicit unbounded set. This generalizes previous work on the stochastic Burgers equation.'\nauthor:\n- 'Theodore D. Drivas[^1] [^2]'\n- 'Alexander Dunlap[^3]'\n- 'Cole Graham[^4]'\n- 'Joonhyun La[^5]'\n- 'Lenya Ryzhik[^6]'\nbibliography:\n- 'burgers.bib'\ntitle: Invariant measures for stochastic conservation laws on the line\n---\n\nIntroduction\n============\n\nWe consider the stochastic conservation law $$\label{eq:uPDE}\n \partial_{t}u = \partial_{x}\big[\kappa(u)\partial_{x}u - H(u) + V(t,x)\big]\n \quad \text{for } t \in {\mathbb{R}}_+,\, x \in {\mathbb{R}}.$$ Here, $\kappa(u)$ is a Hölder continuous nonlinear diffusivity bounded from above and below, $H(u)$ is a sub-quadratic, super-linear Hamiltonian, and $V(t,x)$ is a random, space-stationary noise that is smooth in space and “kick-type” in time. We defer the precise assumptions on $\kappa$, $H$, and $V$ to \[assu:kappa,assu:H,assu:V\], respectively, in \[subsec:Assumptions\] below.\n\nThe stochastic Burgers equation is an important special case of \[eq:uPDE\], corresponding to constant $\kappa$ and $H(u)=u^{2}/2$. Ergodic properties of this equation on the whole line and with various forms of the random noise $V(t,x)$ have been studied" +"---\nabstract: 'The need to analyze information from streams arises in a variety of applications. One of its fundamental research directions is to mine sequential patterns over data streams. Current studies mine series of items based on the presence of the pattern in transactions but pay no attention to the series of itemsets and their multiple occurrences. The patterns over a window of an itemset stream and their multiple occurrences, however, provide additional capability to recognize the essential characteristics of the patterns and the inter-relationships among them that are unidentifiable by the existing presence-based studies. In this paper, we study such a new sequential pattern mining problem and propose a corresponding sequential miner with novel strategies to prune the search space efficiently. Experiments on both real and synthetic data show the utility of our approach.'\nauthor:\n- Thomas Guyet\n- Wenbin Zhang\n- Albert Bifet\nbibliography:\n- 'ref.bib'\ntitle: Incremental Mining of Frequent Serial Episodes Considering Multiple Occurrences\n---\n\nIntroduction\n============\n\nOnline mining of frequent patterns over a sliding window is one of the most important tasks in data stream mining with broad applications. 
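For the serial-episode setting in the stream-mining abstract above, the simplest occurrence notion is repeated leftmost matching over a window of itemsets; a toy counter follows (the paper's multiple-occurrence semantics is richer than this greedy rule):

```python
def greedy_occurrences(window, episode):
    # window: list of itemsets, one per time instant;
    # episode: tuple of items that must occur in this order
    count, matched = 0, 0
    for itemset in window:
        if episode[matched] in itemset:
            matched += 1
            if matched == len(episode):
                count += 1
                matched = 0
    return count

print(greedy_occurrences([{'a'}, {'b', 'c'}, {'a', 'c'}, {'b'}], ('a', 'b')))  # 2
```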
In this case, the data stream is made of items or itemsets that arrive continuously. The aim" +"---\nabstract: 'Currently available quantum computers, so called Noisy Intermediate-Scale Quantum (NISQ) devices, are characterized by a relatively low number of qubits and moderate gate fidelities. In such a scenario, the implementation of quantum error correction is impossible and the performance of those devices is quite modest. In particular, the depth of circuits implementable with reasonably high fidelity is limited, and the minimization of circuit depth is required. Such depths depend on the efficiency of the universal set of gates $\mathcal{S}$ used in computation, and can be bounded using the Solovay-Kitaev theorem. However, it is known that much better, asymptotically tight bounds of the form $\mathcal{O}(\mathrm{log}(\epsilon^{-1}))$, can be obtained for specific $\mathcal{S}$. Those bounds are controlled by the so-called spectral gap, denoted $\mathrm{gap}(\mathcal{S})$. Yet, the computation of $\mathrm{gap}(\mathcal{S})$ is not possible for general $\mathcal{S}$ and in practice one considers the spectral gap at a certain scale $r(\epsilon)$, denoted $\mathrm{gap}_r(\mathcal{S})$. This turns out to be sufficient to bound the efficiency of $\mathcal{S}$ provided that one is interested in a physically feasible case, in which an error $\epsilon$ is bounded from below. In this paper we derive lower bounds on $\mathrm{gap}_r(\mathcal{S})$ and, as a consequence, on the efficiency of universal sets of $d$-dimensional quantum gates" +"---\nabstract: 'An algorithm for the calculation of hyperfine structure and spectra of diatomic molecules based on variational nuclear-motion calculations is presented. Hyperfine coupling terms considered are Fermi-contact, nuclear spin-electron spin dipole-dipole, nuclear spin-orbit, nuclear spin-rotation and nuclear electric quadrupole interactions. Initial hyperfine-unresolved wavefunctions are obtained for a given set of potential energy curves and associated couplings by variational solution of the nuclear-motion Schrödinger equation. Fully hyperfine-resolved parity-conserved rovibronic Hamiltonian matrices for a given final angular momentum, $\bm{F}$, are constructed and then diagonalized to give hyperfine-resolved energies and wavefunctions. Electric transition dipole moment curves can then be used to generate a hyperfine-resolved line list by applying rigorous selection rules. The algorithm is implemented in Duo, which is a general program for calculating spectra of diatomic molecules. This approach is tested for NO and MgH, and the results are compared to experiment and shown to be consistent with those given by the well-used effective Hamiltonian code PGOPHER.'\nauthor:\n- Qianwei Qu\n- 'Sergei N. Yurchenko'\n- Jonathan Tennyson\nbibliography:\n- 'bib\\_journals\\_iso.bib'\n- 'bib\\_hyperfine.bib'\n- 'bib\\_jtj.bib'\n- 'bib\\_methods.bib'\n- 'bib\\_NO.bib'\n- 'bib\\_MgH.bib'\n- 'bib\\_VO.bib'\n- 'bib\\_sy.bib'\ntitle: 'A method for the variational calculation of hyperfine-resolved rovibronic spectra of diatomic molecules'\n---\n\n![image](fig_entry.pdf)" +"---\nabstract: 'Algorithms which minimize the averaged loss have been widely designed for dealing with noisy labels. Intuitively, when there is a finite training sample, penalizing the variance of losses will improve the stability and generalization of the algorithms. Interestingly, we found that the variance should be increased for the problem of learning with noisy labels. 
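The variance-increasing regularization argued for in the abstract above amounts to flipping the sign of the usual variance penalty; a minimal sketch (the objective shape and the weight lam are assumptions, and the paper builds its regularizers from the noise transition matrix):

```python
import numpy as np

def variance_boosted_objective(per_sample_losses, lam=0.1):
    # Mean loss *minus* a variance term: minimizing this objective
    # spreads the per-sample losses apart, which the abstract argues
    # strengthens memorization of clean examples under label noise.
    losses = np.asarray(per_sample_losses)
    return losses.mean() - lam * losses.var()
```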
Specifically, increasing the variance will boost the memorization effects and reduce the harmfulness of incorrect labels. By exploiting the label noise transition matrix, regularizers can be easily designed to increase the variance of losses and be plugged into many existing algorithms. Empirically, the proposed method by increasing the variance of losses significantly improves the generalization ability of baselines on both synthetic and real-world datasets.'\nauthor:\n- |\n Yexiong Lin$^{1}$, Yu Yao$^{2}$, Yuxuan Du$^{2}$,\\\n Jun Yu$^{3}$, Bo Han$^{4}$, Mingming Gong$^{5}$, Tongliang Liu$^{\dagger\text{ }2}$\\\n $^1$Hunan University; $^2$University of Sydney;\\\n $^3$University of Science and Technology of China;\\\n $^4$Hong Kong Baptist University; $^5$University of Melbourne;\\\nbibliography:\n- 'bib.bib'\ntitle: 'Do We Need to Penalize Variance of Losses for Learning with Label Noise?'\n---\n\nIntroduction\n============\n\nLearning with noisy labels can be dated back to [@angluin1988learning]. It has recently drawn a lot of attention [@liu2015classification; @nguyen2019self; @li2020dividemix; @li2021provably] because" +"---\nabstract: 'The nature and origin of electronic nematicity remain a significant challenge in our understanding of the iron-based superconductors. This is particularly evident in the iron chalcogenide, FeSe, where it is currently unclear how the experimentally determined Fermi surface near the M point evolves from having two electron pockets in the tetragonal state, to exhibiting just a single electron pocket in the nematic state. This has posed a major theoretical challenge, which has become known as the missing electron pocket problem of FeSe, and is of central importance if we wish to uncover the secrets behind nematicity and superconductivity in the wider iron-based superconductors. Here, we review the recent experimental work uncovering this nematic Fermi surface of FeSe from both ARPES and STM measurements, as well as current theoretical attempts to explain this missing electron pocket of FeSe, with a particular focus on the emerging importance of incorporating the $d_{xy}$ orbital into theoretical descriptions of the nematic state. Furthermore, we will discuss the consequence this missing electron pocket has on the theoretical understanding of superconductivity in this system and present several remaining open questions and avenues for future research.'\nauthor:\n- 'Luke C. Rhodes'\n- Matthias Eschrig\n- 'Timur" +"---\nabstract: 'It is well established that it is possible to switch certain antiferromagnets electrically, yet the interplay of and thermal activation is only poorly understood. Combining *ab initio* calculations and atomistic spin dynamics simulations we develop a multiscale model to study the current induced switching in . We compute from first principles the strength and direction of the electrically induced magnetic moments, caused by the Rashba–Edelstein effect, and take these into account in atomistic spin dynamics simulations. Our simulations reveal the switching paths as well as the time scales for switching. The size of the induced moments, however, turns out to be insufficient to lead to fully deterministic switching. Instead, we find that a certain degree of thermal activation is required to help overcome the relevant energy barrier.'\nauthor:\n- Severin Selzer\n- Leandro Salemi\n- András Deák\n- Eszter Simon\n- László Szunyogh\n- 'Peter M. 
Oppeneer'\n- Ulrich Nowak\nbibliography:\n- 'literature.bib'\ntitle: Current induced switching in from first principles\n---\n\nIntroduction\n============\n\nare promising materials for spintronic devices. Among the advantages over are the lack of stray fields, the very low susceptibility to magnetic fields, the abundance of materials and much faster spin dynamics [@jungwirth_antiferromagnetic_2016; @zelezny_spin_2018;" +"---\nabstract: 'Many variations of the classical graph coloring model have been intensively studied due to their multiple applications; scheduling problems and aircraft assignments, for instance, motivate the *robust coloring problem*. This model captures natural constraints of those optimization problems by combining the information provided by two colorings: a vertex coloring of a graph and the induced edge coloring on a subgraph of its complement; the goal is to minimize, among all proper colorings of the graph for a fixed number of colors, the number of edges in the subgraph with the endpoints of the same color. The study of the robust coloring model has been focused on the search for heuristics due to its NP-hard character when using at least three colors, but little progress has been made in other directions. We present a new approach to the problem, obtaining the first collection of non-heuristic results for general graphs; among them, we prove that robust coloring is the model that best approaches the equitable partition of the vertex set, even when the graph does not admit a so-called *equitable coloring*. We also show the NP-completeness of its decision problem for the unsolved case of two colors, obtain" +"---\nabstract: 'In this article we show that the Erdős-Kac theorem, which informally states that the number of prime divisors of very large integers converges to a normal distribution, has an elegant proof via Algorithmic Information Theory.'\nauthor:\n- |\n Aidan Rocke\\\n `aidanrocke@gmail.com`\\\ntitle: 'An information-theoretic proof of the Erdős-Kac theorem'\n---\n\n=1\n\n*From an information-theoretic perspective, in order to analyse the normal order of prime divisors of a typical integer we must first carefully define the Algorithmic Probability that a prime number is observed, either through multiplication or division.*\n\nAlgorithmic Probability as the Universal A Priori Probability\n=============================================================\n\nThe notion of Algorithmic Probability allows us to define a *Universal A Priori Probability* for two reasons. First, although Kolmogorov Complexity is not computable for any data structure $X$, its Minimum Description Length is independent of the choice of description language $U$ due to the Invariance theorem \[4\]:\n\n$$\lvert K_U(X)-K_{U'}(X)\rvert \leq \text{Cst}$$\n\nand so in a precise sense, asymptotic Kolmogorov Complexity results are Universal. Second, Levin's Coding theorem asserts that \[3\]:\n\n$$-\log_2 m(X) = K_U(X) + \mathcal{O}(1)$$\n\nwhere $m(X)$ is the *Algorithmic Probability* of $X$.\n\nThe Algorithmic Probability of a prime number\n=============================================\n\nFrom a frequentist perspective, the typical probability that" +"---\nabstract: 'Field of view (FoV) prediction is critical in 360-degree video multicast, which is a key component of the emerging Virtual Reality (VR) and Augmented Reality (AR) applications. 
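The Erdős-Kac statement quoted in the abstract above is easy to probe numerically: standardize the number of distinct prime factors by log log n and the samples should look approximately standard normal (the sampling bound and sample size below are arbitrary choices):

```python
import math
import random

def omega(n):
    # number of distinct prime factors of n, by trial division
    count, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            count += 1
            while n % d == 0:
                n //= d
        d += 1
    return count + (n > 1)

random.seed(1)
samples = []
for _ in range(2000):
    n = random.randrange(3, 10**8)
    mu = math.log(math.log(n))
    samples.append((omega(n) - mu) / math.sqrt(mu))
mean = sum(samples) / len(samples)
var = sum(s * s for s in samples) / len(samples) - mean ** 2
print(mean, var)  # slowly approaches mean 0, variance 1 as the bound grows
```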
Most of the current prediction methods combining saliency detection and FoV information neither take into account that the distortion of projected 360-degree videos can invalidate the weight sharing of traditional convolutional networks, nor do they adequately consider the difficulty of obtaining complete multi-user FoV information, which degrades the prediction performance. This paper proposes a spherical convolution-empowered FoV prediction method, which is a multi-source prediction framework combining salient features extracted from 360-degree video with limited FoV feedback information. A spherical convolution neural network (CNN) is used instead of a traditional two-dimensional CNN to eliminate the problem of weight sharing failure caused by video projection distortion. Specifically, salient spatial-temporal features are extracted through a spherical convolution-based saliency detection model, after which the limited feedback FoV information is represented as a time-series model based on a spherical convolution-empowered gated recurrent unit network. Finally, the extracted salient video features are combined to predict future user FoVs. The experimental results show that the performance of the proposed method is better than other prediction methods.'\nauthor:\n-" +"---\nabstract: 'Recent analyses have shown that close encounters between stars and stellar black holes occur frequently in dense star clusters. Depending upon the distance at closest approach, these interactions can lead to dissipating encounters such as tidal captures and disruptions, or direct physical collisions, all of which may be accompanied by bright electromagnetic transients. In this study, we perform a wide range of hydrodynamic simulations of close encounters between black holes and main-sequence stars that collectively cover the parameter space of interest, and we identify and classify the various possible outcomes. In the case of nearly head-on collisions, the star is completely disrupted with roughly half of the stellar material becoming bound to the black hole. For more distant encounters near the classical tidal-disruption radius, the star is only partially disrupted on the first pericenter passage. Depending upon the interaction details, the partially disrupted stellar remnant may be tidally captured by the black hole or become unbound (in some cases, receiving a sufficiently large impulsive kick from asymmetric mass loss to be ejected from its host cluster). In the former case, the star will undergo additional pericenter passages before ultimately being disrupted fully. Based on the properties of the" +"---\nabstract: 'Video restoration (e.g., video super-resolution) aims to restore high-quality frames from low-quality frames. Different from single image restoration, video restoration generally requires utilizing temporal information from multiple adjacent but usually misaligned video frames. Existing deep methods generally tackle this by exploiting a sliding window strategy or a recurrent architecture, which either is restricted by frame-by-frame restoration or lacks long-range modelling ability. In this paper, we propose a Video Restoration Transformer (VRT) with parallel frame prediction and long-range temporal dependency modelling abilities. More specifically, VRT is composed of multiple scales, each of which consists of two kinds of modules: temporal mutual self attention (TMSA) and parallel warping. 
TMSA divides the video into small clips, on which mutual attention is applied for joint motion estimation, feature alignment and feature fusion, while self attention is used for feature extraction. To enable cross-clip interactions, the video sequence is shifted for every other layer. Besides, parallel warping is used to further fuse information from neighboring frames by parallel feature warping. Experimental results on five tasks, including video super-resolution, video deblurring, video denoising, video frame interpolation and space-time video super-resolution, demonstrate that VRT outperforms the state-of-the-art methods by large margins (**up to" +"---\nabstract: 'We consider a network of bank holdings, where every holding has two subsidiaries of different types. A subsidiary can trade with another holding\u2019s subsidiary of the same type. Holdings support their subsidiaries up to a certain level when they would otherwise fail to honor their financial obligations. We investigate the spread of contagion in this banking network when the number of bank holdings is large, and find the final number of defaulted subsidiaries under different rules for the holding support. We also consider resilience of this multilayered network to small shocks. Our work sheds light onto the role that holding structures can play in the amplification of financial stress. We find that depending on the capitalization of the network, a holding structure can be beneficial as compared to smaller separated entities. In other instances it can be harmful and actually increase contagion. We illustrate our results in a numerical case study and also determine the optimal level of holding support from a regulator perspective.'\nauthor:\n- 'Maxim Bichuch[^1]'\n- 'Nils Detering[^2]'\ntitle: '**When do you Stop Supporting your Bankrupt Subsidiary? A Systemic Risk Perspective**'\n---\n\n*Keywords:* systemic risk, financial contagion, holdings, multilayered networks\n\nIntroduction\n============\n\nAt first, financial" +"---\nauthor:\n- 'P. Hadrava [^1]'\n- 'M. Cabezas'\n- 'G. Djura\u0161evi\u0107'\n- 'J. Garc\u00e9s'\n- 'S. Yu. Gorda'\n- 'M. I. Jurkovic'\n- 'D. Kor\u010d\u00e1kov\u00e1'\n- |\n \\\n H. Markov\n- 'R. E. Mennickent'\n- 'J. Petrovi\u0107'\n- 'I. Vince'\n- 'S. Zharikov'\nbibliography:\n- 'AAA.bib'\ndate: 'Received 28 October 2021; accepted 7 January 2022'\ntitle: Spectroscopy of the massive interacting binary UU\u00a0Cassiopeiae\n---\n\n[The eclipsing close binary UU\u00a0Cas is an interacting massive double-periodic system with a gainer star partly hidden in an accretion disk.]{} [In order to study the physics of the accretion process in greater detail, along with the structure and dynamics of the circumstellar matter in the system, we supplement our previous results obtained from photometry with an analysis of the spectra of UU\u00a0Cas.]{} [We collected all available spectra used in previous publications on UU\u00a0Cas and we acquired new ones. The method of disentangling was applied to this set of spectra spanning the years 2008\u20132021. The orbital parameters were disentangled and a fit of the separated component spectra by synthetic ones has been used to determine the physical parameters of the component stars. 
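The clip partitioning with alternating shifts described for TMSA above can be mimicked with a few lines of array manipulation (an illustration of the data layout only, not of the attention computation itself; names and shapes are assumptions):

```python
import numpy as np

def partition_clips(frames, clip_len=2, shifted=False):
    # split a (T, H, W, C) video into non-overlapping clips; shifting the
    # sequence by half a clip on alternate layers moves the clip
    # boundaries so attention can mix neighboring clips across layers
    if shifted:
        frames = np.roll(frames, clip_len // 2, axis=0)
    T = frames.shape[0] - frames.shape[0] % clip_len
    return frames[:T].reshape(-1, clip_len, *frames.shape[1:])

video = np.zeros((8, 64, 64, 3))
print(partition_clips(video).shape)  # (4, 2, 64, 64, 3)
```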
We compared the results to models of the evolution" +"---\nabstract: 'We propose a theoretical framework for non-redundant reconstruction of a global loss from a collection of local ones under constraints given by a functor; we call this loss the regionalized loss in honor of Yedidia, Freeman, Weiss’ celebrated article ‘Constructing free-energy approximations and generalized belief propagation algorithms’ where a first example of regionalized loss, for entropy and the marginal functor, is built. We show how one can associate to these regionalized losses message passing algorithms for finding their critical points. It is a natural mathematical framework for optimization problems where there are multiple points of view on a dataset and replaces message passing algorithms as canonical ways of finding the optima of these problems. We explain how Generalized Belief propagation algorithms fall into the framework we propose and propose novel message passing algorithms for noisy channel networks.'\nauthor:\n- 'Grégoire Sergeant-Perthuis'\nbibliography:\n- 'bibliography.bib'\ntitle: Regionalized Optimization\n---\n\nOptimization, Category Theory, Message Passing algorithms, Free energy, Belief Propagation, Variational inference, Noisy channel networks.\n\nIntroduction\n============\n\nMotivation\n----------\n\nRecent computational models of adaptive systems are based on the premise that these systems have an internal model of the state and dynamics of their environment that they infer through" +"---\nabstract: 'We introduce a Fourier method (Fm) for the determination of best focus for telescopes with stars. Our method fits a power function, which we derive in this paper, to a set of images taken as a function of focuser position. The best focus position is where the power is maximum. Fm was first tested with small refractor and Schmidt-Cassegrain (SCT) telescopes. After the successful small telescope tests, we then tested Fm with a 2 m Ritchey-Chrétien-Coudé (RCC). Our tests show that Fm is immune to the problems inherent in the popular half-flux diameter method.'\nauthor:\n- |\n C.Y. Tan,$^{1}$[^1] and B. Schulz$^{2}$[^2]\\\n $^{1}$Aurora, IL60504, USA\\\n $^{2}$81249 München, Germany\nbibliography:\n- 'FM\\_v4.bib'\ndate: 'Accepted XXX. Received YYY; in original form ZZZ'\ntitle: A Fourier method for the determination of focus for telescopes with stars\n---\n\n\[firstpage\]\n\nmethods: data analysis – methods: analytical – methods: numerical\n\nIntroduction {#sec:intro}\n============\n\nOne very important aspect in the collection of good astronomical data is the quality of the focusing of a telescope. A typical observing session can last between 1 and 12 hours depending on the season and latitude. During this time, the ambient temperature will fluctuate. Temperature changes induce strain in" +"---\nabstract: 'We consider a walker moving in a one-dimensional interval with absorbing boundaries under the effect of Markovian resettings to the initial position. The walker’s motion follows a random walk characterized by a general waiting time distribution between consecutive short jumps. We investigate the existence of an optimal reset rate, which minimizes the mean exit passage time, in terms of the statistical properties of the waiting time probability. 
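While the abstract above treats general waiting-time distributions, the Brownian special case already shows how a mean exit time under Poissonian resetting can be estimated by Monte Carlo (all parameter values below are placeholders):

```python
import math
import random

def mean_exit_time(x0=0.2, L=1.0, rate=5.0, D=0.5, dt=1e-4, trials=2000):
    # diffusing walker on (0, L) with absorbing ends; with probability
    # rate*dt per step the walker resets to x0 instead of diffusing
    total = 0.0
    for _ in range(trials):
        x, t = x0, 0.0
        while 0.0 < x < L:
            if random.random() < rate * dt:
                x = x0
            else:
                x += random.gauss(0.0, math.sqrt(2 * D * dt))
            t += dt
        total += t
    return total / trials

print(mean_exit_time())  # compare against rate=0 to see whether resetting helps
```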
Generalizing previous results restricted to Markovian random walks, we here find that, depending on the value of the relative standard deviation of the waiting time probability, resetting can be either (i) never beneficial, (ii) beneficial depending on the distance of the reset to the boundary, or (iii) always beneficial.'\nauthor:\n- 'Vicen\u00e7 M\u00e9ndez, Axel Mas\u00f3-Puigdellosas and Daniel Campos'\nbibliography:\n- 'references.bib'\ntitle: 'Non-standard diffusion under Markovian resetting in bounded domains'\n---\n\nINTRODUCTION\n============\n\nBrownian motion under restart has been widely studied from a theoretical point of view [@EvMa11p]. Depending on the type of random walk and the characteristics of the resetting mechanism, the overall process may reach an equilibrium state [@MeCa16] and have an optimal strategy to reach a fixed target [@CaMe15]. The mean first passage time (MFPT) of a random" +"---\nabstract: |\n We investigate the geometry of a typical spin cluster in random triangulations sampled with a probability proportional to the energy of an Ising configuration on their vertices, both in the finite and infinite volume settings. This model is known to undergo a combinatorial phase transition at an explicit critical temperature, for which its partition function has a different asymptotic behavior than uniform maps. The purpose of this work is to give geometric evidence of this phase transition.\n\n In the infinite volume setting, called the Infinite Ising Planar Triangulation, we exhibit a phase transition for the existence of an infinite spin cluster: for critical and supercritical temperatures, the root spin cluster is finite almost surely, while it is infinite with positive probability for subcritical temperatures. Remarkably, we are able to obtain an explicit parametric expression for this probability, which allows to prove that the percolation critical exponent is $\\beta=1/4$.\n\n We also derive critical exponents for the tail distribution of the perimeter and of the volume of the root spin cluster, both in the finite and infinite volume settings. Finally, we establish the scaling limit of the interface of the root spin cluster seen as a looptree. In particular" +"---\nabstract: 'We employ a lattice-gas extension of the Maier\u2013Saupe model with discrete orientation states to study the phase behavior of a statistical model for biaxial nematogenic units in mean-field theory. The phase behavior of the system is investigated in terms of the strength of isotropic interaction between anisotropic objects, as well as the degree of biaxiality and the concentration of those units. We obtain phase diagrams with isotropic phases and stable biaxial and uniaxial nematic structures, various phase coexistences, many types of critical and multicritical behaviors, such as ordinary vapor-liquid critical points, critical end points and tricritical points, and distinct Landau-like multicritical points. Our results widen the possibilities of relating the phenomenological coefficients of the Landau\u2013de Gennes expansion to microscopic parameters, allowing an improved interpretation of theoretical fittings to experimental data.'\nauthor:\n- 'W. G. C. Oropesa'\n- 'E. S. Nascimento'\n- 'A. P. 
Vieira'\nbibliography:\n- 'references.bib'\ntitle: 'Phase behavior of a lattice-gas model for biaxial nematics'\n---\n\nIntroduction {#sec1}\n============\n\nNematic mesophases are probably the simplest states of matter observed in liquid-crystalline systems that exhibit long-range orientational order in the absence of translational symmetry breaking [@deGennes_Book; @FigueiredoNeto2005; @Singh2000; @Palffy2007]. Indeed, uniaxial nematic structures are characterized macroscopically by" +"---\nauthor:\n- 'T. Wevers'\n- 'D.R. Pasham'\n- 'P. Jalan'\n- 'S. Rakshit'\n- 'R. Arcodia'\nbibliography:\n- 'aanda.bib'\ntitle: 'Host galaxy properties of quasi-periodically erupting X-ray sources'\n---\n\n[Quasi-periodic X-ray eruptions (QPEs) are a recently discovered phenomenon, the nature of which remains unclear. Based on their discovery in active galactic nuclei (AGN), explanations related to an AGN accretion disk, or potentially a stellar tidal disruption event (TDE), were put forward. Following the report of QPEs in apparently passive galaxies, alternatives including highly unequal mass compact object binaries have been proposed to explain their properties.]{} [We perform a systematic study of the five known QPE host galaxies with the aim of providing new insights into their nature.]{} [We analyse new and archival medium resolution optical spectroscopy of the QPE hosts. We measure emission (and absorption) line fluxes, their ratios and equivalent widths (EWs), to locate the QPE hosts on diagnostic diagrams. We also measure the velocity dispersion of the stellar absorption lines to estimate their black hole masses.]{} [All QPE host galaxies show emission lines in their optical spectra. Based on their ratios and EWs, we find evidence for the presence of an active galactic nucleus in all sources, including" +"---\nabstract: 'We study the sensitivity of a Mach-Zehnder interferometer that contains, in addition to the phase shifter, a non-linear element. By including both elements in a cavity or a loop that the light traverses many times, a non-linear kicked version of the interferometer arises. We study its sensitivity as a function of the phase shift, the kicking strength, the maximally reached average number of photons, and damping due to photon loss for an initial coherent state. We find that for vanishing damping Heisenberg-limited scaling of the sensitivity arises if squeezing dominates the total photon number. For small to moderate damping rates the non-linear kicks can considerably increase the sensitivity as measured by the quantum Fisher information per unit time.'\nauthor:\n- Sabrina Müller and Daniel Braun\nbibliography:\n- '../../../bibfile\\_master/mybibs\\_bt.bib'\ntitle: 'Quantum metrology with a non-linear kicked Mach-Zehnder interferometer'\n---\n\nIntroduction {#sec:intro}\n============\n\nA Mach-Zehnder interferometer is one of the basic tools in optics for measuring phase shifts in a light beam relative to a reference beam: a light beam is split into two beams with a beam splitter, one beam undergoes the phase shift $\phi$, e.g. by passing through a dispersive medium, and then the two beams are combined again" +"---\nabstract: |\n Secure [*multi-party computation*]{} (MPC) is a fundamental problem in secure distributed computing. An MPC protocol allows a set of $n$ mutually distrusting parties to carry out any joint computation of their private inputs, without disclosing any additional information about their inputs. 
MPC with [*information-theoretic*]{} security (also called [*unconditional security*]{}) provides the strongest security guarantees and remains secure even against [*computationally unbounded*]{} adversaries. [*Perfectly-secure*]{} MPC protocols form a class of information-theoretically secure MPC protocols, which provide all the security guarantees in an [*error-free*]{} fashion. The focus of this work is perfectly-secure MPC. Known protocols are designed [*assuming*]{} either a [*synchronous*]{} or [*asynchronous*]{} communication network. It is well known that a perfectly-secure [*synchronous*]{} MPC protocol is possible as long as the adversary can corrupt any $t_s < n/3$ parties. On the other hand, a perfectly-secure [*asynchronous*]{} MPC protocol can tolerate up to $t_a < n/4$ corrupt parties. A natural question is whether there exists a [*single*]{} MPC protocol for the setting where the parties are [*not aware*]{} of the exact network type, one which can tolerate up to $t_s < n/3$ corruptions in a synchronous network and up to $t_a < n/4$ corruptions in an [*asynchronous*]{} network. We design such a [*best-of-both-worlds*]{}" +"---\nabstract: 'We study the conjectured relationship between the implicit regularization in neural networks, trained with gradient-based methods, and rank minimization of their weight matrices. Previously, it was proved that for linear networks (of depth $2$ and vector-valued outputs), gradient flow (GF) w.r.t. the square loss acts as a rank minimization heuristic. However, understanding to what extent this generalizes to nonlinear networks is an open problem. In this paper, we focus on nonlinear ReLU networks, providing several new positive and negative results. On the negative side, we prove (and demonstrate empirically) that, unlike the linear case, GF on ReLU networks may no longer tend to minimize ranks, in a rather strong sense (even approximately, for “most” datasets of size $2$). On the positive side, we reveal that ReLU networks of sufficient depth are provably biased towards low-rank solutions in several reasonable settings.'\nauthor:\n- |\n Nadav Timor Gal Vardi Ohad Shamir\\\n Weizmann Institute of Science\\\n `{nadav.timor,gal.vardi,ohad.shamir}@weizmann.ac.il`\nbibliography:\n- 'bib.bib'\ntitle: |\n Implicit Regularization Towards Rank Minimization\\\n in ReLU Networks\n---\n\nIntroduction {#Introduction}\n============\n\nA central puzzle in the theory of deep learning is how neural networks generalize even when trained without any explicit regularization, and when there are far more" +"---\nabstract: 'The issues of bias-correction and robustness are crucial in the strategy of divide-and-conquer (DC), especially for asymmetric nonparametric models with massive data. It is known that quantile-based methods can achieve robustness, but the quantile estimation for nonparametric regression has non-ignorable bias when the error distribution is asymmetric. This paper explores a global bias-corrected DC by quantile-matched composite for nonparametric regressions with general error distributions. The proposed strategies can achieve bias-correction and robustness simultaneously. 
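Rank-minimization claims like those in the ReLU-network abstract above are typically checked empirically by tracking the numerical rank of the end-to-end weight product; a small helper (the threshold convention is an assumption):

```python
import numpy as np

def numerical_rank(weights, tol=1e-3):
    # numerical rank of the end-to-end linear map W_L ... W_1:
    # count singular values above tol times the largest one
    W = weights[0]
    for Wl in weights[1:]:
        W = Wl @ W
    s = np.linalg.svd(W, compute_uv=False)
    return int((s > tol * s[0]).sum())
```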
Unlike common DC quantile estimations that use an identical quantile level to construct a local estimator on each local machine, in the new methodologies, the local estimators are obtained at various quantile levels for different data batches, and then the global estimator is elaborately constructed as a weighted sum of the local estimators. In the weighted sum, the weights and quantile levels are well-matched such that the bias of the global estimator is corrected significantly, especially for the case where the error distribution is asymmetric. Based on the asymptotic properties of the global estimator, the optimal weights are attained, and the corresponding algorithms are then suggested. The behaviors of the new methods are further illustrated by various numerical examples from simulation experiments and" +"---\nabstract: 'We introduce a new method for inverse design of nanophotonic devices which guarantees that resulting designs satisfy strict length scale constraints — including minimum width and spacing constraints required by commercial semiconductor foundries. The method adopts several concepts from machine learning to transform the problem of topology optimization with strict length scale constraints to an unconstrained stochastic gradient optimization problem. Specifically, we introduce a conditional generator for feasible designs and adopt a straight-through estimator for backpropagation of gradients to a latent design. We demonstrate the performance and reliability of our method by designing several common integrated photonic components.'\nauthor:\n- 'Martin F. Schubert'\n- 'Alfred K. C. Cheung'\n- 'Ian A. D. Williamson'\n- Aleksandra Spyra\n- 'David H. Alexander'\ntitle: Inverse design of photonic devices with strict foundry fabrication constraints\n---\n\n[^1]\n\nIntroduction\n============\n\nIntegrated photonic devices have led to game-changing new capabilities in applications ranging from high-speed communications [@Marpaung2019-jx] to next-generation quantum computing platforms [@Arrazola2021-ms] and machine learning hardware accelerators [@Wetzstein2020-lz]. These platforms stand to benefit from photonic components with improved performance, lower losses, larger bandwidths, and more compact footprints. However, achieving such multi-faceted specifications is extremely challenging through conventional intuition-backed design methodologies. In contrast," +"---\nabstract: 'A key challenge of supervised learning is the availability of human-labeled data. We evaluate a big data processing pipeline to auto-generate labels for remote sensing data. It is based on rasterized statistical features extracted from surveys such as LiDAR measurements. Using simple combinations of the rasterized statistical layers, it is demonstrated that multiple classes can be generated at accuracies of $\\sim0.9$. As proof of concept, we utilize the big geo-data platform IBM PAIRS to dynamically generate such labels in dense urban areas with multiple land cover classes. 
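The weighted-composite construction in the divide-and-conquer quantile abstract above lends itself to a compact sketch. The following minimal Python example shows a global estimator formed as a weighted sum of local estimators computed at different quantile levels on different data batches; the local fit, the quantile levels, and the weights here are illustrative placeholders, not the paper's optimal choices.

```python
import numpy as np

def local_quantile_fit(batch_y, tau):
    # Stand-in for a local nonparametric quantile estimate at level tau;
    # a real implementation would fit, e.g., a kernel quantile regression.
    return np.quantile(batch_y, tau)

def global_composite(batches, taus, weights):
    # Global estimator: weighted sum of local estimators obtained at
    # *different* quantile levels, one per data batch.
    assert len(batches) == len(taus) == len(weights)
    locals_ = [local_quantile_fit(b, t) for b, t in zip(batches, taus)]
    return np.dot(weights, locals_)

rng = np.random.default_rng(0)
# Skewed (asymmetric) errors, split across 4 machines.
batches = np.array_split(rng.gamma(shape=2.0, size=4000), 4)
taus = [0.3, 0.45, 0.55, 0.7]       # matched quantile levels (illustrative)
weights = [0.25, 0.25, 0.25, 0.25]  # placeholder; the paper derives optimal ones
print(global_composite(batches, taus, weights))
```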
The general method proposed here is platform independent, and it can be adapted to generate labels for other satellite modalities in order to enable machine learning on overhead imagery for land use classification and object detection.'\nauthor:\n- \n- \n- \nbibliography:\n- 'Paper\\_IEEEBigData2021\\_AlbrechtMariannoKlein\\_AutoGeoLabel.bib'\ntitle: |\n [*AutoGeoLabel*]{}:\\\n Automated Label Generation\\\n for Geospatial Machine Learning\n---\n\nGeospatial analysis, Laser radar, Big data applications, Weak supervision\n\nIntroduction & Motivation {#sec:Intro}\n=========================\n\nDriven by the availability of massive amounts of data, image classification achieves accuracy of over 90% with thousands of classes today [@yalniz]. To a large extent, the performance of modern deep learning models is driven by the volume of data, and the availability" +"---\nabstract: 'We study the map learned by a family of autoencoders trained on MNIST, and evaluated on ten different data sets created by the random selection of pixel values according to ten different distributions. Specifically, we study the eigenvalues of the Jacobians defined by the weight matrices of the autoencoder at each training and evaluation point. For high enough latent dimension, we find that each autoencoder reconstructs all the evaluation data sets as similar *generalized characters*, but that this reconstructed *generalized character* changes across autoencoders. Eigenvalue analysis shows that even when the reconstructed image appears to be an MNIST character for all out of distribution data sets, not all have latent representations that are close to the latent representation of MNIST characters. All told, the eigenvalue analysis demonstrated a great deal of geometric instability of the autoencoder both as a function of out-of-distribution inputs, and across architectures on the same set of inputs.'\nauthor:\n- Susama Agarwala\n- Benjamin Dees\n- Corey Lowman\nbibliography:\n- 'The\\_OOD.bib'\ntitle: Geometric instability of out of distribution data across autoencoder architecture\n---\n\nThe distributions of training, test and validation data are often different from the distribution of the data on which
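A minimal sketch of the label-generation idea in the AutoGeoLabel abstract above: simple threshold rules on rasterized LiDAR statistics (per-cell height mean and spread, plus intensity) yield class labels. The thresholds, class names, and rasters below are hypothetical placeholders, not values from the paper.

```python
import numpy as np

def auto_labels(height_mean, height_std, intensity_mean):
    # Hypothetical rule set: combine rasterized LiDAR statistics layers
    # into per-cell class labels.
    labels = np.full(height_mean.shape, "other", dtype=object)
    labels[(height_mean > 10) & (height_std > 2)] = "tree"       # tall, rough
    labels[(height_mean > 10) & (height_std <= 2)] = "building"  # tall, flat
    labels[height_mean <= 0.5] = "ground"                        # low returns
    return labels

# Toy 3x3 rasters standing in for gridded survey statistics.
hm = np.array([[12.0, 0.2, 11.0], [0.1, 15.0, 0.3], [11.5, 0.0, 9.0]])
hs = np.array([[3.0, 0.1, 0.5], [0.1, 4.0, 0.2], [0.4, 0.1, 1.0]])
im = np.zeros_like(hm)
print(auto_labels(hm, hs, im))
```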
As a second example, we give explicit generators and relations in the case of a group of shape $2^{1+4}:3^{1+2}$ in characteristic two.\naddress:\n- |\n David Benson\\\n Institute of Mathematics\\\n Fraser Noble Building\\\n University of Aberdeen\\\n King’s College\\\n Aberdeen AB24 3UE\\\n United Kingdom\n- |\n Radha Kessar and Markus Linckelmann\\\n School" +"---\nabstract: 'Anyons have been extensively investigated as information carriers in topological quantum computation. However, how to characterize the information flow in quantum networks composed of anyons is less understood, which motivates us to study quantum communication protocols in anyonic systems. Here we propose a general topologically protected protocol for quantum teleportation based on the Ising anyon model, and prove that with our protocol an unknown anyonic state of any number of Ising anyons can be teleported from Alice to Bob. Our protocol naturally generalizes quantum state teleportation from systems of locally distinguishable particles to systems of Ising anyons, which may promote our understanding of anyonic quantum entanglement as a quantum resource. In addition, our protocol is expected to be realized with the Majorana zero modes, one of the possible physical realizations for the Ising anyon in experiments.'\nauthor:\n- 'Cheng-Qian Xu'\n- 'D. L. Zhou'\nbibliography:\n- 'IATbib.bib'\ntitle: Quantum teleportation using Ising anyons\n---\n\nIntroduction\n============\n\nAnyon [@PhysRevLett.49.957; @PhysRevLett.48.1144], as a kind of excitation different from bosons and fermions living in two-dimensional systems, has attracted the attention of theorists and experimentalists for its potential applications in fault-tolerant topological quantum computation due to its non-Abelian braiding and topological robustness [@kitaev2003fault;" +"---\nabstract: 'We propose a novel reconfigurable intelligent surface (RIS) encoded information transmission scheme for a line-of-sight environment. A RIS fed with data modulates the information on impinging waves emitted from an external source in the states of polarization (SoP) of the scattered waves by performing a novel differential polarization shift keying. In particular, the information is encoded in the change of the SoP over two successive scattering slots. The proposed scheme is immune to the SoP fluctuations in the wireless channel which allows for non-coherent detection at the receiver.'\nauthor:\n- 'Emad Ibrahim, Rickard Nilsson, and Jaap van de Beek [^1]'\nbibliography:\n- 'references.bib'\ntitle: Differential Polarization Shift Keying Through Reconfigurable Intelligent Surfaces\n---\n\nReconfigurable intelligent surface, modulation, differential polarization shift keying.\n\nIntroduction\n============\n\n[^2] Reconfigurable intelligent surface (RIS) has been introduced as attractive energy-efficient hardware for wireless communications applications. A RIS is a thin planar surface of multiple reflecting units, each of which has a tunable interaction with the incident waves. Conventionally, the RIS units induce independently controllable reflection coefficients to the scattered waves on them. Therefore, the RIS is capable of re-engineering continuously the wireless channel between the transmitter and receiver [@b1].\n\nOne of the promising use" +"---\nabstract: 'Session-based Recommendations (SBRs) capture items’ dependencies from the sessions to recommend the next item. 
In recent years, Graph Neural Network (GNN) based SBRs have become the mainstream of SBRs, benefiting from the superiority of GNNs in modeling complex dependencies. Based on a strong assumption of *adjacent dependency*, any two adjacent items in a session are necessarily dependent in most GNN-based SBRs. However, we argue that due to the uncertainty and complexity of user behaviors, adjacency does not necessarily indicate dependency. Since this assumption does not always hold in actual recommendation scenarios, it can easily lead to two drawbacks: (1) *false dependencies* occur in the session because there are adjacent but not really dependent items, and (2) *true dependencies* are missed in the session because there are non-adjacent but actually dependent items. These drawbacks significantly affect item representation learning, degrading the downstream recommendation performance. To address these deficiencies, we propose a novel Review-refined Inter-item Graph Neural Network (RI-GNN), which utilizes topic information extracted from the reviews of items to improve the modeling of dependencies between items. Experiments on two public real-world datasets demonstrate that RI-GNN outperforms SOTA methods[^1].'\nauthor:\n- Qian Zhang\n- Wenpeng Lu\n- Chong Feng" +"---\nabstract: 'In today’s technology environment, information is abundant, dynamic, and heterogeneous in nature. Automated filtering and prioritization of information is based on the distinction between whether the information adds substantial value toward one’s goal or not. Contextual multi-armed bandit has been widely used for learning to filter contents and prioritize according to user interest or relevance. The Learn-to-Rank technique optimizes the relevance ranking on items, allowing the contents to be selected accordingly. We propose a novel approach to top-K rankings under the contextual multi-armed bandit framework. We model the stochastic reward function with a neural network to allow non-linear approximation to learn the relationship between rewards and contexts. We demonstrate the approach and evaluate the performance of learning from the experiments using real world data sets in simulated scenarios. Empirical results show that this approach performs well under the complexity of a reward structure and high dimensional contextual features.'\nauthor:\n- 'Jade Freeman$^{1}$ and Michael Rawson$^{2}$ [^1][^2]'\nbibliography:\n- 'reference.bib'\ntitle: '**Top-K Ranking Deep Contextual Bandits for Information Selection Systems** '\n---\n\nINTRODUCTION\n============\n\nIn the current era of perpetual influx of information, decision makers and information analysts are faced with challenges under fixed time and resources. Consider a
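One way to picture the dependency refinement described in the RI-GNN abstract above: keep an adjacency edge only when two items share review topics, and add edges between non-adjacent items that do. The sketch below is a schematic reading of that idea, not the paper's actual architecture; `item_topics` and `min_overlap` are assumed constructs.

```python
def refine_session_graph(session, item_topics, min_overlap=1):
    # session: ordered list of item ids; item_topics: id -> set of review topics.
    edges = set()
    # (1) keep an adjacency edge only if the two items share review topics,
    #     suppressing "false dependencies" between adjacent items
    for a, b in zip(session, session[1:]):
        if len(item_topics[a] & item_topics[b]) >= min_overlap:
            edges.add((a, b))
    # (2) recover "true dependencies" between non-adjacent but related items
    for i, a in enumerate(session):
        for b in session[i + 2:]:
            if len(item_topics[a] & item_topics[b]) >= min_overlap:
                edges.add((a, b))
    return edges

topics = {1: {"camera"}, 2: {"kitchen"}, 3: {"camera", "lens"}}
print(refine_session_graph([1, 2, 3], topics))  # keeps (1, 3); drops (1, 2), (2, 3)
```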
To this end, we identify two alternative conditions on the time-varying constraints under which we can guarantee safety in the long run. We also propose the Periodically Restarted Optimistic Primal-Dual Proximal Policy Optimization (PROPD-PPO) algorithm that can coordinate with both conditions. Furthermore, a dynamic regret bound and a constraint violation bound are established for the proposed algorithm in both the linear kernel CMDP function approximation setting and the tabular CMDP setting under the two alternative conditions. This paper provides the first provably efficient algorithm for non-stationary CMDPs with safe exploration.'\nauthor:\n- 'Yuhao Ding, Javad Lavaei'\nbibliography:" +"---\nabstract: '\\[sec:abstract\\] Graph convolutional networks (GCNs) allow us to learn topologically-aware node embeddings, which can be useful for classification or link prediction. However, they are unable to capture long-range dependencies between nodes without adding additional layers—which in turn leads to over-smoothing and increased time and space complexity. Further, the complex dependencies between nodes make mini-batching challenging, limiting their applicability to large graphs. We propose a Scalable Multi-resolution Graph Representation Learning (SMGRL) framework that enables us to learn multi-resolution node embeddings efficiently. Our framework is model-agnostic and can be applied to any existing GCN model. We dramatically reduce training costs by training only on a reduced-dimension coarsening of the original graph, then exploit self-similarity to apply the resulting algorithm at multiple resolutions. The resulting multi-resolution embeddings can be aggregated to yield high-quality node embeddings that capture both long- and short-range dependencies. Our experiments show that this leads to improved classification accuracy, without incurring high computational costs.'\nauthor:\n- |\n Reza Namazi namazir@mcmaster.ca\\\n McMaster University Elahe Ghalebi elahe.ghalebi@vectorinstitute.ai\\\n Vector Institute for Artificial Intelligence Sinead A. Williamson sinead@austin.utexas.edu\\\n University of Texas at Austin (Currently at Apple) Hamidreza Mahyar mahyarh@mcmaster.ca\\\n McMaster University\nbibliography:\n- 'main.bib'\ntitle: 'SMGRL: Scalable Multi-resolution Graph Representation Learning'\n---" +"---\nabstract: 'This article describes the setup and performance of the near and far detectors in the Double Chooz experiment. The electron antineutrinos of the Chooz nuclear power plant were measured in two identically designed detectors with different average baselines of about 400 m and 1050 m from the two reactor cores. Over many years of data taking, the neutrino signals were extracted from interactions in the detectors with the goal of measuring a fundamental parameter in the context of neutrino oscillation, the mixing angle $\\theta_{13}$. The central part of the Double Chooz detectors was a main detector comprising four cylindrical volumes filled with organic liquids. From the inside towards the outside there were volumes containing gadolinium-loaded scintillator, gadolinium-free scintillator, a buffer oil and, optically separated, another liquid scintillator acting as a veto system. Above this main detector an additional outer veto system using plastic scintillator strips was installed. The technologies developed in Double Chooz were an inspiration for several other antineutrino detectors in the field. 
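The periodic-restart mechanism named in the PROPD-PPO abstract above can be summarized as a loop skeleton. Everything below is schematic: the primal-update and cost-estimation callables are placeholders, and the dual update is a generic projected ascent step rather than the paper's exact rule.

```python
def propd_ppo_skeleton(num_episodes, restart_period, eta, policy_update,
                       estimate_cost, dual_init=0.0):
    # Schematic loop only: restart the dual variable every `restart_period`
    # episodes so the algorithm can track non-stationary constraints.
    lam, policy = dual_init, None
    for ep in range(num_episodes):
        if ep % restart_period == 0:
            lam = dual_init                    # periodic restart
        policy = policy_update(policy, lam)    # optimistic primal (PPO-style) step
        violation = estimate_cost(policy)      # estimated constraint violation
        lam = max(0.0, lam + eta * violation)  # projected dual ascent
    return policy

# Toy usage with dummy callables, just to show the control flow.
propd_ppo_skeleton(10, 4, 0.1, lambda p, l: (l,), lambda p: -0.1)
```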
The detector design allowed implementation of efficient background rejection techniques including the use of pulse shape information provided by the data acquisition system. The Double Chooz detectors featured remarkable stability, in particular for the detected photons, as" +"---\nabstract: 'In this paper, we investigate the evolution of autoencoders near their initialization. In particular, we study the distribution of the eigenvalues of the Jacobian matrices of autoencoders early in the training process, training on the MNIST data set. We find that autoencoders that have not been trained have eigenvalue distributions that are qualitatively different from those which have been trained for a long time ($>$100 epochs). Additionally, we find that even at early epochs, these eigenvalue distributions rapidly become qualitatively similar to those of the fully trained autoencoders. We also compare the eigenvalues at initialization to pertinent theoretical work on the eigenvalues of random matrices and the products of such matrices.'\nauthor:\n- Benjamin Dees\n- Susama Agarwala\n- Corey Lowman\nbibliography:\n- 'AutoEncoderInitial.bib'\ntitle: Eigenvalues of Autoencoders in Training and at Initialization\n---\n\nIntroduction\n============\n\nWhile there is a large body of literature studying the geometric properties of input space (for instance, see [@ID; @moreID; @UMAP; @tSNE; @Peterson]), there is less work in understanding the geometric characterizations of what a neural network has learned [@gag; @xu2021how], and to our knowledge, none comparing the geometric properties of a neural network at early stages of training to the geometry" +"---\nabstract:\n- |\n This thesis proposes a framework based on a notion of combinatorial cell complex (cc) whose cells are defined simply as finite sets of vertices. The cells of a cc are subject to four axioms involving a rank function that assigns a rank (or a dimension) to each cell. Our framework focuses on classes of cc admitting an inclusion-reversing duality map. We introduce a combinatorial notion of cobordism that allows us to single out a category whose morphisms are cobordisms having a causal structure. Our aim is to offer an approach to look for a combinatorial notion of quantum field theory having a built-in duality operation acting on the underlying space and not relying on any manifold structure.\n\n The introduction includes links with certain fields in Theoretical and Mathematical Physics related to Quantum Gravity and motivating our framework. We start by introducing cc and the duality map on a class of cc with empty boundary called closed cc. We then focus on the problem of reconstructing a certain class of cc from their cells of rank lower than or equal to 2. Such cc are in particular duals to simplicial complexes with no boundary and their reconstruction" +"---\nabstract: 'Equipping artificial agents with useful exploration mechanisms remains a challenge to this day. Humans, on the other hand, seem to manage the trade-off between exploration and exploitation effortlessly. In the present article, we put forward the hypothesis that they accomplish this by making optimal use of limited computational resources. We study this hypothesis by meta-learning reinforcement learning algorithms that sacrifice performance for a shorter description length (defined as the number of bits required to implement the given algorithm). 
The emerging class of models captures human exploration behavior better than previously considered approaches, such as Boltzmann exploration, upper confidence bound algorithms, and Thompson sampling. We additionally demonstrate that changing the description length in our class of models produces the intended effects: reducing description length captures the behavior of brain-lesioned patients while increasing it mirrors cognitive development during adolescence.'\nauthor:\n- |\n Marcel Binz\\\n MPI for Biological Cybernetics\\\n T\u00fcbingen, Germany\\\n `marcel.binz@tue.mpg.de` Eric Schulz\\\n MPI for Biological Cybernetics\\\n T\u00fcbingen, Germany\\\n `eric.schulz@tue.mpg.de`\nbibliography:\n- 'bib.bib'\ntitle: 'Modeling Human Exploration Through Resource-Rational Reinforcement Learning'\n---\n\nIntroduction\n============\n\nKnowing how to efficiently balance between exploring unfamiliar parts of an environment and exploiting currently available knowledge is an essential ingredient of any intelligent organism. In" +"---\nabstract: |\n We consider edge modification problems towards block and strictly chordal graphs, where one is given an undirected graph $G = (V,E)$ and an integer $k \\in \\mathbb{N}$ and seeks to *edit* (add or delete) at most $k$ edges from $G$ to obtain a block graph or a strictly chordal graph. The *completion* and *deletion* variants of these problems are defined similarly by only allowing edge additions for the former and only edge deletions for the latter. Block graphs are a well-studied class of graphs and admit several characterizations, *e.g.* they are diamond-free chordal graphs. Strictly chordal graphs, also referred to as *block duplicate graphs*, are a natural generalization of block graphs where one can add true twins of cut-vertices. Strictly chordal graphs are exactly dart and gem-free chordal graphs. We prove the NP-completeness for most variants of these problems and provide:\n\n - $O(k^2)$ vertex-kernels for and\n\n - $O(k^3)$ vertex-kernels for and\n\n - an $O(k^4)$ vertex-kernel for\n\nauthor:\n- Ma\u00ebl Dumas\n- Anthony Perez\n- Mathis Rocton\n- Ioan Todinca\nbibliography:\n- 'mybib.bib'\ntitle: 'Polynomial kernels for edge modification problems towards block and strictly chordal graphs [^1] '\n---\n\nIntroduction\n============\n\nParameterized algorithms are among the most natural" +"---\nabstract: 'Identifying key nodes is crucial for accelerating or impeding dynamic spreading in a network. Community-aware centrality measures tackle this problem by exploiting the community structure of a network. Although there is a growing trend to design new community-aware centrality measures, there is no systematic investigation of the proposed measures\u2019 effectiveness. This study performs an extensive comparative evaluation of prominent community-aware centrality measures using the Susceptible-Infected-Recovered (SIR) model on real-world online social networks. Overall, results show that K-shell with Community and Community-based Centrality measures are the most accurate in identifying influential nodes under a single-spreader problem. 
Additionally, the epidemic transmission rate does not significantly affect the behavior of the community-aware centrality measures.'\nauthor:\n- 'Stephany Rajeh\\*'\n- Marinette Savonnet\n- Eric Leclercq\n- Hocine Cherifi\nbibliography:\n- 'biblio.bib'\ntitle: 'Comparing Community-aware Centrality Measures in Online Social Networks'\n---\n\nIntroduction\n============\n\nWith the plethora of data flowing into online social networks, representing the main entities and their interactions is essential. Networks offer an ideal representation of such complex systems to investigate their structure and dynamics. Identifying influential nodes is crucial for many applications such as designing lucrative marketing campaigns, targeting terrorist attacks, controlling epidemic spreading, detecting financial risks, and extracting salient" +"---\nabstract: 'Classifying EEG responses to naturalistic acoustic stimuli is of theoretical and practical importance, but standard approaches are limited by processing individual channels separately on very short sound segments (a few seconds or less). Recent developments have shown classification for music stimuli ($\\sim$ 2 mins) by extracting spectral components from EEG and using convolutional neural networks (CNNs). This paper proposes an efficient method to map raw EEG signals to individual songs listened to, for end-to-end classification. EEG channels are treated as a dimension of a \\[$\\textit{Channel} \\times \\textit{Sample}$\\] image tile, and images are classified using CNNs. Our experimental results (88.7%) compete with state-of-the-art methods (85.0%), yet our classification task is more challenging by processing longer stimuli that were similar to each other in perceptual quality, and were unfamiliar to participants. We also adopt a transfer learning scheme using a pre-trained ResNet-50, confirming the effectiveness of transfer learning despite image domains being unrelated to each other.'\naddress: 'University of California, Merced$^{1}$, Ericsson Inc.$^{2}$'\ntitle: 'Image-based EEG classification of Brain Responses to Song Recordings'\n---\n\nraw EEG classification, NMED-T, ResNet, Music, CNN\n\nIntroduction {#s:intro}\n============\n\nBrain Computer Interface (BCI) research seeks to interpret information retained in brain responses that relates to" +"---\nabstract: 'We propose the Neural-FST Class Language Model (NFCLM) for end-to-end speech recognition, a novel method that combines neural network language models (NNLMs) and finite state transducers (FSTs) in a mathematically consistent framework. Our method utilizes a background NNLM which models generic background text together with a collection of domain-specific entities modeled as individual FSTs. Each output token is generated by a mixture of these components; the mixture weights are estimated with a separately trained neural decider. We show that NFCLM significantly outperforms NNLM by 15.8% relative in terms of Word Error Rate. 
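The SIR-based evaluation protocol used in the centrality-comparison abstract above is straightforward to prototype: seed a single spreader, simulate discrete-time SIR, and score seeds by average outbreak size. The graph and parameter values below are arbitrary toys, not the study's settings.

```python
import random

def sir_spread(adj, seed, beta=0.1, gamma=1.0, steps=50, rng=random.Random(0)):
    # Discrete-time SIR: return the final outbreak size from one seed node.
    infected, recovered = {seed}, set()
    for _ in range(steps):
        new_inf = {v for u in infected for v in adj[u]
                   if v not in infected | recovered and rng.random() < beta}
        recovered |= {u for u in infected if rng.random() < gamma}
        infected = (infected | new_inf) - recovered
        if not infected:
            break
    return len(recovered | infected)

# Rank candidate spreaders by average outbreak size over repeated runs.
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
scores = {v: sum(sir_spread(adj, v) for _ in range(100)) / 100 for v in adj}
print(max(scores, key=scores.get))
```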
NFCLM achieves similar performance to traditional NNLM and FST shallow fusion while being less prone to overbiasing and 12 times more compact, making it more suitable for on-device usage.'\naddress: 'Facebook AI, USA\\'\nbibliography:\n- 'refs.bib'\ntitle: 'Neural-FST Class Language Model for End-to-End Speech Recognition'\n---\n\nclass language model, shallow fusion, end-to-end speech recognition, named entities\n\nIntroduction {#sec:introduction}\n============\n\nEnd-to-end (E2E) automatic speech recognition (ASR) models are becoming increasingly popular, especially for on-device applications, due to their compact size and competitive transcription accuracy [@HeSainathPrabhavalkarEtAl19; @KimLeeGowdaEtAl19]. Unlike conventional hybrid ASR systems [@MorganBourlard95], which consist of separately trained acoustic (AM) and language models (LM), E2E models are trained jointly" +"---\nabstract: 'Entanglement distribution over quantum networks has the promise of realizing fundamentally new technologies. Entanglement between separated quantum processing nodes has been achieved on several experimental platforms in the past decade. To move towards metropolitan-scale quantum network test beds, the creation and transmission of indistinguishable single photons over existing telecom infrastructure is key. Here we report the interference of photons emitted by remote, spectrally detuned NV center-based network nodes, using quantum frequency conversion to the telecom L-band. We find a visibility of $0.79 \\pm 0.03$ and an indistinguishability between converted NV photons around $0.9$ over the full range of the emission duration, confirming the removal of the spectral information present. Our approach implements fully separated and independent control over the nodes, time-multiplexing of control and quantum signals, and active feedback to stabilize the output frequency. Our results demonstrate a working principle that can be readily employed on other platforms and shows a clear path towards generating metropolitan scale, solid-state entanglement over deployed telecom fibers.'\nauthor:\n- 'A. J. Stolk'\n- 'K. L. van der Enden'\n- 'M.-C. Roehsner'\n- 'A. Teepe'\n- 'S.O.F. Faes'\n- 'S. Cadot'\n- 'J. van Rantwijk'\n- 'I. te Raa'\n- 'R.A.J. Hagen'\n-" +"---\nabstract: |\n We describe realistic observing scenarios for early warning detection of binary neutron star mergers with the current generation of ground-based gravitational-wave detectors as these approach design sensitivity. Using Fisher analysis, we estimate that Advanced LIGO and Advanced Virgo will detect one signal before merger in their fourth observing run provided they maintain a 70% duty cycle. 60% of all observations and 8% of those detectable 20 seconds before merger will be localized to $\\lesssim 100 \\thinspace \\mathrm{deg}^2$. If KAGRA is able to achieve a 25 Mpc horizon, these prospects increase to $\\lesssim 2$ early detections with 70% of all BNS localized to $\\lesssim 100 \\thinspace\n \\mathrm{deg}^2$ by merger. As the AHKLV network approaches design sensitivity over the next $\\sim10$ years, we expect up to 1 (14) detections made 100 (10) seconds before merger. Although adding detectors to the HLV network impacts the detection rate at the $\\lesssim 50\\%$ level, it improves localization prospects and increases the completeness of compact binary surveys. 
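The mixture formulation in the NFCLM abstract above (each output token generated by a mixture of a background model and entity components, with separately estimated weights) reduces, at the token level, to a weighted sum of distributions. The sketch below uses toy dictionaries in place of an NNLM and FSTs, and fixed weights in place of the neural decider.

```python
def mixture_next_token(p_background, p_components, decider_weights):
    # p_background: dict token -> prob from the background NNLM.
    # p_components: list of dicts, one per entity (FST) component.
    # decider_weights: mixture weights (background first), summing to 1.
    vocab = set(p_background) | {t for c in p_components for t in c}
    mix = {}
    for tok in vocab:
        p = decider_weights[0] * p_background.get(tok, 0.0)
        for w, comp in zip(decider_weights[1:], p_components):
            p += w * comp.get(tok, 0.0)
        mix[tok] = p
    return mix

bg = {"the": 0.6, "call": 0.3, "alice": 0.1}
contacts = {"alice": 0.5, "bob": 0.5}   # toy "contacts" entity class
print(mixture_next_token(bg, [contacts], [0.8, 0.2]))
```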
Given uncertainties in sensitivities, participating detectors, and duty cycles, we consider 103 future detector configurations, so that electromagnetic observers can tailor preparations towards their preferred models.\nauthor:\n- Ryan Magee\n- Ssohrab Borhanian\nbibliography:\n- 'ms.bib'\ntitle: Realistic" +"---\nabstract: 'We propose a novel framework to analyse the velocity of money in terms of the contribution (MicroVelocity) of each individual agent, and to uncover the distributional determinants of aggregate velocity. Leveraging complete, publicly available transactions data stored in blockchains from four cryptocurrencies, we empirically find that MicroVelocity i) is very heterogeneously distributed and ii) strongly correlates with agents’ wealth. We further document the emergence of high-velocity intermediaries, thereby challenging the idea that these systems are fully decentralised. Further, our framework and results provide policy insights for the development and analysis of digital currencies.'\nauthor:\n- Carlo Campajola\n- 'Marco D’Errico[^1]'\n- 'Claudio J. Tessone'\nbibliography:\n- 'biblio.bib'\ndate: 'February 1, 2022'\ntitle: 'MicroVelocity: rethinking the Velocity of Money for digital currencies'\n---\n\n**Keywords**: Velocity of Money, Cryptocurrency, Blockchain, Heterogeneous agents\n\n**JEL Classification**: C81, D31, E41\n\nIntroduction {#introduction .unnumbered}\n============\n\nOne of the earliest and most widespread concepts across societies is that of money. As humans we are inherently brought to organise ourselves into communities and exchange the goods and services we produce; this leads to the necessity to invent a medium of exchange that can be a store of value - to be able to defer purchases to" +"---\nabstract:\n- 'In this paper, an efficient motion planning approach with grid-based generalized Voronoi diagrams (G$ \\mathbf{^2} $VD) is newly proposed for mobile robots. Different from existing approaches, the novelty of this work is twofold: 1) a new state lattice-based path searching approach is proposed, in which the search space is reduced to a Voronoi corridor to further improve the search efficiency, along with a Voronoi potential field constructed to make the searched path keep a reasonable distance from obstacles to provide sufficient optimization margin for the subsequent path smoothing; 2) an efficient quadratic programming-based path smoothing approach is presented, wherein the clearance to obstacles is considered in the form of the penalty of the deviation from the safe reference path to improve the path clearance of hard-constrained path smoothing approaches. We validate the efficiency and smoothness of our approach in various challenging simulation scenarios and outdoor environments. It is shown that the computational efficiency is improved by 17.1% in the path searching stage, and path smoothing with the proposed approach is 25.3 times faster than an advanced sparse-banded structure-based path smoothing approach.'\n- 'This paper is motivated by the challenges of motion planning problems of mobile robots. An" +"---\nabstract: 'The main purpose of this article was to create a model and simulate the profitability conditions of an interactive presentation system (IPS) with the recommender system (RS) used in the kiosk. 
90 million simulations have been run in Python with SymPy to address the problem of discount recommendation offered to the clients according to their usage of the IPS.'\nauthor:\n- Marcin Lewicki\n- Tomasz Kajdanowicz\n- Piotr Bródka\n- Janusz Sobecki\nbibliography:\n- 'sample.bib'\ntitle: 'Dynamic pricing and discounts by means of interactive presentation systems in stationary point of sales[^1]'\n---\n\nIntroduction\n============\n\nNowadays, with the constantly increasing competition from the Internet stores, convincing a consumer to buy something from a stationary point of sale (PoS) is becoming a lot harder. However, with the help of new technologies: tablet recommendation algorithms, automatic customer profiling, and machine learning, even a stationary PoS can present an offer which is tailored to specific consumer needs. When these PoS form a network, we could also profit from the application of Internet technologies to exchange online specific marketing information. Another advantage of modern technology is the possibility to build models which allow, using the knowledge about a consumer behaviour, to recommend" +"---\nabstract: 'We present the most complete interferometric study to date of the centimeter wavelength methanol masers detected in G358.93$-$0.03 at the burst and post-burst epochs. A unique, NIR/(sub)mm-dark and FIR-loud MYSO accretion burst was recently discovered in G358.93$-$0.03. The event was accompanied by flares of an unprecedented number of rare methanol maser transitions. The first images of three of the newly-discovered methanol masers at 6.18, 12.23, and 20.97 GHz are presented in this work. The spatial structure evolution of the methanol masers at 6.67, 12.18, and 23.12 GHz is studied at two epochs. The maser emission in all detected transitions resides in a region of $\\sim$0.2$^{\\prime\\prime}$ around the bursting source and shows a clear velocity gradient in the north-south direction, with red-shifted features to the north and blue-shifted features to the south. A drastic change in the spatial morphology of the masing region is found: a dense and compact “spiral” cluster detected at epoch I evolved into a disperse, “round” structure at epoch II. During the transition from the first epoch to the second, the region traced by masers expanded. The comparison of our results with the complementary VLA, VLBI, SMA, and ALMA maser data is conducted. The obtained" +"---\nabstract: |\n Motivated by recent developments in the theory of bounded variation functions on nested fractals, this paper studies the exact asymptotics of functionals related to the total variation measure associated with unions of $n$-complexes. The oscillatory behavior observed implies the non-uniqueness of BV measures in this setting.\n\n **Keywords:** BV measures; heat semigroup; geometric functionals, fractals.\\\n **MSC2010 classification:** MSC 26A45; MSC 31E05; MSC 31C25; MSC 28A80.\nauthor:\n- 'Patricia Alonso Ruiz[^1], Fabrice Baudoin[^2]'\nbibliography:\n- 'BVfractals.bib'\ntitle: Oscillations of BV measures on nested fractals\n---\n\nIntroduction\n============\n\nFunctions of bounded variation (BV) and their bounded variation measures are tightly connected to the geometry of the underlying space. Already in the 1920s, Caccioppoli characterized the perimeter measure of Euclidean sets as the BV measure associated with the corresponding indicator functions, cf. [@Cac52]. 
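The IPS abstract above states that its simulations were run in Python with SymPy; a toy profitability condition for offering a discount can be set up symbolically in the same spirit. The model and symbols below (unit margin, baseline purchase probability, uplift) are hypothetical and far simpler than the article's actual model.

```python
import sympy as sp

# Symbols: unit margin m, baseline purchase prob p0, uplift u from the
# discount d (all hypothetical; the article's actual model is richer).
m, d, p0, u = sp.symbols("m d p0 u", positive=True)

profit_no_disc = p0 * m
profit_disc = (p0 + u) * (m - d)

# The discount is profitable when the expected margin with it is larger.
cond = sp.simplify(profit_disc - profit_no_disc > 0)
print(cond)                                             # symbolic condition
print(sp.solve(sp.Eq(profit_disc, profit_no_disc), d))  # break-even discount
```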
He observed that the perimeter of any measurable $E\\subseteq\\mathbb{R}^d$ coincides with the total variation norm of $\\mathbf{1}_E$, i.e. $$\\label{E:def_perim_Eucl}\n\\|D\\mathbf{1}_E\\|(\\mathbb{R}^d):=\\sup\\Big\\{\\int_E{\\rm div}\\phi\\,dx\\colon \\phi\\in C_c(\\mathbb{R}^d),\\|\\phi\\|_\\infty\\leq 1\\Big\\}={\\rm Perimeter}\\,(E).$$ This observation led to a general definition of sets with finite perimeter as being those for which the left hand side of  is finite. Sets of finite perimeter are thus also referred to as Caccioppoli sets.\n\nThe above characterization provided a natural way" +"---\nabstract: 'The interstellar gas in spiral galaxies can constitute a significant fraction of the baryon mass and it has been demonstrated that the sum of stellar and gas components correlates well with the kinematic signature of the total mass content, the widths of [H[i]{} ]{}line profiles. The correlation of baryonic mass with [H[i]{} ]{}line widths is used here to obtain distances for 9984 galaxies extending to $\\sim 0.05c$. The sample is [H[i]{} ]{}flux limited and a correction is required to account for an [H[i]{} ]{}selection bias. The absolute scale is established by 64 galaxies with known distances from studies of Cepheid variables and/or the magnitudes of stars at the tip of the red giant branch. The calibration of the baryonic relationship results in a determination of the Hubble constant of $H_0=75.5\\pm2.5$ . The error estimate is statistical. This material will be combined with contributions from other methodologies in a subsequent article where systematic uncertainties will be investigated.'\nauthor:\n- |\n Ehsan Kourkchi,$^{1}$[^1] R. Brent Tully,$^{1}$[^2] Hélène M. Courtois, Alexandra Dupuy,$^{2}$ Daniel Guinet$^{2}$\\\n $^{1}$Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu, HI 96822, USA\\\n $^{2}$University of Lyon, UCB Lyon 1, IUF, CNRS/IN2P3, IP2I Lyon, UMR5822, F-69622 Villeurbanne. France" +"---\nabstract: 'We present a novel model that may provide an interpretation for a class of non-repeating FRBs — short ($<1~\\rm{s}$), bright ($0.1 - 1000~\\rm{Jy}$) bursts of MHz-GHz frequency radio waves. The model has three ingredients — compact object, a progenitor with effective magnetic field strength around $10^{10}~{\\rm Gauss}$, and high frequency (MHz-GHz) gravitational waves (GWs). At resonance, the energy conversion from GWs to electromagnetic waves occurs when GWs pass through the magnetosphere of such compact objects due to the Gertsenshtein-Zel’dovich effect. This conversion produces bursts of electromagnetic waves in the MHz-GHz range, leading to FRBs. Our model has three key features: (i) it predicts the peak flux, (ii) it naturally explains the pulse width, and (iii) it accounts for the coherent nature of FRBs. We thus conclude that the neutron star/magnetar could be the progenitor of FRBs. Further, our model offers a novel perspective on the indirect detection of GWs at high frequencies beyond detection capabilities. Thus, transient events like FRBs are a rich source for the current era of multi-messenger astronomy.'\nauthor:\n- |\n Ashu Kushwaha [[](https://orcid.org/0000-0001-9910-5010)]{}$^{1}$ [^1], Sunil Malik [[](https://orcid.org/0000-0003-4147-626X)]{}$^{1,2,3}$ [^2], S. 
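As a concrete instance of the perimeter definition just quoted (a standard fact, included for orientation): for a ball $E = B_r(0) \subset \mathbb{R}^d$, testing with vector fields that approximate the outward unit normal gives
$$\|D\mathbf{1}_{B_r}\|(\mathbb{R}^d) = \mathcal{H}^{d-1}(\partial B_r) = d\,\omega_d\, r^{d-1},$$
where $\omega_d$ denotes the volume of the unit ball; for $d=2$ this recovers the circumference $2\pi r$.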
Shankaranarayanan [[](https://orcid.org/0000-0003-2560-8066)]{}$^{1}$ [^3]\\\n $^{1}$Department of Physics, Indian Institute of Technology Bombay, Mumbai 400076, India\\\n $^{2}$Institute fur Physik und Astronomie Universitat Potsdam, Golm Haus" +"---\nabstract: |\n We investigate the gravitational waves (GWs) at low frequencies produced by neutrinos that are emitted anisotropically from the proto-neutron star (PNS) during its cooling phase that lasts for about a minute. We are particularly interested in the deci-Hz range, to which some satellite-borne detectors are expected to have good sensitivities. We first give a formulation based on the spherical-harmonic expansion of the neutrino luminosity to obtain the gravitational waveform as well as the characteristic strain. In the absence of multi-dimensional simulations of PNS cooling, from which we can extract reliable data on the neutrino luminosities as a function of solid angle, we construct them by hand. In the first model, the time evolution is approximated by piece-wise exponential functions (PEFs); in the second model we employ the time profile obtained in a 1D cooling simulation for all harmonic components for simplicity; In both cases, we consider not only axisymmetric components but also non-axisymmetric ones; as the third model, we consider axisymmetric neutrino emissions, the axis of which is misaligned with the rotation axis and, as a result, rotates with the PNS. We find from the first model that the decay times in PEF at late phases can" +"---\nauthor:\n- 'R.\u00a0Franz [^1]'\n- 'G.\u00a0Picogna'\n- 'B.\u00a0Ercolano'\n- 'S.\u00a0Casassus'\n- 'T.\u00a0Birnstiel'\n- 'Ch.\u00a0Rab'\n- 'S.\u00a0P[\u00e9]{}rez'\nbibliography:\n- 'Literature.bib'\ndate: 'Received 30 Nov 2021 / Accepted 25 Jan 2022'\ntitle: |\n Dust entrainment in photoevaporative winds:\\\n Synthetic observations of transition disks\n---\n\n[X-ray- and extreme-ultraviolet- (XEUV-) driven photoevaporative winds acting on protoplanetary disks around young T-Tauri stars may strongly impact disk evolution, affecting both gas and dust distributions. Small dust grains in the disk are entrained in the outflow and may produce a detectable signal. In this work, we investigate the possibility of detecting dusty outflows from transition disks with an inner cavity.]{} [We compute dust densities for the wind regions of XEUV-irradiated transition disks and determine whether they can be observed at wavelengths $0.7 \\lesssim \\lambda_\\mathrm{obs} \\, [\\mu\\mathrm{m}] \\lesssim 1.8$ with current instrumentation.]{} [We simulated dust trajectories on top of 2D hydrodynamical gas models of two transition disks with inner holes of 20 and 30AU, irradiated by both X-ray and EUV spectra from a central T-Tauri star. The trajectories and two different settling prescriptions for the dust distribution in the underlying disk were used to calculate wind density maps for individual" +"---\nabstract: 'We prove a classification of additive polynomial superfunctors, which allows us to compute some extensions of a superfunctor of the form $F \\circ A$ where $F$ is a classical polynomial functor and $A$ is additive. We get a formula which relates these extensions to the classical ones of $F$. 
A possible generalisation is conjectured at the end.'\nauthor:\n- Iacopo Giordano\nbibliography:\n- 'bibliographie.bib'\ntitle: Additive polynomial superfunctors and cohomology\n---\n\n[^1]\n\nIntroduction\n============\n\nThe category ${\\mathcal{P}}$ of strict polynomial functors was introduced by Friedlander and Suslin in [@FS], where they use it to prove the cohomological finite-generation of a finite group scheme. These functors are a powerful tool to perform explicit ${\\mathrm{Ext}}$-computations for polynomial $GL_n$-representations and for modules over classical Schur algebras. Further ${\\mathrm{Ext}}$-computations in ${\\mathcal{P}}$ were performed later by a number of authors, in particular to compute generic cohomology, see e.g. [@FFSS; @TouzeENS; @TouzeUnivSS; @Chalupnik] and [@TouzeSurvey] for a survey.\n\nIn [@Axtell], Axtell introduced a ${\\mathbb{Z}/2\\mathbb{Z}}$-graded version of strict polynomial functors, adapted to the context of super representation theory. They are called *strict polynomial superfunctors* and they have already been successfully used by Drupieski to prove the cohomological finite-generation for finite supergroup schemes [@Drupieski]. However, the" +"---\nabstract: 'We propose a novel uplink communication method, coined *random orthogonalization*, for federated learning (FL) in a massive multiple-input and multiple-output (MIMO) wireless system. The key novelty of random orthogonalization comes from the tight coupling of FL model aggregation and two unique characteristics of massive MIMO \u2013 channel hardening and favorable propagation. As a result, random orthogonalization can achieve natural over-the-air model aggregation without requiring transmitter side channel state information, while significantly reducing the channel estimation overhead at the receiver. Theoretical analyses with respect to both communication and machine learning performances are carried out. In particular, an explicit relationship among the convergence rate, the number of clients and the number of antennas is established. Experimental results validate the effectiveness and efficiency of random orthogonalization for FL in massive MIMO.'\nauthor:\n- \nbibliography:\n- 'wireless.bib'\n- 'Shen.bib'\n- 'FedLearn.bib'\n- 'ref.bib'\ntitle: Random Orthogonalization for Federated Learning in Massive MIMO Systems\n---\n\nFederated Learning; Convergence Analysis; Massive MIMO.\n\nIntroduction\n============\n\nCommunication overhead is widely considered one of the primary bottlenecks for federated learning (FL) [@mcmahan2017fl; @konecny2016fl], as a FL task consists of multiple learning rounds, each of which requires uplink and downlink model exchange between clients and the server. Compared" +"---\nabstract: 'We present a statistical analysis on the variability of the incompressible energy cascade rate in the solar wind around Mars, making use of an exact relation for fully developed turbulence and more than five year of Mars Atmosphere and Volatile EvolutioN (MAVEN) observations. Using magnetic field and plasma data, we compute the energy cascade rate in the magnetohydrodynamics (MHD) scales in the pristine solar wind. From our statistical results we conclude that the incompressible energy cascade rate decreases as the Martian heliocentric distance increases, for each of the three explored Martian years. 
Moreover, we show that the presence of proton cyclotron waves, associated with the extended Martian hydrogen exosphere, does not have a significant effect on the nonlinear cascade of energy at the MHD scales.'\nauthor:\n- Norberto Romanelli\n- Nahuel Andrés\n- 'Gina A. DiBraccio'\ntitle: Variability of the Incompressible Energy Cascade Rate in Solar Wind Turbulence Around Mars\n---\n\nIntroduction\n============\n\nTurbulence is a unique multi-scale physical process present across the Universe, from the current of a river to the intergalactic medium [@Po2018; @Al2018]. For fully developed turbulence, the plasma flow contains kinetic and magnetic fluctuations populating a wide range of spatial and temporal scales. In" +"---\nabstract: 'In this paper we provide a rigorous convergence analysis for the renowned particle swarm optimization method by using tools from stochastic calculus and the analysis of partial differential equations. Based on a time-continuous formulation of the particle dynamics as a system of stochastic differential equations, we establish convergence to a global minimizer of a possibly nonconvex and nonsmooth objective function in two steps. First, we prove consensus formation of an associated mean-field dynamics by analyzing the time-evolution of the variance of the particle distribution. We then show that this consensus is close to a global minimizer by employing the asymptotic Laplace principle and a tractability condition on the energy landscape of the objective function. These results allow for the use of memory mechanisms, and hold for a rich class of objectives provided certain conditions of well-preparation of the hyperparameters and the initial datum. In a second step, at least for the case without memory effects, we provide a quantitative result about the mean-field approximation of particle swarm optimization, which specifies the convergence of the interacting particle system to the associated mean-field limit. Combining these two results allows for global convergence guarantees of the numerical particle swarm optimization method" +"---\nabstract: 'By decomposing velocity dispersion into non-spin and spin-induced, mean flow and dispersion are analytically solved for axisymmetric rotating and growing halos. The polar flow can be neglected and azimuthal flow is directly related to dispersion. The fictitious (“Reynolds”) stress acts on mean flow to enable energy transfer from mean flow to random motion and maximize system entropy. For large halos (high peak height $\\nu$ at early stage of halo life) with constant concentration, there exists a self-similar radial flow (outward in core and inward in outer region). Halo mass, size and specific angular momentum increase linearly with time via fast mass accretion. Halo core spins faster than outer region. Large halos rotate with an angular velocity proportional to Hubble parameter and spin-induced dispersion is dominant. All specific energies (radial/rotational/kinetic/potential) are time-invariant. Both halo spin ($\\sim$0.031) and anisotropic parameters can be analytically derived. For “small” halos with stable core and slow mass accretion (low peak height $\\nu$ at late stage of halo life), radial flow vanishes. Small halos rotate with constant angular velocity and non-spin axial dispersion is dominant. Small halos are more spherical in shape, incompressible, and isotropic. 
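For orientation, the kind of exact relation invoked in the cascade-rate abstract above is, in its classical incompressible isotropic form (the Politano and Pouquet 1998 law),
$$-\frac{4}{3}\,\varepsilon^{\pm}\,\ell \;=\; \left\langle \delta z_{\ell}^{\mp}\,\big|\delta\mathbf{z}^{\pm}\big|^{2}\right\rangle,$$
where $\delta z_{\ell}^{\pm}$ are longitudinal increments of the Elsasser fields $\mathbf{z}^{\pm}=\mathbf{v}\pm\mathbf{b}$ at scale $\ell$, and $\varepsilon=(\varepsilon^{+}+\varepsilon^{-})/2$ is the incompressible energy cascade rate; the specific formulation used in the study may differ.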
Radial and azimuthal dispersion are comparable and greater than polar" +"---\nabstract: 'The formation of deuterons in heavy-ion collisions at relativistic energies is investigated by employing two recently advanced models – the Minimum Spanning Tree (MST) method and the coalescence model by embedding them in the PHQMD and the UrQMD transport approaches. While the coalescence mechanism combines nucleons into deuterons at the kinetic freeze-out hypersurface, the MST identifies the clusters during the different stages of time evolution. We find that both clustering procedures give very similar results for the deuteron observables in the UrQMD as well as in the PHQMD environment. Moreover, the results agree well with the experimental data on deuteron production in Pb+Pb collisions at $\\sqrt{s_{NN}} = 8.8$ GeV (selected for the comparison of the methods and models in this study). A detailed investigation shows that the coordinate space distribution of the produced deuterons differs from that of the free nucleons and other hadrons. Thus, deuterons are not destroyed by additional rescattering.'\nauthor:\n- 'Viktar Kireyeu$^{1,2}$'\n- 'Jan Steinheimer$^{3}$'\n- 'Jörg Aichelin$^{3,4}$'\n- 'Marcus Bleicher$^{2,5,6,7}$'\n- 'Elena Bratkovskaya$^{2,5,6}$'\nbibliography:\n- 'main.bib'\ntitle: |\n Deuteron Production in Ultra-Relativistic Heavy-Ion Collisions:\\\n A Comparison of the Coalescence and the Minimum Spanning Tree Procedure\n---\n\nIntroduction\n============\n\nThe observation of light baryonic" +"---\nabstract: 'The quality of speech coded by transform coding is affected by various artefacts, especially when bitrates to quantize the frequency components become too low. In order to mitigate these coding artefacts and enhance the quality of coded speech, a post-processor that relies on a-priori information transmitted from the encoder is traditionally employed at the decoder side. In recent years, several data-driven post-processors have been proposed which were shown to outperform traditional approaches. In this paper, we propose PostGAN, a GAN-based neural post-processor that operates in the sub-band domain and relies on the U-Net architecture and a learned affine transform. It has been tested on the recently standardized low-complexity, low-delay Bluetooth codec (LC3) for wideband speech at the lowest bitrate (16 ). Subjective evaluations and objective scores show that the newly introduced post-processor surpasses previously published methods and can improve the quality of coded speech by around 20 MUSHRA points.'\naddress: |\n Fraunhofer IIS, Erlangen, Germany\\\n srikanth.korse@iis.fraunhofer.de\\\nbibliography:\n- 'refs19.bib'\ntitle: 'PostGAN: A GAN-Based Post-Processor to Enhance the Quality of Coded Speech'\n---\n\n**Index Terms**: Deep Neural Network (DNN), Speech Coding, Coded Speech Enhancement, Post-Filter, Post-Processor, Generative Adversarial Networks (GAN)\n\nIntroduction {#sec:Introduction}\n============\n\n#### {#section .unnumbered}\n\nThe recently standardized low-complexity,
Our work reveals how 2D spectroscopy can identify topological phases in bulk properties, bypassing energy-specific differences caused by topologically protected or trivial boundary modes that are otherwise hard to distinguish.'\nauthor:\n- Felix Gerken\n- Thore Posske\n- Shaul Mukamel\n- Michael Thorwart\ntitle: 'Unique Signatures of Topological Phases in Two-Dimensional THz Spectroscopy'\n---\n\nTopological phases of matter have attracted considerable attention following the discovery of topologically non-trivial magnetic and electronic phenomena like the Berezinskii-Kosterlitz-Thouless transition [@Berezinskii1970; @Berezinskii1972; @Kosterlitz1972; @Kosterlitz1973] and the integer and fractional quantum Hall effect [@vonKlitzing1980; @TsuiStoermer1982]. Some topological systems, such as superconducting quantum wires [@Kitaev2001], spin liquids [@Kitaev2006] and vortices on surfaces of topological superconductors [@FuKane2008] are predicted to host anyons such as spatially isolated Majorana zero-energy boundary modes that are of interest to quantum information processing [@Nayak2008; @Alicea2011]. Despite experimental evidence of zero-energy modes [@Kouwenhoven2012], their topological origin remains inconclusive [@Kouwenhoven2021]." +"---\nabstract: 'A recent study has shown that it is possible to have enhancement, in contrast to an expected suppression, in tunneling density of states (TDOS) in a Luttinger liquid (LL) which is solely driven by the non-local density-density interactions. Also, it is well known that a LL in proximity to a superconductor (SC) shows enhancement in TDOS in the vicinity of the junction in the zero energy limit. In this paper, we study the interplay of nonlocal density-density interaction and superconducting correlations in the TDOS in the vicinity of the SC-LL junction, where the LL may be realized on the edge of an integer or a fractional quantum Hall state. We show that the interplay of superconducting proximity effect and non-local interactions can give rise to enhancement in TDOS in the weak interaction limit, beyond what was previously observed. We also show that, in the full parameter regime comprising both, the local and the non-local interaction, the region of enhanced TDOS for LL junction with “superconducting” boundary condition and that of “non-superconducting charge conserving” boundary condition (discussed in Phys. Rev. B 104, 045402 (2021)) are mutually exclusive. We show that this fact can be understood in terms of a symmetry relation" +"---\nabstract: 'In this article, we study stochastic homogenization of non-homogeneous Gaussian free fields $\\Xi^{g,{\\bf a}} $ and bi-Laplacian fields $\\Xi^{b,{\\bf a}}$. They can be characterized as follows: for $f=\\delta$, the solution $u$ of $\\nabla \\cdot \\mathbf{a} \\nabla u =f$, where ${\\bf a}$ is a uniformly elliptic random environment, is the covariance of $\\Xi^{g,{\\bf a}}$. When $f$ is the white noise, the field $\\Xi^{b,{\\bf a}}$ can be viewed as the distributional solution of the same elliptic equation. Our results characterize the scaling limit of such fields both on a sufficiently regular domain $D\\subset \\mathbb{R}^d$ and on the discrete torus. Based on stochastic homogenization techniques applied to the eigenfunction basis of the Laplace operator $\\Delta$, we will show that such families of fields converge to an appropriate multiple of the GFF resp. bi-Laplacian. 
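The ring geometry mentioned in the 2D-spectroscopy abstract above realizes the Kitaev chain with periodic boundary conditions, whose bulk Bogoliubov spectrum is elementary to evaluate. The sketch below uses the standard textbook dispersion (not code from the paper) to check that the gap closes at the topological transition $|\mu| = 2t$.

```python
import numpy as np

def kitaev_bulk_gap(mu, t=1.0, delta=1.0, nk=2001):
    # Bulk BdG dispersion of the Kitaev chain with periodic boundary
    # conditions: E(k) = sqrt((2 t cos k + mu)^2 + (2 delta sin k)^2).
    k = np.linspace(-np.pi, np.pi, nk)
    E = np.sqrt((2 * t * np.cos(k) + mu) ** 2 + (2 * delta * np.sin(k)) ** 2)
    return E.min()

# Gap closes at the topological transition |mu| = 2t, finite elsewhere.
for mu in (0.0, 1.9, 2.0, 2.1):
    print(mu, kitaev_bulk_gap(mu))
```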
The limiting fields are determined by their respective homogenized operator $\\operatorname{\\bar{\\mathbf{a}}}\\Delta$, with constant $\\operatorname{\\bar{\\mathbf{a}}}$ depending on the law of the environment ${\\bf a}$. The proofs are based on the results found in [@Armstrong2019] and [@gloria2014optimal].'\naddress:\n- Mathematical Institute\n- 'Budapestlaan 6, 3584 CD Utrecht, The Netherlands'\nauthor:\n- Leandro Chiarini\n- 'Wioletta M. Ruszel'\nbibliography:\n- 'library.bib'\ntitle: Stochastic homogenization of Gaussian fields on random media\n---\n\nIntroduction" +"---\nabstract: |\n In this paper we propose a new methodology for testing the parametric forms of the mean and variance functions based on weighted residual empirical processes and their martingale transformations in regression models. The dimensions of the parameter vectors can be divergent as the sample size goes to infinity. We then study the convergence of weighted residual empirical processes and their martingale transformation under the null and alternative hypotheses in the diverging dimension setting. The proposed tests based on weighted residual empirical processes can detect local alternatives distinct from the null at the fastest possible rate of order $n^{-1/2}$ but are not asymptotically distribution-free. The tests based on martingale transformed weighted residual empirical processes can be asymptotically distribution-free, yet, unexpectedly, they can only detect the local alternatives converging to the null at a much slower rate of order $n^{-1/4}$, which is somewhat different from existing asymptotically distribution-free tests based on martingale transformations. As the tests based on the residual empirical process are not distribution-free, we propose a smooth residual bootstrap and verify the validity of its approximation in diverging dimension settings. Simulation studies and a real data example are conducted to illustrate the effectiveness of our tests.\n\n [**Key" +"---\nabstract: |\n We present several results in extremal graph and hypergraph theory of topological nature. First, we show that if $\\alpha>0$ and $\\ell=\\Omega(\\frac{1}{\\alpha}\\log\\frac{1}{\\alpha})$ is an odd integer, then every graph $G$ with $n$ vertices and at least $n^{1+\\alpha}$ edges contains an $\\ell$-subdivision of the complete graph $K_t$, where $t=n^{\\Theta(\\alpha)}$. Also, this remains true if in addition the edges of $G$ are properly colored, and one wants to find a rainbow copy of such a subdivision. In the sparser regime, we show that properly edge colored graphs on $n$ vertices with average degree $(\\log n)^{2+o(1)}$ contain rainbow cycles, while average degree $(\\log n)^{6+o(1)}$ guarantees rainbow subdivisions of $K_t$ for any fixed $t$, thus improving recent results of Janzer and Jiang et al., respectively. Furthermore, we consider certain topological notions of cycles in pure simplicial complexes (uniform hypergraphs). We show that if $G$ is a $2$-dimensional pure simplicial complex ($3$-graph) with $n$ $1$-dimensional and at least $n^{1+\\alpha}$ 2-dimensional faces, then $G$ contains a triangulation of the cylinder and the Möbius strip with $O(\\frac{1}{\\alpha}\\log\\frac{1}{\\alpha})$ vertices. 
We present generalizations of this for higher dimensional pure simplicial complexes as well.\n\n In order to prove these results, we consider certain (properly edge colored) graphs and" +"---\nbibliography:\n- 'main.bib'\n---\n\n$\phantom{.}$\n\n[LTH 1294, MPP-2022-8\\\n]{}\n\n[ ]{}\n\n[Wednesday, 24 November 2021 – Friday, 26 November 2021]{}\n\n[*Editors*]{}\\\nAndrzej Kup[ś]{}[ć]{} (Uppsala), Graziano Venanzoni (Pisa)\n\n![image](logo-3.png){width="5cm"}\n\nABSTRACT\n\nThe mini-proceedings of the STRONG2020 Virtual Workshop “Space-like and Time-like determination of the Hadronic Leading Order contribution to the Muon $g-2$”, November 24–26 2021, are presented. This is the first workshop of the STRONG2020 WP21: JRA3-PrecisionSM: Precision Tests of the Standard Model (). The workshop was devoted to a review of the working group activity on: $(\it i)$ Radiative Corrections and Monte Carlo tools for low-energy hadronic cross sections in $e^+ e^-$ collisions; ($\it ii$) Annotated database for $e^+e^-$ into hadrons processes at low energy; ([*iii*]{}) Radiative Corrections and Monte Carlo tools for $\mu$–$e$ elastic scattering.\n\nThe web page of the conference:\n\n\n\ncontains the presentations.\n\n[$\phantom{=}$]{}\n\nIntroduction\n============\n\nA. Kup[ś]{}[ć]{}$^1$ and G. Venanzoni$^2$\n\n$^1$Department of Physics and Astronomy Uppsala University, Sweden\\\n$^2$INFN, Sezione di Pisa, Pisa, Italy\\\n\nThe importance of continuous and close collaboration between the experimental and theoretical groups is crucial in the quest for precision in hadronic physics. This is the reason why the Working Group on “Radiative Corrections and Monte Carlo Generators for Low Energies” (Radio MonteCarLow," +"---\nabstract: 'The Coon amplitude is a deformation of the Veneziano amplitude with logarithmic Regge trajectories and an accumulation point in the spectrum, which interpolates between string theory and field theory. Alongside string theory, it is the only other explicitly known solution to the duality constraints and it constitutes an important data point in the modern S-matrix bootstrap. Yet, its basic properties are essentially unknown. In this paper we fill this gap and derive the conditions of positivity and the low energy expansion of the amplitude. On the positivity side, we discover that the amplitude switches from a regime where it is positive in all dimensions to a regime with critical dimensions, that connects to the known $d=26,10$ when the deformation is removed. En passant, we find that the Veneziano amplitude can be extended to massive scalars of masses up to $m^2=1/3$, where it has critical dimension $6.3$. On the low-energy side, we compute the first few couplings of the theory in terms of $q$-deformed analogues of the standard Riemann zeta values of the string expansion. We determine their location in the EFT-hedron, and find agreement with a recent conjecture that theories with accumulation points populate this space. We also discuss" +"---\nabstract: 'How to fairly apportion congressional seats to states has been debated for centuries. We present an alternative perspective on apportionment, centered not on states but on “families” of states, sets of states with “divisor-method’’ quotas with the same integer part. We develop “impartial\" and “unbiased\" apportionment methods. 
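For readers unfamiliar with divisor-method quotas, the following is a minimal sketch (ours, not the authors') of a classical highest-averages divisor method, Webster's, whose divisors are proportional to $2k+1$; the function name and example figures are illustrative only.

```python
import heapq

def webster_apportionment(populations, house_size):
    """Highest-averages (Webster/Sainte-Lague) divisor method.

    A state currently holding k seats competes for the next seat with
    priority population / (2k + 1); each seat goes to the top priority.
    """
    seats = [0] * len(populations)
    heap = [(-p, i) for i, p in enumerate(populations)]  # max-heap via negation
    heapq.heapify(heap)
    for _ in range(house_size):
        _, i = heapq.heappop(heap)
        seats[i] += 1
        heapq.heappush(heap, (-populations[i] / (2 * seats[i] + 1), i))
    return seats

# Three hypothetical states, ten seats:
print(webster_apportionment([53_000, 24_000, 23_000], 10))  # -> [6, 2, 2]
```

Other divisor methods (Jefferson, Huntington-Hill, and so on) differ only in the divisor sequence.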
Impartial methods apportion the same number of seats to families of states containing the same total population, whether a family consists of many small-population states or a few large-population states. Unbiased methods apportion seats so that if states are drawn repeatedly from the same distribution, the expected number of seats apportioned to each family equals the expected divisor-method quota for that family.'\nauthor:\n- Ross Hyman\n- Nicolaus Tideman\ntitle: A New Perspective on Impartial and Unbiased Apportionment\n---\n\nIntroduction\n============\n\nEvery ten years, the U.S. House of Representatives is reapportioned according to the census for that decade. The Constitution specifies that “Representatives shall be apportioned among the several States according to their respective numbers” but prescribes no method to accomplish this. It is not a simple matter of multiplying the total number of seats in the House by each state’s fraction of the population, since the resulting numbers of seats will" +"---\nabstract: 'A black hole (BH) travelling through a uniform, gaseous medium is described by Bondi-Hoyle-Lyttleton (BHL) accretion. If the medium is magnetized, then the black hole can produce relativistic outflows. We performed the first 3D, general-relativistic magnetohydrodynamics simulations of BHL accretion onto rapidly rotating black holes using the code `H-AMR`, where we mainly varied the strength of a background magnetic field that threads the medium. We found that the ensuing accretion continuously drags the magnetic flux to the BH, where it accumulates near the event horizon until it becomes dynamically important. Depending on the strength of the background magnetic field, the BHs can sometimes launch relativistic jets with high enough power to drill out of the inner accretion flow, become bent by the headwind, and escape to large distances. While for stronger background magnetic fields the jets are continuously powered, at weaker field strengths they are intermittent, turning on and off depending on the fluctuating gas and magnetic flux distributions near the event horizon. We find that our jets reach extremely high efficiencies of $\sim100-300\%$, even in the absence of an accretion disk. We also calculated the drag forces exerted by the gas onto the BH, finding that the" +"---\nabstract: 'A famous conjecture of Stanley states that his chromatic symmetric function distinguishes trees. As a quasisymmetric analogue, we conjecture that the chromatic quasisymmetric function of Shareshian and Wachs and of Ellzey distinguishes directed trees. This latter conjecture would be implied by an affirmative answer to a question of Hasebe and Tsujie about the $P$-partition enumerator distinguishing posets whose Hasse diagrams are trees. They proved the case of rooted trees and our results include a generalization of their result.'\naddress:\n- 'LaBRI, CNRS, Université de Bordeaux, 351 cours de la Libération, 33405 Talence, France'\n- 'African Institute for Mathematical Sciences, 6 Melrose Road, Muizenberg 7945, South Africa'\n- 'Department of Mathematics, Bucknell University, Lewisburg, PA 17837, USA'\nauthor:\n- 'Jean-Christophe Aval'\n- Karimatou Djenabou\n- 'Peter R. W. 
McNamara'\nbibliography:\n- 'distinguishing\\_trees.bib'\ntitle: Quasisymmetric functions distinguishing trees\n---\n\nIntroduction\n============\n\nAs an extension of the chromatic polynomial $\chi_G(k)$ of a graph $G = (V,E)$, Stanley [@Sta95] introduced the chromatic symmetric function $X_G({\mathbf{x}})$ defined by $$\label{equ:stanley}\nX_G({\mathbf{x}}) = \sum_\kappa x_1^{\# \kappa^{-1}(1)} x_2^{\# \kappa^{-1}(2)} \cdots$$ where the sum is over all proper colorings $\kappa : V \to \{1,2,\ldots\}$. Observe that setting $x_i=1$ for $1\leq i \leq k$ and $x_i=0$ otherwise yields" +"---\nabstract: 'This paper presents DRE-CUSUM, an unsupervised density-ratio estimation (DRE) based approach to determine statistical changes in time-series data when no knowledge of the pre- and post-change distributions is available. The core idea behind the proposed approach is to split the time-series at an arbitrary point and estimate the ratio of the densities of the distributions (using a parametric model such as a neural network) before and after the split point. The DRE-CUSUM change detection statistic is then derived from the cumulative sum (CUSUM) of the logarithm of the estimated density ratio. We present a theoretical justification as well as accuracy guarantees which show that the proposed statistic can reliably detect statistical changes, irrespective of the split point. While there have been prior works on using density ratio based methods for change detection, to the best of our knowledge, this is the first unsupervised change detection approach with a theoretical justification and accuracy guarantees. The simplicity of the proposed framework makes it readily applicable in various practical settings (including high-dimensional time-series data); we also discuss generalizations for online change detection. We experimentally show the superiority of DRE-CUSUM using both synthetic and real-world datasets over existing state-of-the-art unsupervised algorithms (such as Bayesian online" +"---\nabstract: 'Transformer, benefiting from global (long-range) information modeling using the self-attention mechanism, has recently been successful in natural language processing and computer vision. Convolutional Neural Networks, while capable of capturing local features, have difficulty modeling explicit long-distance dependencies in the global feature space. However, both local and global features are crucial for dense prediction tasks, especially for 3D medical image segmentation. In this paper, we present a further attempt to exploit Transformer in 3D CNN for 3D medical image volumetric segmentation and propose a novel network named TransBTSV2 based on the encoder-decoder structure. Different from TransBTS [@wang2021transbts], the proposed TransBTSV2 is not limited to brain tumor segmentation (BTS) but focuses on general medical image segmentation, providing a stronger and more efficient 3D baseline for volumetric segmentation of medical images. As a hybrid CNN-Transformer architecture, TransBTSV2 can achieve accurate segmentation of medical images without any pre-training, possessing the strong inductive bias of CNNs and the powerful global context modeling ability of Transformer. With the proposed insight of redesigning the internal structure of the Transformer block and the introduced Deformable Bottleneck Module to capture shape-aware local details, a highly efficient architecture is achieved with superior performance. 
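For intuition on the DRE-CUSUM statistic described above: once log density ratios $\log\big(\hat{p}_{\text{post}}(x_t)/\hat{p}_{\text{pre}}(x_t)\big)$ have been estimated (with a parametric model such as a neural network in the paper), the detector itself is the classical CUSUM recursion. A minimal sketch, with the density-ratio estimator abstracted away as an input:

```python
def cusum_alarm(log_ratios, threshold):
    """Classical CUSUM over (estimated) log density ratios.

    Before a change the summands have negative drift, so the statistic
    hovers near zero; after a change the positive drift accumulates
    until the threshold is crossed.
    """
    s = 0.0
    for t, r in enumerate(log_ratios):
        s = max(0.0, s + r)
        if s > threshold:
            return t  # first alarm time
    return None  # no change declared
```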
Extensive experimental results on four medical image datasets (BraTS" +"---\nabstract: 'Using large pre-trained models for image recognition tasks is becoming increasingly common owing to the well acknowledged success of recent models like vision transformers and other CNN-based models like VGG and Resnet. The high accuracy of these models on benchmark tasks has translated into their practical use across many domains including safety-critical applications like autonomous driving and medical diagnostics. Despite their widespread use, image models have been shown to be fragile to changes in the operating environment, bringing their robustness into question. There is an urgent need for methods that systematically characterise and quantify the capabilities of these models to help designers understand and provide guarantees about their safety and robustness. In this paper, we propose Vision Checklist, a framework aimed at interrogating the capabilities of a model in order to produce a report that can be used by a system designer for robustness evaluations. This framework proposes a set of perturbation operations that can be applied on the underlying data to generate test samples of different types. The perturbations reflect potential changes in operating environments, and interrogate various properties ranging from the strictly quantitative, e.g., robustness to dropped patches, to more qualitative, e.g., robustness to texture and" +"---\nabstract: 'In recent years, increasing deployment of face recognition technology in security-critical settings, such as border control or law enforcement, has led to considerable interest in the vulnerability of face recognition systems to attacks utilising legitimate documents, which are issued on the basis of digitally manipulated face images. As automated manipulation and attack detection remains a challenging task, conventional processes with human inspectors performing identity verification remain indispensable. These circumstances merit a closer investigation of human capabilities in detecting manipulated face images, as previous work in this field is sparse and often concentrated only on specific scenarios and biometric characteristics. This work introduces a web-based, remote visual discrimination experiment on the basis of principles adopted from the field of psychophysics and subsequently discusses interdisciplinary opportunities with the aim of examining human proficiency in detecting different types of digitally manipulated face images, specifically face swapping, morphing, and retouching. In addition to analysing appropriate performance measures, a possible metric of detectability is explored. Experimental data of 306 probands indicate that detection performance is widely distributed across the population and detection of certain types of face image manipulations is much more challenging than others.'\nauthor:\n- \nbibliography:\n- 'bibliography.bib'\ntitle: Psychophysical Evaluation" +"[THE GINI INDEX IN ALGEBRAIC COMBINATORICS AND REPRESENTATION THEORY]{}\\\nby\\\nGrant Kopitzke\\\nA Dissertation Submitted in\\\nPartial Fulfillment of the\\\nRequirements for the Degree of\\\nDoctor of Philosophy\\\nin Mathematics\\\nat\\\nThe University of Wisconsin-Milwaukee\\\nMay 2021\\\n\n\\\nTHE GINI INDEX IN ALGEBRAIC COMBINATORICS AND REPRESENTATION THEORY\\\nby\\\nGrant Kopitzke\\\nThe University of Wisconsin-Milwaukee, 2021\\\nUnder the Supervision of Dr. 
Jeb Willenbring\\\n\nThe Gini index is a number that attempts to measure how equitably a resource is distributed throughout a population, and is commonly used in economics as a measurement of inequality of wealth or income. The Gini index is often defined as the area between the “Lorenz curve” of a distribution and the line of equality, normalized to be between zero and one. In this fashion, we will define a Gini index on the set of integer partitions and prove some combinatorial results related to it; culminating in the proof of an identity for the expected value of the Gini index. These results comprise the principal contributions of the author, some of which have been published in [@Kopitzke].\\\nWe will then discuss symmetric polynomials, and show that the Gini index can be understood as the degrees of certain" +"---\nauthor:\n- 'B. Cseh[^1], B. Világos, M. P. Roriz, C. B. Pereira, V. D’Orazi, A. I. Karakas, B. Soós, N. A. Drake, S. Junqueira,'\n- 'M. Lugaro'\nbibliography:\n- 'bibliography.bib'\ndate: 'Received ; accepted '\ntitle: 'Barium stars as tracers of $s$-process nucleosynthesis in AGB stars I. 28 stars with independently derived AGB mass\\'\n---\n\n[Barium (Ba) stars are polluted by material enriched in the $slow$ neutron capture ($s$-process) elements synthesised in the interior of their former asymptotic giant branch (AGB) companion star, which is now a white dwarf.]{} [We compare individual Ba star abundance patterns to AGB nucleosynthesis model predictions to verify if the AGB model mass is compatible with the independently derived AGB mass previously estimated using binary parameters and Gaia parallax data.]{} [We selected a sample of 28 Ba stars for which both self-consistent spectroscopic observations and analysis and stellar mass determinations are available, the latter obtained via positioning the star on the HR diagram and comparing with evolutionary tracks. For this sample of stars we considered both previously (Y, Zr, Ce, and Nd) and recently derived (Rb, Sr, Nb, Mo, Ru, La, Sm, and Eu) elemental abundances. Then, we performed a detailed comparison of these $s$-process elemental abundances to different" +"---\naddress: |\n $^{1}$ Faculty of Modern Languages and Literature, Adam Mickiewicz University in Poznań, Poland; lipowska@amu.edu.pl\\\n $^{2}$ Faculty of Physics, Adam Mickiewicz University in Poznań, Poland; lipowski@amu.edu.pl \n---\n\nIntroduction\n============\n\nThe evolution and structure of language are often analysed using computational modelling [@cangelosi_2002; @nolfi_2010; @ulizia_2020]. A particularly appealing research paradigm is inspired by the idea that language might have spontaneously appeared in a population of communicating individuals, possibly with some adaptive features [@pinker_1990]. This standpoint prompted numerous analyses of multi-agent models, which mimic such communication and try to infer the properties of the emerging language and its possible further evolution [@steels_2012experiments; @gong2014modelling; @kirby2014iterated].\n\nIn certain models of this kind, language emergence and evolution are studied using the signaling game [@lewis2002convention], where communicating agents must decide which signal (i.e., a word) to send or how to interpret the signal they have received. To cope with this, agents very often use some form of reinforcement learning [@skyrms2010signals; @lenaerts2005evolutionary; @barrett2006numerical; @franke2016evolution; @muhlenbernd2012simulating; @liplipplos; @vaneecke2020reconceptualising]. 
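As an illustration of the reinforcement learning just mentioned (a generic Roth-Erev-style urn scheme, not the specific rule of any one cited work; all names are ours):

```python
import random

def lewis_signaling_game(n=2, rounds=20000, seed=0):
    """Roth-Erev reinforcement in an n-state/n-signal Lewis signaling game.

    Sender and receiver sample from propensity 'urns'; a successful round
    reinforces the choices made, so a form-meaning mapping can emerge.
    """
    rng = random.Random(seed)
    sender = [[1.0] * n for _ in range(n)]    # state -> signal propensities
    receiver = [[1.0] * n for _ in range(n)]  # signal -> action propensities
    for _ in range(rounds):
        state = rng.randrange(n)
        signal = rng.choices(range(n), weights=sender[state])[0]
        action = rng.choices(range(n), weights=receiver[signal])[0]
        if action == state:  # communication succeeded: reinforce both urns
            sender[state][signal] += 1.0
            receiver[signal][action] += 1.0
    return sender, receiver
```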
Language that emerges in such models may provide a unique form-meaning mapping (in signaling game terminology, it is a signaling system), but there are also other possibilities. In some cases, synonyms or homonyms can emerge, destroying thus" +"---\nabstract: 'We estimate the parameters of the donor of the accreting black-hole binary MAXI J1820+070. The measured values of the binary period, rotational and radial velocities and constraints on the orbital inclination imply the donor is a subgiant with a mass of $M_2\approx 0.49^{+0.10}_{-0.10}{{\rm M}_{\sun}}$ and a radius of $R_2\approx 1.19^{+0.08}_{-0.08}{{\rm R}_{\sun}}$. We re-analyze the previously obtained optical spectrum from the Gran Telescopio Canarias, and find that it yields a strict lower limit on the effective temperature of $T>4200$ K. We compile optical and infrared fluxes observed during the quiescence of this system. From the minimum $r$- and $i$-band fluxes found in Pan-STARRS1 Data Release 2 pre-discovery imaging and for a distance of $D\approx3$kpc, reddening of $E(B$–$V)=0.23$ and $R_2\approx{1.11R_\odot}$, we find $T\lesssim4230$K, very close to the above lower limit. For a larger distance, the temperature can be higher, up to about 4500K (corresponding to a K5 spectral type, preferred by previous studies) at $D=3.5$kpc, allowed by the Gaia parallax. We perform evolutionary calculations for the binary system and compare them to the observational constraints. Our model fitting the above temperature and radius constraints at $D\approx 3$kpc has a mass of $0.4M_\odot$, $T\approx4200$ K and solar metallicity. Two alternative models require" +"---\nauthor:\n- 'Federico Lelli\\\n \\\nbibliography:\n- 'GasDynamics.bib'\ntitle: '**Gas dynamics in dwarf galaxies as testbeds for dark matter and galaxy evolution**'\n---\n\n**Dwarf galaxies are ideal laboratories to test dark matter models and alternative theories because their dynamical mass (from observed kinematics) largely outweighs their baryonic mass (from gas and stars). In most star-forming dwarfs, cold atomic gas forms regularly rotating disks extending beyond the stellar component, thus probing the gravitational potential out to the outermost regions. Here I review several aspects of gas dynamics in dwarf galaxies, such as rotation curves, mass models, and noncircular motions. Star-forming dwarfs extend the dynamical laws of spiral galaxies to lower masses, surface densities, and accelerations. The three main dynamical laws of rotation-supported galaxies point to three distinct acceleration scales, which play different physical roles but display the same value, within uncertainties. The small scatter around these dynamical laws implies a tight coupling between baryons and dark matter in galaxies, which will be better understood with next-generation surveys that will enlarge current sample sizes by orders of magnitude.**\n\nIntroduction\n============\n\nGas dynamics plays a key role in the formation and evolution of galaxies. During the history of our Universe, cosmic" +"---\nabstract: 'Learning mappings between two function spaces has attracted considerable research attention. However, learning the solution operator of partial differential equations (PDEs) remains a challenge in scientific computing. Therefore, in this study, we propose a novel *pseudo-differential integral operator* (PDIO) inspired by a pseudo-differential operator, which is a generalization of a differential operator and characterized by a certain symbol. 
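For reference, the standard definition being generalized here (up to Fourier-transform normalization conventions): a pseudo-differential operator $T_a$ with symbol $a(x,\xi)$ acts on a function $u$ by

$$(T_a u)(x)=\frac{1}{(2\pi)^d}\int_{\mathbb{R}^d} e^{\,ix\cdot\xi}\,a(x,\xi)\,\hat{u}(\xi)\,\mathrm{d}\xi,$$

so a symbol that is polynomial in $\xi$ recovers an ordinary differential operator; for instance, $a(x,\xi)=-|\xi|^{2}$ gives the Laplacian $\Delta$.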
We parameterize the symbol by using a neural network and show that the neural-network-based symbol is contained in a smooth symbol class. Subsequently, we prove that the PDIO is a bounded linear operator, and thus is continuous in the Sobolev space. We combine the PDIO with the neural operator to develop a *pseudo-differential neural operator* (PDNO) to learn the nonlinear solution operator of PDEs. We experimentally validate the effectiveness of the proposed model by using Burgers’ equation, Darcy flow, and the Navier-Stokes equation. The results reveal that the proposed PDNO outperforms the existing neural operator approaches in most experiments.'\nauthor:\n- |\n Jin Young Shin\\\n Department of Mathematics\\\n POSTECH\\\n Pohang, 37673, Republic of Korea\\\n `sjy6006@postech.ac.kr`\\\n Jae Yong Lee\\\n Department of Mathematics\\\n POSTECH\\\n Pohang, 37673, Republic of Korea\\\n `jaeyong@postech.ac.kr`\\\n Hyung Ju Hwang\\\n Department of Mathematics\\\n POSTECH\\\n Pohang, 37673, Republic of Korea\\" +"---\nabstract: 'Segmentation of the left ventricle in cardiac magnetic resonance imaging (MRI) scans enables cardiologists to calculate the volume of the left ventricle and subsequently its ejection fraction. The ejection fraction is a measurement that expresses the percentage of blood leaving the heart with each contraction. Cardiologists often use ejection fraction to determine one’s cardiac function. We propose a multiscale template matching technique for detection and an elliptical active disc for automated segmentation of the left ventricle in MR images. The elliptical active disc optimizes the local energy function with respect to its five free parameters which define the disc. Gradient descent is used to minimize the energy function, with Green’s theorem employed to reduce the computational expense. We report validations on 320 scans containing 5,273 annotated slices which are publicly available through the Multi-Centre, Multi-Vendor, and Multi-Disease Cardiac Segmentation (M&Ms) Challenge. We achieved successful localization of the left ventricle in 89.63% of the cases and a Dice coefficient of 0.873 on diastole slices and 0.770 on systole slices. The proposed technique is based on traditional image processing techniques with performance on par with deep learning techniques.'\nauthor:\n- |\n Garvit Chhabra\\\n Department of Electrical and Electronics Engineering\\" +"---\nbibliography:\n- 'LargeDLatticesReferences.bib'\n---\n\n**Lattice Black Branes at Large $D$**\n\n1.6cm\n\n**David Licht$^{a,b}$, Raimon Luna$^{a,c,d}$ and Ryotaku Suzuki$^{a,e,f}$**\n\n0.5cm\n\n*$^{a}$Departament de Física Quàntica i Astrofísica, Institut de Ciències del Cosmos,*\n\n*Universitat de Barcelona, Martí i Franquès 1, E-08028 Barcelona, Spain*\n\n*$^{b}$Department of Physics, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel*\n\n*$^{c}$CENTRA, Departamento de Física, Instituto Superior Técnico - IST,*\n\n*Universidade de Lisboa - UL, Avenida Rovisco Pais 1, 1049 Lisboa, Portugal*\n\n*$^{d}$Departamento de Astronomía y Astrofísica, Universitat de València,*\n\n*Dr. 
Moliner 50, 46100, Burjassot (Val\u00e8ncia), Spain*\n\n*$^{e}$Department of Physics, Osaka City University*\n\n*Sugimoto 3-3-138, Osaka 558-8585, Japan*\n\n*$^{f}$Mathematical Physics Laboratory, Toyota Technological Institute*\n\n*Hisakata 2-12-1, Nagoya 468-8511, Japan*\n\n1.cm\n\n**Abstract**\n\n0.2cm\n\nWe explore the phase space of non-uniform black branes compactified on oblique lattices with a large number of dimensions. We find the phase diagrams for different periodicities and angles, and determine the thermodynamically preferred phases for each lattice configuration. In a range of angles, we observe that some phases become metastable.\n\nIntroduction {#sec:introduction}\n============\n\nPeriodic deformations of black strings and black branes provide a natural playground to explore the rich phenomena of black holes in higher dimensions. First identified in [@Gregory:1993vy; @Gregory:1994bj], the Gregory-Laflamme (GL) instability introduced" +"---\nabstract: 'Existing studies show that node representations generated by graph neural networks (GNNs) are vulnerable to adversarial attacks, such as unnoticeable perturbations of adjacent matrix and node features. Thus, it is requisite to learn robust representations in graph neural networks. To improve the robustness of graph representation learning, we propose a novel **Graph** **A**dversarial **C**ontrastive **L**earning framework (GraphACL) by introducing adversarial augmentations into graph self-supervised learning. In this framework, we maximize the mutual information between local and global representations of a perturbed graph and its adversarial augmentations, where the adversarial graphs can be generated in either supervised or unsupervised approaches. Based on the Information Bottleneck Principle, we theoretically prove that our method could obtain a much tighter bound, thus improving the robustness of graph representation learning. Empirically, we evaluate several methods on a range of node classification benchmarks and the results demonstrate GraphACL could achieve comparable accuracy over previous supervised methods.'\nauthor:\n- Jiayan Guo\n- Shangyang Li\n- Yue Zhao\n- Yan Zhang\nbibliography:\n- 'main.bib'\ntitle: Learning Robust Representation through Graph Adversarial Contrastive Learning\n---\n\nIntroduction\n============\n\nGraph neural networks (GNNs) have enabled significant advances on graph-structured data\u00a0[@kipf2016semi; @velivckovic2017graph] and are widely used in many applications" +"---\nabstract: 'A balanced generalized de Bruijn sequence with parameters $(n,l,k)$ is a cyclic sequence of $n$ bits such that (a) the number of 0\u2019s equals the number of 1\u2019s, and (b) each substring of length $l$ occurs at most $k$ times. We determine necessary and sufficient conditions on $n,l$, and $k$ for the existence of such a sequence.'\nbibliography:\n- 'references.bib'\ntitle: On the Existence of Balanced Generalized de Bruijn Sequences\n---\n\nStatement of the main theorem\n=============================\n\nDe Bruijn sequences, named after Nicolaas Govert de Bruijn (who first wrote about them in 1946) but first explored systematically by Camille Flye Sainte-Marie in 1894, are well-studied in mathematical literature. A (binary) **de Bruijn sequence of order $m$** is a cyclic sequence where every possible $m$-bit substring occurs exactly once. It is well known ([@deBruijn], see also [@Hall]) that there are $2^{2^{m-1} - m}$ distinct de Bruijn sequences of order $m$. 
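Before the generalization below, a quick computational sanity check of these definitions may help (a minimal sketch with illustrative names; the relaxed condition anticipates the $(n,l,k)$ notion introduced next):

```python
from collections import Counter

def cyclic_window_counts(bits, l):
    """Multiset of length-l cyclic substrings of a bit sequence."""
    n = len(bits)
    return Counter(tuple(bits[(i + j) % n] for j in range(l)) for i in range(n))

def is_de_bruijn(bits, m):
    """Classical order-m de Bruijn sequence: every m-word occurs exactly once."""
    return len(bits) == 2 ** m and max(cyclic_window_counts(bits, m).values()) == 1

def occurs_at_most_k(bits, l, k):
    """Each length-l cyclic substring occurs at most k times."""
    return max(cyclic_window_counts(bits, l).values()) <= k

assert is_de_bruijn([0, 0, 1, 1], 2)  # cyclic windows: 00, 01, 11, 10
```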
In this paper, we generalize de Bruijn sequences in the following manner:\n\nA **generalized de Bruijn sequence** with parameters $(n,l,k)$ is a cyclic sequence of $n$ bits such that each substring of length $l$ occurs at most $k$ times. Such a sequence is called [*balanced*]{} if the number of" +"---\nabstract: 'Graphs (i.e., networks) have become an integral tool for the representation and analysis of relational data. Advances in data gathering have led to multi-relational data sets which exhibit greater depth and scope. In certain cases, this data can be modeled using a hypergraph. However, in practice analysts typically reduce the dimensionality of the data (whether consciously or otherwise) to accommodate a traditional graph model. In recent years spectral hypergraph theory has emerged to study the eigenpairs of the adjacency hypermatrix of a uniform hypergraph. We show how analyzing multi-relational data, via a hypermatrix associated to the aforementioned hypergraph, can lead to conclusions different from those when the data is projected down to its co-occurrence matrix. In particular, we provide an example of a uniform hypergraph where the most central vertex (à la eigencentrality) changes depending on the order of the associated matrix. To the best of our knowledge this is the first known hypergraph to exhibit this property.'\nauthor:\n- |\n Gregory J. Clark, Felipe Thomaz, and Andrew Stephen\\\n Saïd Business School\\\n University of Oxford\\\n `gregory.clark@sbs.ox.ac.uk `\\\nbibliography:\n- 'main.bib'\ntitle: On the Effect of Data Dimensionality on Eigenvector Centrality\n---\n\nIntroduction\n============\n\nThere is a class of" +"---\nabstract: 'Confining the propagating wavepackets of an atom interferometer inside a waveguide can substantially reduce the size of the device while preserving high sensitivity. We have realized a two-dimensional Sagnac atom interferometer in which Bose-condensed $^{87}$Rb atoms propagate within a tight waveguide formed by a collimated laser beam, a matter wave analog of the fiber optic gyro (FOG). The condensate is split, reflected, and recombined with a series of Bragg pulses while the waveguide moves transversely so that the wavepacket trajectories enclose an area. Delta-kick cooling is used to prepare low-density atomic wavepackets with a temperature of . The low density reduces the impact of interatomic interactions, while the low temperature limits the expansion of the wavepacket during the interferometer cycle. The effective enclosed area is with an average fringe contrast of 20% and underlying contrast up to 60%. The main source of the reduced average contrast is phase noise caused by mechanical vibrations of the optical components. We present the first measurement of Allan deviation for such an atom rotation sensor, showing that the interferometer phase noise falls with averaging time $\tau$ as $\tau^{-1/2}$ for $\tau$ up to 10,000 seconds. The statistical noise falls below the Earth rotation" +"---\nabstract: |\n Offering incentives (e.g., coupons at Amazon, discounts at Uber and video bonuses at Tiktok) to users is a common strategy used by online platforms to increase user engagement and platform revenue. Despite their proven effectiveness, these marketing incentives incur an inevitable cost and might result in a low ROI (Return on Investment) if not used properly. 
On the other hand, different users respond differently to these incentives, for instance, some users never buy certain products without coupons, while others do anyway. Thus, how to select the right amount of incentives (i.e. treatment) for each user under budget constraints is an important research problem with great practical implications. In this paper, we call such a problem a *budget-constrained treatment selection* (BTS) problem.\n\n The challenge is how to efficiently solve the BTS problem on a large-scale dataset and achieve improved results over the existing techniques. We propose a novel tree-based treatment selection technique under budget constraints, called the *Large-Scale Budget-Constrained Causal Forest* (LBCF) algorithm, which is also an efficient treatment selection algorithm suitable for modern *distributed computing* systems. A novel offline evaluation method is also proposed to overcome an intrinsic challenge in assessing solutions’ performance for the BTS problem in randomized control" +"epsf\n\n[**Moment-based multi-resolution HWENO scheme for hyperbolic conservation laws**]{}\n\nJiayin Li[^1], Chi-Wang Shu[^2] and Jianxian Qiu[^3]\n\n**Abstract**\n\n=1.7pc\n\nIn this paper, a high-order moment-based multi-resolution Hermite weighted essentially non-oscillatory (HWENO) scheme is designed for hyperbolic conservation laws. The main idea of this scheme is derived from our previous work \[J. Comput. Phys., 446 (2021) 110653\], in which the integral averages of the function and its first order derivative are used to reconstruct both the function and its first order derivative values at the boundaries. However, in this paper, only the function values at the Gauss-Lobatto points in the one or two dimensional case need to be reconstructed by using the information of the zeroth and first order moments. In addition, an extra modification procedure is used to modify those first order moments in the troubled-cells, which leads to an improvement of stability and an enhancement of resolution near discontinuities. To obtain the same order of accuracy, the size of the stencil required by this moment-based multi-resolution HWENO scheme is still the same as the general HWENO scheme and is more compact than the general WENO scheme. Moreover, the linear weights can also be any positive numbers as long as their" +"---\nabstract: 'Gradient-free/zeroth-order methods for black-box convex optimization have been extensively studied in the last decade with the main focus on oracle complexity. In this paper, besides the oracle complexity, we focus also on iteration complexity, and propose a generic approach that, based on optimal first-order methods, allows us to obtain in a black-box fashion new zeroth-order algorithms for non-smooth convex optimization problems. Our approach not only leads to optimal oracle complexity, but also allows us to obtain iteration complexity similar to first-order methods, which, in turn, allows us to exploit parallel computations to accelerate the convergence of our algorithms. 
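To fix ideas about the black-box setting above (a generic two-point estimator, not the authors' accelerated scheme; names are ours): the gradient required by a first-order method is replaced by a finite-difference estimate along a random direction, costing two oracle calls per step; estimates along different directions are independent, which is what makes parallel computation natural.

```python
import numpy as np

def two_point_grad(f, x, mu=1e-4, rng=None):
    """Two-point zeroth-order gradient estimate along a Gaussian direction.

    In expectation this approximates the gradient of a smoothed version
    of f, using only two function evaluations."""
    rng = np.random.default_rng() if rng is None else rng
    e = rng.standard_normal(x.shape)
    return (f(x + mu * e) - f(x - mu * e)) / (2.0 * mu) * e

def zo_gradient_descent(f, x0, lr=0.02, steps=3000, seed=0):
    """Gradient descent with the oracle gradient replaced by the estimate."""
    x = np.array(x0, dtype=float)
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        x -= lr * two_point_grad(f, x, rng=rng)
    return x

# Smooth convex example; the iterate drifts toward the minimizer at 0.
print(zo_gradient_descent(lambda z: float(np.sum(z ** 2)), [1.0, -2.0]))
```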
We also elaborate on extensions for stochastic optimization problems, saddle-point problems, and distributed optimization.'\nauthor:\n- Alexander Gasnikov\n- Anton Novitskii\n- Vasilii Novitskii\n- Farshed Abdukhakimov\n- Dmitry Kamzolov\n- Aleksandr Beznosikov\n- Martin Tak\u00e1\u010d\n- Pavel Dvurechensky\n- Bin Gu\nbibliography:\n- 'example\\_paper.bib'\ntitle: 'The Power of First-Order Smooth Optimization for Black-Box Non-Smooth Problems'\n---\n\nProblem Formulation\n===================\n\nWe consider optimization problem $$\\label{problem}\n \\min_{x\\in Q\\subseteq \\mathbb{R}^d} f(x)$$ in the setting of a zeroth-order oracle. This means that an oracle returns the value $f(x)$ at a requested point $x$ [@conn2009introduction], possibly with some adversarial noise that is uniformly bounded by a small" +"---\nabstract: 'We study the problem of group testing with non-identical, independent priors. So far, the pooling strategies that have been proposed in the literature take the following approach: a hand-crafted test design along with a decoding strategy is proposed, and guarantees are provided on how many tests are sufficient in order to identify all infections in a population. In this paper, we take a different, yet perhaps more practical, approach: we fix the decoder and the number of tests, and we ask, given these, what is the *best* test design one could use? We explore this question for the Definite Non-Defectives (DND) decoder. We formulate a (non-convex) optimization problem, where the objective function is the expected number of errors for a particular design. We find approximate solutions via gradient descent, which we further optimize with informed initialization. We illustrate through simulations that our method can achieve significant performance improvement over traditional approaches.'\nauthor:\n- \nbibliography:\n- 'bibliography.bib'\ntitle: Improving Group Testing via Gradient Descent\n---\n\nIntroduction\n============\n\nGroup testing has recently attracted significant attention in the context of COVID\u00a0([@art1; @art2; @art4; @Cov-GpTest-1; @Cov-GpTest-2; @kucirka2020-PCR]), and several countries (including India, Germany, US, and China) have already deployed preliminary group-testing" +"---\nabstract: 'We simulate the nonequilibrium steady state *cis-trans* photoisomerization of retinal chromophore in rhodopsin based on a two-state-two-mode model coupled to a thermal environment. By analyzing the systematic trends within an inhomogeneously broadened ensemble of systems, we find that the steady state reaction quantum yield (QY) correlates strongly with the excess energy above the crossing point of the system, in agreement with the prediction of the short time dynamical wavepacket picture. However, the nontrivial dependence of the QY on the system-environment interaction indicates that a pure dynamical picture is insufficient and that environment-induced partial internal energy redistribution takes place before the reaction concludes. 
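Returning briefly to the group-testing discussion above: the fixed decoder there, Definite Non-Defectives, has a one-line rule. Any item appearing in a negative pool is definitely non-defective; every other item is declared defective. A minimal sketch (names are ours):

```python
import numpy as np

def dnd_decode(pools, outcomes):
    """Definite Non-Defectives (DND) decoding for noiseless group testing.

    pools: (T, n) 0/1 matrix, pools[t, i] = 1 if item i is in test t.
    outcomes: length-T 0/1 vector of pooled test results.
    Returns a boolean mask of items declared defective; the rule admits
    false positives but no false negatives under noiseless testing.
    """
    pools = np.asarray(pools)
    outcomes = np.asarray(outcomes)
    cleared = pools[outcomes == 0].sum(axis=0) > 0  # seen in a negative test
    return ~cleared
```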
These results imply that a proper treatment of the photoisomerization reaction, particularly its high QY, must account for the redistribution and dissipation of energy beyond the dynamical wavepacket motion that is typically employed in the literature and that is appropriate only in the transient regime.'\nauthor:\n- Chern Chuang\n- Paul Brumer\nbibliography:\n- 'LH1RC.bib'\ntitle: 'Steady State Photoisomerization Quantum Yield of Model Rhodopsin: Insights from Wavepacket Dynamics?'\n---\n\n![image](figs/TOCgraphic.png){width="10cm"} For Table of Contents Only\n\nIntroduction\n============\n\nMuch of the detailed information regarding biologically significant light induced processes (such as vision and photosynthesis) arises from modern" +"---\nabstract: 'We introduce *ApolloRL* [^1], an open platform for research in reinforcement learning for autonomous driving. The platform provides a complete closed-loop pipeline with training, simulation, and evaluation components. It comes with 300 hours of real-world data in driving scenarios and popular baselines such as Proximal Policy Optimization (PPO) and Soft Actor-Critic (SAC) agents. We elaborate in this paper on the architecture and the environment defined in the platform. In addition, we discuss the performance of the baseline agents in the ApolloRL environment.'\nauthor:\n- 'Fei Gao[^2], Peng Geng, Jiaqi Guo[^3], Yuan Liu, Dingfeng Guo, Yabo Su, Jie Zhou, Xiao Wei, Jin Li, Xu Liu'\nbibliography:\n- 'ref.bib'\ntitle: '**ApolloRL: a Reinforcement Learning Platform for Autonomous Driving**'\n---\n\nIntroduction\n============\n\n![The scenarios in simulation[]{data-label="fig:env"}](env){width="80.00000%"}\n\nAutonomous driving [@av] has received much attention due to its potentially huge impact on our world, *e.g.*, increased driving safety [@av-safety], improved transportation efficiency [@Hancock7684], and reduced commuting time [@Steck2018HowAD]. Autonomous driving technology has advanced significantly over the past few years, thanks to the research progress in artificial intelligence. However, the autonomous driving problem still remains challenging in multiple aspects. For example,\n\n- *Imperfect information*: Autonomous driving is an imperfect information problem. There are" +"---\nabstract: 'An optimized compact stellarator with four simple coils is obtained from direct optimization via coil shape. The new stellarator consists of two interlocking coils and two vertical field coils similar to those of the Columbia Non-neutral Torus (CNT)\[Pedersen et al. Phys. Rev. Lett. 88, 205002 (2002)\]. The optimized configuration has a global magnetic well and a low helical ripple level comparable to that of Wendelstein 7-X (W7-X)\[Wolf et al. Nucl. Fusion 57, 102020 (2017)\]. The two interlocking coils have a smooth three-dimensional shape much simpler than those of advanced stellarators such as W7-X. This result opens up possibilities of future stellarator reactors with simplified coils.'\nauthor:\n- Guodong Yu\n- Zhichen Feng\n- Peiyou Jiang\n- GuoYong Fu\ntitle: Existence of an optimized stellarator with simple coils\n---\n\n[^1]\n\nThe two main approaches to magnetic confinement fusion (MCF) are tokamaks and stellarators. The tokamak is currently the dominant approach, with the advantages of axisymmetric geometry and achieved plasma parameters significantly better than those of other MCF devices. 
However, stellarators have recently enjoyed a renaissance as recent results of the advanced stellarator Wendelstein 7-X (W7-X)[@Wolf2017] demonstrated the reduced neoclassical energy transport[@Dinklage2018][@Beidler2021]. It is expected that plasma performance of W7-X can reach a" +"---\nabstract: 'Given a directed network $ G $, we are interested in studying the qualitative features of $ G $ which govern how perturbations propagate across $ G $. Various classical centrality measures have been already developed and proven useful to capture qualitative features and behaviors for undirected networks. In this paper, we use topological data analysis (TDA) to adapt measures of centrality to capture both directedness and non-local propagating behaviors in networks. We introduce a new metric for computing centrality in directed weighted networks, namely the *quasi-centrality* measure. We compute these metrics on trade networks to illustrate that our measure successfully captures propagating effects in the network and can also be used to identify sources of shocks that can disrupt the topology of directed networks. Moreover, we introduce a method that gives a hierarchical representation of the topological influences of nodes in a directed network.'\nauthor:\n- 'Fenghuan He[^1]'\nbibliography:\n- 'prelim.bib'\ntitle: A Topological Centrality Measure for Directed Networks\n---\n\n**Key Words**: Complex Networks, Centrality, Directed Networks, Trade Networks, Topological Data Analysis, Persistent Homology, Hierarchical Clustering\n\nIntroduction\n============\n\nNetworks are a useful abstraction for many real-world systems, representing interactions between objects within a system. Network analysis examines" +"---\nabstract: 'Schr\u00f6dinger equation belongs to the most fundamental differential equations in quantum physics. However, the exact solutions are extremely rare, and many analytical methods are applicable only to the cases with small perturbations or weak correlations. Solving the many-body Schr\u00f6dinger equation in the continuous spaces with the presence of strong correlations is an extremely important and challenging issue. In this work, we propose the functional tensor network (FTN) approach to solve the many-body Schr\u00f6dinger equation. Provided the orthonormal functional bases, we represent the coefficients of the many-body wave-function as tensor network. The observables, such as energy, can be calculated simply by tensor contractions. Simulating the ground state becomes solving a minimization problem defined by the tensor network. An efficient gradient-decent algorithm based on the automatically differentiable tensors is proposed. We here take matrix product state (MPS) as an example, whose complexity scales only linearly with the system size. We apply our approach to solve the ground state of coupled harmonic oscillators, and achieve high accuracy by comparing with the exact solutions. Reliable results are also given with the presence of three-body interactions, where the system cannot be decoupled to isolated oscillators. Our approach is simple and with well-controlled error," +"---\nabstract: 'We report on a universal method to measure the genuine indistinguishability of $n$-photons \u2013 a crucial parameter that determines the accuracy of optical quantum computing. 
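To illustrate the 'observables by tensor contractions' point in the functional tensor network record above (a generic MPS sketch, not the paper's implementation): the squared norm of a matrix product state is a chain of transfer-matrix contractions whose cost grows linearly with the number of sites, for fixed bond dimension.

```python
import numpy as np

def mps_norm_sq(cores):
    """<psi|psi> for an MPS given as cores of shape (chi_left, d, chi_right)."""
    env = np.ones((1, 1))
    for a in cores:
        # one transfer-matrix step: contract bra/ket cores into the environment
        env = np.einsum('xy,xdu,ydv->uv', env, a, a.conj())
    return float(np.real(env.squeeze()))

# Three sites, physical dimension 2, bond dimension 3 (random example):
rng = np.random.default_rng(0)
cores = [rng.standard_normal(s) for s in [(1, 2, 3), (3, 2, 3), (3, 2, 1)]]
print(mps_norm_sq(cores))
```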
Our approach relies on a low-depth cyclic multiport interferometer with $N = 2n$ modes, leading to a quantum interference fringe whose visibility is a direct measurement of the genuine $n$-photon indistinguishability. We experimentally demonstrate this technique for a 8-mode integrated interferometer fabricated using femtosecond laser micromachining and four photons from a quantum dot single-photon source. We measure a four-photon indistinguishability up to $0.81\\pm 0.03$. This value decreases as we intentionally alter the photon pairwise indistinguishability. The low-depth and low-loss multiport interferometer design provides an efficient and scalable path to evaluate the genuine indistinguishability of resource states of increasing photon number.'\nauthor:\n- Mathias Pont\n- Riccardo Albiero\n- 'Sarah\u00a0E. Thomas'\n- Nicol\u00f2 Spagnolo\n- Francesco Ceccarelli\n- Giacomo Corrielli\n- Alexandre Brieussel\n- Niccolo Somaschi\n- H\u00ealio Huet\n- Abdelmounaim Harouri\n- Aristide Lema\u00eetre\n- Isabelle Sagnes\n- Nadia Belabas\n- Fabio Sciarrino\n- Roberto Osellame\n- Pascale Senellart\n- Andrea Crespi\ntitle: 'Quantifying [*n*]{}-photon indistinguishability with a cyclic integrated interferometer'\n---\n\n[^1]\n\n[^2]\n\n[^3]\n\n[^4]\n\nIntroduction {#sec:intro}\n============\n\nOptical quantum computing" +"---\nabstract: 'Neuromorphic computing exploits the dynamical analogy between many physical systems and neuron biophysics. Superconductor systems, in particular, are excellent candidates for neuromorphic devices due to their capacity to operate in great speeds and with low energy dissipation compared to their silicon counterparts. In this study we revisit a prior work on Josephson Junction-based \u201cneurons\" in order to identify the exact dynamical mechanisms underlying the system\u2019s neuron-like properties and reveal new complex behaviors which are relevant for neurocomputation and the design of superconducting neuromorphic devices. Our work lies at the intersection of superconducting physics and theoretical neuroscience, both viewed under a common framework, that of nonlinear dynamics theory.'\nauthor:\n- 'D. Chalkiadakis'\n- 'J. Hizanidis'\ntitle: Dynamical properties of neuromorphic Josephson junctions\n---\n\nIntroduction\n============\n\nNeuromorphic computing is a rapidly advancing field that uses neuroscience-inspired concepts in order to implement circuits of physical neurons. The ultimate goal of neuromorphic computing is the development of powerful algorithms and high-speed, energy-efficient hardware for information processing and the potential acquirement of insight into cognition (for a recent review see\u00a0[@MAR20] and references within). The motivation behind the attempt to mimic the brain is its extremely impressive capabilities and advantages as a computing" +"---\nabstract: 'We refine the Lyapunov-Schmidt analysis from our recent paper [@acws] to study the geometric center of mass of the asymptotic foliation by area-constrained Willmore surfaces of initial data for the Einstein field equations. If the scalar curvature of the initial data vanishes at infinity, we show that this geometric center of mass agrees with the Hamiltonian center of mass. By contrast, we show that the positioning of large area-constrained Willmore surfaces is sensitive to the distribution of the energy density. 
In particular, the geometric center of mass may differ from the Hamiltonian center of mass if the scalar curvature does not satisfy additional asymptotic symmetry assumptions.'\naddress:\n- ' University of Vienna, Faculty of Mathematics, Oskar-Morgenstern-Platz 1, 1090 Vienna, Austria ORCiD: [0000-0001-7993-9536](https://orcid.org/0000-0001-7993-9536)'\n- ' University of Vienna, Faculty of Mathematics, Oskar-Morgenstern-Platz 1, 1090 Vienna, Austria ORCiD: [0000-0003-1676-0824](https://orcid.org/0000-0003-1676-0824)'\nauthor:\n- Michael Eichmair\n- Thomas Koerber\ntitle: The Willmore center of mass of initial data sets \n---\n\nIntroduction\n============\n\nLet $(M,g)$ be an asymptotically flat Riemannian $3$-manifold. Such Riemannian manifolds are used to model initial data of isolated gravitational systems for the Einstein field equations. The scalar curvature of $(M,g)$ provides a lower bound for the local energy density of" +"---\nauthor:\n- Sibasish Banerjee\n- Pietro Longhi\n- Mauricio Romo\nbibliography:\n- 'biblio.bib'\ntitle: 'A-branes, foliations and localization '\n---\n\nIntroduction\n============\n\nA recurring theme in research at the interface of physics and mathematics is the subject of BPS states. In physics BPS states are associated with symmetry-protected sectors of a gauge or string theory, whereas in mathematics they arise in various guises in the domains of geometry, algebra, low-dimensional topology, and beyond.\n\nIn this work we focus on a class of BPS states modeled by Lagrangian $A$-branes in a class of Calabi-Yau threefolds. Our main goal is to define a notion of \u2018counting\u2019 stable $A$-branes that is motivated by physics, and that is meaningful from the viewpoint of mathematics.\n\nLet $X$ be a hypersurface in ${\\mathbb{C}}^2\\times ({\\mathbb{C}}^*)^2$ defined by $uv = F(x,y)$ for a certain Laurent polynomial $F(x,y)$, and let $\\Omega^{3,0}$ denote a normalized holomorphic three-form on $X$. At zero string coupling, an $A$-brane is characterized by a choice of special Lagrangian $L\\subset X$ calibrated by $\\Omega^{3,0}$, together with a choice of flat abelian local system ${\\mathcal{L}}\\to L$. In this work we restrict attention to cases where $L$ is a primitive cycle in $H_3(X,{\\mathbb{Z}})$. Each of these geometric" +"---\nabstract: 'We discuss a cellular automaton simulating the process of reaching Heider balance in a fully connected network. The dynamics of the automaton is defined by a deterministic, synchronous and global update rule. The dynamics has a very rich spectrum of attractors including fixed points and limit cycles, the length and number of which change with the size of the system. In this paper we concentrate on a class of limit cycles that preserve energy spectrum of the consecutive states. We call such limit cycles perfect. Consecutive states in a perfect cycle are separated from each other by the same Hamming distance. Also the Hamming distance between any two states separated by $k$ steps in a perfect cycle is the same for all such pairs of states. The states of a perfect cycle form a very symmetric trajectory in the configuration space. We argue that the symmetry of the trajectories is rooted in the permutation symmetry of vertices of the network and a local symmetry of a certain energy function measuring the level of balance/frustration of triads.'\nauthor:\n- Zdzis\u0142aw\u00a0Burda\n- 'Ma\u0142gorzata J. 
Krawczyk'\n- Krzysztof Ku\u0142akowski\ntitle: Perfect cycles in the synchronous Heider dynamics in complete network" +"---\nabstract: 'Experiments on collisions of isolated electrons guided along the edges in quantum Hall setups can mimic mixing of photons with the important distinction that electrons are charged fermions. In the so-called electronic Hong-Ou-Mandel (HOM) setup uncorrelated pairs of electrons are injected towards a beamsplitter. If the two electron wave packets were identical, Fermi statistics would force the electrons to scatter to different detectors, yet this quantum antibunching may be confounded by Coulomb repulsion. Here we model an electronic HOM experiment using a quadratic 2D saddle point potential for the beamsplitter and unscreened Coulomb interaction between the two injected electrons subjected to a strong out-of-plane magnetic field. We show that classical equations of motion for the drift dynamics of electrons\u2019 guiding centers take on the form of Hamilton equations for canonically conjugated variables subject to the saddle point potential and the Coulomb potential where the dynamics of the center-of-mass coordinate and the relative coordinate separate. We use these equations to determine collision outcomes in terms of a few experimentally tuneable parameters: the initial energies of the uncorrelated electrons, relative time delay of injection and the shape of the saddle point potential. A universal phase diagram of deterministic bunching and" +"---\nabstract: 'Scattering of matter waves through slits has been explored using the Feynman Path Integral formalism. We explicitly plot the near-zero probability densities to analyse the behaviour near the slit. Upon doing so, intriguing patterns emerge, most notably the braid-like structure in the case of double slits, whose complexity increases as one increases the number of slits. Furthermore, the plot shows the existence of a transition region, where the distribution of near-zero probability points changes from the braided to the fringe-like structure, which has been analysed by explicitly expressing the wavefunction as a hypergeometric function. These patterns are analysed while considering the continuity equation and its consequences for the regions with zero probability density.'\nauthor:\n- |\n Hardeep Singh\\\n Department of Physical sciences\\\n UM-DAE Centre for Excellence in Basic Sciences\\\n Mumbai, India\\\n `hardeep.chhabra18@gmail.com`\\\n A. Bhagwat\\\n Department of Physical sciences\\\n UM-DAE Centre for Excellence in Basic Sciences\\\n Mumbai, India\\\n `ameeya@cbs.ac.in`\\\ntitle: A Study on the Scattering of Matter Waves through Slits\n---\n\nIntroduction {#sec: Introduction}\n============\n\nThe famed double-slit experiment, introduced originally by Thomas Young in his lectures at the Royal Society in 1802 [@young1802], is unarguably one of the most beautiful experiments ever performed in the history of science." +"=1\n\nIntroduction\n============\n\nBackground and motivation\n-------------------------\n\nIn [@voiculescu2014free], Voiculescu introduced an extension of free probability; a new notion of independence, motivated by computations of joint distributions of *left and right* creation and annihilation operators acting on the reduced free product of pointed Hilbert spaces. 
These operators prototype *bifree independence* (introduced in [@charlesworth2015combinatorics; @voiculescu2014free]) between pairs of random variables, in the same way that left (*or* right) operators on their own prototype free independence. This operator algebraic root of bifreeness is supplemented by a combinatorial one\u00a0[@charlesworth2015combinatorics], to which the poset of noncrossing *bipartitions* and M\u00f6bius inversion are central, extending the now well-developed combinatorial approach to freeness established by Speicher [@nica2006lectures; @Speicher1994freecumulants]. Since its inception, bifreeness developed into a *theory of noncommutative probability for pairs of random variables*, also called *two-faced* (left is one face, right is the other) random variables, with a steadily increasing set of two-faced independences.\n\nRecall that Ben Ghorbal and Sch\u00fcrmann [@BenGhorbalSchurmann] established a set of axioms defining the concept of independence in noncommutative probability. Muraki proved that there are only five such independences, namely free, monotone, anti-monotone, Boolean, and tensor independence\u00a0[@Muraki2003]. The axiomatic framework has been adapted to the two-faced (and more generally multi-faced-multi-state)" +"---\nabstract: 'Correlations driven by the constraints of local charge conservation have been shown to provide insight into the chemical evolution and diffusivity of the high-temperature matter created in ultra-relativistic heavy ion collisions. Two-particle correlations driven by final-state interactions have allowed the extraction of critical femtoscopic space-time information about the expansion and dissolution of the same collisions. Whereas correlations from final-state interactions mainly appear at small relative momenta, a few tens of MeV/$c$, charge-balance correlations extend over a range of hundreds of MeV/$c$. In nearly all previous analyses, this separation of scales is used to focus solely on one class or the other. The purpose of this study is to quantitatively understand the degree to which correlations from final-state interactions distort the interpretation of charge-balance correlations and vice versa.'\nauthor:\n- Scott Pratt and Karina Martirosova\ntitle: 'The Interplay of Femtoscopic and Charge-Balance Correlations'\n---\n\nIntroduction {#sec:intro}\n============\n\nCharge balance correlations are rather simple to understand. For each observed charge, there exists either an additional opposite charge or one fewer charges of the same sign. Because charge is locally conserved, the balancing charge should be found nearby in coordinate space, and because of collective flow, this correlation is mapped onto" +"---\nauthor:\n- Sreekanth Harikumar\n- Marek Biesiada\nbibliography:\n- 'ms.bib'\ndate: 'Received: date / Revised version: date'\ntitle: 'Moffat\u2019s Modified Gravity tested on X-COP galaxy clusters'\n---\n\n[leer.eps]{} gsave 72 31 moveto 72 342 lineto 601 342 lineto 601 31 lineto 72 31 lineto showpage grestore\n\nIntroduction {#intro}\n============\n\nA century after Albert Einstein developed General (GR) we made the first direct detection of gravitational waves [@Abbott_2016] confirming the validity of Einstein\u2019s equations in the strong and highly dynamical regime. This detection can also be called a milestone in fundamental physics as it is one of the direct tests of GR confirming its validity. 
General Relativity has passed all tests on solar-system and binary-pulsar scales and has now become an unavoidable tool for astrophysicists. During this period we have also witnessed the rise of many modified gravity theories as alternatives to GR, several of which have suffered considerable blows in experimental tests over the years. The recent resurgence in exploring and proposing new theories was driven by the need to address issues such as dark energy, dark matter, and inflation, apart from the difficulties in developing a quantum theory of gravity. The nature of" +"---\nabstract: 'In this paper we present our hardware design and control approaches for a mobile manipulation platform used in Challenge 2 of the MBZIRC 2020 competition. In this challenge, a team of UAVs and a single UGV collaborate in an autonomous, wall-building scenario, motivated by construction automation and large-scale robotic 3D printing. The robots must be able, autonomously, to detect, manipulate, and transport bricks in an unstructured, outdoor environment. Our control approach is based on a state machine that dictates which controllers are active at each stage of the Challenge. In the first stage our UGV uses visual servoing and local controllers to approach the target object without considering its orientation. The second stage consists of detecting the object\u2019s global pose using OpenCV-based processing of RGB-D image and point-cloud data, and calculating an alignment goal within a global map. The map is built with Google Cartographer and is based on onboard LIDAR, IMU, and GPS data. Motion control in the second stage is realized using the ROS Move Base package with Time-Elastic Band trajectory optimization. Visual servo algorithms guide the vehicle in local object-approach movement and the arm in manipulating bricks. To ensure a stable grasp of the brick\u2019s" +"---\nabstract: 'We describe the design of a surface-electrode ion trap junction, which is a key element for large-scale ion trap arrays. A bi-objective optimization method is used for designing the electrodes, which maintains the total pseudo-potential curvature while minimizing the axial pseudo-potential gradient along the ion transport path. To facilitate the laser beam delivery for parallel operations in multiple trap zones, we implemented integrated optics on each arm of this X-junction trap. The layout of the trap chip for commercial foundry fabrication is presented. This work suggests routes to improving ion trap junction performance in scalable implementations. Together with integrated optical addressing, this contributes to modular trapped-ion quantum computing in interconnected 2-dimensional arrays.'\naddress: 'Institute for Quantum Electronics, ETH Z\u00fcrich, Otto-Stern-Weg 1, 8093 Z\u00fcrich, Switzerland'\nauthor:\n- 'Chi Zhang, Karan K Mehta and Jonathan P Home'\nbibliography:\n- 'reference.bib'\ntitle: 'Optimization and implementation of a surface-electrode ion trap junction'\n---\n\nJanuary 2022\n\n[*Keywords*]{}: quantum computing, scalable ion traps, quantum CCD architecture, integrated optics\n\nIntroduction\n============\n\nTrapped ions are a leading platform for quantum computation. 
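Returning to the MBZIRC abstract above: its control approach is a state machine that activates different controllers at each stage. The skeleton below is our own illustrative reading of such a two-stage pipeline; the stage names and boolean conditions are hypothetical, not the team's actual interface.

```python
from enum import Enum, auto

class Stage(Enum):
    SEARCH = auto()       # detect a target brick in camera images
    APPROACH = auto()     # stage 1: visual servoing, orientation ignored
    ALIGN = auto()        # stage 2: global-pose goal in the map
    GRASP = auto()        # arm visual servoing onto the brick
    TRANSPORT = auto()    # carry the brick to the wall segment

def next_stage(stage, obs):
    """One tick of the mission state machine; `obs` maps hypothetical
    boolean condition names to their current truth values."""
    transitions = {
        Stage.SEARCH: ("brick_detected", Stage.APPROACH),
        Stage.APPROACH: ("within_local_range", Stage.ALIGN),
        Stage.ALIGN: ("pose_aligned", Stage.GRASP),
        Stage.GRASP: ("grasp_stable", Stage.TRANSPORT),
    }
    if stage in transitions:
        cond, nxt = transitions[stage]
        if obs.get(cond, False):
            return nxt
    return stage          # otherwise remain in the current stage
```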
All the elementary building blocks for a quantum computer have been realized in these systems with high quality, including efficient and robust state preparation" +"---\nabstract: |\n The threshold theorem is a fundamental result in the theory of fault-tolerant quantum computation stating that arbitrarily long quantum computations can be performed with a polylogarithmic overhead provided the noise level is below a constant level. A recent work by Fawzi, Grospellier and Leverrier (FOCS 2018) building on a result by Gottesman (QIC 2013) has shown that the space overhead can be asymptotically reduced to a constant independent of the circuit provided we only consider circuits with a length bounded by a polynomial in the width. In this work, using a minimal model for quantum fault tolerance, we establish a general lower bound on the space overhead required to achieve fault tolerance.\n\n For any non-unitary qubit channel ${\\mathcal{N}}$ and any quantum fault tolerance schemes against $\\mathrm{i.i.d.}$ noise modeled by ${\\mathcal{N}}$, we prove a lower bound of $\\max{\\ensuremath{ \\left\\lbrace \\mathrm{Q}({\\mathcal{N}})^{-1}n,\\alpha_{\\mathcal{N}}\\log T \\right\\rbrace }}$ on the number of physical qubits, for circuits of length $T$ and width $n$. Here, $\\mathrm{Q}({\\mathcal{N}})$ denotes the quantum capacity of ${\\mathcal{N}}$ and $\\alpha_{\\mathcal{N}}>0$ is a constant only depending on the channel ${\\mathcal{N}}$. In our model, we allow for qubits to be replaced by fresh ones during the execution of the circuit and we allow" +"---\nabstract: 'This paper comprises a review of our recent works on fractional chiral modes that emerge due to edge reconstruction in integer and fractional quantum Hall (QH) phases. The new part added is an analysis of edge reconstruction of the $\\nu = 2/5$ phase. QH states are topological phases of matter featuring chiral gapless modes at the edge. These edge modes may propagate downstream or upstream, and may support either charge or charge-neutral excitations. From topological considerations, particle-like QH states are expected to support only downstream charge modes. However the interplay between the electronic repulsion and the boundary confining potential may drive certain quantum phase transitions (called reconstructions) at the edge, which are associated to the nucleation of additional pairs of counter-propagating modes. Employing variational methods, here we study edge reconstruction in the prototypical particle-like phases at $\\nu = 1, 1/3$ and $2/5$ as a function of the slope of the confining potential. Our analysis shows that subsequent renormalization of the edge modes, driven by disorder-induced tunnelling and intermode interactions, may lead to the emergence of upstream neutral modes. These predictions may be tested in suitably designed transport experiments. Our results are also consistent with previous observations of upstream" +"---\nabstract: 'Atrioventricular valve regurgitation is a significant cause of morbidity and mortality in patients with acquired and congenital cardiac valve disease. Image-derived computational modeling of atrioventricular valves has advanced substantially over the last decade and holds particular promise to inform valve repair in small and heterogeneous populations which are less likely to be optimized through empiric clinical application. 
While an abundance of computational biomechanics studies have investigated mitral and tricuspid valve disease in adults, few studies have investigated application to vulnerable pediatric and congenital heart populations. Further, to date, investigators have primarily relied upon a series of commercial applications that are neither designed for image-derived modeling of cardiac valves, nor freely available to facilitate transparent and reproducible valve science. To address this deficiency, we aimed to build an open-source computational framework for the image-derived biomechanical analysis of atrioventricular valves. In the present work, we integrated an open-source valve modeling platform, SlicerHeart, and an open-source biomechanics finite element modeling software, FEBio, to facilitate image-derived atrioventricular valve model creation and finite element analysis. We present a detailed verification and sensitivity analysis to demonstrate the fidelity of this modeling in application to 3D echocardiography-derived pediatric mitral and tricuspid valve models. Our analyses" +"---\nabstract: 'Gravitational-wave (GW) detections of merging neutron star\u2013black hole (NSBH) systems probe astrophysical neutron star (NS) and black hole (BH) mass distributions, especially at the transition between NS and BH masses. Of particular interest are the maximum NS mass, minimum BH mass, and potential mass gap between them. While previous GW population analyses assumed all NSs obey the same maximum mass, if rapidly spinning NSs exist, they can extend to larger maximum masses than nonspinning NSs. In fact, several authors have proposed that the $\\sim2.6\\,M_\\odot$ object in the event GW190814 \u2013 either the most massive NS or least massive BH observed to date \u2013 is a rapidly spinning NS. We therefore infer the NSBH mass distribution jointly with the NS spin distribution, modeling the NS maximum mass as a function of spin. Using 4 LIGO\u2013Virgo NSBH events including GW190814, if we assume that the NS spin distribution is uniformly distributed up to the maximum (breakup) spin, we infer the maximum non-spinning NS mass is $2.7^{+0.5}_{-0.4}\\,M_\\odot$ (90% credibility), while assuming only nonspinning NSs, the NS maximum mass must be $>2.53 M_\\odot$ (90% credibility). The data support the mass gap\u2019s existence, with a minimum BH mass at $5.4^{+0.7}_{-1.0} M_\\odot$. With future" +"---\nabstract: 'We consider two families of Pascal-like triangles that have all ones on the left side and ones separated by $m-1$ zeros on the right side. The $m=1$ cases are Pascal\u2019s triangle and the two families also coincide when $m=2$. Members of the first family obey Pascal\u2019s recurrence everywhere inside the triangle. We show that the $m$-th triangle can also be obtained by reversing the elements up to and including the main diagonal in each row of the $(1/(1-x^m),x/(1-x))$ Riordan array. Properties of this family of triangles can be obtained quickly as a result. The $(n,k)$-th entry in the $m$-th member of the second family of triangles is the number of tilings of an $(n+k)\\times1$ board that use $k$ $(1,m-1)$-fences and $n-k$ unit squares. A $(1,g)$-fence is composed of two unit square sub-tiles separated by a gap of width $g$. 
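The tilings abstract just above defines the $(n,k)$ entry of the $m$-th triangle via boards tiled with $k$ $(1,m-1)$-fences and $n-k$ unit squares, and its continuation below identifies these counts with $k$-subsets of $\{1,2,\ldots,n-m\}$ in which no two elements differ by $m$. A brute-force check of that latter count (our own sketch):

```python
from itertools import combinations

def count_subsets(n, k, m):
    """Number of k-subsets of {1, ..., n-m} with no two elements differing by m."""
    base = range(1, n - m + 1)
    return sum(
        1
        for s in combinations(base, k)
        if all(b - a != m for a, b in combinations(s, 2))
    )

# m = 1 forbids consecutive elements, recovering the classical sparse-subset
# count C(n-k, k); e.g. n = 8, k = 3 gives C(5, 3) = 10:
assert count_subsets(8, 3, 1) == 10
```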
We show that the entries in the antidiagonals of these triangles are coefficients of products of powers of two consecutive Fibonacci polynomials and give a bijective proof that these coefficients give the number of $k$-subsets of $\\{1,2,\\ldots,n-m\\}$ such that no two elements of a subset differ by $m$. Other properties of the second family of triangles are also obtained" +"---\nabstract: 'We consider the existence and spectral stability of nonlinear discrete localized solutions representing light pulses propagating in a twisted multi-core optical fiber. By considering an even number, $N$, of waveguides, we derive asymptotic expressions for solutions in which the bulk of the light intensity is concentrated as soliton-like pulses confined to a single waveguide. The leading order terms obtained are in very good agreement with results of numerical computations. Furthermore, as in the model without temporal dispersion, when the twist parameter, $\\phi$, is given by $\\phi = \\pi/N$, these standing waves exhibit optical suppression, in which a single waveguide remains unexcited, to leading order. Spectral computations and numerical evolution experiments suggest that these standing wave solutions are stable for values of the coupling parameter less than a critical value, at which point a spectral instability results from the collision of an internal eigenvalue with the eigenvalues at the origin. This critical value has a maximum when $\\phi = \\pi/N$.'\naddress:\n- 'Department of Mathematics, Southern Methodist University, Dallas, TX 75275, USA'\n- 'Department of Mathematics, University of Kansas, Lawrence, KS 66045, USA'\n- 'Department of Mathematics, Southern Methodist University, Dallas, TX 75275, USA'\n- 'Department of Mathematics," +"---\nabstract: 'Reversible logic can provide lower switching energy costs relative to all irreversible logic, including those developed by industry in semiconductor circuits; however, more research is needed to understand what is possible. Superconducting logic, an exemplary platform for both irreversible and reversible logic, uses flux quanta to represent bits, and the reversible implementation may switch state with low energy dissipation relative to the energy of a flux quantum. Here we simulate reversible shift register gates that are ballistic: their operation is powered by the input bits alone. A storage loop is added relative to previous gates as a key innovation, which bestows an asynchronous property on the gate such that input bits can arrive at different times as long as their order is clearly preserved. The shift register represents bit states by flux polarity, both in the stored bit as well as the ballistic input and output bits. Its operation consists of the elastic swapping of flux between the stored and the moving bit. This is related to a famous irreversible shift register, developed prior to the advent of superconducting flux quanta logic (which used irreversible gates). In the base design of our ballistic shift register (BSR) there is" +"---\nabstract: 'Nowadays, most methods for end-to-end contextual speech recognition bias the recognition process towards contextual knowledge. Since all-neural contextual biasing methods rely on phrase-level contextual modeling and attention-based relevance modeling, they may suffer from the confusion between similar context-specific phrases, which hurts predictions at the token level. In this work, we focus on mitigating confusion problems with fine-grained contextual knowledge selection (FineCoS). 
In FineCoS, we introduce fine-grained knowledge to reduce the uncertainty of token predictions. Specifically, we first apply phrase selection to narrow the range of phrase candidates, and then conduct token attention on the tokens in the selected phrase candidates. Moreover, we re-normalize the attention weights of most relevant phrases in inference to obtain more focused phrase-level contextual representations, and inject position information to help model better discriminate phrases or tokens. On LibriSpeech and an in-house 160,000-hour dataset, we explore the proposed methods based on an all-neural biasing method, collaborative decoding (ColDec). The proposed methods further bring at most 6.1% relative word error rate reduction on LibriSpeech and 16.4% relative character error rate reduction on the in-house dataset.'\naddress: |\n $^{1}$Institute of Automation, Chinese Academy of Sciences\\\n $^{2}$School of Artificial Intelligence, University of Chinese Academy of Sciences\\\n $^{3}$Bytedance" +"---\nabstract: 'Recently, dynamically typed languages, such as Python, have gained unprecedented popularity. Although these languages alleviate the need for mandatory type annotations, types still play a critical role in program understanding and preventing runtime errors. An attractive option is to infer types automatically to get static guarantees without writing types. Existing inference techniques rely mostly on static typing tools such as `PyType` for direct type inference; more recently, neural type inference has been proposed. However, neural type inference is data hungry, and depends on collecting labeled data based on static typing. Such tools, however, are poor at inferring user defined types. Furthermore, type annotation by developers in these languages is quite sparse. In this work, we propose novel techniques for generating high quality types using 1) information retrieval techniques that work on well documented libraries to extract types and 2) usage patterns by analyzing a large repository of programs. Our results show that these techniques are more precise and address the weaknesses of static tools, and can be useful for generating a large labeled dataset for type inference by machine learning methods. F1 scores are 0.52-0.58 for our techniques, compared to static typing tools which are at 0.06, and" +"---\nbibliography:\n- 'refs.bib'\n---\n\n=15.5pt\n\n[**The Universe as a Quantum Encoder** ]{}\n\n**Jordan Cotler and Andrew Strominger**\n\n[**Abstract**]{}\n\nQuantum mechanical unitarity in our universe is challenged both by the notion of the big bang, in which nothing transforms into something, and the expansion of space, in which something transforms into more something. This motivates the hypothesis that quantum mechanical time evolution is always isometric, in the sense of preserving inner products, but not necessarily unitary. As evidence for this hypothesis we show that in two spacetime dimensions (i) there is net entanglement entropy produced in free field theory by a moving mirror or expanding geometry, (ii) the Lorentzian path integral for a finite elements lattice discretization gives non-unitary isometric time evolution, and (iii) tensor network descriptions of AdS$_3$ induce a non-unitary but isometric time evolution on an embedded two-dimensional de Sitter braneworld. In the last example time evolution is a quantum error-correcting code.\n\nIntroduction\n============\n\nThe following three long-cherished beliefs about the physical universe,\n\n1. 
Quantum states on different time slices are related by unitary transformations;\n\n2. The universe is expanding;\n\n3. There are no degrees of freedom on length scales shorter than the Planck scale;\n\nare in considerable" +"---\nabstract: 'We study multiplicative statistics for the eigenvalues of unitarily-invariant Hermitian random matrix models. We consider one-cut regular polynomial potentials and a large class of multiplicative statistics. We show that in the large matrix limit several associated quantities converge to limits which are universal in both the potential and the family of multiplicative statistics considered. In turn, such universal limits are described by the integro-differential Painlev\u00e9 II equation, and in particular they connect the random matrix models considered with the narrow wedge solution to the KPZ equation at any finite time.'\naddress:\n- 'Department of Mathematics, Massachusetts Institute of Technology, USA.'\n- 'Instituto de Ci\u00eancias Matem\u00e1ticas e de Computa\u00e7\u00e3o, Universidade de S\u00e3o Paulo (ICMC - USP), S\u00e3o Carlos, S\u00e3o Paulo, Brazil.'\nauthor:\n- Promit Ghosal\n- 'Guilherme L.\u00a0F.\u00a0Silva'\nbibliography:\n- 'bibliography.bib'\ntitle: 'Universality for multiplicative statistics of Hermitian random matrices and the integro-differential Painlev\u00e9 II equation'\n---\n\nIntroduction\n============\n\nRandom matrix theory has proven over time to be a powerful modern tool in mathematics and physics. With widespread applications in different areas such as engineering, statistical mechanics, probability, number theory, to mention only a few, its theory is rich and has been under intense development in the" +"---\nabstract: |\n We apply computational Game Theory to a unification of physics-based models that represent decision-making across a number of agents within both cooperative and competitive processes. Here the competitors try to positively influence their own returns, while negatively affecting those of their competitors. Modelling these interactions with the so-called Boyd-Kuramoto-Lanchester (BKL) complex dynamical system model yields results that can be applied to business, gaming and security contexts. This paper studies a class of decision problems on the BKL model, where a large set of coupled, switching dynamical systems are analysed using game-theoretic methods.\n\n Due to their size, the computational cost of solving these BKL games becomes the dominant factor in the solution process. To resolve this, we introduce a novel Nash Dominant solver, which is both numerically efficient and exact. The performance of this new solution technique is compared to traditional exact solvers, which traverse the entire game tree, as well as to approximate solvers such as Myopic and Monte Carlo Tree Search (MCTS). These techniques are assessed, and used to gain insights into both nonlinear dynamical systems and strategic decision making in adversarial environments.'\nauthor:\n- |\n Andrew C. Cullen\\\n School of Computer and Information Systems\\" +"---\nabstract: 'Two languages are considered mutually intelligible if their native speakers can communicate with each other, while using their own mother tongue. How does the fact that humans perceive a language pair as mutually intelligible affect the ability to learn a translation model between them? We hypothesize that the amount of data needed to train a neural machine translation model is inversely proportional to the languages\u2019 mutual intelligibility. 
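A compact sketch of how the hypothesis just stated can be quantified: compute the area under each language pair's learning curve and correlate it with human intelligibility scores. All numbers below are dummy placeholders (the paper's pairs, metrics, and scores may differ); only the mechanics are illustrated.

```python
import numpy as np

# Placeholder learning curves: dev BLEU at increasing training-set sizes
# for three hypothetical language pairs (all numbers are dummies).
sizes = np.array([1e4, 5e4, 1e5, 5e5])
curves = {
    "pair_A": np.array([12.0, 21.0, 27.0, 34.0]),
    "pair_B": np.array([ 8.0, 15.0, 21.0, 29.0]),
    "pair_C": np.array([10.0, 18.0, 24.0, 31.0]),
}
intelligibility = {"pair_A": 0.52, "pair_B": 0.35, "pair_C": 0.44}  # dummies

def auc(y, x):
    """Trapezoidal area under a learning curve."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

pairs = sorted(curves)
areas = np.array([auc(curves[p], np.log10(sizes)) for p in pairs])
scores = np.array([intelligibility[p] for p in pairs])
r = np.corrcoef(areas, scores)[0, 1]      # Pearson correlation
print(f"corr(AUC, intelligibility) = {r:.2f}")
```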
Experiments on the Romance language group reveal that there is indeed a strong correlation between the area under a model\u2019s learning curve and mutual intelligibility scores obtained by studying human speakers.'\nauthor:\n- |\n Avital Friedland[^1] Jonathan Zeltser$^*$ Omer Levy\\\n Tel Aviv University\\\n `{avitalfried,jonathanz1}@mail.tau.ac.il`\nbibliography:\n- 'anthology.bib'\n- 'references.bib'\ntitle: 'Are Mutually Intelligible Languages Easier to Translate?'\n---\n\nIntroduction\n============\n\nWhen speakers of two different languages can effectively communicate each in their own tongue, these languages are considered *mutually intelligible*. Intelligibility is often asymmetric, and considered to be a continuous notion [@gooskens], with some languages exhibiting higher intelligibility (e.g. Bulgarian and Macedonian) while others are only partially understood in the oral form (German and Yiddish) or written media (Russian and Ukrainian). Does mutual intelligibility, as perceived by humans, make translation an easier task" +"---\nabstract: 'Studying corruption presents unique challenges. Recent work in the spirit of computational social science exploits newly available data and methods to give a fresh perspective on this important topic. In this chapter we highlight some of these works, describing how they provide insights into classic social scientific questions about the structure and dynamics of corruption in society from micro to macro scales. We argue that corruption is fruitfully understood as a collective action problem that happens between embedded people and organizations. Computational methods like network science and agent-based modeling can give insights into such situations. We also present various (big) data sources that have been exploited to study corruption. We conclude by highlighting work in adjacent fields, for instance on the problems of collusion, tax evasion, organized crime, and the darkweb, and promising avenues for future work.'\nauthor:\n- Isabela Villamil\n- J\u00e1nos Kert\u00e9sz\n- Johannes Wachs\nbibliography:\n- 'main.bib'\ntitle: 'Computational Approaches to the Study of Corruption[^1]'\n---\n\n------------------------------------------------------------------------\n\n***Keywords:*** [Corruption, Networks, Big Data, Procurement, Crime]{}\n\n------------------------------------------------------------------------\n\nIntroduction\n============\n\nCorruption is an important and stubborn problem. It slows economic growth [@mauro1995corruption], increases inequality [@gupta1998does], and slows innovation [@rodriguez2014quality]. At the same time we know that high inequality" +"---\nabstract: 'Following earlier works of Dereli and collaborators, we study a three dimensional toy model where we extend the topologically massive gravity with electrodynamics by the most general $RF^2$-type non-minimal coupling terms. Here $R$ denotes the possible curvature terms and $F$ denotes the electromagnetic 2-form. We derive the variational field equations and look for exact solutions on constant negative curvature space-times with a constant, self-dual electromagnetic field. The notion of self-dual electromagnetic fields in three dimensions was introduced by Dereli and collaborators in the study of exact solutions of models with gravity-electromagnetism couplings. 
We note the conditions that the parameters of the model have to satisfy for these self-dual solutions to exist.'\naddress: 'Department of Physics, Ko\u00e7 University, 34450, Sariyer, \u0130stanbul, Turkey'\nauthor:\n- 'Kivan\u00e7 \u0130. \u00dcnl\u00fct\u00fcrk$^1$ and Cem Yeti\u015fmi\u015fo\u011flu$^2$'\ntitle: 'A model of non-minimally coupled gravitation and electromagnetism in (1+2) dimensions'\n---\n\nIntroduction\n============\n\nWorking in three dimensions has the virtue of providing many interesting toy models. One of the main reasons such models are investigated is that they help us gain insight into their more complicated four-dimensional analogues. For instance, there is a plethora of three dimensional gravitational models which are studied to better" +"---\nabstract: 'Biconical-type antennas featuring high directivity have been designed, created, and tested in an anechoic chamber. Results in the range between 1 and 5 GHz are presented in this article. In particular, two different configurations have been tested, with and without dielectric lenses, both involving rapid prototyping tools (3D printing) for the dielectric and the antenna support. A very high directivity is nowadays demanded by efficient and sustainable point-to-point communications or energy transfer protocols, to avoid releasing energy in neighboring areas and preserve data transfer security. As demonstrated here, special biconical type antennas featuring a 3D printed polylactic acid (PLA) dielectric lens can achieve a good directivity, with a corresponding emission lobe centered around $8.4$ degrees, featuring an FWHM of $6.4$ degrees. Dielectric lens-free antennas, featuring an unconventional shape, can also achieve a good directivity, with a corresponding emission lobe centered around $10.0$ degrees, featuring an FWHM of $14.2$ degrees. The preliminary results shown here explore some of the aspects of the vast configuration space (which include fabrication techniques, dielectric materials, conductive supports, etc.) and open the route for further optimization studies. The aim would be to adjust the various degrees of freedom in order to achieve what can be" +"---\nabstract: 'Understanding the search dynamics of multiobjective evolutionary algorithms (MOEAs) is still an open problem. This paper extends a recent network-based tool, search trajectory networks (STNs), to model the behavior of MOEAs. Our approach uses the idea of decomposition, where a multiobjective problem is transformed into several single-objective problems. We show that STNs can be used to model and distinguish the search behavior of two popular multiobjective algorithms, MOEA/D and NSGA-II, using 10 continuous benchmark problems with 2 and 3 objectives. Our findings suggest that we can improve our understanding of MOEAs using STNs for algorithm analysis.'\nauthor:\n- Yuri Lavinas\n- Claus Aranha\n- Gabriela Ochoa\nbibliography:\n- 'bib.bib'\ntitle: Search Trajectories Networks of Multiobjective Evolutionary Algorithms\n---\n\nIntroduction\n============\n\nMost real-world optimization problems involve multiple conflicting objectives. This has prompted the development of a variety of multiobjective evolutionary algorithms (MOEAs), which can be classified into three broad categories, based on dominance [@deb2002fast], indicators [@beume2007sms] and decomposition [@zhang2007moea]. There has been significant progress in improving MOEAs in all categories, and these algorithms are widely used in practice. 
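The STN extension above relies on decomposition, transforming a multiobjective problem into several single-objective subproblems. A standard choice (used by MOEA/D; whether the STN paper uses this exact scalarisation is an assumption on our part) is the Tchebycheff function:

```python
import numpy as np

def tchebycheff(f, w, z_star):
    """Tchebycheff scalarisation g(x | w, z*) = max_i w_i * |f_i(x) - z*_i|."""
    return float(np.max(np.asarray(w) * np.abs(np.asarray(f) - np.asarray(z_star))))

# Three spread weight vectors decompose a 2-objective problem into three
# single-objective subproblems; each visited solution can then be credited
# to the subproblem it currently optimises best.
weights = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
z_star = np.array([0.0, 0.0])   # ideal point (assumed known here)
f_x = np.array([0.3, 0.8])      # objective vector of one visited solution
best = int(np.argmin([tchebycheff(f_x, w, z_star) for w in weights]))
print(best)                     # -> 0: best credited to the first subproblem
```

Mapping every visited solution to its best subproblem in this way yields the per-subproblem trajectories from which a search trajectory network can be assembled.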
However, algorithm development and improvement are mostly guided by intuition and empirical performance comparisons. We argue that there is" +"---\nabstract: 'Recent strides have been made developing dust evolution models for galaxy formation simulations but these approaches vary in their assumptions and degree of complexity. Here we introduce and compare two separate dust evolution models (labelled \u2018Elemental\u2019 and \u2018Species\u2019), based on recent approaches, incorporated into the [[GIZMO]{}]{}\u00a0code and coupled with FIRE-2 stellar feedback and ISM physics. Both models account for turbulent dust diffusion, stellar production of dust, dust growth via gas-dust accretion, and dust destruction from time-resolved supernovae, thermal sputtering in hot gas, and astration. The \u201cElemental\u201d model tracks the evolution of generalized dust species and utilizes a simple, \u2018tunable\u2019 dust growth routine, while the \u201cSpecies\u201d model tracks the evolution of specific dust species with set chemical compositions and incorporates a physically motivated, two-phase dust growth routine. We test and compare these models in an idealized Milky Way-mass galaxy and find that while both produce reasonable galaxy-integrated dust-to-metals (D/Z) ratios and predict gas-dust accretion as the main dust growth mechanism, a chemically motivated model is needed to reproduce the observed scaling relation between individual element depletions and D/Z with column density and local gas density. We also find the inclusion of theoretical metallic iron and O-bearing dust species" +"---\nabstract: 'Stochastic actor-oriented models (SAOM) are a broadly applied modelling framework for analysing network dynamics using network panel data. They have been extended to address co-evolution of multiple networks as well as networks and behaviour. This paper extends the SAOM to the analysis of multiple network panels through a random coefficient multilevel model, estimated with a Bayesian approach. This is illustrated by a study of the dynamic interdependence of friendship and minor delinquency, represented by the combination of a one-mode and a two-mode network, using a sample of 81 school classes in the first year of secondary school.'\naddress:\n- 'University of Melbourne, Melbourne, Australia.'\n- 'University of Oxford, Oxford, United Kingdom; University of Groningen, Groningen, The Netherlands.'\nauthor:\n- Johan Koskinen\n- 'Tom A.B. Snijders'\nbibliography:\n- 'MultilevelSAOM\\_2022.bib'\ntitle: Multilevel Longitudinal Analysis of Social Networks\n---\n\nIntroduction\n============\n\nSocial network research deals with analysing the dependencies among people or other social units, dependencies induced by the relational ties that bind them together [@WassermanFaust94; @brandes2013network; @robins2015doing]. These dependencies can best be studied in a dynamic approach, where the existence of a given configuration of ties leads to the creation, or supports the maintenance, of other ties. While many of" +"---\nabstract: |\n Detailed dynamical systems models used in life sciences may include dozens or even hundreds of state variables. Models of large dimension are not only harder from the numerical perspective (e.g., for parameter estimation or simulation), but it is also becoming challenging to derive mechanistic insights from such models. Exact model reduction is a way to address this issue by finding a self-consistent lower-dimensional projection of the corresponding dynamical system. 
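To make the notion of an exact lower-dimensional projection concrete for the linear case: a reduction $y = Lx$ of $x' = Ax$ is exact precisely when $LA = \bar{A}L$ for some reduced matrix $\bar{A}$. A toy numpy check (the example matrix is ours, not from CLUE):

```python
import numpy as np

# Toy linear system x' = A x with a symmetry between x2 and x3,
# so y = (x1, x2 + x3) is an exact lumping.
A = np.array([[-1.0,  0.5,  0.5],
              [ 1.0, -2.0,  0.0],
              [ 1.0,  0.0, -2.0]])
L = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0]])

A_red = L @ A @ np.linalg.pinv(L)       # candidate reduced matrix
exact = np.allclose(L @ A, A_red @ L)   # lumping is exact iff L A = A_red L
print(exact, "\n", A_red)               # True, with reduced dynamics y' = A_red y
```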
A recent algorithm, CLUE, allows one to construct an exact linear reduction of the smallest possible dimension such that the fixed variables of interest are preserved. However, CLUE is restricted to systems with polynomial dynamics. Since rational dynamics occurs frequently in the life sciences (e.g., Michaelis-Menten or Hill kinetics), it is desirable to extend CLUE to models with rational dynamics.\n\n In this paper, we present an extension of CLUE to the case of rational dynamics and demonstrate its applicability on examples from the literature. Our implementation is available in version 1.5 of CLUE[^1].'\nauthor:\n- 'Antonio Jim\u00e9nez-Pastor[^2], Joshua Paul Jacob[^3], Gleb Pogudin[^4]'\nbibliography:\n- 'main.bib'\ntitle: Exact linear reduction for rational dynamical systems\n---\n\n[ **Keywords:** exact reduction, dynamical systems, constrained lumping ]{}\n\nIntroduction {#sec:introduction}\n============" +"---\nabstract: 'Thanks to ground-based infrared and sub-mm observations the study of the dusty torus of nearby AGN has greatly advanced in recent years. With the aim of further investigating the nuclear mid-infrared emission of the archetypal Seyfert 2 galaxy NGC 1068, here we present a fitting to the N- and Q-band Michelle/Gemini spectra. We initially test several available SED libraries, including smooth, clumpy, and two-phase torus models, and a clumpy disk plus wind model. We find that the spectra of NGC1068 cannot be reproduced with any of these models, although the smooth torus models describe the spectra of NGC1068 if we allow some model parameters to vary between the two spectral bands. Motivated by this result, we produced new SEDs using the radiative transfer code [SKIRT]{}. We use two concentric tori that allow us to test a more complex geometry. We test different values for the inner and outer radii, half opening angle, radial and polar exponent of the power-law density profile, opacity, and viewing angle. Furthermore, we also test the dust grain size and different optical and calorimetric properties of silicate grains. The best fitting model consists of two concentric components with outer radii of" +"---\nabstract: 'Many problems arising in computational science and engineering can be described in terms of approximating a smooth function of $d$ variables, defined over an unknown *domain of interest* $\\Omega\\subset \\mathbb{R}^d$, from sample data. Here both the underlying dimensionality of the problem (in the case $d\\gg 1$) as well as the lack of domain knowledge\u2014with $\\Omega$ potentially irregular and/or disconnected\u2014are confounding factors for sampling-based methods. Na\u00efve approaches for such problems often lead to wasted samples and inefficient approximation schemes. For example, uniform sampling can result in upwards of 20% wasted samples in some problems considered herein. In applications such as surrogate model construction in computational [*uncertainty quantification*]{} (UQ), the high cost of computing samples necessitates a more efficient sampling procedure. Over the last several years methods for computing such approximations from sample data have been studied in the case of irregular domains, and the advantages of computing sampling measures depending on an approximation space $P$ of $\\dim(P)=N$ have been shown. More specifically, such approaches confer advantages such as stability and well-conditioning, with an asymptotically optimal sample complexity scaling $\\mathcal{O}(N\\log(N))$. 
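A discrete sketch of the kind of approximation-space-dependent sampling measure described above, on an irregular domain represented by a candidate point cloud (the continuation below introduces ASGD as one concrete strategy): orthonormalise the basis on the candidates, sample proportionally to the normalised Christoffel density $\tfrac{1}{N}\sum_j q_j(x)^2$, and solve a weighted least-squares problem. The domain, degree, and test function are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Irregular domain: points of [-1, 1]^2 kept by a membership oracle (ours).
cand = rng.uniform(-1, 1, size=(20000, 2))
cand = cand[cand[:, 0]**2 + cand[:, 1]**2 > 0.25]   # annulus-like region
M = len(cand)

# Polynomial space P of total degree <= 3 on the domain (N = dim(P) = 10).
degs = [(i, j) for i in range(4) for j in range(4) if i + j <= 3]
V = np.column_stack([cand[:, 0]**i * cand[:, 1]**j for i, j in degs])
Q, _ = np.linalg.qr(V / np.sqrt(M))   # sqrt(M)*Q: orthonormal on the candidates
N = Q.shape[1]

# Normalised Christoffel density w(x) = (1/N) * sum_j q_j(x)^2 (mean 1).
dens = np.sum((np.sqrt(M) * Q)**2, axis=1) / N
idx = rng.choice(M, size=2 * N, replace=False, p=dens / dens.sum())

# Weighted least squares with weights 1/dens at the drawn points.
f = lambda x: np.exp(x[:, 0]) * np.sin(2 * x[:, 1])   # test function
W = 1.0 / dens[idx]
Als = (np.sqrt(M) * Q)[idx] * np.sqrt(W)[:, None]
b = f(cand[idx]) * np.sqrt(W)
coef, *_ = np.linalg.lstsq(Als, b, rcond=None)
```

In theory the number of draws should scale like $N\log N$ for stability; the factor 2 above is only a placeholder.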
The recently-proposed [*adaptive sampling for general domains*]{} (ASGD) strategy is one such technique to construct these sampling measures. The main" +"---\nauthor:\n- 'Gabriel Cuomo,'\n- 'Zohar Komargodski,'\n- M\u00e1rk Mezei\n- 'and Avia Raviv-Moshe'\nbibliography:\n- 'Biblio.bib'\ntitle: 'Spin Impurities, Wilson Lines and Semiclassics'\n---\n\n[abstract[We consider line defects with large quantum numbers in conformal field theories. First, we consider spin impurities, both for a free scalar triplet and in the Wilson-Fisher $O(3)$ model. For the free scalar triplet, we find a rich phase diagram that includes a perturbative fixed point, a new nonperturbative fixed point, and runaway regimes. To obtain these results, we develop a new semiclassical approach. For the Wilson-Fisher model, we propose an alternative description, which becomes weakly coupled in the large spin limit. This allows us to chart the phase diagram and obtain numerous rigorous predictions for large spin impurities in $2+1$ dimensional magnets. Finally, we also study $1/2$-BPS Wilson lines in large representations of the gauge group in rank-1 $\\mathcal{N}=2$ superconformal field theories. We contrast the results with the qualitative behavior of large spin impurities in magnets.]{}]{}\n\nIntroduction and summary\n========================\n\nThe study of line defects (i.e. one-dimensional defects) in critical conformal bulk theories is of fundamental importance to the study of Quantum Field Theory (QFT). Line defects have a variety of applications ranging" +"---\nabstract: 'This work theoretically studies stochastic neural networks, a main type of neural network in use. We prove that as the width of an optimized stochastic neural network tends to infinity, its predictive variance on the training set decreases to zero. Our theory justifies the common intuition that adding stochasticity to the model can help regularize the model by introducing an averaging effect. Two common examples that our theory can be relevant to are neural networks with dropout and Bayesian latent variable models in a special limit. Our result thus helps better understand how stochasticity affects the learning of neural networks and potentially design better architectures for practical problems.'\nauthor:\n- |\n Liu Ziyin$^1$, Hanlin Zhang$^2$, Xiangming Meng$^1$, Yuting Lu$^1$, Eric Xing$^2$, Masahito Ueda$^1$\\\n $^1$*The University of Tokyo*\\\n $^2$*Carnegie Mellon University*\ntitle: Stochastic Neural Networks with Infinite Width are Deterministic\n---\n\nIntroduction\n============\n\nApplications of neural networks have achieved great success in various fields. A major extension of the standard neural networks is to make them stochastic, namely, to make the output a random function of the input. In a broad sense, stochastic neural networks include neural networks trained with dropout [@srivastava2014dropout; @gal2016dropout], Bayesian networks [@mackay1992bayesian], variational autoencoders (VAE)" +"---\nabstract: 'We consider the notions of operator-valued infinitesimal (OVI) free independence, OVI Boolean independence, and OVI monotone independence. For each notion of OVI independence, we introduce the corresponding infinitesimal transforms, and then we show that the transforms satisfy certain multiplicative property. Additionally, we extend the concept of $t$-coefficients to the infinitesimal framework and investigate its properties. 
Finally, we present an application involving complex Wishart matrices utilizing our infinitesimal free multiplicative formula.'\naddress: 'New York University Abu Dhabi, Division of Science, Mathematics, Abu Dhabi, UAE'\nauthor:\n- 'Pei-Lun Tseng'\nbibliography:\n- 'main.bib'\ntitle: 'Operator-Valued Infinitesimal Multiplicative Convolutions'\n---\n\nIntroduction\n============\n\nIn non-commutative probability theory, various concepts of independence have been explored. In [@Mur03], Muraki demonstrated that there are only five types of independence, namely tensor, free [@voi85], Boolean [@spe93], monotone [@Mur2000], and anti-monotone [@Mur2000] independence, which exhibit specific universal properties.\n\nThe notion of free independence was introduced by Voiculescu in 1985 [@voi85]. It plays an important role in the study of the asymptotic behavior of random matrices. To be precise, it provides us with a way to study the distribution of the eigenvalues of polynomials in several large random matrices. Over time, numerous extensions and generalizations of free probability have emerged." +"---\nabstract: 'Driven by the rapid growth of Internet of Things applications, tremendous amounts of data need to be collected by sensors and uploaded to servers for further processing. As a promising solution, mobile crowd sensing enables controllable sensing and transmission processes for multiple types of data in a single device. In this paper, a typical user is considered that is required to sense and transmit data to a server, while it is assumed to remain busy and incapable of sensing data during an interval. An optimization problem is formulated to minimize the energy consumption of data sensing and transmission by controlling the sensing and transmission rates over time, subject to the constraints on the sensing data sizes, transmission data sizes, data causality, and sensing busy time. This problem is highly challenging, due to the coupling between the rates as well as the existence of the busy time. To deal with this problem, we first show that it can be equivalently decomposed into two subproblems, corresponding to a search for the amount of data that needs to be sensed before the busy time (referred to as the height), as well as the sensing and transmission rate control given the height." +"---\nabstract: 'We investigate the ground-state properties of weakly repulsive one-dimensional bosons in the presence of an attractive zero-range impurity potential. First, we derive mean-field solutions to the problem on a finite ring for the two asymptotic cases: (i) all bosons are bound to the impurity and (ii) all bosons are in a scattering state. Moreover, we derive the critical line that separates these regimes in the parameter space. In the thermodynamic limit, this critical line determines the maximum number of bosons that can be bound by the impurity potential, forming an artificial atom. Second, we validate the mean-field results using the flow equation approach and the multi-layer multi-configuration time-dependent Hartree method for atomic mixtures. While beyond-mean-field effects destroy long-range order in the Bose gas, the critical boson number is unaffected. 
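To make the mean-field setup above concrete, here is a schematic form of the stationary Gross-Pitaevskii problem with a zero-range attractive impurity (our notation; units $\hbar = m = 1$, coupling $g>0$, impurity strength $\alpha>0$):

```latex
\mu\,\psi(x) = -\tfrac{1}{2}\,\psi''(x) + g\,|\psi(x)|^{2}\,\psi(x) - \alpha\,\delta(x)\,\psi(x),
\qquad
\psi'(0^{+}) - \psi'(0^{-}) = -2\alpha\,\psi(0),
```

where the cusp condition follows from integrating the equation across $x=0$. In the non-interacting limit $g\to 0$ this reduces to the textbook single-particle bound state $\psi(x)\propto e^{-\alpha|x|}$ with $\mu=-\alpha^{2}/2$, the seed of the "artificial atom" whose maximal filling the abstract determines.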
Our findings are important for understanding such artificial atoms in low-density Bose gases with static and mobile impurities.'\naddress:\n- '\u00a0Technische Universit\u00e4t Darmstadt, Department of Physics, 64289 Darmstadt, Germany'\n- '\u00a0ITAMP, Center for Astrophysics $|$ Harvard $\\&$ Smithsonian, Cambridge, MA 02138 USA'\n- '\u00a7\u00a0Department of Physics, Harvard University, Cambridge, Massachusetts 02138, USA'\n- '$\\|$\u00a0ExtreMe Matter Institute EMMI and Helmholtz Forschungsakademie Hessen f\u00fcr FAIR (HFHF), GSI" +"---\nabstract: 'In this paper, we consider the federated learning (FL) problem in the presence of communication errors. We model the link between the devices and the central node (CN) by a packet erasure channel, where the local parameters from devices are either erased or received correctly by CN with probability $\\epsilon$ and $1-\\epsilon$, respectively. We prove that the FL algorithm in the presence of communication errors, where the CN uses the past local update if the fresh one is not received from a device, converges to the same global parameter as the FL algorithm without any communication errors. We provide several simulation results to validate our theoretical analysis. We also show that when the dataset is uniformly distributed among devices, the FL algorithm that only uses fresh updates and discards missing updates might converge faster than the FL algorithm that uses past local updates.'\nauthor:\n- 'Mahyar\u00a0Shirvanimoghaddam, Ayoob\u00a0Salari, Yifeng\u00a0Gao, Aradhika\u00a0Guha[^1]'\nbibliography:\n- 'IEEEabrv.bib'\n- 'ref.bib'\ntitle: Federated Learning with Erroneous Communication Links\n---\n\nConvexity, federated learning, gradient descent, short packet communications, smoothness.\n\nIntroduction\n============\n\nInternet of Things (IoT) applications and services have become popular in recent years due to major technological" +"---\nabstract: 'The annotation of disease severity for medical image datasets often relies on collaborative decisions from multiple human graders. The intra-observer variability derived from individual differences always persists in this process, yet the influence is often underestimated. In this paper, we cast the intra-observer variability as an uncertainty problem and incorporate the label uncertainty information as guidance into the disease screening model to improve the final decision. The main idea is to divide the images into simple and hard cases by uncertainty information, and then to develop a multi-stream network to deal with different cases separately. Particularly, for hard cases, we strengthen the network\u2019s capacity in capturing the correct disease features and resisting the interference of uncertainty. 
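A toy reading of the multi-stream idea just described, with a hypothetical threshold `tau` splitting simple from hard cases by label uncertainty (layer sizes and the routing rule are our own illustration, not the paper's architecture):

```python
import torch
import torch.nn as nn

class MultiStreamScreen(nn.Module):
    """Toy two-stream classifier routed by label uncertainty (a sketch)."""
    def __init__(self, dim=128, n_cls=2, tau=0.4):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(dim, 64), nn.ReLU())
        self.easy_head = nn.Linear(64, n_cls)
        # Higher-capacity stream for hard cases:
        self.hard_head = nn.Sequential(nn.Linear(64, 64), nn.ReLU(),
                                       nn.Linear(64, n_cls))
        self.tau = tau

    def forward(self, x, label_uncertainty):
        h = self.backbone(x)
        hard = label_uncertainty > self.tau        # split simple vs hard cases
        out = torch.empty(x.size(0), self.easy_head.out_features)
        out[~hard] = self.easy_head(h[~hard])
        out[hard] = self.hard_head(h[hard])
        return out
```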
Experiments on a fundus image-based glaucoma screening case study show that the proposed model outperforms several baselines, especially in screening hard cases.'\naddress: |\n $^{1}$ Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China\\\n $^{2}$ Monash eResearch Centre, Monash University, Melbourne, Australia\\\n $^{3}$ Centre for Eye Research, Melbourne University, East Melbourne, Victoria, Australia\\\nbibliography:\n- 'mybibliography.bib'\ntitle: 'Label uncertainty-guided multi-stream model for disease screening'\n---\n\nLabel uncertainty, disease screening.\n\nIntroduction\n============\n\nDeep learning (DL) models for image-based disease screening heavily rely on large-scale datasets consisting" +"---\nauthor:\n- 'Chris Akers,'\n- 'Thomas Faulkner,'\n- Simon Lin\n- and Pratik Rath\nbibliography:\n- 'mybibliography.bib'\ntitle: The Page Curve for Reflected Entropy\n---\n\nIntroduction {#sec:intro}\n============\n\nThe black hole information problem has served as a beacon guiding us in the quest to understand quantum gravity [@Hawking:1974sw; @Mathur:2009hf; @Almheiri:2012rt; @Almheiri:2013hfa]. Although a complete resolution still eludes us and might require a better understanding of the UV-complete theory of quantum gravity, significant progress has been made in recent years simply by taking the gravitational path integral seriously [@Penington:2019kki; @Almheiri:2019qdq].\n\nA commendable milestone in this endeavour is the calculation of the \u201cPage Curve\u201d using the semiclassical theory [@Penington:2019npb; @Almheiri:2019psf]. In fact, in the so-called West Coast Model [@Penington:2019npb], a toy model of black hole evaporation consisting of Jackiw-Teitelboim (JT) gravity coupled to end-of-the-world (ETW) branes, the detailed curve including effects near the phase transition was computed.\n\n![The Lorentzian description of the state we consider in the West Coast Model, a JT gravity black hole with an ETW brane. The ETW brane carries two sub-flavours, denoted black and green, that are entangled (dashed, coloured lines) with radiation systems $R_1$ and $R_2$ respectively. The extremal surface is denoted in purple and the" +"---\nabstract: 'Graph isomorphism testing is usually approached via the comparison of graph invariants. Two popular alternatives that offer a good trade-off between expressive power and computational efficiency are combinatorial (i.e., obtained via the Weisfeiler-Leman (WL) test) and spectral invariants. While the exact power of the latter is still an open question, the former is regularly criticized for its limited power when a standard configuration of uniform pre-coloring is used. This drawback hinders the applicability of Message Passing Graph Neural Networks (MPGNNs), whose expressive power is upper bounded by the WL test. Relaxing the assumption of uniform pre-coloring, we show that one can increase the expressive power of the WL test ad infinitum. Following that, we propose an efficient pre-coloring based on spectral features that provably increases the expressive power of the vanilla WL test. The above claims are accompanied by extensive synthetic and real data experiments. The code to reproduce our experiments is available at .'\n---\n\nIntroduction\n============\n\nDeep learning (DL) has become a method of choice for any machine learning task encountered in modern computer vision, natural language processing, and signal and image processing. 
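A compact sketch of the pre-colored WL idea from the graph-isomorphism abstract above: run standard 1-WL color refinement, but seed it with a spectral feature instead of a uniform coloring. We use the diagonal of $\exp(A)$ (subgraph centrality) as one possible spectral pre-coloring; the paper's exact choice of features may differ.

```python
import numpy as np

def wl_colors(adj, init, rounds=3):
    """1-WL colour refinement started from a given pre-colouring `init`;
    `adj` is a list of neighbour lists."""
    colors = list(init)
    for _ in range(rounds):
        sigs = [(colors[v], tuple(sorted(colors[u] for u in adj[v])))
                for v in range(len(adj))]
        relabel = {}
        colors = [relabel.setdefault(s, len(relabel)) for s in sigs]
    return colors

def spectral_precoloring(A, decimals=6):
    """Pre-colouring from diag(exp(A)) (subgraph centrality), computed
    via the eigendecomposition of the adjacency matrix."""
    w, U = np.linalg.eigh(A)
    vals = np.round((U**2) @ np.exp(w), decimals)
    relabel = {}
    return [relabel.setdefault(v, len(relabel)) for v in vals]
```

For example, uniform-initialised 1-WL cannot distinguish two disjoint triangles from a 6-cycle (both 2-regular), while the diag(exp(A)) pre-coloring already separates them.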
It has been particularly successful when dealing with Euclidean-structured data such as audio" +"---\nabstract: |\n This paper considers the problem of supervised learning with linear methods when both features and labels can be corrupted, either in the form of heavy tailed data and/or corrupted rows. We introduce a combination of coordinate gradient descent as a learning algorithm together with robust estimators of the partial derivatives. This leads to robust statistical learning methods that have a numerical complexity *nearly identical* to non-robust ones based on empirical risk minimization. The main idea is simple: while robust learning with gradient descent requires the computational cost of robustly estimating the whole gradient to update all parameters, a parameter can be updated immediately using a robust estimator of a single partial derivative in coordinate gradient descent. We prove upper bounds on the generalization error of the algorithms derived from this idea, that control both the optimization and statistical errors with and without a strong convexity assumption of the risk. Finally, we propose an efficient implementation of this approach in a new `Python` library called `linlearn`, and demonstrate through extensive numerical experiments that our approach introduces a new interesting compromise between robustness, statistical performance and numerical efficiency for this problem.\n\n *Keywords.* Robust methods; Heavy-tailed data; Outliers; Robust gradient" +"---\naddress: |\n Google Research\\\n 1600 Amphiteatre Pkwy, Mountain View, CA 94043, USA\\\n {rueckert, srinivasksun, abhirast, sush, pranavkhaitan}@google.com\\\nbibliography:\n- 'main.bib'\ntitle: 'A Unified Approach to Entity-Centric Context Tracking in Social Conversations'\n---\n\nIntroduction\n============\n\nComputers and mobile phones have changed how people communicate. A large amount of today\u2019s interpersonal communication happens in messaging apps on mobile devices or on chat and discussion services on the internet. Consequently, this has piqued the interest of the research community in developing assistive technologies for human-human conversations. Representing the current status of a conversation in a succinct and semantically complete way is a central component of such technologies. At the core of this endeavor lies the task of tracking the entities mentioned in a conversation, their properties and the relationships that are being expressed about them. In this paper, we frame this task, which we call *Context Tracking*, as an online machine learning problem, where the model is expected to track the current status of the conversation at any time. This formulation extends and complements existing research in three key areas.\n\nFirst of all, in this framework the model ingests the messages of a conversation turn by turn and updates a growing repository" +"---\nabstract: 'The radiation observed in quasars and active galactic nuclei is mainly produced by a relativistic plasma orbiting close to the black hole event horizon, where strong gravitational effects are relevant. The observational data of such systems can be compared with theoretical models to infer the black hole and plasma properties. In the comparison process, ray-tracing algorithms are essential to computing the trajectories followed by the photons from the source to our telescopes. 
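As a minimal illustration of this kind of ray tracing (the continuation below describes a Hamiltonian-based code that tracks the constraint error), here is a toy Python integrator for equatorial null geodesics in the Schwarzschild geometry, monitoring $H=\tfrac12 g^{\mu\nu}p_\mu p_\nu \approx 0$; step sizes and initial data are illustrative only.

```python
import numpy as np

M = 1.0   # black-hole mass (geometric units G = c = 1)

def hamiltonian(q, p):
    """H = (1/2) g^{mu nu} p_mu p_nu for Schwarzschild, equatorial plane.
    q = (t, r, phi), p = (p_t, p_r, p_phi); H = 0 on null geodesics."""
    _, r, _ = q
    f = 1.0 - 2.0 * M / r
    return 0.5 * (-p[0]**2 / f + f * p[1]**2 + p[2]**2 / r**2)

def rhs(q, p, h=1e-7):
    """Hamilton's equations dq/dl = dH/dp, dp/dl = -dH/dq (numerical grads)."""
    dq = np.array([(hamiltonian(q, p + h*e) - hamiltonian(q, p - h*e)) / (2*h)
                   for e in np.eye(3)])
    dp = -np.array([(hamiltonian(q + h*e, p) - hamiltonian(q - h*e, p)) / (2*h)
                    for e in np.eye(3)])
    return dq, dp

def rk4_step(q, p, dl):
    k1q, k1p = rhs(q, p)
    k2q, k2p = rhs(q + 0.5*dl*k1q, p + 0.5*dl*k1p)
    k3q, k3p = rhs(q + 0.5*dl*k2q, p + 0.5*dl*k2p)
    k4q, k4p = rhs(q + dl*k3q, p + dl*k3p)
    return (q + dl*(k1q + 2*k2q + 2*k3q + k4q)/6,
            p + dl*(k1p + 2*k2p + 2*k3p + k4p)/6)

# Photon starting at r = 20M with energy E and impact parameter b = p_phi/E:
q = np.array([0.0, 20.0, 0.0])
E, b = 1.0, 8.0
f0 = 1.0 - 2.0*M/q[1]
p_r = -np.sqrt((E**2/f0 - b**2*E**2/q[1]**2) / f0)   # ingoing, chosen so H = 0
p = np.array([-E, p_r, b*E])
for _ in range(4000):
    q, p = rk4_step(q, p, dl=0.01)
    if q[1] < 2.05*M or q[1] > 40.0:
        break
print("constraint |H| =", abs(hamiltonian(q, p)))    # should stay close to 0
```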
In this paper, we present `OSIRIS`: a new stable FORTRAN code capable of efficiently computing null geodesics around compact objects, including general relativistic effects such as gravitational lensing, redshift, and relativistic boosting. The algorithm is based on the Hamiltonian formulation and uses different integration schemes to evolve null geodesics while tracking the error in the Hamiltonian constraint to ensure physical results. We found from an error analysis that the integration schemes are all stable, and the best one maintains an error below $10^{-11}$. In particular, to test the robustness and ability of the code to evolve geodesics in curved space-time, we compute the shadow and Einstein rings of a Kerr black hole with different rotation parameters and obtain the image of a thin Keplerian accretion disk around a Schwarzschild" +"---\nabstract: |\n Quantum theory predicts the existence of genuinely tripartite-entangled states, which cannot be obtained from local operations over any bipartite entangled states and unlimited shared randomness. Some of us recently proved that this feature is a fundamental signature of quantum theory. The state ${\\ensuremath{\\left|{\\rm GHZ}_3\\right\\rangle}}=({\\ensuremath{\\left|000\\right\\rangle}}+{\\ensuremath{\\left|111\\right\\rangle}})/\\sqrt{2}$ gives rise to tripartite quantum correlations which cannot be explained by any causal theory limited to bipartite nonclassical common causes *of any kind* (generalising entanglement) assisted with unlimited shared randomness. Hence, any conceivable physical theory which would reproduce quantum predictions will necessarily include genuinely tripartite resources.\n\n In this work, we verify that such tripartite correlations are experimentally achievable. We derive a new device-independent witness capable of falsifying causal theories wherein nonclassical resources are merely bipartite. Using high-performance photonic ${\\ensuremath{\\left|{\\rm GHZ}_3\\right\\rangle}}$ states with fidelities of $0.9741\\pm0.002$, we provide a clear experimental violation of that witness by more than 26.3 standard deviations, under the locality and fair sampling assumptions. We generalise our work to the ${\\ensuremath{\\left|{\\rm GHZ}_4\\right\\rangle}}$ state, obtaining correlations which cannot be explained by any causal theory limited to tripartite nonclassical common causes assisted with unlimited shared randomness.'\nauthor:\n- \n- \n- \n- Ga\u00ebl Mass\u00e9\n- 'Xavier Coiteux-Roy'\n- 'Bi-Heng Liu'\n-" +"---\nabstract: |\n Supply chain security has become a growing concern in security risk analysis of Internet of Things (IoT) systems. Their highly connected structures have significantly enlarged the attack surface, making it difficult to track the source of the risk posed by malicious or compromised suppliers. This chapter presents a system-scientific framework to study accountability in IoT supply chains and provides a holistic risk analysis technologically and socio-economically. We develop stylized models and quantitative approaches to evaluate the accountability of the suppliers. Two case studies are used to illustrate accountability measures for scenarios with single and multiple agents. Finally, we present the contract design and cyber insurance as economic solutions to mitigate supply chain risks. 
They are incentive-compatible mechanisms that encourage truth-telling of the supplier and facilitate reliable accountability investigation for the buyer.'\nauthor:\n- Yunfei Ge\n- Quanyan Zhu\nbibliography:\n- 'acc.bib'\ntitle: Accountability and Insurance in IoT Supply Chain\n---\n\nIntroduction {#sec:introduction}\n============\n\nSupply chains play a critical role in the security and resilience of IoT systems and affect many users, including small- and medium-sized businesses and government agencies. An attacker can exploit vulnerabilities of a vendor in the supply chain to compromise the" +"---\nabstract: 'A specific representation of the known one-loop EW correction to the relation between the pole and running ${\\overline{\\rm{MS}}}$-scheme masses of the top-quark through particle masses of the Standard Model is given within the Fleischer-Jegerlehner tadpole scheme, where the vacuum expectation value of the Higgs field is renormalized. The importance of taking into account both the EW and QCD effects in this relation in the considered case is emphasized. It is noted that discarding the EW corrections leads to a shift of over $10\\;{{\\rm{GeV}}}$ in the difference between the pole and running $t$-quark masses. This magnitude substantially exceeds the modern uncertainties of the considered relation, following from the treatment of the Tevatron and LHC data where both pole and running $t$-quark masses are defined in the widespread approach when only the QCD corrections are kept in mind between them.'\nauthor:\n- 'A.\u00a0L.\u00a0Kataev[^1]'\n- 'V.\u00a0S.\u00a0Molokoedov[^2]'\ntitle: '**Notes on interplay of the QCD and EW perturbative corrections to the pole-running top-quark mass ratio**'\n---\n\nIntroduction\n============\n\nThe top-quark mass is an important theoretical parameter, which is extracted from experimental data of the Tevatron and LHC (see e.g. [@Zyla:2020zbs] and reviews [@Nason:2017cxd; @Corcella:2019tgt; @Hoang:2020iah]). Among the various definitions" +"---\nabstract: 'Red-shifted components of chromospheric emission lines in the hard X-ray impulsive phase of solar flares have recently been studied through their 30\u00a0s evolution with the high resolution of IRIS. Radiative-hydrodynamic flare models show that these redshifts are generally reproduced by electron-beam generated chromospheric condensations. The models produce large ambient electron densities, and the pressure broadening of the hydrogen Balmer series should be readily detected in observations. To accurately interpret upcoming spectral data of flares with the DKIST, we incorporate non-ideal, non-adiabatic line broadening profiles of hydrogen into the RADYN code. These improvements allow time-dependent predictions for the extreme Balmer line wing enhancements in solar flares. We study two chromospheric condensation models, which cover a range of electron beam fluxes ($1-5 \\times 10^{11}$ erg s$^{-1}$ cm$^{-2}$) and ambient electron densities ($1 - 60 \\times 10^{13}$ cm$^{-3}$) in the flare chromosphere. Both models produce broadening and redshift variations within 10\u00a0s of the onset of beam heating. 
In the chromospheric condensations, there is enhanced spectral broadening due to large optical depths at H$\\alpha$, H$\\beta$, and H$\\gamma$, while the much lower optical depth of the Balmer series H12$-$H16 provides a translucent window into the smaller electron densities in the beam-heated layers" +"---\nabstract: 'In clinical trials, there is potential to improve precision and reduce the required sample size by appropriately adjusting for baseline variables in the statistical analysis. This is called covariate adjustment. Despite recommendations by regulatory agencies in favor of covariate adjustment, it remains underutilized leading to inefficient trials. We address two obstacles that make it challenging to use covariate adjustment. A first obstacle is the incompatibility of many covariate adjusted estimators with commonly used boundaries in group sequential designs (GSDs). A second obstacle is the uncertainty at the design stage about how much precision gain will result from covariate adjustment. We propose a method that modifies the original estimator so that it becomes compatible with GSDs, while increasing or leaving unchanged the estimator\u2019s precision. Our approach allows the use of any asymptotically linear estimator, which covers many estimators used in randomized trials. Building on this, we propose using an information adaptive design, that is, continuing the trial until the required information level is achieved. Such a design adapts to the amount of precision gain and can lead to faster, more efficient trials, without sacrificing validity or power. We evaluate estimator performance in simulations that mimic features of a completed" +"---\nabstract: 'We demonstrate the simultaneous generation of second and third harmonic signals from a telecom wavelength pump in a gallium phosphide (GaP) microdisk. Using analysis of the power scaling of both the second and third harmonic outputs and calculations of nonlinear cavity mode coupling factors, we study contributions to the third harmonic signal from direct and cascaded sum frequency generation processes. We find that despite the relatively high material absorption in gallium phosphide at the third harmonic wavelength, both of these processes can be significant, with relative magnitudes that depend closely on the detuning between the second harmonic wavelength of the cavity modes.'\nauthor:\n- Blaine McLaughlin\n- 'David P. Lake'\n- Matthew Mitchell\n- 'Paul E. Barclay'\nbibliography:\n- 'references.bib'\ntitle: 'Nonlinear optics in gallium phosphide cavities: simultaneous second and third harmonic generation'\n---\n\nIntroduction\n============\n\nIn recent years, resonant cavity structures have been used in a wide range of applications within integrated photonics. In particular, whispering gallery mode microcavities have allowed for the incorporation of nonlinear optical effects into on-chip photonics platforms [@li2018whispering]. The high degree of optical confinement provided by whispering gallery mode microcavities allows for integrated nonlinear devices with great efficiency and a high degree" +"---\nabstract: 'DI\u00a0Herculis is an eclipsing binary famous for a longstanding disagreement between theory and observation of the apsidal precession rate, which was resolved when both stars were found to be severely misaligned with the orbit. 
We used data from the Transiting Exoplanet Survey Satellite (TESS) to refine our knowledge of the stellar obliquities and sharpen the comparison between the observed and theoretical precession rates. The TESS data show variations with a 1.07-day period, which we interpret as rotational modulation from starspots on the primary star. This interpretation is supported by the detection of photometric anomalies during primary eclipses consistent with starspot crossings. The secondary eclipse light curve shows a repeatable asymmetry which we interpret as an effect of gravity darkening. By combining the TESS data with previously obtained data, we determined the three-dimensional spin directions of both stars. Using this information, the updated value of the theoretical apsidal precession rate (including the effects of tides, rotation, and general relativity) is $1.35^{+0.58}_{-0.50}$\u00a0arcsec/cycle. The updated value of the observed rate (after including new TESS eclipse times) is [$1.41^{+0.39}_{-0.28}$]{}\u00a0arcsec/cycle. Given the agreement between the observed and theoretical values, we fitted all the relevant data simultaneously assuming the theory is" +"---\nabstract: 'Large-scale databases with high-quality manual labels are scarce in the audio domain. We thus explore a self-supervised graph approach to learning audio representations from highly limited labelled data. Considering each audio sample as a graph node, we propose a subgraph-based framework with novel self-supervision tasks to learn effective audio representations. During training, subgraphs are constructed by sampling the entire pool of available training data to exploit the relationship between the labelled and unlabelled audio samples. During inference, we use random edges to alleviate the overhead of graph construction. We evaluate our model on three benchmark audio datasets spanning two tasks: acoustic event classification and speech emotion recognition. We show that our semi-supervised model performs better than or on par with fully supervised models and outperforms several competitive existing models. Our model is compact and can produce generalized audio representations robust to different types of signal noise. Our code is available at [`github.com/AmirSh15/SSL_graph_audio`](https://github.com/AmirSh15/SSL_graph_audio)'\nauthor:\n- 'Amir Shirian, Krishna Somandepalli, Tanaya Guha [^1]'\nbibliography:\n- 'biblo.bib'\ntitle: 'Self-supervised Graphs for Audio Representation Learning with Limited Labeled Data'\n---\n\nAcoustic event classification, graph neural network, speech emotion recognition, self-supervised learning, semi-supervised learning," +"---\nabstract: 'Koopman operator theory has been successfully applied to problems from various research areas such as fluid dynamics, molecular dynamics, climate science, engineering, and biology. Applications include detecting metastable or coherent sets, coarse-graining, system identification, and control. There is an intricate connection between dynamical systems driven by stochastic differential equations and quantum mechanics. In this paper, we compare the ground-state transformation and Nelson\u2019s stochastic mechanics and demonstrate how data-driven methods developed for the approximation of the Koopman operator can be used to analyze quantum physics problems. 
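As an illustrative aside, not code from any of the papers above: Extended Dynamic Mode Decomposition (EDMD) is one standard data-driven approximation of the Koopman operator of the kind referred to here. A minimal sketch, assuming only NumPy and a monomial dictionary on scalar data:

```python
# Illustrative sketch: EDMD approximates the Koopman operator K on a
# finite dictionary Psi via least squares, Psi(y) ~= Psi(x) @ K.
import numpy as np

def edmd(x, y, degree=3):
    """x, y: arrays of shape (n_samples,), with y[i] the time-evolved x[i].
    Dictionary: Psi(z) = [1, z, z**2, ..., z**degree]."""
    psi = lambda z: np.vander(z, degree + 1, increasing=True)
    K, *_ = np.linalg.lstsq(psi(x), psi(y), rcond=None)
    return K

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 500)   # sampled states
K = edmd(x, 0.9 * x)              # toy linear dynamics: x -> 0.9 x
print(np.sort(np.abs(np.linalg.eigvals(K)))[::-1])
```

For the toy map $x \mapsto 0.9x$, the monomial $z^k$ evolves to $0.9^k z^k$, so the recovered spectrum approximates $\{1, 0.9, 0.81, 0.729\}$.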
Moreover, we exploit the relationship between Schr\u00f6dinger operators and stochastic control problems to show that modern data-driven methods for stochastic control can be used to solve the stationary or imaginary-time Schr\u00f6dinger equation. Our findings open up a new avenue towards solving Schr\u00f6dinger\u2019s equation using recently developed tools from data science.'\nauthor:\n- 'Stefan Klus[^1]'\n- 'Feliks N\u00fcske$^\\ast$'\n- Sebastian Peitz\nbibliography:\n- 'Nelson.bib'\ntitle: Koopman analysis of quantum systems\n---\n\nIntroduction\n============\n\nRelationships between the Schr\u00f6dinger equation and the Fokker\u2013Planck equation have been explored since the early days of quantum mechanics. Schr\u00f6dinger [@Schroedinger31] already wrote:\n\n> *Eine gewisse Verwandtschaft der wellenmechanischen Grundgleichung und der Fokkerschen Gleichung, sowie der an beide ankn\u00fcpfenden statistischen Begriffsbildungen" +"---\nabstract: 'Computing a Gaussian process (GP) posterior has a computational cost cubic in the number of historical points. A reformulation of the same GP posterior highlights that this complexity mainly depends on how many *unique* historical points are considered. This can have important implications in active learning settings, where the set of historical points is constructed sequentially by the learner. We show that sequential black-box optimization based on GPs (GP-Opt) can be made efficient by sticking to a candidate solution for multiple evaluation steps and switching only when necessary. Limiting the number of switches also limits the number of unique points in the history of the GP. Thus, the efficient GP reformulation can be used to exactly and cheaply compute the posteriors required to run the GP-Opt algorithms. This approach is especially useful in real-world applications of GP-Opt with high switch costs (e.g. switching chemicals in wet labs, data/model loading in hyperparameter optimization). As examples of this meta-approach, we modify two well-established GP-Opt algorithms, GP-UCB and GP-EI, to switch candidates as infrequently as possible, adapting rules from batched GP-Opt. These versions preserve all the theoretical no-regret guarantees while improving practical aspects of the algorithms such as runtime, memory complexity," +"---\nabstract: 'We formulate a three-dimensional semi-classical model to address triple and double ionization in three-electron atoms driven by intense infrared laser pulses. During time propagation, our model fully accounts for the Coulomb singularities, the magnetic field of the laser pulse and for the motion of the nucleus at the same time as for the motion of the three electrons. The framework we develop is general and can account for multi-electron ionization in strongly-driven atoms with more than three electrons. To avoid unphysical autoionization arising in classical models of three or more electrons, we replace the Coulomb potential between pairs of bound electrons with effective Coulomb potentials. The Coulomb forces between electrons that are not both bound are fully accounted for. We develop a set of criteria to determine when electrons become bound during time propagation. We compare ionization spectra obtained with the model developed here and with the Heisenberg model that includes a potential term restricting an electron from closely approaching the core. Such spectra include the sum of the electron momenta along the direction of the laser field as well as the correlated electron momenta. 
We also compare these results with experimental ones.'\nauthor:\n- 'M. B. Peters'" +"---\nabstract: 'Adversarial examples are inputs for machine learning models that have been designed by attackers to cause the model to make mistakes. In this paper, we demonstrate that adversarial examples can also be utilized for good to improve the performance of imbalanced learning. We provide a new perspective on how to deal with imbalanced data: adjust the biased decision boundary by training with Guiding Adversarial Examples (GAEs). Our method can effectively increase the accuracy of minority classes while sacrificing little accuracy on majority classes. We empirically show, on several benchmark datasets, that our proposed method is comparable to the state-of-the-art method. To the best of our knowledge, we are the first to deal with imbalanced learning with adversarial examples.'\naddress: 'Zhejiang University, China'\nbibliography:\n- 'refs.bib'\ntitle: 'Adversarial Examples for Good: Adversarial Examples Guided Imbalanced Learning'\n---\n\nadversarial examples, long-tail data, imbalanced learning\n\nIntroduction {#sec:introduction}\n============\n\nIn practice, most real-world datasets have long-tailed label distributions\u00a0[@liu2019large]. As a result, deep learning algorithms usually perform poorly or even collapse when the training data suffers from heavy class imbalance, especially for highly skewed data\u00a0[@zhang2021bag]. Due to the imbalanced data, networks can overfit the minority classes, which leads to the deviation" +"---\nabstract: 'Simple cubic phosphorus exhibits superconductivity with a maximum $T_c$ of up to 12 K under pressure. The pressure dependence of $T_c$ cannot be consistently explained with a simple electron-phonon mechanism, which has stimulated investigations into the role of electronic correlations and plasmonic contributions. Here, we solve the gap equation of density functional theory for superconductors using different electron-electron and electron-phonon contributions to the kernel. We find that the phonon contribution alone yields an overestimation of $T_c$, while the addition of the static electronic contribution results in an underestimation. Taking into account the full frequency dependence of the screened interaction, the one-shot $GW$ approximation predicts $T_c$ values in good agreement with the experiments in the pressure range appropriate for the cubic phase. We also explore the use of quasi-particle bands in the calculation of the electronic and phononic kernels, and show that this modification significantly improves $T_c$ in the high-pressure region.'\nauthor:\n- Viktor Christiansson\n- Francesco Petocchi\n- Philipp Werner\nbibliography:\n- 'paper.bib'\ntitle: Superconductivity in black phosphorus and the role of dynamical screening\n---\n\n\\[sec:Introduction\\]Introduction\n================================\n\nBlack phosphorus at ambient conditions is a layered semiconductor with a narrow gap. It turns into a metallic simple cubic phase" +"---\nabstract: 'Recent estimates indicate that there are over 1 million runaway and homeless youth and young adults (RHY) in the United States (US). Exposure to trauma, violence, and substance abuse, coupled with a lack of community support services, puts homeless youth at high risk of being exploited and trafficked. 
Although access to safe housing and supportive services such as physical and mental healthcare is an effective response to youth\u2019s vulnerability towards being trafficked, the number of youth experiencing homelessness exceeds the capacity of available housing resources in most US communities. We undertake a RHY-informed, systematic, and data-driven approach to project the collective capacity required by service providers to adequately meet the needs of RHY in New York City, including those most at risk of being trafficked. Our approach involves an integer linear programming model that extends the multiple multidimensional knapsack problem and is informed by partnerships with key stakeholders. The mathematical model allows for time-dependent allocation and capacity expansion, while incorporating stochastic youth arrivals and length of stays, services provided in a periodic fashion, and service delivery time windows. Our RHY and service provider-centered approach is an important step toward meeting the actual, rather than presumed, survival needs of" +"---\nabstract: 'Enabling additive manufacturing to employ a wide range of novel, functional materials can be a major boost to this technology. However, making such materials printable requires painstaking trial-and-error by an expert operator, as they typically tend to exhibit peculiar rheological or hysteresis properties. Even in the case of successfully finding the process parameters, there is no guarantee of print-to-print consistency due to material differences between batches. These challenges make closed-loop feedback an attractive option where the process parameters are adjusted on-the-fly. There are several challenges for designing an efficient controller: the deposition parameters are complex and highly coupled, artifacts occur after long time horizons, simulating the deposition is computationally costly, and learning on hardware is intractable. In this work, we demonstrate the feasibility of learning a closed-loop control policy for additive manufacturing using reinforcement learning. We show that approximate, but efficient, numerical simulation is sufficient as long as it allows learning the behavioral patterns of deposition that translate to real-world experiences. In combination with reinforcement learning, our model can be used to discover control policies that outperform baseline controllers. Furthermore, the recovered policies have a minimal sim-to-real gap. We showcase this by applying our control policy in-vivo on" +"---\naddress: |\n $^{1}$EkStep Foundation, $^{2}$Thoughtworks Technologies India Pvt Ltd.,\\\n $^{3}$Agami , $^{4}$ Indian Institute of Technology Kanpur (IIT-K)\\\n {prathamk, aman.tiwari, astha.agarwal}@thoughtworks.com,\\\n {saurabh, smita}@agami.in, vivek@ekstep.org, ashutoshm@cse.iitk.ac.in\nbibliography:\n- 'lrec2022-example.bib'\ntitle: Corpus for Automatic Structuring of Legal Documents\n---\n\nIntroduction\n============\n\nIn populous countries (e.g., India), pending legal cases have been growing exponentially. For example, according to India\u2019s National Judicial Data Grid, as of December 2021, there are approximately 40 million cases pending in various courts of the country [@njdc-district]. India follows a common-law system; consequently, due to subjectivity involved in the legal process, it may not be possible to automate the entire judicial pipeline completely; nevertheless, many intermediate tasks can be automated to augment legal practitioners, and hence expedite the system. 
For example, legal documents can be processed with the help of Natural Language Processing (NLP) techniques to organize and structure the data to be amenable to automatic search and retrieval. However, legal texts are different from commonly occurring texts typically used to train NLP models. Legal documents are quite long, running into tens (sometimes hundreds) of pages. Long documents make automatic processing challenging as information is spread throughout the document [@malik-etal-2021-ildc]. Another challenge with legal documents is the use" +"---\nbibliography:\n- 'references.bib'\ntitle: |\n Image Classification using Graph Neural Network\\\n and Multiscale Wavelet Superpixels\n---\n\nVarun Vasudevan$^{a,}$[^1], Maxime Bassenne$^{b,}$^\\[note1\\]^, Md Tauhidul Islam$^{b,*}$, and Lei Xing$^b$\n\n$^a$Institute for Computational and Mathematical Engineering, Stanford University, Stanford, CA-94305, USA\n\n$^b$Department of Radiation Oncology, Stanford University, Stanford, CA-94305, USA\n\n\\*Corresponding author. Email: tauhid@stanford.edu\n\nAbstract {#abstract .unnumbered}\n========\n\nPrior studies using graph neural networks (GNNs) for image classification have focused on graphs generated from a regular grid of pixels or similar-sized superpixels. In the latter, a single target number of superpixels is defined for an entire dataset irrespective of differences across images and their intrinsic multiscale structure. On the contrary, this study investigates image classification using graphs generated from an image-specific number of multiscale superpixels. We propose WaveMesh, a new wavelet-based superpixeling algorithm, where the number and sizes of superpixels in an image are systematically computed based on its content. WaveMesh superpixel graphs are structurally different from similar-sized superpixel graphs. We use SplineCNN, a state-of-the-art network for image graph classification, to compare WaveMesh and similar-sized superpixels. Using SplineCNN, we perform extensive experiments on three benchmark datasets under three local-pooling settings: 1) no pooling, 2) GraclusPool, and 3) WavePool, a novel spatially heterogeneous pooling" +"---\nabstract: 'High-field charge transport in semiconductors is of fundamental interest and practical importance. While the *ab initio* treatment of low-field transport is well-developed, the treatment of high-field transport is much less so, particularly for multi-phonon processes that are reported to be relevant in GaAs. Here, we report a calculation of the high-field transport properties and current power spectral density (PSD) of hot electrons in GaAs from first principles including on-shell two-phonon (2ph) scattering. The on-shell 2ph scattering rates are found to qualitatively alter the high-field distribution function by increasing both the momentum and energy relaxation rates as well as contributing markedly to intervalley scattering. This finding reconciles a long-standing discrepancy regarding the strength of intervalley scattering in GaAs as inferred from transport and optical studies. The characteristic non-monotonic trend of PSD with electric field is not predicted at this level of theory. Our work shows how *ab initio* calculations of high-field transport and noise may be used as a stringent test of the electron-phonon interaction in semiconductors.'\nauthor:\n- 'Peishi S. 
Cheng [[](https://orcid.org/0000-0002-3513-9972)]{}'\n- 'Jiace Sun [[](https://orcid.org/0000-0002-0566-2084)]{}'\n- 'Shi-Ning Sun [[](https://orcid.org/0000-0002-5984-780X)]{}'\n- 'Alexander Y. Choi [[](https://orcid.org/0000-0003-2006-168X)]{}'\n- 'Austin J. Minnich [[](https://orcid.org/0000-0002-9671-9540)]{}'\nbibliography:\n- 'references.bib'\ntitle: 'High-field transport and hot" +"---\nabstract: 'Aligning a sequence to a *walk* in a labeled graph is a problem of fundamental importance to Computational Biology. For finding a walk in an arbitrary graph with $|E|$ edges that exactly matches a pattern of length $m$, a lower bound based on the Strong Exponential Time Hypothesis (SETH) implies that an algorithm significantly faster than $\\mathcal{O}(|E|m)$ time is unlikely \\[Equi *et al.*, ICALP 2019\\]. However, for many special graphs, such as de Bruijn graphs, the problem can be solved in linear time \\[Bowe *et al.*, WABI 2012\\]. For approximate matching, the picture is more complex. When edits (substitutions, insertions, and deletions) are only allowed to the pattern, or when the graph is acyclic, the problem is again solvable in $\\mathcal{O}(|E|m)$ time. When edits are allowed to arbitrary cyclic graphs, the problem becomes NP-complete, even on binary alphabets \\[Jain *et al.*, RECOMB 2019\\]. These results hold even when edits are restricted to only substitutions. Despite the popularity of de Bruijn graphs in Computational Biology, the complexity of approximate pattern matching on de Bruijn graphs remained open. We investigate this problem and show that the properties that make de Bruijn graphs amenable to efficient exact pattern matching do not extend" +"---\nabstract: 'An *orthotube* consists of orthogonal boxes (e.g., unit cubes) glued face-to-face to form a path. In 1998, Biedl et al.\u00a0showed that every orthotube has a *grid unfolding*: a cutting along edges of the boxes so that the surface unfolds into a connected planar shape without overlap. We give a new algorithmic grid unfolding of orthotubes with the additional property that the rectangular faces are attached in a single path \u2014 a Hamiltonian path on the rectangular faces of the orthotube surface.'\nauthor:\n- 'Erik D. Demaine[^1]'\n- 'Kritkorn Karntikoon[^2]'\nbibliography:\n- 'references.bib'\ntitle: Unfolding Orthotubes with a Dual Hamiltonian Path\n---\n\nIntroduction {#section-introduction}\n============\n\nDoes every orthogonal polyhedron have a *grid unfolding*, that is, a cutting along edges of the induced grid (extending a plane through every face of the polyhedron) such that the remaining surface unfolds into a connected planar shape without overlap? This question remains unsolved over 20 years after this type of unfolding was introduced in 1998 [@demainez1998unfolding]; see [@o2008unfolding] for a survey and [@genus2; @DBLP:conf/cccg/DamianF18; @damian2021unfolding] for recent progress. This problem is in some sense the orthogonal nonconvex version of the older and more famous open problem of whether every convex polyhedron has an edge unfolding" +"---\nabstract: |\n    In recent years, the DeepMind algorithm AlphaZero has become the state of the art to efficiently tackle perfect information two-player zero-sum games with a win/lose outcome. However, when the win/lose outcome is decided by a final score difference, AlphaZero may play score-suboptimal moves because all winning final positions are equivalent from the win/lose outcome perspective. 
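A toy numerical illustration of this point (not from the paper): once outcomes are uncertain, the move that maximises the probability of winning need not maximise the expected score difference.

```python
# Illustrative toy: two moves, each inducing a distribution over final
# score differences; win/lose optimality and score optimality diverge.
actions = {
    "safe":   {+1: 0.9, -10: 0.1},   # wins often, small margins
    "greedy": {+20: 0.6, -1: 0.4},   # loses more often, big margins
}

for name, dist in actions.items():
    p_win = sum(p for score, p in dist.items() if score > 0)
    e_score = sum(score * p for score, p in dist.items())
    print(f"{name:6s}  P(win)={p_win:.2f}  E[score]={e_score:+.2f}")
# safe    P(win)=0.90  E[score]=-0.10
# greedy  P(win)=0.60  E[score]=+11.60
```

An agent trained on win/lose outcomes prefers `safe`, while a score-trained agent prefers `greedy`, so the two training targets genuinely disagree.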
This can be an issue, for instance when used for teaching, or when trying to understand whether there is a better move. Moreover, there is the theoretical quest for the perfect game. A naive approach would be training an AlphaZero-like agent to predict score differences instead of win/lose outcomes. Since the game of Go is deterministic, this should also produce outcome-optimal play. However, it is a folklore belief that \u201cthis does not work\u201d.\n\n    In this paper, we first provide empirical evidence for this belief. We then give a theoretical interpretation of this suboptimality in a general perfect information two-player zero-sum game, where the complexity of a game like Go is replaced by the randomness of the environment. We show that an outcome-optimal policy has a different preference for uncertainty when it is winning or losing. In particular, when in a" +"---\nabstract: |\n    The field of software verification has produced a wide array of algorithmic techniques that can prove a variety of properties of a given program. It has been demonstrated that the performance of these techniques can vary by up to 4 orders of magnitude on the same verification problem. Even for verification experts, it is difficult to decide which tool will perform best on a given problem. For general users, deciding the best tool for their verification problem is effectively impossible.\n\n    In this work, we present Graves, a selection strategy based on graph neural networks (GNNs). Graves generates a graph representation of a program from which a GNN predicts a score for a verifier that indicates its performance on the program.\n\n    We evaluate Graves on a set of 10 verification tools and over 8000 verification problems and find that it improves the state-of-the-art in verification algorithm selection by 12%, or 8 percentage points. Further, it is able to verify 9% more problems than any existing verifier on our test set. Through a qualitative study on model interpretability, we find strong evidence that the Graves model learns to base its predictions on factors that relate to" +"---\nabstract: 'The interplay of potential and magnetic disorder in superconductors has remained an active field of research for decades. Within the framework of the Usadel equation, we study the local density of states near a solitary classical magnetic impurity in a dirty superconducting film. We find that potential disorder results in broadening of the delta-function peak in the local density of states at the Yu-Shiba-Rusinov (YSR) energy. This broadening is proportional to the square root of a normal-state spreading resistance of the film. We demonstrate that modification of multiple scattering on the magnetic impurity due to intermediate scattering on surrounding potential disorder crucially affects the profile of the local density of states in the vicinity of the YSR energy. In addition, we find that a scanning-tunneling-microscopy tip can mask a YSR feature in the local density of states. Also, we study the local density of states near a chain of magnetic impurities situated in the normal region of a dirty superconductor/normal-metal junction. We find a resonance in the local density of states near the YSR energy. The energy scale of the resonant peak is controlled by the square root of the film resistance per square in the normal" +"---\nabstract: 'Robust detection of moving vehicles is a critical task for any autonomously operating outdoor robot or self-driving vehicle. 
Most modern approaches for solving this task rely on training image-based detectors using large-scale vehicle detection datasets such as nuScenes or the Waymo Open Dataset. Providing manual annotations is an expensive and laborious exercise that does not scale well in practice. To tackle this problem, we propose a self-supervised approach that leverages audio-visual cues to detect moving vehicles in videos. Our approach employs contrastive learning for localizing vehicles in images from corresponding pairs of images and recorded audio. In extensive experiments carried out with a real-world dataset, we demonstrate that our approach provides accurate detections of moving vehicles and does not require manual annotations. We furthermore show that our model can be used as a teacher to supervise an audio-only detection model. This student model is invariant to illumination changes and thus effectively bridges the domain gap inherent to models leveraging exclusively vision as the predominant modality.'\nauthor:\n- 'Jannik Z\u00fcrn and Wolfram Burgard[^1]'\nbibliography:\n- 'root.bib'\ntitle: '**Self-Supervised Moving Vehicle Detection from Audio-Visual Cues** '\n---\n\nIntroduction {#sec:introduction}\n============\n\nAccurate and robust detection of moving vehicles has crucial relevance" +"---\nabstract: 'As the bound state of two oppositely charged particles, excitons emerge from optically excited semiconductors as the electronic analogue of a hydrogen atom. In the two-dimensional (2D) case, realized either in quantum well systems or truly 2D materials such as transition metal dichalcogenides, the relative motion of an exciton is described by two quantum numbers: the principal quantum number $n$, and a quantum number $j$ for the angular momentum along the perpendicular axis. Conservation of angular momentum demands that only the $j=0$ states of the excitons are optically active in a system illuminated by plane waves. Here we consider the case for spatially structured light sources, specifically for twisted light beams with non-zero orbital angular momentum per photon. Under the so-called dipole approximation where the spatial variations of the light source occur on length scales much larger than the size of the semiconductor\u2019s unit cell, we show that the photon (linear and/or angular) momentum is coupled to the center-of-mass (linear and/or angular) momentum of the exciton. Our study establishes that the selection rule for the internal states of the exciton, and thus the exciton spectrum, is independent from the spatial structure of the light source.'\nauthor:\n- 'Tobias" +"---\nauthor:\n- Glenn Bruda\ndate: January 2022\ntitle: |\n Maclaurin Integration:\\\n A weapon against infamous integrals\n---\n\n[^1]\n\nPerhaps one of the most challenging aspects of integration in calculus is that there is not a universal integration technique that works for all integrals. This paper introduces a technique by series to help solve this problem.\n\nTo introduce the formula (\\[formula\\]) and the nature of it, we consider two integrals and how they are approached classically:\n\n1. $$\\begin{aligned}\n \\int\\frac{\\sin(x)}{x}dx ~\\text{and}\n \\end{aligned}$$\n\n2. $$\\begin{aligned}\n \\int e^{e^{x}}dx.\n \\end{aligned}$$\n\nTraditional integration techniques you may have learned in your calculus courses will not be able to solve either of these. 
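As a minimal sketch of the series idea (illustrative only, plain Python): the first integrand's Maclaurin series can be integrated term by term, and the partial sums converge to the sine integral $\operatorname{Si}(x)$.

```python
# Illustrative sketch: integral of sin(t)/t from 0 to x via the
# Maclaurin series sin(t)/t = sum_{n>=0} (-1)^n t^(2n) / (2n+1)!,
# integrated term by term.
import math

def si_maclaurin(x, terms=20):
    """Partial sum of sum_{n>=0} (-1)^n x^(2n+1) / ((2n+1) * (2n+1)!)."""
    return sum(
        (-1) ** n * x ** (2 * n + 1) / ((2 * n + 1) * math.factorial(2 * n + 1))
        for n in range(terms)
    )

print(si_maclaurin(1.0))  # ~0.9460830703671830, the known value of Si(1)
```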
These integrals can be calculated using the non-elementary functions $\\operatorname{Si}(x)$\u00a0[@trigint] and $\\operatorname{Ei}(x)$\u00a0[@expint] respectively, or by using a power series.\n\nA great strength of this formula, which we shall introduce, in addition to its ability to solve infamously difficult integrals (like the ones above), is its ease of use. Compared to other integration techniques such as Trigonometric substitution, Integration by Partial Fractions, or Integration by Parts, Maclaurin Integration requires by far the least amount of labor to utilize. All that is needed to solve an integral using this technique is plugging" +"---\nabstract: 'A new source model, which consists of an intrinsic state part and an extrinsic observation part, is proposed and its information-theoretic characterization, namely its rate-distortion function, is defined and analyzed. Such a source model is motivated by the recent surge of interest in the semantic aspect of information: the intrinsic state corresponds to the semantic feature of the source, which in general is not observable but can only be inferred from the extrinsic observation. There are two distortion measures, one between the intrinsic state and its reproduction, and the other between the extrinsic observation and its reproduction. Under a given code rate, the tradeoff between these two distortion measures is characterized by the rate-distortion function, which is solved via the indirect rate-distortion theory and is termed the semantic rate-distortion function of the source. As an application of the general model and its analysis, the case of Gaussian extrinsic observation is studied, assuming a linear relationship between the intrinsic state and the extrinsic observation, under a quadratic distortion structure. The semantic rate-distortion function is shown to be the solution of a convex programming problem with respect to an error covariance matrix, and a reverse water-filling type of solution" +"---\nabstract: 'In this paper, we describe linear maps between complex Banach algebras that preserve products equal to fixed elements. This generalizes some important special cases where the fixed elements are the zero or identity element. First we show that if such a map preserves products equal to a finite-rank operator, then it must also preserve the zero product. In several instances, this is enough to show that a product preserving map must be a scalar multiple of an algebra homomorphism. Second, we explore a more general problem concerning the existence of product preserving maps and the relationship between the fixed elements. Lastly, motivated by Kaplansky\u2019s problem on invertibility preservers, we show that maps preserving products equal to fixed invertible elements are either homomorphisms or antihomomorphisms multiplied on the left by a fixed element.'\naddress: 'Department of Mathematics and Statistics, Youngstown State University, Youngstown, OH 44555 U.S.A.'\nauthor:\n- Hayden Julius\nbibliography:\n- 'bib.bib'\ntitle: Fixed product preserving mappings on Banach Algebras\n---\n\nIntroduction\n============\n\nThis paper is primarily concerned with the existence and description of linear mappings between algebras taking products equal to one fixed element to products equal to another fixed element, in the sense of the following problem."
+"---\nauthor:\n- Emily Adlam\nbibliography:\n- 'newlibrary12.bib'\ntitle: Two Roads to Retrocausality \n---\n\nIn recent years the quantum foundations community has seen increasing interest in the possibility of using retrocausality as a route to rejecting the conclusions of Bell\u2019s theorem and restoring locality to quantum physics[@RevModPhys.92.021002; @article; @Miller1996RealismAT]. On the other hand, it has also been argued that *accepting* and embracing nonlocality also leads to a form of retrocausality[@Adlamspooky]. It is interesting that two diametrically opposite starting points both seem to lead to the same conclusion, suggesting that the relationship between retrocausality and locality is a complex one. In this article we seek to elucidate that relationship and draw some conclusions about the most appropriate route to retrocausality.\n\nWe begin by providing a brief schema of the various ways in which violations of Bell\u2019s inequalities might lead us to consider some form of retrocausality. We then consider some possible motivations for using retrocausality to rescue locality, arguing that none of these motivations is adequate and that therefore there is no clear reason why we should prefer local retrocausal models to nonlocal retrocausal models. Next, we examine several different conceptions of retrocausality, concluding that \u2018all-at-once\u2019 retrocausality is more coherent than" +"---\nabstract: 'Time dependent reliability analysis and uncertainty quantification of structural system subjected to stochastic forcing function is a challenging endeavour as it necessitates considerable computational time. We investigate the efficacy of recently proposed DeepONet in solving time dependent reliability analysis and uncertainty quantification of systems subjected to stochastic loading. Unlike conventional machine learning and deep learning algorithms, DeepONet learns is a operator network and learns a function to function mapping and hence, is ideally suited to propagate the uncertainty from the stochastic forcing function to the output responses. We use DeepONet to build a surrogate model for the dynamical system under consideration. Multiple case studies, involving both toy and benchmark problems, have been conducted to examine the efficacy of DeepONet in time dependent reliability analysis and uncertainty quantification of linear and nonlinear dynamical systems. Results obtained indicate that the DeepONet architecture is accurate as well as efficient. Moreover, DeepONet posses zero shot learning capabilities and hence, a trained model easily generalizes to unseen and new environment with no further training.'\nauthor:\n- |\n Shailesh Garg\\\n Department of Applied Mechanics\\\n Indian Institute of Technology Delhi\\\n Hauz Khas, New Delhi 110016, India.\\\n `shaileshgarg96@gmail.com`\\\n Harshit Gupta\\\n Department of Mechnical Engineering\\\n Indian Institute" +"---\nabstract: 'Superfluid flow past a potential barrier is a well studied problem in ultracold Bose gases, however, fewer studies have considered the case of flow through a disordered potential. Here we consider the case of a superfluid flowing through a channel containing multiple point-like barriers, randomly placed to form a disordered potential. We begin by identifying the relationship between the relative position of two point-like barriers and the critical velocity of such an arrangement. 
We then show that there is a mapping between the critical velocity of a system with two obstacles, and a system with a large number of obstacles. By establishing an initial superflow through a point-like disordered potential, moving faster than the critical velocity, we study how the superflow is arrested through the nucleation of vortices and the breakdown of superfluidity, a problem with interesting connections to quantum turbulence and coarsening. We calculate the vortex decay rate as the width of the barriers is increased, and show that vortex pinning becomes a more important effect for these larger barriers.'\naddress: 'r.doran@newcastle.ac.uk'\nauthor:\n- 'R. Doran'\n- 'A. J. Groszek'\n- 'T. P. Billam'\nbibliography:\n- 'disordered\\_flow\\_references.bib'\ntitle: 'Critical Velocity and Arrest of a Superfluid in a" +"---\nabstract: 'We consider the problem of causal discovery (structure learning) from heterogeneous observational data. Most existing methods assume a homogeneous sampling scheme, which leads to misleading conclusions when violated in many applications. To this end, we propose a novel approach that exploits data heterogeneity to infer possibly cyclic causal structures from causally insufficient systems. The core idea is to model the direct causal effects as functions of exogenous covariates that properly explain data heterogeneity. We investigate structure identifiability properties of the proposed model. Structure learning is carried out in a fully Bayesian fashion, which provides natural uncertainty quantification. We demonstrate its utility through extensive simulations and a real-world application.'\nauthor:\n- |\n    Fangting Zhou$^{1,2}$, Kejun He$^{2,\\ast}$, Yang Ni$^{1,\\ast}$\\\n    $^{1}$Department of Statistics, Texas A&M University, College Station, Texas, U.S.A.\\\n    $^{2}$Institute of Statistics and Big Data, Renmin University of China, Beijing, China\\\nbibliography:\n- 'reference.bib'\ntitle: Causal Discovery with Heterogeneous Observational Data\n---\n\nINTRODUCTION\n============\n\nCausal discovery is a central task in various fields including social science, artificial intelligence, and systems biology. While randomized controlled trials are the gold standard to establish causality, they can be too costly, unethical, or impossible to carry out. For example, recovering gene regulatory networks through" +"---\nabstract: |\n    One of the main features of interest in analysing the light curves of stars is the underlying periodic behaviour. The corresponding observations are a complex type of time series with unequally spaced time points. The main tools for analysing this type of data rely on periodogram-like functions, constructed with a desired feature so that the peaks indicate the presence of a potential period. In this paper, we explore a particular periodogram for irregularly observed time series data. We identify the potential periods by implementing the saddlepoint approximation, as a faster and more accurate alternative to the simulation-based methods that are currently used. 
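For orientation, a generic sketch (assuming NumPy; this is the classical Lomb-Scargle construction, not the authors' saddlepoint method): a periodogram-like function of this kind scores trial frequencies on unequally spaced observations, and its peaks flag candidate periods.

```python
# Generic sketch: Lomb-Scargle-style periodogram for irregular sampling.
import numpy as np

def ls_power(t, y, freqs):
    y = y - y.mean()
    out = []
    for f in freqs:
        w = 2.0 * np.pi * f
        tau = np.arctan2(np.sum(np.sin(2 * w * t)),
                         np.sum(np.cos(2 * w * t))) / (2 * w)
        c, s = np.cos(w * (t - tau)), np.sin(w * (t - tau))
        out.append((y @ c) ** 2 / (c @ c) + (y @ s) ** 2 / (s @ s))
    return 0.5 * np.array(out) / y.var()

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 100.0, 300))            # irregular times
y = np.sin(2 * np.pi * t / 7.3) + 0.3 * rng.normal(size=t.size)
freqs = np.linspace(0.01, 0.5, 2000)
print(1.0 / freqs[np.argmax(ls_power(t, y, freqs))])  # ~7.3, recovered
```

Deciding whether such a peak is significant requires the distribution of the peak statistic, which is where, as described above, the saddlepoint approximation replaces simulation-based calibration.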
The power analysis of the testing methodology is reported together with applications using light curves from the Hunting Outbursting Young Stars citizen science project.\n\n Key words: Cross-Validation, Hypotheses testing, Non-parametric statistics, Periodogram, Quadratic Forms, Saddlepoint\nauthor:\n- '$\\textrm{Efthymia Derezea}^{1}$, $\\textrm{Alfred Kume}^{1}$, $\\textrm{Dirk Froebrich}^{2}$'\nbibliography:\n- 'sample.bib'\ntitle: An application of Saddlepoint Approximation for period detection of stellar light observations\n---\n\n$1$. *School of Mathematics, Statistics and Actuarial Science, University of Kent, Canterbury CT2 7FS, UK*\\\n$2$. *School of Physical Sciences, University of Kent, Canterbury CT2 7NH, UK*\\\n\nIntroduction {#s:intr}\n============\n\nThe problem" +"---\nabstract: 'We propose a message passing neural network architecture designed to be equivariant to column and row permutations of a matrix. We illustrate its advantages over traditional architectures like multi-layer perceptrons (MLPs), convolutional neural networks (CNNs) and even Transformers, on the combinatorial optimization task of recovering a set of deleted entries of a Hadamard matrix. We argue that this is a powerful application of the principles of Geometric Deep Learning to fundamental mathematics, and a potential stepping stone toward more insights on the Hadamard conjecture using Machine Learning techniques.'\nauthor:\n- Augusto Peres$^1$\n- Eduardo Dias$^1$\n- |\n Lu\u00eds Sarmento$^1$ Hugo Penedones$^1$\\\n $^1$Inductiva Research Labs\\\n {augusto.peres, eduardo.dias, sarmento, hpenedones}@inductiva.ai\nbibliography:\n- 'refs.bib'\ndate: |\n Inductiva Research Labs\\\n `{augusto.peres, eduardo.dias, sarmento, hpenedones}@inductiva.ai` \ntitle: Equivariant neural networks for recovery of Hadamard matrices\n---\n\nIntroduction\n============\n\nHadamard matrices are matrices whose entries are either $-1$ or $+1$ and their rows are mutually orthogonal. A necessary condition for a $\\{-1, 1\\}$-matrix to be Hadamard is being of order either $n = 2$ or $n = 4k$ for $k \\in \\mathbb{N}$. There are known examples of Hadamard matrices for many orders $4k$ (see for example\u00a0[@txtHad]), as well as for all orders where $n" +"---\nauthor:\n- 'E.\u00a0Buchanan[!!]{},'\n- 'K.\u00a0Akiba,'\n- 'M.\u00a0van Beuzekom,'\n- 'P.\u00a0Collins,'\n- 'E.\u00a0Dall\u2019Occo,'\n- 'T.\u00a0Evans,'\n- 'V.\u00a0Franco Lima,'\n- 'R.\u00a0Geertsema,'\n- 'P.\u00a0Kopciewicz,'\n- 'E.\u00a0Price,'\n- 'B.\u00a0Rachwal,'\n- 'S.\u00a0Richards,'\n- 'D.\u00a0Saunders,'\n- 'H.\u00a0Schindler,'\n- 'T.\u00a0Szumlak,'\n- 'P.\u00a0Tsopelas,'\n- 'J.\u00a0Velthuis,'\n- 'and M.R.J.\u00a0Williams'\nbibliography:\n- 'main.bib'\ntitle: Spatial resolution and efficiency of prototype sensors for the LHCb VELO Upgrade\n---\n\nIntroduction\n============\n\nThe LHCb experiment is upgrading its VErtex LOcator (VELO) detector during Long Shutdown 2 of the LHC to allow the experiment to operate at an instantaneous luminosity of $\\mathcal{L} = 2 \\times10^{33}~\\cm^{-2}\\sec^{-1}$, five times higher than previous runs\u00a0[@LHCb-TDR-013]. The VELO requires very precise tracking and fast pattern recognition in order to reconstruct collisions and decay vertices in real time as the first step of the LHCb trigger decision. 
The VELO upgrade will replace the original detector\u2019s silicon strips with hybrid pixel detectors, which consist of planar silicon sensors bump-bonded to VeloPix\u00a0[@VELOPIX] readout ASICs (Application Specific Integrated Circuits).\n\nThe region of the detector closest to the collision point will be exposed to a total integrated fluence of $\\phi =$ [@LHCb-TDR-013]. To" +"---\nabstract:\n- |\n    *Goal:* *Because a fast vaccination rollout against coronavirus disease 2019 (COVID-19) is critical to restore daily life and avoid virus mutations, it is tempting to have a relaxed vaccination-administration management system. However, a robust management system can support the enforcement of preventive measures, and in turn, reduce incidence and deaths. Here, we model a trustable and reliable management system based on blockchain for vaccine distribution by extending the Susceptible-Exposed-Infected-Recovery (SEIR) model. The model includes prevention measures such as mask-wearing, social distance, vaccination rate, and vaccination efficiency. It also considers negative social behavior, such as violations of social distance and attempts to use illegitimate vaccination proofs. By evaluating the model, we show that the proposed system can reduce up to 2.5 million cases and half a million deaths in the most demanding scenarios.*\\\n    *Impact Statement:* The use of blockchain technology on the system managing vaccination distribution enables a reliable exercise of infection prevention measures and a reduction of COVID-19 incidence and the number of deaths during and after vaccination rollout.*" +"---\nabstract: 'We show that colored Khovanov homology detects classes of essential surfaces as a direct analogue of the slope conjectures for the colored Jones polynomial. We do this by identifying certain generators of the colored Khovanov chain complex with normal surfaces in the complement of the knot using an ideal triangulation from a diagram.'\naddress: 'Department of Mathematics and Statistics, University of South Alabama, Mobile AL 36688'\nauthor:\n- Christine Ruey Shan Lee\nbibliography:\n- 'references.bib'\nnocite: '[@*]'\ntitle: Normal surfaces and colored Khovanov homology\n---\n\n[^1]\n\nIntroduction\n============\n\nWe study the colored Jones polynomial, a generalization of the Jones polynomial, and its categorification, colored Khovanov homology. Fix a complex number $q$ that is not a root of unity. To a knot $K\\subset S^3$, the colored Jones polynomial assigns a sequence of Laurent polynomials $\\{J_K^n(q) \\}$ in $\\mathbb{Z}[q, q^{-1}]$ indexed by natural numbers $n \\geq 2$, where $J_K^2(q)$ is the Jones polynomial. In a different direction, based on the state sum model of the colored Jones polynomial, categorification assigns a bi-graded chain complex $\\{CKh^n_{i, j}\\}$ from which the $n$th colored Jones polynomial may be recovered as a suitable Euler characteristic of the homology groups.\n\nA goal of quantum topology" +"---\nabstract: 'The visual inspection of image and catalog data continues to be a valuable aspect of astronomical data analysis. As the scale of astronomical image and catalog data continues to grow, visualizing the data becomes increasingly difficult. In this work, we introduce [*FitsMap*]{}, a simple, lightweight tool for visualizing astronomical image and catalog data. 
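A hedged sketch of the intended workflow follows; the `convert.dir_to_map` entry point and its arguments are assumptions based on the project README (https://github.com/ryanhausen/fitsmap), so consult the repository for the actual API.

```python
# Hedged sketch: the function name and arguments below are assumptions
# taken from the FitsMap README; verify against the repository before use.
from fitsmap import convert

convert.dir_to_map(
    "my_data",              # directory with .fits images and catalogs
    out_dir="my_data/map",  # where the generated map tiles are written
)
# Any simple web server can then serve the result, e.g.:
#   python -m http.server --directory my_data/map
```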
[*FitsMap*]{} only requires a simple web server and can scale to images of over a gigapixel with tens of millions of sources. Further, the web-based visualizations can be viewed performantly on mobile devices. [*FitsMap*]{} is implemented in Python and is open source (https://github.com/ryanhausen/fitsmap).'\nauthor:\n- Ryan Hausen\n- 'Brant E. Robertson'\nbibliography:\n- 'refs.bib'\ntitle: 'FitsMap: A Simple, Lightweight Tool For Displaying Interactive Astronomical Image and Catalog Data'\n---\n\nAstronomy web services, Astronomy data visualization, Astronomy data analysis, Human-centered computing\u00a0Scientific visualization, Human-centered computing\u00a0Visualization toolkits\n\nIntroduction {#sec:introduction}\n============\n\nAstronomical image data is inherently visual, and visual inspection and interpretation remain vital tools in the scientific process in astronomy. Upcoming telescopes like the James Webb Space Telescope [JWST; for a review, see @robertson2022a], Nancy Grace Roman Space Telescope [@spergel2015a; @akeson2019a], and Vera Rubin Observatory [@ivezic2008a; @ivezic2019a] will produce larger and deeper images of space than ever before." +"---\nabstract: |\n    In the present paper, the Ising model with mixed spin-(1,1/2) is considered on the second-order Cayley tree. A construction of splitting Gibbs measures corresponding to the model is given, which allows us to establish the existence of the phase transition (non-uniqueness of Gibbs measures). We point out that, in the phase transition region, the considered model has three translation-invariant Gibbs measures in the ferromagnetic and anti-ferromagnetic regimes, while the classical Ising model does not possess such Gibbs measures in the anti-ferromagnetic regime. It turns out that the considered model, like the Ising model, exhibits a disordered Gibbs measure. Therefore, the non-extremity and extremity of such disordered Gibbs measures are investigated by means of tree-indexed Markov chains.\\\n    **Keywords**: the mixed spin-(1,1/2) Ising model, Gibbs measures, phase transition, disorder phase.\\\nauthor:\n- 'Hasan Akin$^{1,\\dag}$'\n- 'Farrukh Mukhamedov$^{2,3,\\ddag}$'\ntitle: 'Gibbs measures of the Ising model with mixed spin-1 and spin-1/2 on a Cayley tree'\n---\n\n[^1]\n\nIntroduction\n============\n\nIn the last decades, the Ising model has been one of the most intensively studied models; it is used to describe critical behaviours of certain systems in natural sciences. Many interesting results have been observed in the phase transition theory by means of exactly solvable" +"---\nauthor:\n- 'Yanhong\u00a0Fei, Yingjie\u00a0Liu, Xian Wei, and\u00a0Mingsong\u00a0Chen'\nbibliography:\n- 'OViT.bib'\ntitle: 'O-ViT: Orthogonal Vision Transformer'\n---\n\nIntroduction\n============\n\nRecent years have witnessed ViT taking over the Convolutional Neural Network (CNN) and achieving dramatic success in computer vision, such as image classification [@touvron2020training; @yuan2021tokenstotoken]. It benefits from transferring the self-attention mechanism [@DBLP:conf/nips/VaswaniSPUJGKP17], originally applied to language sequences, to vision tasks to learn the internal characteristics of image patch sequences [@50650]. Convolution operations gradually expand the view of the CNN kernel layer by layer. By comparison, the self-attention mechanism allows ViT to obtain global features even in shallow layers [@50650]. 
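To make the global receptive field concrete, here is a minimal single-head self-attention sketch in NumPy (illustrative only, not the paper's implementation): every output token is a softmax-weighted mixture of all patch embeddings, so information mixes globally after a single layer.

```python
# Illustrative sketch: one self-attention head over patch embeddings.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])      # (n_patches, n_patches)
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    A = np.exp(scores)
    A /= A.sum(axis=1, keepdims=True)            # softmax over all patches
    return A @ V                                 # global mixing in one layer

rng = np.random.default_rng(0)
n_patches, dim = 16, 8
X = rng.normal(size=(n_patches, dim))            # patch embeddings
Wq, Wk, Wv = (rng.normal(size=(dim, dim)) / np.sqrt(dim) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)       # (16, 8)
```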
Nonetheless, linear transformations in the self-attention of ViT introduce *scale ambiguity* into the structure of the feature space. Moreover, the softmax function used for normalization carries the risk of *gradient vanishing* problems [@sun2020gradient]. Both prevent ViT from finding the optimal solution or slow down its optimization.\n\nThis motivates us to explore the optimization of ViT on the orthogonal manifold. To achieve this goal, we put forward a novel method named Orthogonal Vision Transformer ([O-ViT]{}). Each matrix $A$ that resides on the orthogonal manifold has the following property [@Wang2020OrthogonalCN]: $$\\label{orth_definition}\n    A^TA = AA^T =" +"---\nabstract: 'A major goal in earthquake physics is to derive a constitutive framework for fault slip that captures the dependence of shear strength on fault rheology, sliding velocity, and pore-fluid pressure. In this study, we present `H-MEC` (Hydro-Mechanical Earthquake Cycles), a newly-developed two-phase flow numerical code \u2014 which couples solid rock deformation and pervasive fluid flow \u2014 to simulate how crustal stress and fluid pressure evolve during the earthquake cycle on a fluid-bearing fault structure. This unified, continuum-based model incorporates a staggered finite difference\u2013marker-in-cell (SFD-MIC) method and accounts for full inertial (wave-mediated) effects and fluid flow in a poro-visco-elasto-plastic compressible medium. Global Picard iterations and an adaptive time stepping allow the correct resolution of both long- and short-time scales, ranging from years during slow tectonic loading to milliseconds during the propagation of dynamic ruptures. We present a comprehensive in-plane strike-slip setup in which we test analytical poroelastic benchmarks of pore-fluid pressure diffusion from an injection point along a finite fault width. We then investigate how pore-fluid pressure evolution and solid\u2013fluid compressibility control sequences of *seismic* and *aseismic* slip on geologic faults. While the onset of fluid-driven shear cracks is controlled by localized collapse of pores and dynamic self-pressurization of fluids" +"---\nauthor:\n- 'J. D. Wagenveld'\n- 'A. Saxena'\n- 'K. J. Duncan'\n- 'H. J. A. R\u00f6ttgering'\n- 'M. Zhang'\nbibliography:\n- 'HzQ-paper.bib'\ndate: 'Received XX; accepted YY'\ntitle: Revealing new high redshift quasar populations through Gaussian mixture model selection\n---\n\nIntroduction\n============\n\nStudying large statistical samples of high-redshift quasars (HzQs) is essential for understanding the formation and evolution of super-massive black holes (SMBH) in the early Universe. The presence of Gunn-Peterson (GP) troughs [@gunn1965] in the spectra of HzQs at $z\\sim6$, due to near-complete absorption of Ly$\\alpha$ photons by the increasingly neutral intergalactic medium (IGM) along the line-of-sight, makes them crucial probes of cosmic reionisation [EoR; @fan2006; @becker2015]. 
These GP troughs can in turn be used to photometrically identify large samples of HzQs, and the proliferation of wide-area multi-band photometric surveys at optical wavelengths such as the Sloan Digital Sky Survey [SDSS; @abazajian2003] and the Panoramic Survey Telescope and Rapid Response System surveys [Pan-STARRS; @chambers2016] has enabled the discovery of statistically significant samples of bright quasars at high redshifts, with now over ${\\sim}500$ confirmed HzQs at $z > 5$ [see @ross2020 for a compilation].\n\nFor HzQs at $z\\sim6$, towards the end of the EoR, the GP trough" +"---\nabstract: 'Neural Architecture Search (NAS) is a powerful tool for automating the design of effective image-processing DNNs. Ranking has been advocated as a way to design an efficient performance predictor for NAS. The previous contrastive method solves the ranking problem by comparing pairs of architectures and predicting their relative performance. However, it only focuses on the rankings between the two involved architectures and neglects the overall quality distribution of the search space, which may cause generalization issues. A predictor, namely the Neural Architecture Ranker\u00a0(NAR), which concentrates on the global quality tier of a specific architecture, is proposed to tackle such problems caused by the local perspective. The NAR explores the quality tiers of the search space globally and classifies each individual into the tier it belongs to according to its global ranking. Thus, the predictor gains knowledge of the performance distribution of the search space, which helps it generalize its ranking ability across datasets more easily. Meanwhile, the global quality distribution facilitates the search phase by directly sampling candidates according to the statistics of quality tiers, which avoids training a search algorithm, e.g., Reinforcement Learning\u00a0(RL) or an Evolutionary Algorithm\u00a0(EA); thus it simplifies the NAS pipeline and saves the computational" +"---\nabstract: 'We prove that the Knot Floer homology group of a fibred knot of genus $g$ in the Alexander grading $1-g$ is isomorphic to a version of the fixed point Floer homology of an area-preserving representative of the monodromy.'\naddress: 'Laboratoire de Math\u00e9matiques Jean Leray, CNRS and Universit\u00e9 de Nantes'\nauthor:\n- Paolo Ghiggini\n- Gilberto Spano\ntitle: Knot Floer homology of fibred knots and Floer homology of surface diffeomorphisms\n---\n\nIntroduction\n============\n\nKnot Floer homology [@OS3; @Ra] is a family of abelian groups \u2014 or vector spaces; here we will work over a field of characteristic two \u2014 $\\widehat{HFK}(Y, K, i)$ associated to any oriented null-homologous knot $K$ in an oriented three-manifold $Y$. If $g$ is the minimal genus of a Seifert surface of $K$, then $\\widehat{HFK}(Y, K, i)=0$ if $|i|>g$ and $\\widehat{HFK}(Y, K, i) \\ne 0$ for $i=-g,g$ by [@OSgenus; @Nigenus]. Moreover, $\\widehat{HFK}(Y, K, -g)$ has rank one if and only if $K$ is fibred by [@Ghi; @Nifibred]. See also [@Ju1].\n\nThe power of knot Floer homology, which is not at all limited to the results mentioned above, comes from its connections to many areas of low-dimensional topology, but its topological meaning is obscured by the" +"---\nabstract: 'The ability to have the same experience for different user groups (i.e., accessibility) is one of the most important characteristics of Web-based systems. 
The same is true for Knowledge Graph Question Answering (KGQA) systems that provide access to Semantic Web data via a natural language interface. While following our research agenda on the multilingual aspect of accessibility of KGQA systems, we identified several ongoing challenges. One of them is the lack of multilingual KGQA benchmarks. In this work, we extend one of the most popular KGQA benchmarks \u2013 QALD-9 \u2013 by introducing high-quality translations of the questions into 8 languages, provided by native speakers, and by transferring the SPARQL queries of QALD-9 from DBpedia to Wikidata, such that the usability and relevance of the dataset are strongly increased. Five of the languages \u2013 Armenian, Ukrainian, Lithuanian, Bashkir and Belarusian \u2013 have, to the best of our knowledge, never been considered in the KGQA research community before. The latter two are classified as [\u201cendangered\u201d]{} by UNESCO. We call the extended dataset QALD-9-plus and have made it available online[^1].'\nauthor:\n- \n- \n- \n- \nbibliography:\n- 'references.bib'\ntitle: |\n QALD-9-plus: A Multilingual Dataset for Question Answering over [DBpedia]{} and [Wikidata]{}\\\n Translated by Native Speakers\n---\n\nIntroduction\n============\n\nThe" +"---\nabstract: 'We designed two rules of binary quantum computed vote: Quantum Logical Veto (QLV) and Quantum Logical Nomination (QLN). The conjunction and disjunction from quantum computational logic are used to define QLV and QLN, respectively. Compared to classical vote, quantum computed vote is fairer, more democratic, and has stronger expressive power. Since the advantage of quantum computed vote is neither the speed of computing nor the security of communication, we believe it opens a new battlefield in the second quantum revolution. Compared to other rules of quantum computed vote, QLV and QLN have better scalability. Both QLV and QLN can be implemented with current technology, and the difficulty of implementation does not grow as the number of voters increases.'\nauthor:\n- Xin Sun\n- Feifei He\n- Daowen Qiu\n- Piotr Kulicki\n- Mirek Sopek\n- Meiyun Guo\ntitle: 'Distributed Quantum Vote Based on Quantum Logical Operators, a New Battlefield of the Second Quantum Revolution'\n---\n\nIntroduction\n============\n\nElectronic vote, or e-vote, is a voting process in which ballot casting and counting are computer-aided. Since the late 1990s and early 2000s, e-vote has received increasing interest and has been widely applied in various decision-making situations. Many voting" +"---\nabstract: 'We show that the minimum rank of a non-isotrivial local system of geometric origin on a suitably general $n$-pointed curve of genus $g$ is at least $2\\sqrt{g+1}$. We apply this result to resolve conjectures of Esnault-Kerz and Budur-Wang. The main input is an analysis of stability properties of flat vector bundles under isomonodromic deformations, which additionally answers questions of Biswas, Heu, and Hurtubise.'\nauthor:\n- 'Aaron Landesman, Daniel Litt'\nbibliography:\n- 'bibliography-isomonodromy.bib'\ntitle: Geometric local systems on very general curves and isomonodromy\n---\n\nIntroduction {#section:introduction}\n============\n\nOverview {#subsection:overview}\n--------\n\nWe work over the complex numbers $\\mathbb{C}$. The main result of this paper is that an analytically very general $n$-pointed curve of genus $g$ (defined in ) does not carry any non-isotrivial polarizable integral variations of Hodge structure of rank less than $2\\sqrt{g+1}$. 
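(On the QALD-9-plus record above.) Since the gold SPARQL queries were transferred from DBpedia to Wikidata, each benchmark record pairs multilingual question strings with a Wikidata query. The field names in this sketch are hypothetical stand-ins for the real schema, while the endpoint URL and the wd:/wdt: vocabulary are the standard public Wikidata query service.

```python
import requests

# Hypothetical QALD-9-plus-style record (illustrative field names only).
record = {
    "question": {
        "en": "Who is the mayor of Berlin?",
        "uk": "Хто мер Берліна?",  # "Who is the mayor of Berlin?" in Ukrainian,
                                   # one of the newly added languages
    },
    "sparql_wikidata": """
        SELECT ?mayor WHERE {
          wd:Q64 wdt:P6 ?mayor .   # Berlin -> head of government
        }""",
}

resp = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": record["sparql_wikidata"], "format": "json"},
    headers={"User-Agent": "qald-9-plus-demo/0.1"},
    timeout=30,
)
for row in resp.json()["results"]["bindings"]:
    print(row["mayor"]["value"])   # URI of the answer entity
```

A KGQA system is then typically scored by comparing the entity set its generated query returns against the entity set returned by the gold query.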
In particular, an analytically very general $n$-pointed curve of genus $g$ carries no geometric local systems of rank less than $2\\sqrt{g+1}$ with infinite monodromy, as we show in . This is a strong restriction on the topology of smooth proper maps to an analytically very general curve, and contradicts conjectures of Esnault-Kerz [@esnault2021local Conjecture 1.1] and Budur-Wang [@budur2020absolute Conjecture 10.3.1], as explained in .\n\nThe" +"---\nabstract: 'Operating systems rely on system calls to allow the controlled communication of isolated processes with the kernel and other processes. Every system call includes a processor mode switch from the unprivileged user mode to the privileged kernel mode. Although processor mode switches are the essential isolation mechanism to guarantee the system\u2019s integrity, they induce direct and indirect performance costs as they invalidate parts of the processor state. In recent years, high-performance networks and storage hardware have made the [user/kernel transition]{} overhead the bottleneck for IO-heavy applications. To make matters worse, security vulnerabilities in modern processors ([e.g.,\u00a0]{}Meltdown) have prompted kernel mitigations that further increase the transition overhead. To decouple system calls from [user/kernel transitions]{}, we propose [AnyCall]{}, which uses an in-kernel compiler to execute safety-checked user bytecode in kernel mode. This allows for very fast system calls interleaved with error checking and processing logic using only a single [user/kernel transition]{}. We have implemented [AnyCall]{} based on the Linux kernel\u2019s subsystem. Our evaluation demonstrates that system call bursts are up to 55 times faster using [AnyCall]{} and that real-world applications can be sped up by even if only a minimal part of their code is run by [10^{10}\\,$M$_{\\odot}$ star-forming galaxies at $0.4
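(On the [AnyCall]{} record above.) [AnyCall]{} itself requires the authors' kernel support, but the cost it attacks, one user/kernel transition per system call, is easy to observe from user space. The sketch below is a minimal measurement, assuming a Unix-like system with /dev/zero: many one-byte read() calls versus a single batched read of the same total size.

```python
import os
import time

N = 100_000
fd = os.open("/dev/zero", os.O_RDONLY)

t0 = time.perf_counter()
for _ in range(N):
    os.read(fd, 1)               # one user/kernel transition per byte
many_calls = time.perf_counter() - t0

t0 = time.perf_counter()
os.read(fd, N)                   # one transition for the whole buffer
one_call = time.perf_counter() - t0

os.close(fd)
print(f"{N} one-byte reads: {many_calls:.4f}s; one {N}-byte read: {one_call:.6f}s")
```

The gap between the two timings is almost entirely mode-switch and syscall-dispatch overhead, which is exactly the budget that executing safety-checked bytecode inside the kernel reclaims.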