arxiv_dataset-77001609.04522
Tensor Graphical Model: Non-convex Optimization and Statistical Inference stat.ML stat.ME We consider the estimation and inference of graphical models that characterize the dependency structure of high-dimensional tensor-valued data. To facilitate the estimation of the precision matrix corresponding to each way of the tensor, we assume the data follow a tensor normal distribution whose covariance has a Kronecker product structure. A critical challenge in the estimation and inference of this model is the fact that its penalized maximum likelihood estimation involves minimizing a non-convex objective function. To address it, this paper makes two contributions: (i) In spite of the non-convexity of this estimation problem, we prove that an alternating minimization algorithm, which iteratively estimates each sparse precision matrix while fixing the others, attains an estimator with an optimal statistical rate of convergence. (ii) We propose a de-biased statistical inference procedure for testing hypotheses on the true support of the sparse precision matrices, and employ it for testing a growing number of hypotheses with false discovery rate (FDR) control. The asymptotic normality of our test statistic and the consistency of the FDR control procedure are established. Our theoretical results are backed up by thorough numerical studies, and our real applications on neuroimaging studies of Autism spectrum disorder and users' advertising click analysis bring new scientific findings and business insights. The proposed methods are encoded into a publicly available R package Tlasso.
arxiv topic:stat.ML stat.ME
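The Kronecker covariance structure assumed in the abstract above can be checked numerically in the matrix (order-2 tensor) case: if X = A Z Bᵀ with Z having i.i.d. standard normal entries, then Cov(vec(X)) = (BBᵀ) ⊗ (AAᵀ) under column-major vectorization. A minimal NumPy sketch, not the paper's Tlasso code; the matrices A and B are arbitrary illustrative choices:

```python
import numpy as np

# Matrix-normal sanity check: X = A @ Z @ B.T with Z ~ iid N(0,1)
# has Cov(vec(X)) = kron(B @ B.T, A @ A.T) for column-major vec.
rng = np.random.default_rng(0)
A = np.array([[1.0, 0.0], [0.5, 1.0]])
B = np.array([[1.0, 0.3], [0.0, 1.0]])

n_samples = 200_000
Z = rng.standard_normal((n_samples, 2, 2))
X = A @ Z @ B.T                                     # batch of matrix-normal draws
vecs = X.transpose(0, 2, 1).reshape(n_samples, 4)   # column-major vec of each draw

emp = np.cov(vecs, rowvar=False)                    # empirical covariance
theo = np.kron(B @ B.T, A @ A.T)                    # Kronecker-structured target
print(np.abs(emp - theo).max() < 0.05)              # True: matches up to noise
```

This is the identity vec(AZBᵀ) = (B ⊗ A) vec(Z) applied to the covariance; the tensor graphical model extends the same factorized structure to higher-order tensors.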
arxiv_dataset-77011609.04622
Algebraic models of homotopy types and the homotopy hypothesis math.CT math.AT We introduce and study a notion of cylinder coherator, similar to the notion of Grothendieck coherator, which defines a more flexible notion of weak infinity groupoid. We show that each such cylinder coherator produces a combinatorial semi-model category of weak infinity groupoids, whose objects are all fibrant and which is in a precise sense "freely generated by an object". We show that all those semi-model categories are Quillen equivalent to each other and to the model category of spaces. A general procedure is given to produce such coherators, and several explicit examples are presented: one which is simplicial in nature and allows the comparison to the model category for spaces. A second example can be described as the category of globular sets endowed with "all the operations that can be defined within a weak type theory". This second notion seems to provide a definition of weak infinity groupoids which can be defined internally within type theory and which is classically equivalent to homotopy types. Finally, the category of Grothendieck infinity groupoids for a fixed Grothendieck coherator would be an example of this formalism under a seemingly simple conjecture, whose validity is shown to imply the Grothendieck homotopy hypothesis. This conjecture seems to sum up what needs to be proved at a technical level to ensure that the theory of Grothendieck weak infinity groupoids is well behaved.
arxiv topic:math.CT math.AT
arxiv_dataset-77021609.04722
Concordance and the Smallest Covering Set of Preference Orderings cs.AI cs.DS cs.GT cs.IT math.IT Preference orderings are orderings of a set of items according to the preferences (of judges). Such orderings arise in a variety of domains, including group decision making, consumer marketing, voting and machine learning. Measuring the mutual information and extracting the common patterns in a set of preference orderings are key to these areas. In this paper we deal with the representation of sets of preference orderings, the quantification of the degree to which judges agree on their ordering of the items (i.e. the concordance), and the efficient, meaningful description of such sets. We propose to represent the orderings in a subsequence-based feature space and present a new algorithm to calculate the size of the set of all common subsequences - the basis of a quantification of concordance, not only for pairs of orderings but also for sets of orderings. The new algorithm is fast and storage efficient with a time complexity of only $O(Nn^2)$ for the orderings of $n$ items by $N$ judges and a space complexity of only $O(\min\{Nn,n^2\})$. Also, we propose to represent the set of all $N$ orderings through a smallest set of covering preferences and present an algorithm to construct this smallest covering set. The source code for the algorithms is available at https://github.com/zhiweiuu/secs
arxiv topic:cs.AI cs.DS cs.GT cs.IT math.IT
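For a single pair of orderings of distinct items, the size of the set of all common subsequences can be computed with a standard O(n²) dynamic program. This is a textbook sketch for illustration, not the authors' O(Nn²) algorithm for sets of orderings from the linked repository:

```python
# Count the common subsequences (including the empty one) of two
# orderings of distinct items.
# f[i][j] = number of common subsequences of a[:i] and b[:j].
def count_common_subsequences(a, b):
    n, m = len(a), len(b)
    f = [[1] * (m + 1) for _ in range(n + 1)]  # base case: empty subsequence
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if a[i - 1] == b[j - 1]:
                # the matched item may extend any common subsequence
                f[i][j] = f[i - 1][j] + f[i][j - 1]
            else:
                # inclusion-exclusion over dropping a[i-1] or b[j-1]
                f[i][j] = f[i - 1][j] + f[i][j - 1] - f[i - 1][j - 1]
    return f[n][m]

print(count_common_subsequences([1, 2, 3], [1, 2, 3]))  # 8: all 2^3 subsets
print(count_common_subsequences([1, 2, 3], [3, 2, 1]))  # 4: empty + 3 singletons
```

Identical orderings share every subset of items as a subsequence; reversed orderings share only the empty subsequence and the singletons, which is why the count serves as a concordance measure.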
arxiv_dataset-77031609.04822
A free-form lensing model of A370 revealing stellar mass dominated BCGs, in Hubble Frontier Fields images astro-ph.GA astro-ph.CO We derive a free-form mass distribution for the unrelaxed cluster A370 (z=0.375), using the latest Hubble Frontier Fields images and GLASS spectroscopy. Starting from a reliable set of 10 multiply lensed systems we produce a free-form lens model that identifies ~ 80 multiple-images. Good consistency is found between models using independent subsamples of these lensed systems, with detailed agreement for the well resolved arcs. The mass distribution has two very similar concentrations centred on the two prominent Brightest Cluster Galaxies (or BCGs), with mass profiles that are accurately constrained by a uniquely useful system of long radially lensed images centred on both BCGs. We show that the lensing mass profiles of these BCGs are mainly accounted for by their stellar mass profiles, with a modest contribution from dark matter within r<100 kpc of each BCG. This conclusion may favour a cooled cluster gas origin for BCGs, rather than via mergers of normal galaxies for which dark matter should dominate over stars. Growth via merging between BCGs is, however, consistent with this finding, so that stars still dominate over dark matter.
arxiv topic:astro-ph.GA astro-ph.CO
arxiv_dataset-77041609.04922
Optically assisted trapping with high-permittivity dielectric rings: Towards optical aerosol filtration cond-mat.mtrl-sci physics.optics Controlling the transport, trapping, and filtering of nanoparticles is important for many applications. By virtue of their weak response to gravity and their thermal motion, various physical mechanisms can be exploited for such operations on nanoparticles. However, the manipulation based on optical forces is potentially the most appealing since it constitutes a highly deterministic approach. Plasmonic nanostructures have been suggested for this purpose, but they possess the disadvantages of locally generating heat and trapping the nanoparticles directly on the surface. Here, we propose the use of dielectric rings made of high permittivity materials for trapping nanoparticles. Thanks to their ability to strongly localize the field in space, nanoparticles can be trapped without contact. We use a semi-analytical method to study the ability of these rings to trap nanoparticles. Results are supported by full-wave simulations. Application of the trapping concept to nanoparticle filtration is suggested.
arxiv topic:cond-mat.mtrl-sci physics.optics
arxiv_dataset-77051609.05022
A double copy for ${\cal N}=2$ supergravity: a linearised tale told on-shell hep-th We construct the on-shell double copy for linearised four-dimensional ${\cal N}=2$ supergravity coupled to one vector multiplet with a quadratic prepotential. We apply this dictionary to the weak-field approximation of dyonic BPS black holes in this theory.
arxiv topic:hep-th
arxiv_dataset-77061609.05122
Search for anomalous electroweak production of $WW/WZ$ in association with a high-mass dijet system in $pp$ collisions at $\sqrt{s}=8$ TeV with the ATLAS detector hep-ex A search is presented for anomalous quartic gauge boson couplings in vector-boson scattering. The data for the analysis correspond to $20.2$ fb$^{-1}$ of $\sqrt{s}=8$ TeV $pp$ collisions, and were collected in 2012 by the ATLAS experiment at the Large Hadron Collider. The search looks for the production of $WW$ or $WZ$ boson pairs accompanied by a high-mass dijet system, with one $W$ decaying leptonically, and a $W$ or $Z$ decaying hadronically. The hadronically decaying $W/Z$ is reconstructed as either two small-radius jets or one large-radius jet using jet substructure techniques. Constraints on the anomalous quartic gauge boson coupling parameters $\alpha_4$ and $\alpha_5$ are set by fitting the transverse mass of the diboson system, and the resulting 95% confidence intervals are $-0.024<\alpha_4<0.030$ and $-0.028<\alpha_5<0.033$.
arxiv topic:hep-ex
arxiv_dataset-77071609.05222
What prevents gravitational collapse in string theory? hep-th gr-qc It is conventionally believed that if a ball of matter of mass $M$ has a radius close to $2GM$ then it must collapse to a black hole. But string theory microstates (fuzzballs) have no horizon or singularity, and they do {\it not} collapse. We consider two simple examples from classical gravity to illustrate how this violation of our intuition happens. In each case the `matter' arises from an extra compact dimension, but the topology of this extra dimension is not trivial. The pressure and density of this matter diverge at various points, but this is only an artifact of dimensional reduction; thus we bypass results like Buchdahl's theorem. Such microstates give the entropy of black holes, so these topologically nontrivial constructions dominate the state space of quantum gravity.
arxiv topic:hep-th gr-qc
arxiv_dataset-77081609.05322
Signal Transmissibility in Marginal Granular Materials cond-mat.soft We examine the "transmissibility" of a simulated two-dimensional pack of frictionless disks formed by confining dilute disks in a shrinking, periodic box to the point of mechanical stability. Two opposite boundaries are then removed, thus allowing a set of free motions. Small free displacements on one boundary then induce proportional displacements on the opposite boundary. Transmissibility is the ability to distinguish different perturbations by their distant responses. We assess transmissibility by successively identifying free orthonormal modes of motion that have the {\em smallest} distant responses. The last modes to be identified in this "pessimistic" basis are the most transmissive. The transmitted amplitudes of these most transmissive modes fall off exponentially with mode number. Similar exponential falloff is seen in a simple elastic medium, though the responsible modes differ greatly in structure in the two systems. Thus the marginal pack's transmissibility is qualitatively similar to that of a simple elastic medium. We compare our results with recent findings based on the projection of the space of free motion onto interior sites.
arxiv topic:cond-mat.soft
arxiv_dataset-77091609.05422
Large deviations for Gibbs measures with singular Hamiltonians and emergence of Kahler-Einstein metrics math-ph math.CV math.DG math.MP In the present paper and the companion paper [9] a probabilistic (statistical-mechanical) approach to the construction of canonical metrics on complex algebraic varieties X is introduced, by sampling "temperature deformed" determinantal point processes. The main new ingredient is a large deviation principle for Gibbs measures with singular Hamiltonians, which is proved in the present paper. As an application we show that the unique Kahler-Einstein metric with negative Ricci curvature on a canonically polarized algebraic manifold X emerges in the many particle limit of the canonical point processes on X. In the companion paper [9] the extension to algebraic varieties X with positive Kodaira dimension is given and a conjectural picture relating negative temperature states to the existence problem for Kahler-Einstein metrics with positive Ricci curvature is developed.
arxiv topic:math-ph math.CV math.DG math.MP
arxiv_dataset-77101609.05522
Learning camera viewpoint using CNN to improve 3D body pose estimation cs.CV The objective of this work is to estimate 3D human pose from a single RGB image. Extracting image representations which incorporate both the spatial relation of body parts and their relative depth plays an essential role in accurate 3D pose reconstruction. In this paper, for the first time, we show that camera viewpoint in combination with 2D joint locations significantly improves 3D pose accuracy without the explicit use of perspective geometry mathematical models. To this end, we train a deep Convolutional Neural Network (CNN) to learn categorical camera viewpoint. To make the network robust against clothing and body shape of the subject in the image, we utilized 3D computer rendering to synthesize additional training images. We test our framework on the largest 3D pose estimation benchmark, Human3.6m, and achieve up to 20% error reduction compared to the state-of-the-art approaches that do not use body part segmentation.
arxiv topic:cs.CV
arxiv_dataset-77111609.05622
Patterns of cooperation during collective emergencies in the help-or-escape social dilemma physics.soc-ph Although cooperation is central to the organisation of many social systems, relatively little is known about cooperation in situations of collective emergency. When groups of people flee from a danger such as a burning building or a terrorist attack, the collective benefit of cooperation is important, but the cost of helping is high and the temptation to defect is strong. To explore the degree of cooperation in emergencies, we develop a new social game, the help-or-escape social dilemma. Under time and monetary pressure, players decide how much risk they are willing to take in order to help others. Results indicated that players took as much risk to help others during emergencies as they did under normal conditions. In both conditions, most players applied an egalitarian heuristic and helped others until their chance of success equalled that of the group. This strategy is less efficient during emergencies, however, because the increased time pressure results in fewer people helped. Furthermore, emergencies tend to amplify participants' initial tendency to cooperate, with prosocials becoming even more cooperative and individualists becoming even more selfish. Our framework offers new opportunities to study human cooperation and could help authorities to better manage crowd behaviours during mass emergencies.
arxiv topic:physics.soc-ph
arxiv_dataset-77121609.05722
Poisson Noise Reduction with Higher-order Natural Image Prior Model cs.CV Poisson denoising is an essential issue for various imaging applications, such as night vision, medical imaging and microscopy. State-of-the-art approaches have been clearly dominated by patch-based non-local methods in recent years. In this paper, we aim to propose a local Poisson denoising model with both structure simplicity and good performance. To this end, we consider a variational modeling to integrate the so-called Fields of Experts (FoE) image prior, which has proven to be an effective higher-order Markov Random Fields (MRF) model for many classic image restoration problems. We exploit several feasible variational variants for this task. We start with a direct modeling in the original image domain by taking into account the Poisson noise statistics, which performs generally well for the cases of high SNR. However, this strategy encounters problems in cases of low SNR. Then we turn to an alternative modeling strategy by using the Anscombe transform and a data term derived from Gaussian statistics. We retrain the FoE prior model directly in the transform domain. With the newly trained FoE model, we end up with a local variational model providing strongly competitive results against state-of-the-art non-local approaches, meanwhile bearing the property of simple structure. Furthermore, our proposed model comes along with an additional advantage, that the inference is very efficient as it is well-suited for parallel computation on GPUs. For images of size $512 \times 512$, our GPU implementation takes less than 1 second to produce state-of-the-art Poisson denoising performance.
arxiv topic:cs.CV
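The Anscombe transform mentioned in the abstract above is the standard variance-stabilizing map A(x) = 2√(x + 3/8): it turns Poisson counts into data with approximately unit variance regardless of the underlying rate, which is what makes a Gaussian-statistics data term reasonable in the transform domain. A quick numerical check, not the paper's FoE model:

```python
import numpy as np

# Anscombe transform: A(x) = 2*sqrt(x + 3/8) approximately stabilizes
# the variance of Poisson counts at 1, independent of the rate lambda.
def anscombe(x):
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

rng = np.random.default_rng(1)
variances = []
for lam in (10.0, 50.0, 200.0):
    counts = rng.poisson(lam, size=200_000)
    variances.append(anscombe(counts).var())

print([round(v, 2) for v in variances])  # each value close to 1.0
```

After denoising in the transform domain, an (exact or algebraic) inverse of the transform maps the estimate back to the intensity domain.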
arxiv_dataset-77131609.05822
Selectively (a)-spaces from almost disjoint families are necessarily countable under a certain parametrized weak diamond principle math.GN math.LO The second author has recently shown ([20]) that any selectively (a) almost disjoint family must have cardinality strictly less than $2^{\aleph_0}$, so under the Continuum Hypothesis such a family is necessarily countable. However, it is also shown in the same paper that $2^{\aleph_0} < 2^{\aleph_1}$ alone does not avoid the existence of uncountable selectively (a) almost disjoint families. We show in this paper that a certain effective parametrized weak diamond principle is enough to ensure countability of the almost disjoint family in this context. We also discuss the deductive strength of this specific weak diamond principle (which is consistent with the negation of the Continuum Hypothesis, apart from other features).
arxiv topic:math.GN math.LO
arxiv_dataset-77141609.05922
The RG-improved Twin Higgs effective potential at NNLL hep-ph We present the Renormalization Group improvement of the Twin Higgs effective potential at cubic order in logarithmic accuracy. We first introduce a model-independent low-energy effective Lagrangian that captures both the pseudo-Nambu-Goldstone boson nature of the Higgs field and the twin light degrees of freedom charged under a copy of the Standard Model. We then apply the background field method to systematically re-sum all the one-loop diagrams contributing to the potential. We show how this technique can be used to efficiently and implicitly renormalize the higher-dimensional operators in the twin sector without classifying all of them. A prediction for the Higgs mass in the Twin Higgs model is derived and found to be of the order of $M_H \sim 120 ~\text{GeV}$ with an ultraviolet cut-off $m_*\sim 10-20 ~\text{TeV}$. Irrespective of any possible ultraviolet completion of the low-energy Lagrangian, the infrared degrees of freedom alone are therefore enough to account for the observed value of the Higgs mass through running effects.
arxiv topic:hep-ph
arxiv_dataset-77151609.06022
Spectral properties of the Cayley Graphs of split metacyclic groups math.CO Let $\Gamma(G,S)$ denote the Cayley graph of a group $G$ with respect to a set $S \subset G$. In this paper, we analyze the spectral properties of the Cayley graphs $\mathcal{T}_{m,n,k} = \Gamma(\mathbb{Z}_m \ltimes_k \mathbb{Z}_n, \{(\pm 1,0),(0,\pm 1)\})$, where $m,n \geq 3$ and $k^m \equiv 1 \pmod{n}$. We show that the adjacency matrix of $\mathcal{T}_{m,n,k}$, up to relabeling, is a block circulant matrix, and we also obtain an explicit description of these blocks. By extending a result due to Walker-Mieghem to Hermitian matrices, we show that $\mathcal{T}_{m,n,k}$ is not Ramanujan, when either $m > 8$, or $n \geq 400$.
arxiv topic:math.CO
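The Cayley graph in the abstract above can be built directly from the group law. A small sketch assuming one common convention for the semidirect product, (a,b)·(c,d) = (a+c mod m, k^c·b + d mod n), which verifies that the connection set S = {(±1,0), (0,±1)} yields a 4-regular graph; the paper's block-circulant structure and spectral results are not reproduced here:

```python
# Cayley graph of Z_m ⋉_k Z_n with connection set S = {(±1,0),(0,±1)}.
# Assumed product convention: (a,b)·(c,d) = (a+c mod m, k^c * b + d mod n).
def cayley_adjacency(m, n, k):
    def mul(g, h):
        a, b = g
        c, d = h
        return ((a + c) % m, (pow(k, c, n) * b + d) % n)

    S = [(1, 0), (m - 1, 0), (0, 1), (0, n - 1)]   # generators and inverses
    verts = [(a, b) for a in range(m) for b in range(n)]
    # neighbours of g are the products g·s for s in S
    return {g: {mul(g, s) for s in S} for g in verts}

# m=3, n=7, k=2 satisfies k^m = 8 ≡ 1 (mod 7)
adj = cayley_adjacency(3, 7, 2)
degrees = {len(neighbours) for neighbours in adj.values()}
print(degrees)  # {4}: every vertex has exactly 4 neighbours
```

Left multiplication by a fixed element is injective, so the four products g·s are distinct whenever the elements of S are, which gives 4-regularity for all m, n ≥ 3.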
arxiv_dataset-77161609.06122
Fluorescence via Reverse Intersystem Crossing from Higher Triplet States in a Bisanthracene Derivative cond-mat.mtrl-sci physics.chem-ph To elucidate the high external quantum efficiency observed for organic light-emitting diodes using a bisanthracene derivative, BD1, as the emitting molecule, off-diagonal vibronic coupling constants (VCCs) between the excited states of BD1, which govern non-radiative transition rates, were calculated employing time-dependent density functional theory. The VCCs were analysed based on the concept of vibronic coupling density. The VCC calculations suggest a fluorescence via higher triplets (FvHT) mechanism, which entails the conversion of a T$_4$ exciton generated during electrical excitation into an S$_2$ exciton via reverse intersystem crossing (RISC); moreover, the S$_2$ exciton relaxes to a fluorescent S$_1$ exciton because of large vibronic coupling between S$_2$ and S$_1$. This mechanism is valid as long as the relaxation of triplet states higher than T$_1$ to lower states is suppressed. The symmetry-controlled thermally activated delayed fluorescence (SC-TADF) and inverted singlet and triplet (iST) structure, which have been proposed in our previous studies, are special cases of the FvHT mechanism that require high molecular symmetry. However, BD1 achieves the FvHT mechanism in spite of its asymmetrical structure. A general condition for the suppression of radiative and non-radiative transitions in molecules with pseudo-degenerate electronic structures such as BD1 is discussed. A superordinate concept, fluorescence via RISC, which includes TADF, SC-TADF, the iST structure, and FvHT, is also proposed.
arxiv topic:cond-mat.mtrl-sci physics.chem-ph
arxiv_dataset-77171609.06222
Collisional broadening of angular correlations in a multiphase transport model nucl-th nucl-ex Systematic comparisons of jetlike correlation data to radiative and collisional energy loss model calculations are essential to extract transport properties of the quark-gluon medium created in relativistic heavy ion collisions. This paper presents a transport study of collisional broadening of jetlike correlations, by following parton-parton collision history in a multiphase transport (AMPT) model. The correlation shape is studied as a function of the number of parton-parton collisions suffered by a high transverse momentum probe parton ($N_{\rm coll}$) and the azimuth of the probe relative to the reaction plane ($\phi_{\rm fin.}^{\rm probe}$). Correlation is found to broaden with increasing $N_{\rm coll}$ and $\phi_{\rm fin.}^{\rm probe}$ from in- to out-of-plane direction. This study provides a transport model reference for future jet-medium interaction studies.
arxiv topic:nucl-th nucl-ex
arxiv_dataset-77181609.06322
Cosmic ray heating in cool core clusters - II. Self-regulation cycle and non-thermal emission astro-ph.GA astro-ph.CO astro-ph.HE Self-regulated feedback by active galactic nuclei (AGNs) appears to be critical in balancing radiative cooling of the low-entropy gas at the centres of galaxy clusters and in regulating star formation in central galaxies. In a companion paper, we found steady-state solutions of the hydrodynamic equations that are coupled to the CR energy equation for a large cluster sample. In those solutions, radiative cooling in the central region is balanced by streaming CRs through the generation and dissipation of resonantly generated Alfv{\'e}n waves and by thermal conduction at large radii. Here we demonstrate that the predicted non-thermal emission resulting from hadronic CR interactions in the intra-cluster medium exceeds observational radio (and gamma-ray) data in a subsample of clusters that host radio mini halos (RMHs). In contrast, the predicted non-thermal emission is well below observational data in cooling galaxy clusters without RMHs. These are characterised by exceptionally large AGN radio fluxes, indicating high CR yields and associated CR heating rates. We suggest a self-regulation cycle of AGN feedback in which non-RMH clusters are heated by streaming CRs homogeneously throughout the central cooling region. We predict {\em radio micro halos} surrounding the AGNs of these CR-heated clusters, in which the primary emission may predominate over the hadronically generated emission. Once the CR population has streamed sufficiently far and lost enough energy, the cooling rate increases, which explains the increased star formation rates in clusters hosting RMHs. Those could be powered hadronically by CRs that have previously heated the cluster core.
arxiv topic:astro-ph.GA astro-ph.CO astro-ph.HE
arxiv_dataset-77191609.06422
Combined effects of f(R) gravity and conformally invariant Maxwell field on the extended phase space thermodynamics of higher-dimensional black holes gr-qc In this paper, we investigate the thermodynamics of higher-dimensional $f(R)$ black holes in the extended phase space. Both the analytic expressions and numerical results for the possible critical physical quantities are obtained. It is proved that a meaningful critical specific volume only exists when $p$ is odd. This unique phenomenon may be attributed to the combined effect of $f(R)$ gravity and the conformally invariant Maxwell field. It is also shown that the ratio $P_cv_c/T_c$ differs from that of higher-dimensional charged AdS black holes in Einstein gravity. However, the ratio for four-dimensional $f(R)$ black holes is the same as that of four-dimensional RN-AdS black holes, implying that $f(R)$ gravity does not influence the ratio. So the ratio may be related to the conformally invariant Maxwell field. To probe the phase transition, we derive the explicit expression of the Gibbs free energy with its graph plotted. Phase transitions analogous to those of the van der Waals liquid-gas system take place between the small black hole and the large black hole. Classical swallow tail behavior, characteristic of a first-order phase transition, can also be observed in the Gibbs free energy graph. Critical exponents are also calculated. It is shown that these exponents are exactly the same as those of other AdS black holes, implying that neither $f(R)$ gravity nor the conformally invariant Maxwell field influences the critical exponents. Since the investigated black hole solution depends on the form of the function $f(R)$, we discuss in detail how our results put constraints on the form of the function $f(R)$ and we also present a simple example.
arxiv topic:gr-qc
arxiv_dataset-77201609.06522
Computing Vertex-Disjoint Paths using MAOs cs.DM cs.DS Let G be a graph with minimum degree $\delta$. It is well-known that maximal adjacency orderings (MAOs) compute a vertex set S such that every pair of vertices in S is connected by at least $\delta$ internally vertex-disjoint paths in G. We present an algorithm that, given any pair of vertices in S, computes these $\delta$ paths in linear time O(n+m). This improves the previously best solutions for these special vertex pairs, which were flow-based. Our algorithm simplifies a proof of Mader about pendant pairs and makes a purely existential proof of Nagamochi algorithmic.
arxiv topic:cs.DM cs.DS
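A maximal adjacency ordering itself is straightforward to compute: starting from any vertex, repeatedly append an unordered vertex with the most edges into the already-ordered set. A plain-Python sketch with ties broken by smallest label for determinism; this illustrates the ordering only, not the paper's linear-time path-construction algorithm:

```python
# Maximal adjacency ordering (MAO): repeatedly add the vertex with the
# most edges into the set of vertices ordered so far.
def mao(adj, start):
    order = [start]
    chosen = {start}
    attach = {v: 0 for v in adj}          # edge count into the ordered set
    for u in adj[start]:
        attach[u] += 1
    while len(order) < len(adj):
        # pick the unchosen vertex with maximal attachment,
        # breaking ties by smallest label
        v = max((u for u in adj if u not in chosen),
                key=lambda u: (attach[u], -u))
        order.append(v)
        chosen.add(v)
        for w in adj[v]:
            attach[w] += 1
    return order

graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2, 4], 4: [3]}
print(mao(graph, 0))  # [0, 1, 2, 3, 4]
```

With a bucket queue for the attachment counts this runs in O(n+m), which is what makes MAOs attractive as a preprocessing step for connectivity algorithms.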
arxiv_dataset-77211609.06622
An Analysis of Introductory Programming Courses at UK Universities cs.CY cs.PL Context: In the context of exploring the art, science and engineering of programming, the question of which programming languages should be taught first has been fiercely debated since computer science teaching started in universities. Failure to grasp programming readily almost certainly implies failure to progress in computer science. Inquiry: What first programming languages are being taught? There have been regular national-scale surveys in Australia and New Zealand, with the only US survey reporting on a small subset of universities. This is the first such national survey of universities in the UK. Approach: We report the results of the first survey of introductory programming courses (N=80) taught at UK universities as part of their first year computer science (or related) degree programmes, conducted in the first half of 2016. We report on student numbers, programming paradigm, programming languages and environment/tools used, as well as the underpinning rationale for these choices. Knowledge: The results in this first UK survey indicate a dominance of Java at a time when universities are still generally teaching students who are new to programming (and computer science), despite the fact that Python is perceived, by the same respondents, to be both easier to teach and easier to learn. Grounding: We compare the results of this survey with a related survey conducted since 2010 (as well as earlier surveys from 2001 and 2003) in Australia and New Zealand. Importance: This survey provides a starting point for valuable pedagogic baseline data for the analysis of the art, science and engineering of programming, in the context of substantial computer science curriculum reform in UK schools, as well as increasing scrutiny of teaching excellence and graduate employability for UK universities.
arxiv topic:cs.CY cs.PL
arxiv_dataset-77221609.06722
On the pi pi continuum in the nucleon form factors and the proton radius puzzle hep-ph hep-lat nucl-ex nucl-th physics.atom-ph We present an improved determination of the $\pi\pi$ continuum contribution to the isovector spectral functions of the nucleon electromagnetic form factors. Our analysis includes the most up-to-date results for the $\pi\pi\to\bar N N$ partial waves extracted from Roy-Steiner equations, consistent input for the pion vector form factor, and a thorough discussion of isospin-violating effects and uncertainty estimates. As an application, we consider the $\pi\pi$ contribution to the isovector electric and magnetic radii by means of sum rules, which, in combination with the accurately known neutron electric radius, are found to slightly prefer a small proton charge radius.
arxiv topic:hep-ph hep-lat nucl-ex nucl-th physics.atom-ph
arxiv_dataset-77231609.06822
Effect of adding nanometre-sized heterogeneities on the structural dynamics and the excess wing of a molecular glass former cond-mat.dis-nn cond-mat.soft We present the relaxation dynamics of glass-forming glycerol mixed with 1.1 nm sized polyhedral oligomeric silsesquioxane (POSS) molecules using dielectric spectroscopy (DS) and two different neutron scattering (NS) techniques. Both the reorientational dynamics as measured by DS and the density fluctuations detected by NS reveal a broadening of the alpha relaxation when POSS molecules are added. Moreover, we find a significant slowing down of the alpha relaxation. These effects are in accord with the heterogeneity scenario considered for the dynamics of glasses and supercooled liquids. The addition of POSS also affects the excess wing in glycerol arising from a secondary relaxation process, which seems to exhibit a dramatic increase in relative strength compared to the alpha relaxation.
arxiv topic:cond-mat.dis-nn cond-mat.soft
arxiv_dataset-77241609.06922
Constraints on galaxy formation models from the galaxy stellar mass function and its evolution astro-ph.GA We explore the parameter space of the semi-analytic galaxy formation model GALFORM, studying the constraints imposed by measurements of the galaxy stellar mass function (GSMF) and its evolution. We use the Bayesian Emulator method to quickly eliminate vast implausible volumes of the parameter space and zoom in on the most interesting regions, allowing us to identify a set of models that match the observational data within model uncertainties. We find that the GSMF strongly constrains parameters related to quiescent star formation in discs, stellar and AGN feedback and the threshold for disc instabilities, but weakly restricts other parameters. Constraining the model using local data alone does not usually select models that match the evolution of the GSMF well. Nevertheless, we show that a small subset of models provides an acceptable match to GSMF data out to redshift 1.5. We explore the physical significance of the parameters of these models, in particular exploring whether the model provides a better description if the mass loading of the galactic winds generated by starbursts ($\beta_{0,\text{burst}}$) and quiescent disks ($\beta_{0,\text{disc}}$) is different. Performing a principal component analysis of the plausible volume of the parameter space, we write a set of relations between parameters obeyed by plausible models with respect to GSMF evolution. We find that while $\beta_{0,\text{disc}}$ is strongly constrained by GSMF evolution data, constraints on $\beta_{0,\text{burst}}$ are weak. Although it is possible to find plausible models for which $\beta_{0,\text{burst}} = \beta_{0,\text{disc}}$, most plausible models have $\beta_{0,\text{burst}}>\beta_{0,\text{disc}}$, implying, for these, a larger SN feedback efficiency at higher redshifts.
arxiv topic:astro-ph.GA
arxiv_dataset-77251609.07022
Competing turbulent cascades and eddy-wave interactions in shallow water equilibria physics.flu-dyn cond-mat.stat-mech In recent work, Renaud, Venaille, and Bouchet (RVB) revisit the equilibrium statistical mechanics theory of the shallow water equations, within a microcanonical approach, focusing on a more careful treatment of the energy partition between inertial gravity wave and eddy motions in the equilibrium state, and deriving joint probability distributions for the corresponding dynamical degrees of freedom. The authors derive a Liouville theorem that determines the underlying phase space statistical measure, but then, through some physical arguments, actually compute the equilibrium statistics using a measure that \emph{violates} this theorem. Here, using a more convenient, but essentially equivalent, grand canonical approach, the full statistical theory consistent with the Liouville theorem is derived. The results reveal several significant differences from the previous results: (1) The microscale wave motions lead to a strongly fluctuating thermodynamics, including long-ranged correlations, in contrast to the mean-field-like behavior found by RVB. The final effective model is equivalent to that of an elastic membrane with a nonlinear wave-renormalized surface tension. (2) Even when a mean field approximation is made, a rather more complex joint probability distribution is revealed. Alternative physical arguments fully support the consistency of the results. Of course, the true fluid final steady state relies on dissipative processes not included in the shallow water equations, such as wave breaking and viscous effects, but it is argued that the current theory provides a more mathematically consistent starting point for future work aimed at assessing their impacts.
arxiv topic:physics.flu-dyn cond-mat.stat-mech
arxiv_dataset-77261609.07122
UV complete composite Higgs models hep-ph We study confining gauge theories with fermions vectorial under the SM that produce a Higgs doublet as a Nambu-Goldstone boson. The vacuum misalignment required to break the electro-weak symmetry is induced by an elementary Higgs doublet with Yukawa couplings to the new fermions. The physical Higgs is a linear combination of elementary and composite Higgses while the SM fermions remain elementary. The full theory is renormalizable and the SM Yukawa couplings are generated from those of the elementary Higgs, allowing all flavour problems to be eliminated, with interesting effects for Electric Dipole Moments of SM particles. We also discuss how ideas on the relaxation of the electro-weak scale could be realised within this framework.
arxiv topic:hep-ph
arxiv_dataset-77271609.07222
Deep Multi-Task Learning with Shared Memory cs.CL Neural network based models have achieved impressive results on various specific tasks. However, in previous works, most models are learned separately based on single-task supervised objectives, which often suffer from insufficient training data. In this paper, we propose two deep architectures which can be trained jointly on multiple related tasks. More specifically, we augment the neural model with an external memory, which is shared by several tasks. Experiments on two groups of text classification tasks show that our proposed architectures can improve the performance of a task with the help of other related tasks.
arxiv topic:cs.CL
arxiv_dataset-77281609.07322
On Gravitational Chirality as the Genesis of Astrophysical Jets gr-qc astro-ph.GA physics.space-ph It has been suggested that single and double jets observed emanating from certain astrophysical objects may have a purely gravitational origin. We discuss new classes of plane-fronted and pulsed gravitational wave solutions to the equation for perturbations of Ricci-flat spacetimes around Minkowski metrics, as models for the genesis of such phenomena. These solutions are classified in terms of their chirality and generate a family of non-stationary spacetime metrics. Particular members of these families are used as backgrounds in analysing time-like solutions to the geodesic equation for test particles. They are found numerically to exhibit both single and double jet-like features with dimensionless aspect ratios suggesting that it may be profitable to include such backgrounds in simulations of astrophysical jet dynamics from rotating accretion discs involving electromagnetic fields.
arxiv topic:gr-qc astro-ph.GA physics.space-ph
arxiv_dataset-77291609.07422
Heavy quark fragmentation functions at next-to-leading perturbative QCD hep-ph nucl-th It is well-known that the dominant mechanism to produce hadronic bound states with large transverse momentum is fragmentation. This mechanism is described by fragmentation functions (FFs), which are universal and process-independent functions. Here, we review the perturbative FF formalism as an appropriate tool for studying these hadronization processes and detail the extension of this formalism to next-to-leading order (NLO). Using Suzuki's model, we calculate the perturbative QCD FF for a heavy quark to fragment into an S-wave heavy meson at NLO. As an example, we study the LO and NLO FFs for a charm quark to split into the S-wave $D$-meson and compare our analytic results both with experimental data and with well-known phenomenological models.
arxiv topic:hep-ph nucl-th
arxiv_dataset-77301609.07522
Interpreting formulas of divisible lattice ordered abelian groups math.LO We show that a large class of divisible abelian $\ell$-groups (lattice ordered groups) of continuous functions is interpretable (in a certain sense) in the lattice of the zero sets of these functions. This has various applications to the model theory of these $\ell$-groups, including decidability results.
arxiv topic:math.LO
arxiv_dataset-77311609.07622
Asymptotic homogenization of hygro-thermo-mechanical properties of fibrous networks cond-mat.soft cond-mat.mtrl-sci The hygro-thermo-expansive response of fibrous networks involves deformation phenomena at multiple length scales. The moisture or temperature induced expansion of individual fibres is transmitted in the network through the inter-fibre bonds; particularly in the case of anisotropic fibres, this substantially influences the resulting overall deformation. This paper presents a methodology to predict the effective properties of bonded fibrous networks. The distinctive features of the work are i) the focus on the hygro-thermo-mechanical response, whereas in the literature generally only the mechanical properties are addressed; ii) the adoption of asymptotic homogenization to model fibrous networks. Asymptotic homogenization is an efficient and versatile multi-scale technique that allows the effective material response to be obtained within a rigorous setting, even for complex micro-structural geometries. The fibrous networks considered in this investigation are generated by random deposition of the fibres within a planar region according to an orientation probability density function. Most of the available network descriptions model the fibres essentially as uni-axial elements, thereby not explicitly considering the role of the bonds. In this paper, the fibres are treated as two-dimensional ribbon-like elements; this naturally includes the contribution of the bonding regions to the effective expansion. The efficacy of the proposed approach is illustrated by investigating the effective response for several network realizations, incorporating the influence of different micro-scale parameters, such as fibre hygro-thermo-elastic properties, orientation, geometry, and areal coverage.
arxiv topic:cond-mat.soft cond-mat.mtrl-sci
arxiv_dataset-77321609.07722
Sooner than Expected: Hitting the Wall of Complexity in Evolution cs.NE In evolutionary robotics an encoding of the control software, which maps sensor data (input) to motor control values (output), is shaped by stochastic optimization methods to complete a predefined task. This approach is assumed to be beneficial compared to standard methods of controller design in those cases where no a-priori model is available that could help to optimize performance. Also for robots that have to operate in unpredictable environments, an evolutionary robotics approach is favorable. We demonstrate here that such a model-free approach is not a free lunch, as even simple tasks can represent unsolvable barriers for fully open-ended uninformed evolutionary computation techniques. We propose the 'Wankelmut' task as an objective for an evolutionary approach that starts from scratch without pre-shaped controller software or any other informed approach that would force the behavior to be evolved in a desired way. Our focal claim is that 'Wankelmut' represents the simplest set of problems that makes plain-vanilla evolutionary computation fail. We demonstrate this by a series of simple standard evolutionary approaches using different fitness functions and standard artificial neural networks as well as continuous-time recurrent neural networks. All our tested approaches failed. We claim that any other evolutionary approach that does not per se favor or enforce modularity and does not freeze or protect already evolved functionalities will also fail. Thus we propose a hard-to-pass benchmark and make a strong statement for self-complexifying and generative approaches in evolutionary computation. We anticipate that defining such a 'simplest task to fail' is a valuable benchmark for promoting future development in the field of artificial intelligence, evolutionary robotics and artificial life.
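The "plain-vanilla evolutionary computation" the paper argues against can be sketched as a simple mutation-plus-truncation-selection loop with no modularity protection; the fitness function below is a hypothetical toy objective, not the Wankelmut task:

```python
import random

def evolve(fitness, genome_len=8, pop_size=20, generations=50,
           mut_rate=0.1, seed=0):
    """Plain-vanilla genetic algorithm: Gaussian mutation plus
    truncation selection, with no mechanism to protect or freeze
    already evolved functionality."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist survivors
        children = [[g + rng.gauss(0, mut_rate) for g in p]
                    for p in parents]           # mutated offspring
        pop = parents + children
    return max(pop, key=fitness)

# Toy fitness (NOT the Wankelmut task): minimize squared distance
# to an all-0.5 genome, expressed as a maximization problem.
best = evolve(lambda g: -sum((x - 0.5) ** 2 for x in g))
```

On a smooth unimodal objective like this one the loop climbs easily; the paper's point is that tasks requiring a behavioral reversal defeat exactly this kind of uninformed search.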
arxiv topic:cs.NE
arxiv_dataset-77331609.07822
Disorder effect on chiral edge modes and anomalous Hall conductance in Weyl semimetals cond-mat.mes-hall Typical Weyl semimetals host chiral surface states and hence show an anomalous Hall response. Although a Weyl semimetal phase is known to be robust against weak disorder, the influence of disorder on chiral states is not fully clarified so far. We study the behavior of such chiral states in the presence of disorder and its consequences on an anomalous Hall response, focusing on a thin slab of Weyl semimetal with chiral surface states along its edge. It is shown that weak disorder does not disrupt chiral edge states but crucially affects them owing to the renormalization of a mass parameter: the number of chiral edge states changes depending on the strength of disorder. It is also shown that the Hall conductance is quantized when the Fermi level is located near Weyl nodes within a finite-size gap. This quantization of the Hall conductance collapses once the strength of disorder exceeds a critical value, suggesting that it serves as a probe to distinguish a Weyl semimetal phase from a diffusive anomalous Hall metal phase.
arxiv topic:cond-mat.mes-hall
arxiv_dataset-77341609.07922
It wasn't me! Plausible Deniability in Web Search cs.CR cs.CY Our ability to control the flow of sensitive personal information to online systems is key to trust in personal privacy on the internet. We ask how to detect, assess and defend user privacy in the face of search engine personalisation. We develop practical and scalable tools allowing a user to detect, assess and defend against threats to plausible deniability. We show that threats to plausible deniability of interest are readily detectable for all topics tested in an extensive testing program. We show this remains the case when noise query injection and click obfuscation are used in an attempt to disrupt search engine learning. Using our model, we design a defence technique exploiting uninteresting proxy topics and show that it provides a more effective defence of plausible deniability in our experiments.
arxiv topic:cs.CR cs.CY
arxiv_dataset-77351609.08022
Correlation based passive imaging with a white noise source math.AP math.PR Passive imaging refers to problems where waves generated by unknown sources are recorded and used to image the medium through which they travel. The sources are typically modelled as a random variable and it is assumed that some statistical information is available. In this paper we study the stochastic wave equation $\partial_t^2 u - \Delta_g u = \chi W$, where $W$ is a random variable with the white noise statistics on ${\mathbb R}^{1+n}$, $n \ge 3$, $\chi$ is a smooth function vanishing for negative times and outside a compact set in space, and $\Delta_g$ is the Laplace-Beltrami operator associated to a smooth non-trapping Riemannian metric tensor $g$ on ${\mathbb R}^n$. The metric tensor $g$ models the medium to be imaged, and we assume that it coincides with the Euclidean metric outside a compact set. We consider the empirical correlations on an open set $\mathcal X \subset {\mathbb R}^n$, $$ C_T(t_1, x_1, t_2, x_2) = \frac 1 T \int_0^T u(t_1+s,x_1) u(t_2+s,x_2) ds, \quad t_1,t_2>0,\ x_1,x_2\in \mathcal X, $$ for $T>0$. Supposing that $\chi$ is non-zero on $\mathcal X$ and constant in time after $t > 1$, we show that in the limit $T \to \infty$, the data $C_T$ becomes statistically stable, that is, independent of the realization of $W$. Our main result is that, with probability one, this limit determines the Riemannian manifold $({\mathbb R}^n,g)$ up to an isometry. To our knowledge, this is the first result showing that a medium can be determined in a passive imaging setting, without assuming a separation of scales.
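The statistical stabilization of the empirical correlation $C_T$ can be illustrated numerically. The signal below is a toy moving-average of white noise, not a wave-equation solve, so it only demonstrates the key limiting behaviour: for large $T$ the time-averaged correlation becomes nearly independent of the noise realization:

```python
import numpy as np

def empirical_corr(u, lag, T):
    """Discrete analogue of the empirical correlation
    C_T(lag) = (1/T) * sum_s u(s + lag) * u(s)."""
    return np.dot(u[lag:lag + T], u[:T]) / T

def noisy_signal(n, rng):
    """Toy stand-in for the recorded wave field: white noise
    smoothed by a 10-tap moving average (NOT a wave equation)."""
    w = rng.standard_normal(n + 9)
    return np.convolve(w, np.ones(10) / 10, mode="valid")

n = 200_000
u1 = noisy_signal(n, np.random.default_rng(1))
u2 = noisy_signal(n, np.random.default_rng(2))
# Two independent realizations of the noise give nearly the same
# empirical correlation once T is large: statistical stability.
c1 = empirical_corr(u1, lag=3, T=n - 10)
c2 = empirical_corr(u2, lag=3, T=n - 10)
```

For this 10-tap filter the exact lag-3 correlation is $7 \times 0.1^2 = 0.07$, and both estimates concentrate around it at rate $O(T^{-1/2})$.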
arxiv topic:math.AP math.PR
arxiv_dataset-77361609.08122
On Shahidi local coefficients matrix math.NT In these notes we define and study the Shahidi local coefficients matrix associated with a genuine principal series representation I({\sigma}) of an n-fold cover of p-adic SL(2,F) and an additive character {\psi}. The conjugacy class of this matrix is an invariant of the inducing representation {\sigma} and {\psi} and its entries are linear combinations of Tate or Tate type {\gamma}-factors. We relate these entries to functional equations associated with linear maps defined on the dual of the space of Schwartz functions. As an application we give new formulas for the Plancherel measure and use these to relate principal series representations of different coverings of SL(2,F). While we do not assume that the residual characteristic of F is relatively prime to n we do assume that n is not divisible by 4.
arxiv topic:math.NT
arxiv_dataset-77371609.08222
Transport gap engineering by contact geometry in graphene nanoribbons: Experimental and theoretical studies on artificial materials cond-mat.mes-hall Electron transport in small graphene nanoribbons is studied by microwave emulation experiments and tight-binding calculations. In particular, it is investigated under which conditions a transport gap can be observed. Our experiments provide evidence that armchair ribbons of width $3m+2$ with integer $m$ are metallic and otherwise semiconducting, whereas zigzag ribbons are metallic independent of their width. The contact geometry, defining to which atoms at the ribbon edges the source and drain leads are attached, has strong effects on the transport. If leads are attached only to the inner atoms of zigzag edges, broad transport gaps can be observed in all armchair ribbons as well as in rhomboid-shaped zigzag ribbons. All experimental results agree qualitatively with tight-binding calculations using the nonequilibrium Green's function method.
arxiv topic:cond-mat.mes-hall
arxiv_dataset-77381609.08322
On L.G. Kov\`acs' problem math.GR "Kourovka notebook" contains the question due to L.G. Kov\`acs (Problem 8.23): If the dihedral group $D$ of order 18 is a section of a direct product $X\times Y$, must at least one of $X$ and $Y$ have a section isomorphic to $D$? The goal of our short paper is to give the positive answer to this question provided that $X$ and $Y$ are locally finite. In fact, we prove even more: If a non-trivial semidirect product $D=A\rtimes B$ of a cyclic $p$-group $A$ and a group $B$ of order $q$, where $p$ and $q$ are distinct primes, lies in a locally finite variety generated by a set $\mathfrak{X}$ of groups, then $D$ is a section of a group from $\mathfrak{X}$.
arxiv topic:math.GR
arxiv_dataset-77391609.08422
Optimizing the placement of tap positions and guess and determine cryptanalysis with variable sampling cs.IT math.IT In this article an optimal selection of tap positions for certain LFSR-based encryption schemes is investigated from both design and cryptanalytic perspective. Two novel algorithms towards an optimal selection of tap positions are given which can be satisfactorily used to provide (sub)optimal resistance to some generic cryptanalytic techniques applicable to these schemes. It is demonstrated that certain real-life ciphers (e.g. SOBER-t32, SFINKS and Grain-128), employing some standard criteria for tap selection such as the concept of full difference set, are not fully optimized with respect to these attacks. These standard design criteria are quite insufficient and the proposed algorithms appear to be the only generic method for the purpose of (sub)optimal selection of tap positions. We also extend the framework of a generic cryptanalytic method called Generalized Filter State Guessing Attacks (GFSGA), introduced in [26] as a generalization of the FSGA method, by applying a variable sampling of the keystream bits in order to retrieve as much information about the secret state bits as possible. Two different modes that use a variable sampling of keystream blocks are presented and it is shown that in many cases these modes may outperform the standard GFSGA mode. We also demonstrate the possibility of employing GFSGA-like attacks to other design strategies such as NFSR-based ciphers (Grain family for instance) and filter generators outputting a single bit each time the cipher is clocked. In particular, when the latter scenario is considered, the idea of combining GFSGA technique and algebraic attacks appears to be a promising unified cryptanalytic method against NFSR-based stream ciphers.
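The "full difference set" criterion discussed above can be checked in a few lines; this is a generic sketch (tap positions assumed sorted ascending), not code from the article:

```python
def is_full_difference_set(taps, register_len):
    """Check the standard 'full difference set' criterion for LFSR
    tap positions: all pairwise differences between tap positions
    must be distinct.  The article argues this criterion alone is
    insufficient for optimal resistance to (G)FSGA-style attacks."""
    diffs = [b - a for i, a in enumerate(taps) for b in taps[i + 1:]]
    return len(diffs) == len(set(diffs)) and max(taps) < register_len

# Hypothetical examples: {0, 1, 3, 7} has all-distinct differences,
# while {0, 1, 2, 3} repeats the differences 1 and 2.
ok = is_full_difference_set([0, 1, 3, 7], 8)
bad = is_full_difference_set([0, 1, 2, 3], 8)
```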
arxiv topic:cs.IT math.IT
arxiv_dataset-77401609.08522
Phaseless super-resolution in the continuous domain cs.IT math.IT Phaseless super-resolution refers to the problem of super-resolving a signal from only its low-frequency Fourier magnitude measurements. In this paper, we consider the phaseless super-resolution problem of recovering a sum of sparse Dirac delta functions which can be located anywhere in the continuous time domain. For such signals in the continuous domain, we propose a novel Semidefinite Programming (SDP) based signal recovery method to achieve phaseless super-resolution. This work extends the recent work of Jaganathan et al. [1], which considered phaseless super-resolution for discrete signals on the grid.
arxiv topic:cs.IT math.IT
arxiv_dataset-77411609.08622
The dust content of galaxies from z = 0 to z = 9 astro-ph.GA astro-ph.CO We study the dust content of galaxies from $z=0$ to $z=9$ in semi-analytic models of galaxy formation that include new recipes to track the production and destruction of dust. We include condensation of dust in stellar ejecta, the growth of dust in the interstellar medium (ISM), the destruction of dust by supernovae and in the hot halo, and dusty winds and inflows. The rate of dust growth in the ISM depends on the metallicity and density of molecular clouds. Our fiducial model reproduces the relation between dust mass and stellar mass from $z=0$ to $z=7$, the number density of galaxies with dust masses less than $10^{8.3}\,\rm{M}_\odot$, and the cosmic density of dust at $z=0$. The model accounts for the double power-law trend between dust-to-gas (DTG) ratio and gas-phase metallicity of local galaxies and the relation between DTG ratio and stellar mass. The dominant mode of dust formation is dust growth in the ISM, except for galaxies with $M_*<10^7\,\rm{M}_\odot$, where condensation of dust in supernova ejecta dominates. The dust-to-metal ratio of galaxies depends on the gas-phase metallicity, unlike what is typically assumed in cosmological simulations. Model variants including higher condensation efficiencies, a fixed timescale for dust growth in the ISM, or no growth at all reproduce some of the observed constraints, but fail to simultaneously reproduce the shape of dust scaling relations and the dust mass of high-redshift galaxies.
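A scalar caricature of the dust budget tracked by such models, with condensation, ISM growth and destruction terms; the rates and the explicit-Euler integration below are illustrative assumptions, not the recipes of the semi-analytic model:

```python
def evolve_dust(sfr, m_dust0, cond_eff, tau_grow, tau_dest,
                t_end, dt=1e-3):
    """One-zone toy integration of a dust mass budget:
    dM/dt = cond_eff * SFR  (condensation in stellar ejecta)
          + M / tau_grow    (accretion growth in the ISM)
          - M / tau_dest    (destruction by SNe / hot halo)."""
    m, t = m_dust0, 0.0
    while t < t_end:
        dm = (cond_eff * sfr + m / tau_grow - m / tau_dest) * dt
        m += dm
        t += dt
    return m

# Hypothetical units: with destruction faster than growth the dust
# mass settles at cond_eff*sfr / (1/tau_dest - 1/tau_grow) = 0.2.
m_final = evolve_dust(sfr=1.0, m_dust0=0.0, cond_eff=0.1,
                      tau_grow=2.0, tau_dest=1.0, t_end=20.0)
```

In the paper's model the growth timescale additionally depends on metallicity and molecular-cloud density, which is what drives growth in the ISM to dominate for massive galaxies.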
arxiv topic:astro-ph.GA astro-ph.CO
arxiv_dataset-77421609.08722
Solving polynomial systems via homotopy continuation and monodromy math.AG cs.MS We study methods for finding the solution set of a generic system in a family of polynomial systems with parametric coefficients. We present a framework for describing monodromy based solvers in terms of decorated graphs. Under the theoretical assumption that monodromy actions are generated uniformly, we show that the expected number of homotopy paths tracked by an algorithm following this framework is linear in the number of solutions. We demonstrate that our software implementation is competitive with the existing state-of-the-art methods implemented in other software packages.
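The homotopy path tracking at the core of such solvers can be sketched with an Euler predictor and Newton corrector; the univariate homotopy below is a hypothetical toy, far simpler than the parametric polynomial systems the paper targets:

```python
def track_path(h, dh_dx, dh_dt, x0, steps=100, newton_iters=5):
    """Minimal predictor-corrector tracker for a homotopy H(x, t)
    with H(x0, 0) = 0: Euler predictor along the solution path
    (dx/dt = -H_t / H_x), then Newton correction back onto it."""
    x, dt = x0, 1.0 / steps
    for k in range(steps):
        t = k * dt
        x = x - dh_dt(x, t) / dh_dx(x, t) * dt   # Euler predictor
        t += dt
        for _ in range(newton_iters):            # Newton corrector
            x = x - h(x, t) / dh_dx(x, t)
    return x

# Toy homotopy (1-t)(x^2 - 1) + t(x^2 - 4) = x^2 - 1 - 3t,
# following the root that starts at x = 1 and ends at x = 2.
h = lambda x, t: x * x - (1.0 + 3.0 * t)
root = track_path(h, lambda x, t: 2 * x, lambda x, t: -3.0, 1.0)
```

Monodromy solvers run many such tracks around loops in parameter space to discover new solutions from known ones.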
arxiv topic:math.AG cs.MS
arxiv_dataset-77431609.08822
Quantitative separation of the anisotropic magnetothermopower and planar Nernst effect by the rotation of an in-plane thermal gradient cond-mat.mes-hall cond-mat.mtrl-sci A thermal gradient as the driving force for spin currents plays a key role in spin caloritronics. In this field the spin Seebeck effect (SSE) is of major interest and was investigated in terms of in-plane thermal gradients inducing perpendicular spin currents (transverse SSE) and out-of-plane thermal gradients generating parallel spin currents (longitudinal SSE). Up to now all spincaloric experiments employ a spatially fixed thermal gradient. Thus anisotropic measurements with respect to well defined crystallographic directions were not possible. Here we introduce a new experiment that allows not only the in-plane rotation of the external magnetic field, but also the rotation of an in-plane thermal gradient controlled by optical temperature detection. As a consequence, the anisotropic magnetothermopower and the planar Nernst effect in a permalloy thin film can be measured simultaneously and reveal a phase shift, that allows the quantitative separation of the thermopower, the anisotropic magnetothermopower and the planar Nernst effect.
arxiv topic:cond-mat.mes-hall cond-mat.mtrl-sci
arxiv_dataset-77441609.08922
The CP-odd sector and $\theta$ dynamics in holographic QCD hep-ph hep-lat hep-th The holographic model of V-QCD is used to analyze the physics of QCD in the Veneziano large-N limit. An unprecedented analysis of the CP-odd physics is performed going beyond the level of effective field theories. The structure of holographic saddle-points at finite $\theta$ is determined, as well as its interplay with chiral symmetry breaking. Many observables (vacuum energy and higher-order susceptibilities, singlet and non-singlet masses and mixings) are computed as functions of $\theta$ and the quark mass $m$. Wherever applicable the results are compared to those of chiral Lagrangians, finding agreement. In particular, we recover the Witten-Veneziano formula in the small $x\to 0$ limit, we compute the $\theta$-dependence of the pion mass and we derive the hyperscaling relation for the topological susceptibility in the conformal window in terms of the quark mass.
arxiv topic:hep-ph hep-lat hep-th
arxiv_dataset-77451609.09022
Eigenvector Statistics of Sparse Random Matrices math.PR We prove that the bulk eigenvectors of sparse random matrices, i.e. the adjacency matrices of Erd\H{o}s-R\'enyi graphs or random regular graphs, are asymptotically jointly normal, provided the averaged degree increases with the size of the graphs. Our methodology follows [6] by analyzing the eigenvector flow under Dyson Brownian motion, combined with an isotropic local law for Green's function. As an auxiliary result, we prove that for the eigenvector flow of Dyson Brownian motion with general initial data, the eigenvectors are asymptotically jointly normal in the direction $q$ after time $\eta_*\ll t\ll r$, if in a window of size $r$, the initial density of states is bounded below and above down to the scale $\eta_*$, and the initial eigenvectors are delocalized in the direction $q$ down to the scale $\eta_*$.
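The bulk-eigenvector normality can be probed numerically: for an Erd\H{o}s-R\'enyi adjacency matrix with large average degree, the entries of a bulk eigenvector scaled by $\sqrt{n}$ should look standard normal, so their normalized fourth moment should be close to the Gaussian value 3. A small simulation (the size, edge density and tolerance are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 1000, 0.1

# Adjacency matrix of an Erdos-Renyi graph G(n, p).
upper = rng.random((n, n)) < p
adj = np.triu(upper, 1)
adj = (adj + adj.T).astype(float)

# Pick a bulk eigenvector (middle of the spectrum, well away from
# the Perron outlier at the top edge).
_, vecs = np.linalg.eigh(adj)
v = vecs[:, n // 2]

# If sqrt(n) * v_i were exactly standard normal, this normalized
# fourth moment would equal 3 (Gaussian kurtosis); a perfectly flat
# vector would give 1 instead.
fourth_moment = n ** 2 * np.mean(v ** 4)
```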
arxiv topic:math.PR
arxiv_dataset-77461609.09122
Sharp Dirac's Theorem for DP-Critical Graphs math.CO Correspondence coloring, or DP-coloring, is a generalization of list coloring introduced recently by Dvo\v{r}\'{a}k and Postle. In this paper we establish a version of Dirac's theorem on the minimum number of edges in critical graphs in the framework of DP-colorings. A corollary of our main result answers a question posed by Kostochka and Stiebitz on classifying list-critical graphs that satisfy Dirac's bound with equality.
arxiv topic:math.CO
arxiv_dataset-77471609.09222
Dimension reduction for systems with slow relaxation cond-mat.stat-mech math.PR physics.data-an We develop reduced, stochastic models for high dimensional, dissipative dynamical systems that relax very slowly to equilibrium and can encode long term memory. We present a variety of empirical and first principles approaches for model reduction, and build a mathematical framework for analyzing the reduced models. We introduce the notions of universal and asymptotic filters to characterize `optimal' model reductions for sloppy linear models. We illustrate our methods by applying them to the practically important problem of modeling evaporation in oil spills.
arxiv topic:cond-mat.stat-mech math.PR physics.data-an
arxiv_dataset-77481609.09322
Modified viscosity in accretion disks. Application to Galactic black hole binaries, intermediate mass black holes and AGN astro-ph.HE Black holes (BHs) surrounded by accretion disks are present in the Universe at different scales of masses, from microquasars up to the active galactic nuclei (AGNs). The current picture remains ad hoc due to the complexity of the magnetic field action. In addition, accretion disks at high Eddington rates can be radiation-pressure dominated and, according to some of the heating prescriptions, thermally unstable. The observational verification of their resulting variability patterns may shed light on both the role of radiation pressure and magnetic fields. We compute the structure and time evolution of an accretion disk, using the code GLADIS (which models the global accretion disk instability). We supplement this model with a modified viscosity prescription, which can to some extent describe the magnetisation of the disk. We study the results for a large grid of models, to cover the whole parameter space, and we derive conclusions separately for different scales of black hole masses, which are characteristic for various types of cosmic sources. We show how the flare or outburst duration, amplitude, and period depend on the accretion rate and viscosity scaling. We show that if the heating rate in the accretion disk grows more rapidly with the total pressure and temperature, the instability results in longer and sharper flares. In general, we confirm that the disks around the supermassive black holes are more radiation-pressure dominated and present relatively brighter bursts. Our method can also be used as an independent tool for black hole mass determination, which we now test on the intermediate-mass black hole in the source HLX-1.
arxiv topic:astro-ph.HE
arxiv_dataset-77491609.09422
Jet formation in solar atmosphere due to magnetic reconnection astro-ph.SR Using numerical simulations, we show that jets with features of type II spicules and cold coronal jets corresponding to temperatures of $10^{4}$ K can be formed due to magnetic reconnection in the presence of magnetic resistivity. For this we model the low chromosphere-corona region using the C7 equilibrium solar atmosphere model and assume that resistive MHD governs the dynamics of the plasma. The magnetic field configurations we analyze correspond to two neighboring loops with opposite polarity. The separation of the loops' feet determines the thickness of a current sheet that triggers a magnetic reconnection process and the subsequent formation of a high-speed, sharp structure. We analyze the cases where the magnetic field strength of the two loops is equal and different. In the first case, with a symmetric configuration, the spicules rise vertically, whereas in an asymmetric configuration the structure shows an inclination. With a number of simulations carried out under a 2.5D approach, we explore various properties of the excited jets, namely the morphology, inclination and velocity. The parameter space involves magnetic field strengths between 20 and 40 G, and the resistivity is assumed to be uniform with a constant value of the order of $10^{-2}\,\Omega\cdot\mathrm{m}$.
arxiv topic:astro-ph.SR
arxiv_dataset-77501609.09522
Charged Point Normalization: An Efficient Solution to the Saddle Point Problem cs.LG Recently, the problem of local minima in very high dimensional non-convex optimization has been challenged and the problem of saddle points has been introduced. This paper introduces a dynamic type of normalization that forces the system to escape saddle points. Unlike other saddle point escaping algorithms, second order information is not utilized, and the system can be trained with an arbitrary gradient descent learner. The system drastically improves learning in a range of deep neural networks on various data-sets in comparison to non-CPN neural networks.
arxiv topic:cs.LG
arxiv_dataset-77511609.09622
Inhibiting decoherence of two-level atom in thermal bath by presence of boundaries quant-ph We study, in the paradigm of open quantum systems, the dynamics of quantum coherence of a static polarizable two-level atom which is coupled with a thermal bath of fluctuating electromagnetic field in the absence and presence of boundaries. The purpose is to find the conditions under which the decoherence can be inhibited effectively. We find that without boundaries, quantum coherence of the two-level atom inevitably decreases due to the effect of the thermal bath. However, the quantum decoherence, in the presence of a boundary, could be effectively inhibited when the atom is transversely polarizable and near this boundary. In particular, we find that in the case of two parallel reflecting boundaries, the atom with a parallel dipole polarization at an arbitrary location between these two boundaries will never be subjected to decoherence provided the two boundaries are separated by certain special distances.
arxiv topic:quant-ph
arxiv_dataset-77521609.09722
Photoionisation of Cl$^+$ from the $3s^23p^4\;^3P_{2,1,0}$ and the $3s^23p^4\;^1D_2, ^1S_0$ states in the energy range 19 - 28 eV physics.atom-ph astro-ph.EP Absolute photoionisation cross sections for the Cl$^+$ ion in its ground and metastable states, $3s^2 3p^4\; ^3P_{2,1,0}$ and $3s^2 3p^4\; ^1D_2,\; ^1S_0$, were measured recently at the Advanced Light Source (ALS) at Lawrence Berkeley National Laboratory using the merged beams photon-ion technique at a photon energy resolution of 15 meV in the energy range 19 -- 28 eV. These measurements are compared with large-scale Dirac Coulomb {\it R}-matrix calculations in the same energy range. Photoionisation of this sulphur-like chlorine ion is characterized by multiple Rydberg series of autoionizing resonances superimposed on a direct photoionisation continuum. A wealth of resonance features observed in the experimental spectra are spectroscopically assigned and their resonance parameters tabulated and compared with the recent measurements. Metastable fractions in the parent ion beam are determined from the present study. Theoretical resonance energies and quantum defects of the prominent Rydberg series $3s^2 3p^3 nd$, identified in the spectra as $3p\rightarrow nd$ transitions, are compared with the available measurements made on this element. Weaker Rydberg series $3s^2 3p^3 ns$, identified as $3p \rightarrow ns$ transitions, and $3s3p^4 (^4P)np$ window resonances, due to $3s \rightarrow np$ transitions, are also found in the spectra.
arxiv topic:physics.atom-ph astro-ph.EP
arxiv_dataset-77531609.09822
Stepping Stabilization Using a Combination of DCM Tracking and Step Adjustment cs.RO In this paper, a method for stabilizing biped robot stepping by a combination of Divergent Component of Motion (DCM) tracking and step adjustment is proposed. In this method, the DCM trajectory is generated consistent with the predefined footprints. Furthermore, a swing foot trajectory modification strategy is proposed to adapt the landing point using DCM measurement. In order to apply the generated trajectories to the full robot, a Hierarchical Inverse Dynamics (HID) scheme is employed. The HID enables us to use different combinations of DCM tracking and step adjustment for stabilizing different biped robots. Simulation experiments on two scenarios for two different simulated robots, one with active ankles and the other with passive ankles, are carried out. Simulation results demonstrate the effectiveness of the proposed method for robots with both active and passive ankles.
arxiv topic:cs.RO
arxiv_dataset-77541610.00053
Superconducting optoelectronic circuits for neuromorphic computing cs.NE cond-mat.supr-con physics.optics Neural networks have proven effective for solving many difficult computational problems. Implementing complex neural networks in software is very computationally expensive. To explore the limits of information processing, it will be necessary to implement new hardware platforms with large numbers of neurons, each with a large number of connections to other neurons. Here we propose a hybrid semiconductor-superconductor hardware platform for the implementation of neural networks and large-scale neuromorphic computing. The platform combines semiconducting few-photon light-emitting diodes with superconducting-nanowire single-photon detectors to behave as spiking neurons. These processing units are connected via a network of optical waveguides, and variable weights of connection can be implemented using several approaches. The use of light as a signaling mechanism overcomes fanout and parasitic constraints on electrical signals while simultaneously introducing physical degrees of freedom which can be employed for computation. The use of supercurrents achieves the low power density necessary to scale to systems with enormous entropy. The proposed processing units can operate at speeds of at least $20$ MHz with fully asynchronous activity, light-speed-limited latency, and power densities on the order of 1 mW/cm$^2$ for neurons with 700 connections operating at full speed at 2 K. The processing units achieve an energy efficiency of $\approx 20$ aJ per synapse event. By leveraging multilayer photonics with deposited waveguides and superconductors with feature sizes $>$ 100 nm, this approach could scale to systems with massive interconnectivity and complexity for advanced computing as well as explorations of information processing capacity in systems with an enormous number of information-bearing microstates.
arxiv topic:cs.NE cond-mat.supr-con physics.optics
arxiv_dataset-77551610.00153
Study of the lowest tensor and scalar resonances in the $\tau \to \pi\pi\pi \nu_\tau$ decay hep-ph In this note we present a new parametrization of the hadronic current for the decay $\tau \to \pi\pi\pi \nu_\tau$, derived from the chiral Lagrangian with explicit inclusion of resonances. Scalar, vector, and axial-vector resonances are all included. For the first time, the lowest tensor resonance ($f_2(1270)$) is included as well. Both single- and double-resonance contributions to the hadronic form factors are taken into account. To satisfy the correct high-energy behaviour of the hadronic form factors, constraints on the numerical values of the vertex constants are obtained.
arxiv topic:hep-ph
arxiv_dataset-77561610.00253
Asynchronous Distributed Execution Of Fixpoint-Based Computational Fields cs.LO Coordination is essential for dynamic distributed systems whose components exhibit interactive and autonomous behaviors. Spatially distributed, locally interacting, propagating computational fields are particularly appealing for allowing components to join and leave with little or no overhead. Computational fields are a key ingredient of aggregate programming, a promising software engineering methodology particularly relevant for the Internet of Things. In our approach, space topology is represented by a fixed graph-shaped field, namely a network with attributes on both nodes and arcs, where arcs represent interaction capabilities between nodes. We propose a SMuC calculus where mu-calculus-like modal formulas represent how the values stored in neighbor nodes should be combined to update the present node. Fixpoint operations can be understood globally as recursive definitions, or locally as asynchronous converging propagation processes. We present a distributed implementation of our calculus. The translation first maps SMuC programs into normal-form, purely iterative programs, and then into distributed programs. Some key results are presented that show convergence of fixpoint computations under fair asynchrony and under reinitialization of nodes. The first result allows nodes to proceed at different speeds, while the second one provides robustness against certain kinds of failure. We illustrate our approach with a case study based on a disaster recovery scenario, implemented in a prototype simulator that we use to evaluate the performance of a recovery strategy.
arxiv topic:cs.LO
arxiv_dataset-77571610.00353
$O(m^9)$ network flow LP model of the Assignment Problem polytope with applications to hard combinatorial optimization problems cs.DS cs.CC cs.DM math.CO math.OC In this paper, we present a new, network flow LP model of the standard Assignment Problem (AP) polytope. The model is not meant to be competitive with existing standard procedures for solving the AP, as its complexity order of size is $O(m^9)$, where m is the number of assignments. However, it allows for hard combinatorial optimization problems (COPs) to be solved as Assignment Problems (APs), including, in particular, the Quadratic, Cubic, Quartic, Quintic, and Sextic Assignment Problems, as well as the Traveling Salesman Problem and many of its variations. Hence, in particular, the model re-affirms "P = NP." Illustrations are provided for the Linear Assignment (LAP), Quadratic Assignment (QAP), and Traveling Salesman (TSP) problems. Issues pertaining to the extended formulations "barriers" for the LP modeling of hard COPs are not discussed in this paper because the developments are focused on the Assignment Problem polytope only, and also the applicability/non-applicability of those "barriers" is thoroughly addressed in a separate paper* in which it is shown that, in an optimization context, these "barriers" have no pertinence for a model which projects to the AP polytope, provided appropriate costs can be attached to the non-superfluous variables of the model. Hence, the issues of the "barriers" are left out of this paper essentially for the sake of space. *: Diaby, M., M. Karwan, and L. Sun [2024]. On modeling NP-Complete problems as polynomial-sized linear programs: Escaping/Side-stepping the "barriers." Available at: arXiv:2304.07716 [cs.CC].
arxiv topic:cs.DS cs.CC cs.DM math.CO math.OC
arxiv_dataset-77581610.00453
Estimating thermodynamic expectations and free energies in expanded ensemble simulations: systematic variance reduction through conditioning cond-mat.stat-mech Markov chain Monte Carlo methods are primarily used for sampling from a given probability distribution and estimating multi-dimensional integrals based on the information contained in the generated samples. Whenever it is possible, more accurate estimates are obtained by combining Monte Carlo integration and integration by numerical quadrature along particular coordinates. We show that this variance reduction technique, referred to as conditioning in probability theory, can be advantageously implemented in \emph{expanded ensemble} simulations. These simulations aim at estimating thermodynamic expectations as a function of an external parameter that is sampled like an additional coordinate. Conditioning therein entails integrating along the external coordinate by numerical quadrature. We prove variance reduction with respect to alternative standard estimators and demonstrate the practical efficiency of the technique by estimating free energies and characterizing a structural phase transition between two solid phases.
arxiv topic:cond-mat.stat-mech
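As an aside, the conditioning (Rao-Blackwellization) principle invoked in this abstract can be illustrated with a minimal Monte Carlo sketch. The toy model, sample size, and target expectation below are illustrative choices, not taken from the paper's expanded-ensemble setting:

```python
import numpy as np

# Target: E[Y^2], where X ~ N(0,1) and Y | X ~ N(X, 1), so E[Y^2] = Var(Y) = 2.
# The conditioned estimator integrates out Y analytically: E[Y^2 | X] = X^2 + 1.
rng = np.random.default_rng(0)
n = 200_000
x = rng.standard_normal(n)
y = x + rng.standard_normal(n)

plain = y**2           # crude Monte Carlo integrand, Var(Y^2) = 8 here
cond = x**2 + 1.0      # conditional expectation, Var(X^2 + 1) = 2 here

plain_est, cond_est = plain.mean(), cond.mean()
```

Both estimators are unbiased for the same expectation, but the conditioned one has strictly smaller variance because the sampling noise of the integrated-out coordinate is removed, which is the generic mechanism behind the variance reduction claimed in the abstract.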
arxiv_dataset-77591610.00553
A general theory of linear cosmological perturbations: bimetric theories gr-qc astro-ph.CO hep-th We implement the method developed in [1] to construct the most general parametrised action for linear cosmological perturbations of bimetric theories of gravity. Specifically, we consider perturbations around a homogeneous and isotropic background, and identify the complete form of the action invariant under diffeomorphism transformations, as well as the number of free parameters characterising this cosmological class of theories. We discuss, in detail, the case without derivative interactions, and compare our results with those found in massive bigravity.
arxiv topic:gr-qc astro-ph.CO hep-th
arxiv_dataset-77601610.00653
Phase transitions in distributed control systems with multiplicative noise cond-mat.stat-mech cond-mat.dis-nn cs.SY math.OC Contemporary technological challenges often involve many degrees of freedom in a distributed or networked setting. Three aspects are notable: the variables are usually associated with the nodes of a graph with limited communication resources, hindering centralized control; the communication is subjected to noise; and the number of variables can be very large. These three aspects make tools and techniques from statistical physics particularly suitable for the performance analysis of such networked systems in the limit of many variables (analogous to the thermodynamic limit in statistical physics). Perhaps not surprisingly, phase-transition-like phenomena appear in these systems, where a sharp change in performance can be observed with a smooth parameter variation, with the change becoming discontinuous or singular in the limit of infinite system size. In this paper we analyze the so-called network consensus problem, prototypical of the above considerations, that has been previously analyzed mostly in the context of additive noise. We show that qualitatively new phase-transition-like phenomena appear for this problem in the presence of multiplicative noise. Depending on dimensions and on the presence or absence of a conservation law, the system performance shows a discontinuous change at a threshold value of the multiplicative noise strength. In the absence of the conservation law, and for graph spectral dimension less than two, the multiplicative noise threshold (the stability margin of the control problem) is zero. This is reminiscent of the absence of robust controllers for certain classes of centralized control problems. 
Although our study involves a toy model, we believe that the qualitative features are generic, with implications for the robust stability of distributed control systems, as well as the effect of roundoff errors and communication noise on distributed algorithms.
arxiv topic:cond-mat.stat-mech cond-mat.dis-nn cs.SY math.OC
arxiv_dataset-77611610.00753
Maxwell's equations as a special case of deformation of a solid lattice in Euler's coordinates physics.gen-ph It is shown that the set of equations known as Maxwell's equations perfectly describes two very different systems: (1) the usual electromagnetic phenomena in vacuum or in matter and (2) the deformation of isotropic solid lattices containing topological defects such as dislocations and disclinations, in the case of constant and homogeneous expansion. The analogy between these two physical systems is complete, as it is not restricted to one of the two pairs of Maxwell's equations in vacuum, but extends to both pairs as well as to the diverse phenomena of dielectric polarization and magnetization of matter, just as to the electrical charges and the electrical currents. The Eulerian approach to the solid lattice developed here includes Maxwell's equations as a special case, since it stems from a tensor theory, which is reduced to a vector one by contraction on the tensor indices. Considering the tensor aspect of the Eulerian solid lattice deformation theory, the analogy can be extended to physical phenomena other than electromagnetism, a point which is briefly discussed at the end of the paper.
arxiv topic:physics.gen-ph
arxiv_dataset-77621610.00853
Constrained Hitting Set and Steiner Tree in $SC_k$ and $2K_2$-free Graphs math.CO cs.DM \emph{Strictly Chordality-$k$ graphs ($SC_k$)} are graphs which are either cycle-free or in which every induced cycle is of length exactly $k, k \geq 3$. Strictly chordality-3 and strictly chordality-4 graphs are the well-known chordal and chordal bipartite graphs, respectively. For $k\geq 5$, their study was recently initiated in \cite{sadagopan}, where various structural and algorithmic results are reported. In this paper, we show that maximum independent set (MIS), minimum vertex cover, minimum dominating set, feedback vertex set (FVS), odd cycle transversal (OCT), even cycle transversal (ECT) and the Steiner tree problem are polynomial-time solvable on $SC_k$ graphs, $k\geq 5$. We next consider $2K_2$-free graphs and show that FVS, OCT, ECT and the Steiner tree problem are polynomial-time solvable on subclasses of $2K_2$-free graphs.
arxiv topic:math.CO cs.DM
arxiv_dataset-77631610.00953
Fast and Reliable Primary Frequency Reserves From Refrigerators with Decentralized Stochastic Control math.OC cs.SY Due to increasing shares of renewable energy sources, more frequency reserves are required to maintain power system stability. In this paper, we present a decentralized control scheme that allows a large aggregation of refrigerators to provide Primary Frequency Control (PFC) reserves to the grid based on local frequency measurements and without communication. The control is based on stochastic switching of refrigerators depending on the frequency deviation. We develop methods to account for typical lockout constraints of compressors and increased power consumption during the startup phase. In addition, we propose a procedure to dynamically reset the thermostat temperature limits in order to provide reliable PFC reserves, as well as a corrective temperature feedback loop to build robustness to biased frequency deviations. Furthermore, we introduce an additional randomization layer in the controller to account for thermostat resolution limitations, and finally, we modify the control design to account for refrigerator door openings. Extensive simulations with actual frequency signal data and with different aggregation sizes, load characteristics, and control parameters, demonstrate that the proposed controller outperforms a relevant state-of-the-art controller.
arxiv topic:math.OC cs.SY
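A caricature of the decentralized stochastic switching idea in this abstract can be sketched as follows. The gain `k`, the switching rule, and the nominal duty cycle are hypothetical illustration values, and none of the paper's lockout, startup, thermostat-reset, or door-opening mechanisms are modeled:

```python
import numpy as np

# Each refrigerator independently switches OFF with a probability that grows
# with the measured under-frequency, so the aggregate power drops without any
# communication between units.
rng = np.random.default_rng(1)
n_fridges, p_on, k = 50_000, 0.4, 5.0   # fleet size, nominal ON fraction, gain

def aggregate_on_fraction(delta_f):
    """Fraction of fridges still ON after one stochastic switching step."""
    on = rng.random(n_fridges) < p_on                  # nominal ON/OFF states
    p_switch_off = np.clip(k * max(0.0, -delta_f), 0.0, 1.0)
    stays_on = on & (rng.random(n_fridges) >= p_switch_off)
    return stays_on.mean()

frac_nominal = aggregate_on_fraction(0.0)    # no frequency deviation
frac_under = aggregate_on_fraction(-0.05)    # 50 mHz under-frequency
```

Because switching is randomized rather than threshold-based, the aggregate response is smooth in the frequency deviation even though each individual device is a binary load.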
arxiv_dataset-77641610.01053
Magnetic state of Nb(1-7nm)/Cu$_{30}$Ni$_{70}$(6nm) superlattices revealed by Polarized Neutron Reflectometry and SQUID magnetometry cond-mat.mes-hall We report results of a magnetic characterization of [Nb(x)/Cu$_{30}$Ni$_{70}$(6nm)]$_{20}$ (x=1-7nm) superlattices using Polarized Neutron Reflectometry (PNR) and SQUID magnetometry. The study has shown that the magnetic moment of the structures grows almost linearly from H = 0 to H$_{sat}$ = 1.3kOe, which is indirect evidence of antiferromagnetic (AF) coupling of the magnetic moments in neighbouring layers. PNR, however, did not detect any in-plane AF coupling. Taking into account the out-of-plane easy axis of the Cu$_{30}$Ni$_{70}$ layers, this may mean that only the out-of-plane components of the magnetic moments are AF coupled.
arxiv topic:cond-mat.mes-hall
arxiv_dataset-77651610.01153
Integration of higher IT education in Ukraine in the global IT-educational space cs.CY The article presents the results of a study of the current state of the higher IT education system in Ukraine. The problems of reforming the higher IT education system of Ukraine, in accordance with the commitments made by Ukraine upon ratification of the EU-Ukraine Agreement (Law of Ukraine N 1678-VII of September 16, 2014), are considered. An indicator of the presence or absence of real reform of the higher IT education system in Ukraine is identified. A comparative analysis of the lists of IT specialties in Ukrainian higher education in 2005 and 2015 against the corresponding lists adopted by the international higher IT education community is carried out. A discrepancy between the Ukrainian list of IT specialties and the international list is identified. We conclude that the list of IT specialties in Ukrainian higher education needs immediate correction in order to bring it into line with international standards. A series of actions is recommended that would lead to the solution of this problem.
arxiv topic:cs.CY
arxiv_dataset-77661610.01253
Some bivariate stochastic models arising from group representation theory math.PR math.CA The aim of this paper is to study some continuous-time bivariate Markov processes arising from group representation theory. The first component (level) can be either discrete (quasi-birth-and-death processes) or continuous (switching diffusion processes), while the second component (phase) will always be discrete and finite. The infinitesimal operators of these processes will be now matrix-valued (either a block tridiagonal matrix or a matrix-valued second-order differential operator). The matrix-valued spherical functions associated to the compact symmetric pair $(\mathrm{SU}(2)\times \mathrm{SU}(2), \mathrm{diag} \, \mathrm{SU}(2))$ will be eigenfunctions of these infinitesimal operators, so we can perform spectral analysis and study directly some probabilistic aspects of these processes. Among the models we study there will be rational extensions of the one-server queue and Wright-Fisher models involving only mutation effects.
arxiv topic:math.PR math.CA
arxiv_dataset-77671610.01353
Confidence regions for high-dimensional generalized linear models under sparsity stat.ME math.ST stat.TH We study asymptotically normal estimation and confidence regions for low-dimensional parameters in high-dimensional sparse models. Our approach is based on the $\ell_1$-penalized M-estimator which is used for construction of a bias corrected estimator. We show that the proposed estimator is asymptotically normal, under a sparsity assumption on the high-dimensional parameter, smoothness conditions on the expected loss and an entropy condition. This leads to uniformly valid confidence regions and hypothesis testing for low-dimensional parameters. The present approach is different in that it allows for treatment of loss functions that are not sufficiently differentiable, such as quantile loss, Huber loss or hinge loss functions. We also provide new results for estimation of the inverse Fisher information matrix, which is necessary for the construction of the proposed estimator. We formulate our results for general models under high-level conditions, but investigate these conditions in detail for generalized linear models and provide mild sufficient conditions. As particular examples, we investigate the case of quantile loss and Huber loss in linear regression and demonstrate the performance of the estimators in a simulation study and on real datasets from genome-wide association studies. We further investigate the case of logistic regression and illustrate the performance of the estimator on simulated and real data.
arxiv topic:stat.ME math.ST stat.TH
arxiv_dataset-77681610.01453
Gravitational Baryogenesis in Running Vacuum models gr-qc astro-ph.CO We study the gravitational baryogenesis mechanism for generating baryon asymmetry in the context of running vacuum models. Regardless of whether these models can produce a viable cosmological evolution, we demonstrate that they produce a non-zero baryon-to-entropy ratio even if the universe is filled with conformal matter. This is a key difference between running vacuum gravitational baryogenesis and the Einstein-Hilbert one, since in the latter case the predicted baryon-to-entropy ratio is zero. We consider two well-known and widely used running vacuum models and show that the resulting baryon-to-entropy ratio is compatible with the observational data. Moreover, we also show that the mechanism of gravitational baryogenesis may constrain the running vacuum models.
arxiv topic:gr-qc astro-ph.CO
arxiv_dataset-77691610.01553
Coordination of Heterogeneous Nonlinear Multi-Agent Systems with Prescribed Behaviors math.OC In this paper, we consider a coordination problem for a class of heterogeneous nonlinear multi-agent systems with a prescribed input-output behavior, represented by another input-driven system. In contrast to most existing multi-agent coordination results with an autonomous (virtual) leader, this formulation takes possible control inputs of the leader into consideration. First, coordination is achieved by utilizing a group of distributed observers, based on conventional assumptions of the model matching problem. Then, a fully distributed adaptive extension is proposed that does not use the input of this input-output behavior. An example is given to verify their effectiveness.
arxiv topic:math.OC
arxiv_dataset-77701610.01653
Decay Properties of Solutions to a 4-parameter Family of Wave Equations math.AP In this paper, persistence properties of solutions are investigated for a 4-parameter family ($k-abc$ equation) of evolution equations having $(k+1)$-degree non-linearities and containing as its integrable members the Camassa-Holm, the Degasperis-Procesi, Novikov and Fokas-Olver-Rosenau-Qiao equations. These properties will imply that strong solutions of the $k-abc$ equation will decay at infinity in the spatial variable provided that the initial data does. Furthermore, it is shown that the equation exhibits unique continuation for appropriate values of the parameters $k$, $a$, $b$, and $c$.
arxiv topic:math.AP
arxiv_dataset-77711610.01753
A general lower bound for collaborative tree exploration cs.DM cs.DS We consider collaborative graph exploration with a set of $k$ agents. All agents start at a common vertex of an initially unknown graph and need to collectively visit all other vertices. We assume agents are deterministic, vertices are distinguishable, moves are simultaneous, and we allow agents to communicate globally. For this setting, we give the first non-trivial lower bounds that bridge the gap between small ($k \leq \sqrt n$) and large ($k \geq n$) teams of agents. Remarkably, our bounds tightly connect to existing results in both domains. First, we significantly extend a lower bound of $\Omega(\log k / \log\log k)$ by Dynia et al. on the competitive ratio of a collaborative tree exploration strategy to the range $k \leq n \log^c n$ for any $c \in \mathbb{N}$. Second, we provide a tight lower bound on the number of agents needed for any competitive exploration algorithm. In particular, we show that any collaborative tree exploration algorithm with $k = Dn^{1+o(1)}$ agents has a competitive ratio of $\omega(1)$, while Dereniowski et al. gave an algorithm with $k = Dn^{1+\varepsilon}$ agents and competitive ratio $O(1)$, for any $\varepsilon > 0$ and with $D$ denoting the diameter of the graph. Lastly, we show that, for any exploration algorithm using $k = n$ agents, there exist trees of arbitrarily large height $D$ that require $\Omega(D^2)$ rounds, and we provide a simple algorithm that matches this bound for all trees.
arxiv topic:cs.DM cs.DS
arxiv_dataset-77721610.01853
$GW$100: a plane wave perspective for small molecules cond-mat.mtrl-sci In a recent work, van Setten and coworkers have presented a carefully converged $G_0W_0$ study of 100 closed shell molecules [J. Chem. Theory Comput. 11, 5665 (2015)]. For two different codes they found excellent agreement to within a few tens of meV if identical Gaussian basis sets were used. We inspect the same set of molecules using the projector augmented-wave method and the Vienna ab initio simulation package (VASP). For the ionization potential, the basis set extrapolated plane wave results agree very well with the Gaussian basis sets, often reaching better than 50 meV agreement. In order to achieve this agreement, we correct for finite basis set errors as well as errors introduced by periodically repeated images. For electron affinities below the vacuum level, differences between Gaussian basis sets and VASP are slightly larger. We attribute this to larger basis set extrapolation errors for the Gaussian basis sets. For quasiparticle (QP) resonances above the vacuum level, however, differences between VASP and Gaussian basis sets are found to be substantial. This is tentatively explained by insufficient basis set convergence of the Gaussian type orbital calculations as exemplified for selected test cases.
arxiv topic:cond-mat.mtrl-sci
arxiv_dataset-77731610.01953
The Future Internet of Things and Security of its Control Systems cs.CY cs.CR We consider the future cyber security of industrial control systems. As best as we can see, much of this future unfolds in the context of the Internet of Things (IoT). In fact, we envision that all industrial and infrastructure environments, and cyber-physical systems in general, will take a form reminiscent of what today is referred to as the IoT. The IoT is envisioned as a multitude of heterogeneous devices densely interconnected and communicating with the objective of accomplishing a diverse range of objectives, often collaboratively. One can argue that in the relatively near future, the IoT construct will subsume industrial plants, infrastructures, housing and other systems that today are controlled by ICS and SCADA systems. In IoT environments, cybersecurity will derive largely from system agility, moving-target defenses, cybermaneuvering, and other autonomous or semi-autonomous behaviors. Cybersecurity of the IoT may also benefit from new design methods for mixed-trusted systems, and from big data analytics -- predictive and autonomous.
arxiv topic:cs.CY cs.CR
arxiv_dataset-77741610.02053
Biorthogonal projected energies of a Gutzwiller similarity transformed Hamiltonian cond-mat.str-el We present a method incorporating biorthogonal orbital-optimization, symmetry projection, and double-occupancy screening with a non-unitary similarity transformation generated by the Gutzwiller factor $ n_{i\uparrow}n_{i\downarrow}$, and apply it to the Hubbard model. Energies are calculated with mean-field computational scaling with high-quality results comparable to coupled cluster singles and doubles. This builds on previous work performing similarity transformations with more general, two-body Jastrow-style correlators. The theory is tested on two-dimensional lattices ranging from small systems into the thermodynamic limit and is compared to available reference data.
arxiv topic:cond-mat.str-el
arxiv_dataset-77751610.02153
Distribution of singular values of random band matrices; Marchenko-Pastur law and more math.PR We consider the limiting spectral distribution of matrices of the form $\frac{1}{2b_{n}+1} (R + X)(R + X)^{*}$, where $X$ is an $n\times n$ random band matrix of bandwidth $b_{n}$ and $R$ is a non-random band matrix of bandwidth $b_{n}$. We show that the Stieltjes transform of the empirical spectral distribution (ESD) of such matrices converges to the Stieltjes transform of a non-random measure, and that the limiting Stieltjes transform satisfies an integral equation. For $R=0$, the integral equation yields the Stieltjes transform of the Marchenko-Pastur law.
arxiv topic:math.PR
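The $R=0$ case above can be probed numerically: the eigenvalues of $\frac{1}{2b+1} X X^{*}$ for a Gaussian band matrix should approximately follow the Marchenko-Pastur law with ratio one, whose support is $[0,4]$. A minimal sketch, where the matrix size and bandwidth are arbitrary illustration values (finite-size and boundary effects pull the mean eigenvalue slightly below one):

```python
import numpy as np

# Build an n x n Gaussian band matrix of bandwidth b and inspect the
# eigenvalues of the normalized sample-covariance-type matrix X X^T / (2b+1).
rng = np.random.default_rng(2)
n, b = 400, 50
i, j = np.indices((n, n))
band = np.abs(i - j) <= b                 # band mask: |i - j| <= b
X = rng.standard_normal((n, n)) * band
M = X @ X.T / (2 * b + 1)
eigs = np.linalg.eigvalsh(M)              # real spectrum of a symmetric PSD matrix
```

A histogram of `eigs` can then be compared against the Marchenko-Pastur density $\frac{1}{2\pi x}\sqrt{x(4-x)}$ on $[0,4]$.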
arxiv_dataset-77761610.02253
Performance analysis of multi-dimensional ESPRIT-type algorithms for arbitrary and strictly non-circular sources with spatial smoothing cs.IT math.IT Spatial smoothing is a widely used preprocessing scheme to improve the performance of high-resolution parameter estimation algorithms in case of coherent signals or if only a small number of snapshots is available. In this paper, we present a first-order performance analysis of the spatially smoothed versions of R-D Standard ESPRIT and R-D Unitary ESPRIT for sources with arbitrary signal constellations as well as R-D NC Standard ESPRIT and R-D NC Unitary ESPRIT for strictly second-order (SO) non-circular (NC) sources. The derived expressions are asymptotic in the effective signal-to-noise ratio (SNR), i.e., the approximations become exact for either high SNRs or a large sample size. Moreover, no assumptions on the noise statistics are required apart from zero mean and finite SO moments. We show that both R-D NC ESPRIT-type algorithms with spatial smoothing perform asymptotically identically in the high effective SNR regime. Generally, the performance of spatial smoothing based algorithms depends on the number of subarrays, which is a design parameter and needs to be chosen beforehand. In order to gain more insights into the optimal choice of the number of subarrays, we simplify the derived analytical R-D mean square error (MSE) expressions for the special case of a single source. The obtained MSE expression explicitly depends on the number of subarrays in each dimension, which allows us to analytically find the optimal number of subarrays for spatial smoothing. Based on this result, we additionally derive the maximum asymptotic gain from spatial smoothing and explicitly compute the asymptotic efficiency for this special case. All the analytical results are verified by simulations.
arxiv topic:cs.IT math.IT
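The role of spatial smoothing as a preprocessing step for coherent sources can be illustrated with a minimal numpy sketch: two fully coherent sources yield a rank-1 array covariance, which defeats subspace methods, and averaging the covariances of shifted subarrays restores rank two. The array geometry and spatial frequencies below are arbitrary illustration values, and the ESPRIT step itself is omitted:

```python
import numpy as np

# Uniform linear array, noiseless, two fully coherent sources (same waveform).
M, m = 10, 6                       # sensors, subarray size
L = M - m + 1                      # number of forward subarrays
mu1, mu2 = 0.5, 1.7                # spatial frequencies of the two sources

a = lambda mu: np.exp(1j * mu * np.arange(M))   # steering vector
v = a(mu1) + a(mu2)                # coherent superposition collapses to one vector
R = np.outer(v, v.conj())          # rank-1 "covariance" of the coherent mixture

# Forward spatial smoothing: average the covariances of the L shifted subarrays.
R_sm = sum(R[l:l + m, l:l + m] for l in range(L)) / L

rank_raw = np.linalg.matrix_rank(R, tol=1e-8)
rank_sm = np.linalg.matrix_rank(R_sm, tol=1e-8)
```

The shift-dependent phase factors decorrelate the two source contributions across subarrays, which is why the smoothed covariance regains the rank needed for subspace-based estimation.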
arxiv_dataset-77771610.02353
Phonon Optimized Potentials cond-mat.mtrl-sci Molecular dynamics (MD) simulations have been extensively used to study phonons and gain insight, but direct comparisons to experimental data are often difficult, due to a lack of empirical interatomic potentials (EIPs) for different systems. As a result, this issue has become a major barrier to realizing the promise associated with advanced atomistic level modeling techniques. Here, we present a general method for specifically optimizing EIPs from ab initio inputs for the study of phonon transport properties, thereby resulting in phonon optimized potentials (POPs). The method uses a genetic algorithm (GA) to directly fit to the key properties that determine whether or not the atomic level dynamics and most notably the phonon transport are described properly.
arxiv topic:cond-mat.mtrl-sci
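A minimal sketch of the genetic-algorithm fitting loop described above, with a Lennard-Jones energy curve standing in for the ab initio reference data. The population size, mutation scale, and fitness definition are arbitrary illustration choices, not those of the POPs method, which fits phonon-specific properties rather than a pair-energy curve:

```python
import numpy as np

rng = np.random.default_rng(3)

def lj(r, eps, sigma):
    """Lennard-Jones pair energy 4*eps*((sigma/r)^12 - (sigma/r)^6)."""
    x = (sigma / r) ** 6
    return 4.0 * eps * (x * x - x)

r_grid = np.linspace(0.9, 2.5, 40)
target = lj(r_grid, 1.0, 1.0)                # stand-in for ab initio reference data

def fitness(params):
    eps, sigma = params
    return -np.mean((lj(r_grid, eps, sigma) - target) ** 2)

pop = rng.uniform(0.5, 1.5, size=(40, 2))    # random initial (eps, sigma) population
for _ in range(60):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-10:]]          # elitism: keep the 10 fittest
    children = parents[rng.integers(0, 10, size=30)] # clone parents at random
    children = children + rng.normal(0.0, 0.02, size=children.shape)  # mutate
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(p) for p in pop])]     # should approach (1.0, 1.0)
```

In an actual POPs workflow the fitness would instead score phonon-relevant quantities against ab initio inputs, but the select-mutate-replace loop has the same shape.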
arxiv_dataset-77781610.02453
A construction of certain weak colimits and an exactness property of the 2-category of categories math.CT Given a 2-category $\mathcal{A}$, a $2$-functor $\mathcal{A} \overset {F} {\longrightarrow} \mathcal{C}at$ and a distinguished 1-subcategory $\Sigma \subset \mathcal{A}$ containing all the objects, a $\sigma$-cone for $F$ (with respect to $\Sigma$) is a lax cone such that the structural $2$-cells corresponding to the arrows of $\Sigma$ are invertible. The conical $\sigma$-limit is the universal (up to isomorphism) $\sigma$-cone. The notion of $\sigma$-limit generalises the well known notions of pseudo and lax limit. We consider the fundamental notion of a $\sigma$-filtered pair $(\mathcal{A}, \, \Sigma)$, which generalises the notion of 2-filtered 2-category. We give an explicit construction of $\sigma$-filtered $\sigma$-colimits of categories, a construction which allows computations with these colimits. We then state and prove a basic exactness property of the 2-category of categories, namely, that $\sigma$-filtered $\sigma$-colimits commute with finite weighted pseudo (or bi) limits. An important corollary of this result is that a $\sigma$-filtered $\sigma$-colimit of exact category-valued 2-functors is exact. This corollary is essential in the 2-dimensional theory of flat and pro-representable 2-functors, which we develop elsewhere.
arxiv topic:math.CT
arxiv_dataset-77791610.02553
Regularization by noise and flows of solutions for a stochastic heat equation math.PR Motivated by the regularization by noise phenomenon for SDEs we prove existence and uniqueness of the flow of solutions for the non-Lipschitz stochastic heat equation $$\frac{\partial u}{\partial t}=\frac12\frac{\partial^2 u}{\partial z^2} + b(u(t,z)) + \dot{W}(t,z), $$ where $\dot W$ is a space-time white noise on $\mathbb{R}_+\times\mathbb{R}$ and $b$ is a bounded measurable function on $\mathbb{R}$. As a byproduct of our proof we also establish the so-called path-by-path uniqueness for any initial condition in a certain class on the same set of probability one. This extends recent results of Davie (2007) to the context of stochastic partial differential equations.
arxiv topic:math.PR
arxiv_dataset-77801610.02653
Lasso-based forecast combinations for forecasting realized variances stat.AP Volatility forecasts are key inputs in financial analysis. While lasso-based forecasts have been shown to perform well in many applications, their use for obtaining volatility forecasts has not yet received much attention in the literature. Lasso estimators produce parsimonious forecast models. Our forecast combination approach hedges against the risk of selecting a wrong degree of model parsimony. Apart from the standard lasso, we consider several lasso extensions that account for the dynamic nature of the forecast model. We apply forecast-combined lasso estimators in a comprehensive forecasting exercise using realized-variance time series of ten major international stock market indices. We find the lasso extension "ordered lasso" to give the most accurate realized-variance forecasts. Multivariate forecast models, accounting for volatility spillovers between different stock markets, outperform univariate forecast models for longer forecast horizons.
arxiv topic:stat.AP
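The forecast-combination idea in the abstract above can be sketched as follows. This is a minimal illustration under invented assumptions, not the paper's procedure: a synthetic AR(1) series stands in for (log) realized variance, HAR-style lagged averages are used as hypothetical predictors, a plain coordinate-descent lasso replaces the paper's estimators, and forecasts across several arbitrarily chosen penalty strengths are equally weighted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a (log) realized-variance series: AR(1) persistence.
T = 500
rv = np.zeros(T)
for t in range(1, T):
    rv[t] = 0.7 * rv[t - 1] + rng.normal(scale=0.3)

# HAR-style predictors (assumed for illustration): daily, weekly, monthly
# lagged averages of the series.
X = np.array([[rv[t - 1], rv[t - 5:t].mean(), rv[t - 22:t].mean()]
              for t in range(22, T)])
y = rv[22:T]
X_tr, y_tr, X_te, y_te = X[:-50], y[:-50], X[-50:], y[-50:]

def lasso_fit(X, y, alpha, n_iter=200):
    """Plain coordinate-descent lasso (no intercept), for illustration only."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # Correlation of feature j with the partial residual excluding j,
            # followed by soft-thresholding.
            rho = X[:, j] @ (y - X @ beta + X[:, j] * beta[j])
            beta[j] = np.sign(rho) * max(abs(rho) - alpha * n, 0.0) / col_sq[j]
    return beta

# Hedge against a wrong degree of parsimony: combine forecasts from lassos
# with several penalty strengths by equal weighting.
alphas = [1e-4, 1e-3, 1e-2]
forecasts = np.column_stack([X_te @ lasso_fit(X_tr, y_tr, a) for a in alphas])
combined = forecasts.mean(axis=1)
mse = np.mean((combined - y_te) ** 2)
```

The equal-weight combination is the simplest choice; the paper's actual weighting scheme and its dynamic lasso extensions (e.g. the "ordered lasso") are not reproduced here.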
arxiv_dataset-77811610.02753
Local M-estimation with Discontinuous Criterion for Dependent and Limited Observations math.ST stat.ME stat.TH This paper examines asymptotic properties of local M-estimators under three sets of high-level conditions. These conditions are sufficiently general to cover the minimum volume predictive region, conditional maximum score estimator for a panel data discrete choice model, and many other widely used estimators in statistics and econometrics. Specifically, they allow for discontinuous criterion functions of weakly dependent observations, which may be localized by kernel smoothing and contain nuisance parameters whose dimension may grow to infinity. Furthermore, the localization can occur around parameter values rather than around a fixed point and the observation may take limited values, which leads to set estimators. Our theory produces three different nonparametric cube root rates and enables valid inference for the local M-estimators, building on novel maximal inequalities for weakly dependent data. Our results include the standard cube root asymptotics as a special case. To illustrate the usefulness of our results, we verify our conditions for various examples such as the Hough transform estimator with diminishing bandwidth, maximum score-type set estimator, and many others.
arxiv topic:math.ST stat.ME stat.TH
arxiv_dataset-77821610.02853
Minimal energy solutions to the fractional Lane-Emden system, I: Existence and singularity formation math.AP This is the first of two papers which study the asymptotic behavior of minimal energy solutions to the fractional Lane-Emden system in a smooth bounded domain $\Omega$ \[(-\Delta)^s u = v^p, \quad (-\Delta)^s v = u^q \text{ in } \Omega \quad \text{and} \quad u = v = 0 \text{ on } \partial \Omega \quad \text{for } 0 < s < 1\] under the assumption that the subcritical pair $(p,q)$ approaches the critical Sobolev hyperbola. If $p = 1$, the above problem is reduced to the subcritical higher-order fractional Lane-Emden equation with the Navier boundary condition \[(-\Delta)^s u = u^{\frac{n+2s}{n-2s}-\epsilon} \text{ in } \Omega \quad \text{and} \quad u = (-\Delta)^{s \over 2} u = 0 \quad \text{for } 1 < s < 2.\] The main objective of this paper is to deduce the existence of minimal energy solutions, and to examine their (normalized) pointwise limits provided that $\Omega$ is convex. As a by-product of our study, a new approach for the existence of an extremal function for the Hardy-Littlewood-Sobolev inequality is provided.
arxiv topic:math.AP
arxiv_dataset-77831610.02953
Comments on Dumitrescu's "A Selectable Sloppy Heap" cs.DS Dumitrescu [arXiv:1607.07673] describes a data structure referred to as a Selectable Sloppy Heap. We present a simplified approach, and also point out aspects of Dumitrescu's exposition that require scrutiny.
arxiv topic:cs.DS
arxiv_dataset-77841610.03053
Solid-state neutron detectors based on thickness scalable hexagonal boron nitride physics.ins-det This paper reports on the device processing and characterization of hexagonal boron nitride (hBN) based solid-state thermal neutron detectors, where hBN thickness varied from 2.5 to 15 microns. These natural hBN epilayers (with 19.9% B-10) were grown by a low pressure chemical vapor deposition process. Complete dry processing was adopted for the fabrication of these metal-semiconductor-metal (MSM) configuration detectors. These detectors showed intrinsic thermal neutron detection efficiency values of 0.86%, 2.4%, 3.15%, and 4.71% for natural hBN thickness values of 2.5, 7.5, 10, and 15 microns, respectively. Measured efficiencies are very close (more than 92%) to the theoretical maximum efficiencies for corresponding hBN thickness values for these detectors. This clearly shows the hBN thickness scalability of these detectors. A 15-micron thick hBN based MSM detector is expected to yield an efficiency of 21.4%, if enriched hBN (with ~100% B-10) is used instead of natural hBN. These results demonstrate that the fabrication of hBN thickness scalable highly efficient thermal neutron detectors is possible.
arxiv topic:physics.ins-det
arxiv_dataset-77851610.03153
Determination of a structural break in a mean-reverting process math.ST stat.TH Determining accurately when regime and structural changes occur in various time-series data is critical in many social and natural sciences. We develop two consistent estimation techniques for locating the change point under the framework of a generalised version of the Ornstein-Uhlenbeck process, and further show their equivalence. Our methods are based on the least sum of squared errors and the maximum log-likelihood approaches. The case where both the existence and the location of the change point are unknown is investigated, and an informational methodology is employed to address these issues. Numerical illustrations are presented to assess the performance of the methods.
arxiv topic:math.ST stat.TH
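The least-sum-of-squared-error idea in the abstract above can be illustrated on a discretized mean-reverting series whose long-run mean shifts at a known time. This is a sketch with invented parameters, not the authors' estimator for the generalised Ornstein-Uhlenbeck framework: it simply scans candidate break points and minimizes the pooled squared error around segment means.

```python
import numpy as np

rng = np.random.default_rng(1)

# Euler-discretized mean-reverting series (invented parameters) whose
# long-run mean jumps from 0 to 2 at t = 120.
n, kappa, true_cp = 200, 0.3, 120
mu = np.where(np.arange(n) < true_cp, 0.0, 2.0)
x = np.zeros(n)
for t in range(1, n):
    x[t] = x[t - 1] + kappa * (mu[t] - x[t - 1]) + rng.normal(scale=0.2)

def lss_change_point(x, min_seg=10):
    """Least-sum-of-squared-error estimate of a single break in the mean:
    try every admissible split and keep the one minimizing total SSE."""
    best_k, best_sse = None, np.inf
    for k in range(min_seg, len(x) - min_seg):
        sse = (((x[:k] - x[:k].mean()) ** 2).sum()
               + ((x[k:] - x[k:].mean()) ** 2).sum())
        if sse < best_sse:
            best_k, best_sse = k, sse
    return best_k

cp_hat = lss_change_point(x)
```

Because the process reverts toward its new mean gradually, the mean-split estimate typically lands a few steps after the true jump; the paper's framework accounts for the reversion dynamics that this toy scan ignores.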
arxiv_dataset-77861610.03253
A quantum spectrum analyzer enhanced by a nuclear spin memory quant-ph cond-mat.mes-hall We realize a two-qubit sensor designed for achieving high spectral resolution in quantum sensing experiments. Our sensor consists of an active "sensing qubit" and a long-lived "memory qubit", implemented by the electronic and the nitrogen-15 nuclear spins of a nitrogen-vacancy center in diamond, respectively. Using state storage times of up to 45 ms, we demonstrate spectroscopy of external ac signals with a line width of 19 Hz (~2.9 ppm) and of carbon-13 nuclear magnetic resonance (NMR) signals with a line width of 190 Hz (~ 74 ppm). This represents an up to 100-fold improvement in spectral resolution compared to measurements without nuclear memory.
arxiv topic:quant-ph cond-mat.mes-hall
arxiv_dataset-77871610.03353
Heegaard Floer invariants in codimension one math.GT Using Heegaard Floer homology, we construct a numerical invariant for any smooth, oriented $4$-manifold $X$ with the homology of $S^1 \times S^3$. Specifically, we show that for any smoothly embedded $3$-manifold $Y$ representing a generator of $H_3(X)$, a suitable version of the Heegaard Floer $d$ invariant of $Y$, defined using twisted coefficients, is a diffeomorphism invariant of $X$. We show how this invariant can be used to obstruct embeddings of certain types of $3$-manifolds, including those obtained as a connected sum of a rational homology $3$-sphere and any number of copies of $S^1 \times S^2$. We also give similar obstructions to embeddings in certain open $4$-manifolds, including exotic $\mathbb{R}^4$s.
arxiv topic:math.GT
arxiv_dataset-77881610.03453
High Energy QCD at NLO: from light-cone wave function to JIMWLK evolution hep-ph hep-th nucl-th Soft components of the light cone wave-function of a fast moving projectile hadron are computed in perturbation theory to third order in the QCD coupling constant. At this order, the Fock space of the soft modes consists of one-gluon, two-gluon, and quark-antiquark states. The hard component of the wave-function acts as a non-Abelian background field for the soft modes and is represented by a valence charge distribution that accounts for non-linear density effects in the projectile. When scattered off a dense target, the diagonal element of the S-matrix reveals the Hamiltonian of high energy evolution, the JIMWLK Hamiltonian. This way we provide a new direct derivation of the JIMWLK Hamiltonian at the Next-to-Leading Order.
arxiv topic:hep-ph hep-th nucl-th
arxiv_dataset-77891610.03553
Long Term Sunspot Cycle Phase Coherence with Periodic Phase Disruptions astro-ph.SR In 1965 Paul D. Jose published his discovery that both the motion of the Sun about the center of mass of the solar system and periods comprising eight Hale magnetic sunspot cycles with a mean period of ~22.37 years have a matching periodicity of ~179 years. We have investigated the implied link between solar barycentric torque cycles and sunspot cycles and have found that the unsigned solar torque values from 1610 to 2057 are consistently phase and magnitude coherent in ~179 year Jose Cycles. We are able to show that there is also a surprisingly high degree of sunspot cycle phase coherence for times of minima, in addition to magnitude correlation of peaks, between the nine Schwabe sunspot cycles of 1878.8 to 1976.1 (SC12 through SC20) and those of 1699 to 1798.3 (SC[-5] through SC4). We further show that the remaining seven Schwabe cycles in each ~179 year cycle are non-coherent. In addition we have analyzed the empirical solar motion triggers of both sunspot cycle phase coherence and phase disruption, from which we conclude that sunspot cycles SC28 through SC35 (2057 to 2143) will be phase coherent at times of minima and amplitude correlated at maxima with SC12 through SC19 (1878.8-1964.8). The resulting predicted start times +/- 0.9 year, 1 sigma, of future sunspot cycles SC28 to SC36 are tabulated.
arxiv topic:astro-ph.SR
arxiv_dataset-77901610.03653
Dynamic patterns of overexploitation in fisheries q-bio.PE Understanding the overfishing phenomenon and regulating fishing quotas is a major global challenge for the 21st Century, both in terms of providing food for humankind and of preserving ocean ecosystems. However, fishing is a complex economic activity, affected not just by overfishing but also by such factors as pollution, technology, financial factors and more. For this reason, it is often difficult to state with complete certainty that overfishing is the cause of the decline of a fishery. In this study, we developed a simple dynamic model based on the earlier, well-known Lotka-Volterra or prey-predator model. To describe exploitation patterns, we assume that the fish stock and the fishing industry are coupled stock variables in the model and that they dynamically affect each other, with the fishing yield proportional to both the fishing capital and the fish stock. The model is based on the concept that the fishing industry acts as the predator of the resource and that its growth and subsequent decline is directly related to the abundance of the fish stock. If the model can be fitted to historical data relative to specific fisheries, then it is a strong indication that the fishing industry is strongly affected by the magnitude of the fish stock and that, in particular, the decline of the yield and the decline of the stock are linked to each other. The model does not pretend to be a general description of the fishing industry in all its varied forms; however, the data reported here show that the model can indeed qualitatively describe several historical cases of the collapse of fisheries. The model can also be used as a qualitative guide to understand the behavior of several other fisheries. These results indicate that one of the main factors causing the present crisis of the world's fisheries is the overexploitation of fish stocks.
arxiv topic:q-bio.PE
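The coupled stock-capital dynamics described in the abstract above can be sketched with a minimal prey-predator simulation. All parameter values here are invented for illustration and are not the ones fitted in the paper: the fish stock grows logistically, the yield is proportional to both capital and stock, and the fishing capital grows with reinvested yield and decays otherwise.

```python
import numpy as np

def simulate_fishery(r=0.5, K=1.0, q=1.2, e=0.3, d=0.2,
                     s0=1.0, c0=0.05, dt=0.01, steps=20000):
    """Prey-predator sketch of a fishery: s = fish stock, c = fishing capital.
    Yield is proportional to both capital and stock (q * s * c); capital
    grows with reinvested yield (fraction e) and depreciates at rate d.
    Forward-Euler integration; parameters are illustrative only."""
    s, c = s0, c0
    traj = []
    for _ in range(steps):
        harvest = q * s * c
        s += dt * (r * s * (1 - s / K) - harvest)   # logistic growth - catch
        c += dt * (e * harvest - d * c)             # reinvestment - decay
        traj.append((s, c, harvest))
    return np.array(traj)

traj = simulate_fishery()
peak_t = int(traj[:, 2].argmax())
```

With these numbers the capital overshoots while the stock is still abundant, so the yield rises to a transient peak and then declines toward a lower equilibrium: a qualitative overexploitation pattern of the kind the abstract describes, not a fitted reproduction of any historical fishery.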
arxiv_dataset-77911610.03753
Evidence for a spinon Fermi surface in the triangular S=1 quantum spin liquid Ba$_3$NiSb$_2$O$_9$ cond-mat.str-el Inelastic neutron scattering is used to study the low-energy magnetic excitations in the spin-1 triangular lattice of the 6H-B phase of Ba$_3$NiSb$_2$O$_9$. We study two powder samples: Ba$_3$NiSb$_2$O$_9$ synthesized under high pressure and Ba$_{2.5}$Sr$_{0.5}$NiSb$_2$O$_9$ in which chemical pressure stabilizes the 6H-B structure. The measured excitation spectra show broad gapless and nondispersive continua at characteristic wave vectors. Our data rules out most theoretical scenarios that have previously been proposed for this phase, and we find that it is well described by an exotic quantum spin liquid with three flavors of unpaired fermionic spinons, forming a large spinon Fermi surface.
arxiv topic:cond-mat.str-el
arxiv_dataset-77921610.03853
IMF and [Na/Fe] abundance ratios from optical and NIR Spectral Features in Early-type Galaxies astro-ph.GA We present a joint analysis of the four most prominent sodium-sensitive features (NaD, NaI8190, NaI1.14, and NaI2.21), in the optical and Near-Infrared spectral range, of two nearby, massive (sigma~300km/s), early-type galaxies (named XSG1 and XSG2). Our analysis relies on deep VLT/X-Shooter long-slit spectra, along with newly developed stellar population models, allowing for [Na/Fe] variations, up to 1.2dex, over a wide range of age, total metallicity, and IMF slope. The new models show that the response of the Na-dependent spectral indices to [Na/Fe] is stronger when the IMF is bottom heavier. For the first time, we are able to match all four Na features in the central regions of massive early-type galaxies, finding an overabundance of [Na/Fe], in the range 0.5-0.7dex, and a bottom-heavy IMF. Therefore, individual abundance variations cannot be fully responsible for the trends of gravity-sensitive indices, strengthening the case towards a non-universal IMF. Given current limitations of theoretical atmosphere models, our [Na/Fe] estimates should be taken as upper limits. For XSG1, where line strengths are measured out to 0.8Re, the radial trend of [Na/Fe] is similar to [Mg/Fe] and [C/Fe], being constant out to 0.5Re, and decreasing by 0.2-0.3dex at 0.8Re, without any clear correlation with local metallicity. Such a result seems to be in contrast with the predicted increase of Na nucleosynthetic yields from AGB stars and TypeII SNe. For XSG1, the Na-inferred IMF radial profile is consistent, within the errors, with that derived from TiO features and the Wing-Ford band, presented in a recent paper.
arxiv topic:astro-ph.GA
arxiv_dataset-77931610.03953
Excitons in solids from time-dependent density-functional theory: Assessing the Tamm-Dancoff approximation cond-mat.mtrl-sci Excitonic effects in solids can be calculated using the Bethe-Salpeter equation (BSE) or the Casida equation of time-dependent density-functional theory (TDDFT). In both methods, the Tamm-Dancoff approximation (TDA), which decouples excitations and de-excitations, is widely used to reduce computational cost. Here, we study the effect of the TDA on exciton binding energies of solids obtained from the Casida equation using long-range corrected (LRC) exchange-correlation kernels. We find that the TDA underestimates TDDFT-LRC exciton binding energies of semiconductors slightly, but those of insulators significantly (i.e., by more than 100%), and thus it is essential to solve the full Casida equation to describe strongly bound excitons. These findings are relevant in the ongoing search for accurate and efficient TDDFT approaches for excitons.
arxiv topic:cond-mat.mtrl-sci
arxiv_dataset-77941610.04053
Relaxation of charge in monolayer graphene: fast non-linear diffusion vs Coulomb effects cond-mat.mes-hall Pristine monolayer graphene exhibits very poor screening because the density of states vanishes at the Dirac point. As a result, charge relaxation is controlled by the effects of zero-point motion (rather than by the Coulomb interaction) over a wide range of parameters. Combined with the fact that graphene possesses finite intrinsic conductivity, this leads to a regime of relaxation described by a non-linear diffusion equation with a diffusion coefficient that diverges at zero charge density. Some consequences of this fast diffusion are self-similar superdiffusive regimes of relaxation, the development of a charge depleted region at the interface between electron- and hole-rich regions, and finite extinction times for periodic charge profiles.
arxiv topic:cond-mat.mes-hall
arxiv_dataset-77951610.04153
Aminated TiO2 nanotube as a Photoelectrochemical Water Splitting photoanode cond-mat.mtrl-sci The present work reports on the enhancement of the TiO2 nanotube photoelectrochemical water-splitting rate by decorating the nanostructure with an amine layer in a hydrothermal process using diethylenetriamine (DETA). The amine-coated TiO2 tubes show a stable improvement of the photoresponse in both the UV and visible light spectrum and, under hydrothermal conditions, a 4-fold increase of the photoelectrochemical water-splitting rate is observed. From intensity-modulated photocurrent spectroscopy (IMPS) measurements, significantly faster electron transport times are observed, indicating a surface-passivating effect of the N-decoration.
arxiv topic:cond-mat.mtrl-sci
arxiv_dataset-77961610.04253
On Near Perfect Numbers math.NT The study of perfect numbers (numbers which equal the sum of their proper divisors) goes back to antiquity, and is responsible for some of the oldest and most popular conjectures in number theory. We investigate a generalization introduced by Pollack and Shevelev: $k$-near-perfect numbers. These are examples of the well-known pseudoperfect numbers first defined by Sierpi\'nski, and are numbers such that the sum of all but at most $k$ of their proper divisors equals the number. We establish their asymptotic order for all integers $k\ge 4$, as well as some properties of related quantities.
arxiv topic:math.NT
arxiv_dataset-77971610.04353
Products of Ideals and Jet Schemes math.AC math.AG In the present paper, we give a full description of the jet schemes of the polynomial ideal $\left( x_1\ldots x_n \right) \in k[x_1, \ldots, x_n]$ over a field of zero characteristic. We use this description to answer questions about products and intersections of ideals emerged recently in algorithmic studies of algebraic differential equations.
arxiv topic:math.AC math.AG
arxiv_dataset-77981610.04453
Note on recursion relations for the $\mathcal{Q}$-cut representation hep-th In this note, we study the $\mathcal{Q}$-cut representation by combining it with BCFW deformation. As a consequence, the one-loop integrand is expressed in terms of a recursion relation, i.e., $n$-point one-loop integrand is constructed using tree-level amplitudes and $m$-point one-loop integrands with $m\leq n-1$. By giving explicit examples, we show that the integrand from the recursion relation is equivalent to that from Feynman diagrams or the original $\mathcal{Q}$-cut construction, up to scale free terms.
arxiv topic:hep-th
arxiv_dataset-77991610.04553
Optical phase conjugation with less than a photon per degree of freedom physics.optics We demonstrate experimentally that optical phase conjugation can be used to focus light through strongly scattering media even when far less than a photon per optical degree of freedom is detected. We found that the best achievable intensity contrast is equal to the total number of detected photons, as long as the resolution of the system is high enough. Our results demonstrate that phase conjugation can be used even when the photon budget is extremely low, such as in high-speed focusing through dynamic media, or imaging deep inside tissue.
arxiv topic:physics.optics