Over the last twenty-five years, advances in the collection and analysis of fMRI data have enabled new insights into the brain basis of human health and disease. Individual behavioral variation can now be visualized at a neural level as patterns of connectivity among brain regions. Functional brain imaging is enhancing our understanding of clinical psychiatric disorders by revealing ties between regional and network abnormalities and psychiatric symptoms. Initial success in this arena has recently motivated the collection of larger datasets, which are needed to leverage fMRI to generate brain-based biomarkers that support the development of precision medicines. Despite methodological advances and enhanced computational power, evaluating the quality of fMRI scans remains a critical step in the analytical framework. Before analysis can be performed, expert reviewers visually inspect raw scans and preprocessed derivatives to determine the viability of the data. This quality control (QC) process is labor-intensive, and the inability to automate it at large scale has proven to be a limiting factor in clinical neuroscience fMRI research. We present a novel method for automating the QC of fMRI scans. We train machine learning classifiers using features derived from brain MR images to predict the "quality" of those images, based on the ground truth of an expert's opinion. We emphasize the importance of these classifiers' ability to generalize their predictions across data from different studies. To address this, we propose a novel approach entitled "fMRI preprocessing log mining for automated, generalizable quality control" (FLAG-QC), in which features derived from mining runtime logs are used to train the classifier. We show that classifiers trained on FLAG-QC features perform much better (AUC = 0.79) than previously proposed feature sets (AUC = 0.56) when testing their ability to generalize across studies.
arxiv:1912.10127
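The AUC figures quoted above have a direct rank-based reading: AUC is the probability that the classifier scores a randomly chosen usable scan above a randomly chosen unusable one. The sketch below is a generic illustration of that computation, not the FLAG-QC pipeline itself.

```python
def auc(scores, labels):
    """Area under the ROC curve via pairwise comparison.

    scores: classifier outputs (higher = more likely a 'good' scan)
    labels: ground-truth expert ratings (1 = usable, 0 = unusable)
    """
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    # Count correctly ordered (usable, unusable) pairs; ties count half.
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfect ordering gives AUC = 1.0; chance-level ordering gives 0.5.
print(auc([0.9, 0.4, 0.6, 0.2], [1, 1, 0, 0]))  # → 0.75
```
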
Image inpainting is an essential task for multiple practical applications like object removal and image editing. Deep GAN-based models greatly improve the inpainting performance in structures and textures within the hole, but might also generate unexpected artifacts like broken structures or color blobs. Users perceive these artifacts when judging the effectiveness of inpainting models, and retouch these imperfect areas to inpaint again in a typical retouching workflow. Inspired by this workflow, we propose a new learning task of automatic segmentation of inpainting perceptual artifacts, and apply the model to inpainting model evaluation and iterative refinement. Specifically, we first construct a new inpainting artifacts dataset by manually annotating perceptual artifacts in the results of state-of-the-art inpainting models, and train advanced segmentation networks on this dataset to reliably localize inpainting artifacts within inpainted images. Second, we propose a new interpretable evaluation metric called the perceptual artifact ratio (PAR), the ratio of objectionable inpainted regions to the entire inpainted area; PAR demonstrates a strong correlation with real user preference. Finally, we apply the generated masks for iterative image inpainting by combining our approach with multiple recent inpainting methods. Extensive experiments demonstrate a consistent decrease of artifact regions and a consistent improvement of inpainting quality across the different methods.
arxiv:2208.03357
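The PAR metric described above has a simple pixel-level form: the fraction of the inpainted (hole) region that is flagged as a perceptual artifact. A minimal sketch on boolean masks follows; the mask names are illustrative, not taken from the paper's code.

```python
def perceptual_artifact_ratio(hole_mask, artifact_mask):
    """PAR = (artifact pixels inside the hole) / (all hole pixels).

    Both masks are 2D lists of booleans of the same shape:
    hole_mask marks the inpainted region, artifact_mask the
    pixels flagged as objectionable by the segmentation model.
    """
    hole = sum(sum(row) for row in hole_mask)
    artifacts = sum(
        sum(h and a for h, a in zip(hrow, arow))
        for hrow, arow in zip(hole_mask, artifact_mask)
    )
    return artifacts / hole if hole else 0.0

hole = [[True, True], [True, True]]
bad = [[True, False], [False, False]]
print(perceptual_artifact_ratio(hole, bad))  # → 0.25
```

A lower PAR after a refinement pass indicates that re-inpainting the flagged regions shrank the objectionable area.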
In this article, we propose the new concept of a bilayer stacking A-type altermagnet (BSAA), in which two identical ferromagnetic monolayers are stacked with antiferromagnetic coupling to form a two-dimensional A-type altermagnet. By solving the stacking model, we derive all BSAAs for all layer groups and draw three key conclusions: (1) only 17 layer groups can realize intrinsic A-type altermagnetism, and all 2D A-type altermagnets must belong to these 17 layer groups, which will be helpful in the search for 2D A-type altermagnets; (2) it is impossible to connect the two sublattices of a BSAA using $S_{3z}$ or $S_{6z}$, a constraint that also applies to all 2D altermagnets; (3) $C_{2\alpha}$ is a general stacking operation to generate a BSAA for an arbitrary monolayer. Our theory not only explains the previously reported twisted-bilayer altermagnets, but also provides more possibilities for generating A-type altermagnets, significantly broadening the range of candidate materials for 2D altermagnets. Based on conclusion (1), bilayer NiZrCl$_6$ is predicted to exhibit intrinsic A-type altermagnetism. Additionally, we use twisted-bilayer NiCl$_2$, previously reported in the literature, as a second example of a BSAA. Furthermore, utilizing symmetry analysis and first-principles calculations, we scrutinize their spin-momentum locking characteristics to substantiate their altermagnetic properties.
arxiv:2407.15097
Cystic Fibrosis Foundation in order to expand the Atlanta cystic fibrosis research and development program. In 2015, the two universities received a five-year, $2.9 million grant from the National Science Foundation (NSF) to create new bachelor's, master's, and doctoral degree programs and concentrations in healthcare robotics, the first program of its kind in the southeastern United States. The Georgia Tech Panama Logistics Innovation & Research Center is an initiative between the H. Milton Stewart School of Industrial and Systems Engineering, the Ecuador National Secretariat of Science and Technology, and the government of Panama that aims to enhance Panama's logistics capabilities and performance through a number of research and education initiatives. The center is creating models of country-level logistics capabilities that will support the decision-making process for future investments and trade opportunities in the growing region, and has established dual degree programs between the University of Panama, other Panamanian universities, and Georgia Tech. A similar center in Singapore, the Centre for Next Generation Logistics, was established in 2015 and is a collaboration between Georgia Tech and the National University of Singapore. The center will work closely with government agencies and industry to perform research in logistics and supply chain systems for translation into innovations and commercialization, to achieve transformative economic and societal impact.

=== Industry connections ===

Georgia Tech maintains close ties to the industrial world. Many of these connections are made through Georgia Tech's cooperative education and internship programs.
Georgia Tech's Division of Professional Practice (DoPP), established in 1912 as the Georgia Institute of Technology Cooperative Division, operates the largest and fourth-oldest cooperative education program in the United States, and is accredited by the Accreditation Council for Cooperative Education. The graduate cooperative education program, established in 1983, is the largest such program in the United States. It allows graduate students pursuing master's degrees or doctorates in any field to spend a maximum of two consecutive semesters working full- or part-time with employers. The undergraduate professional internship program enables undergraduate students (typically juniors or seniors) to complete a one- or two-semester internship with employers. The work abroad program hosts a variety of cooperative education and internship experiences for upperclassmen and graduate students seeking international employment and cross-cultural experiences. While all four programs are voluntary, they consistently attract high numbers of students (more than 3,000 at last count). Around 1,000 businesses and organizations hire these students, who collectively earn $20 million per year. Georgia Tech's cooperative education and internship programs have been externally recognized
https://en.wikipedia.org/wiki/Georgia_Tech
A smooth bounded pseudoconvex domain in two complex variables is of finite type if and only if the number of eigenvalues of the $\bar{\partial}$-Neumann Laplacian that are less than or equal to $\lambda$ has at most polynomial growth as $\lambda$ goes to infinity.
arxiv:math/0508475
We present measurements of strange hadron elliptic flow at mid-rapidity in Au+Au collisions at $\sqrt{s_{NN}}$ = 7.7-200 GeV, using data taken with the STAR detector in the years 2010 and 2011. The transverse momentum and collision centrality dependence of elliptic flow is presented. At intermediate transverse momentum, the $\Omega$ baryon and $\phi$ meson show a baryon-meson separation effect similar to that of the proton and pion for minimum-bias Au+Au collisions at $\sqrt{s_{NN}}$ = 200 GeV. This indicates the formation of collective flow in the early partonic phase. The separation between baryons and mesons at intermediate transverse momentum decreases with decreasing beam energy and almost disappears at $\sqrt{s_{NN}} \leq$ 11.5 GeV, indicating that hadronic interactions are dominant at the lower beam energies. We observe a difference in elliptic flow between particles and anti-particles, and this difference increases with decreasing beam energy. The differences are larger for baryons than for mesons. The relative difference between particle and anti-particle elliptic flow is larger in central collisions than in peripheral ones.
arxiv:1509.04300
In this paper, we focus on solving an important class of nonconvex optimization problems that includes many problems arising, for example, in signal processing over networked multi-agent systems and distributed learning over networks. Motivated by applications in which the local objective function is the sum of a smooth but possibly nonconvex part and a non-smooth but convex part, subject to a linear equality constraint, this paper proposes a proximal zeroth-order primal-dual algorithm (PZO-PDA) that accounts for the information structure of the problem. The algorithm utilizes only zeroth-order information (i.e., functional values) of the smooth functions, yet retains flexibility for applications in which only noisy information about the objective function is accessible and classical methods cannot be applied. We prove convergence and a rate of convergence for PZO-PDA. Numerical experiments are provided to validate the theoretical results.
arxiv:1810.10085
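The zeroth-order idea above, using only function values in place of analytic gradients, can be illustrated with a central-difference gradient estimator. This is a generic sketch of a zeroth-order oracle, not the specific estimator analyzed for PZO-PDA.

```python
def zo_gradient(f, x, mu=1e-6):
    """Estimate the gradient of f at x using only function evaluations.

    Each coordinate is perturbed by +/- mu (central difference), so
    2 * len(x) evaluations of f replace one analytic gradient.
    """
    grad = []
    for i in range(len(x)):
        xp = list(x); xp[i] += mu
        xm = list(x); xm[i] -= mu
        grad.append((f(xp) - f(xm)) / (2 * mu))
    return grad

# Smooth test objective: f(x) = x0^2 + 3*x1, true gradient (2*x0, 3).
f = lambda x: x[0] ** 2 + 3 * x[1]
g = zo_gradient(f, [2.0, 5.0])
print(g)  # ≈ [4.0, 3.0]
```
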
Open heavy-flavour hadrons are a powerful tool to investigate the properties of the high-density medium created in heavy-ion collisions at high energies, as they come from the hadronization of heavy quarks. The latter are created in the early stage of the interaction and experience the whole collision history. Heavy-quark in-medium energy loss can be investigated by comparing the heavy-flavour production cross sections in pp and nucleus-nucleus collisions. In addition, the initial spatial anisotropy of the fireball is converted into a momentum anisotropy of the final-state particles, in particular the elliptic flow. D mesons are identified from their hadronic decays, which can be reconstructed in the central rapidity region using the tracking and PID detectors of ALICE.
arxiv:1305.3435
The collision problem is to decide whether a given list of numbers $(x_1, \ldots, x_n) \in [n]^n$ is $1$-to-$1$ or $2$-to-$1$, when promised that one of these is the case. We show an $n^{\Omega(1)}$ randomised communication lower bound for the natural two-party version of collision, where Alice holds the first half of the bits of each $x_i$ and Bob holds the second half. As an application, we also show a similar lower bound for a weak bit-pigeonhole search problem, which answers a question of Itsykson and Riazanov (CCC 2021).
arxiv:2208.00029
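The promise problem itself is easy to state in code; the sketch below decides which case holds for a concrete list, rejecting inputs that violate the promise. The hardness in the abstract arises only in the two-party setting, where Alice and Bob each hold half the bits of every $x_i$.

```python
from collections import Counter

def collision_case(xs):
    """Decide whether xs (values in [n], length n) is 1-to-1 or 2-to-1.

    The promise guarantees one of the two cases; anything else
    violates the promise and is rejected.
    """
    counts = Counter(xs).values()
    if all(c == 1 for c in counts):
        return "1-to-1"
    if all(c == 2 for c in counts):
        return "2-to-1"
    raise ValueError("input violates the promise")

print(collision_case([3, 1, 4, 2]))  # → 1-to-1
print(collision_case([2, 2, 4, 4]))  # → 2-to-1
```
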
We study the connection between spherical wedge and full spherical shell geometries using simple mean-field $\alpha^2$ dynamos. We solve the equations for a one-dimensional time-dependent mean-field dynamo to examine the effects of varying the polar angle $\theta_0$ between the latitudinal boundaries and the poles in spherical coordinates. We investigate the effects of turbulent magnetic diffusivity and $\alpha$-effect profiles, as well as different latitudinal boundary conditions, to isolate parameter regimes where oscillatory solutions are found. Finally, we add shear along with a damping term mimicking radial gradients to study the resulting dynamo regimes. We find that the commonly used perfect-conductor boundary condition leads to oscillatory $\alpha^2$ dynamo solutions only if the wedge boundary is at least one degree away from the poles; other boundary conditions always produce stationary solutions. By varying the profile of the turbulent magnetic diffusivity alone, oscillatory solutions are achieved with models extending to the poles, but the magnetic field is strongly concentrated near the poles and the oscillation period is very long. By introducing radial shear and a damping term mimicking radial gradients, we again see oscillatory dynamos, and the direction of drift follows the Parker-Yoshimura rule. Oscillatory solutions in the weak-shear regime are found only in the wedge case with $\theta_0 = 1^\circ$ and perfect-conductor boundaries. A reduced $\alpha$ effect near the poles with a turbulent diffusivity concentrated toward the equator yields oscillatory dynamos with equatorward migration and best reproduces the solutions in spherical wedges.
arxiv:1601.05246
Marked mesh patterns are a very general type of permutation pattern. We examine a particular marked mesh pattern originally defined by Kitaev and Remmel, and show that its generating function is described by the $r$-Stirling numbers. We examine some ramifications of various properties of the $r$-Stirling numbers for this generating function, and find (seemingly new) formulas for the $r$-Stirling numbers in terms of the classical Stirling numbers and harmonic numbers. We also answer some questions posed by Kitaev and Remmel and show a connection to another mesh pattern introduced by Kitaev and Liese.
arxiv:1412.0345
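The $r$-Stirling numbers mentioned above satisfy a recurrence that differs from the classical one only in its base case. The sketch below implements the $r$-Stirling numbers of the second kind (the abstract does not say which kind is meant, so this choice is illustrative): they count partitions of $\{1,\dots,n\}$ into $k$ blocks with $1,\dots,r$ in distinct blocks, and $r = 0$ recovers the classical Stirling numbers.

```python
from functools import lru_cache

def r_stirling2(n, k, r):
    """r-Stirling number of the second kind.

    Base case: the first r elements occupy r distinct blocks.
    Recurrence (n > r): element n either starts a new block
    or joins one of the k existing blocks.
    """
    @lru_cache(maxsize=None)
    def s(n, k):
        if n == r:
            return 1 if k == r else 0
        if k < r or k > n:
            return 0
        return k * s(n - 1, k) + s(n - 1, k - 1)
    return s(n, k)

# r = 0 recovers the classical Stirling numbers of the second kind:
print(r_stirling2(4, 2, 0))  # → 7
# r = 2: partitions of {1,2,3} into 2 blocks separating 1 and 2:
print(r_stirling2(3, 2, 2))  # → 2
```
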
Galaxy evolution by interaction-driven transformation is probably highly efficient in groups of galaxies. Dwarf galaxies, with their shallow potentials, are expected to reflect the interaction most prominently in their observable structure. The major aim of this series of papers is to establish a database that allows one to study the impact of group interaction on the morphology and star-forming properties of dwarf galaxies. First, we present our selection rules for target groups and the morphological selection method for target dwarf member candidates. Second, the spectroscopic follow-up observations with the HET are presented. Third, we applied our own reduction methods, based on adaptive filtering, to derive surface photometry of the candidates. The spectroscopic follow-up indicates a dwarf identification success rate of roughly 55% and a group member success rate of about 33%. A total of 17 new low surface brightness members is presented. For all candidates, total magnitudes, colours, and light distribution parameters are derived and discussed in the context of scaling relations. We point out shortcomings of the SDSS standard pipeline for surface photometry of these dim objects. We conclude that our selection strategy is rather efficient for obtaining a sample of dim, low surface brightness members of groups of galaxies within the Virgo Supercluster. The photometric scaling relations in these X-ray dim, rather isolated groups do not significantly differ from those of the galaxies within the Local Volume.
arxiv:1407.0307
Improving the predictive capability of molecular properties in ab initio simulations is essential for advanced material discovery. Despite recent progress making use of machine learning, utilizing deep neural networks to improve quantum chemistry modelling remains severely limited by the scarcity and heterogeneity of appropriate experimental data. Here we show how training a neural network to replace the exchange-correlation functional within a fully-differentiable three-dimensional Kohn-Sham density functional theory (DFT) framework can greatly improve simulation accuracy. Using only eight experimental data points on diatomic molecules, our trained exchange-correlation networks enable improved prediction accuracy of atomization energies across a collection of 104 molecules containing new bonds and atoms that are not present in the training dataset.
arxiv:2102.04229
We consider the derivatives which appear in the context of noncommutative string theory. First, we identify the correct derivations to use when the underlying structure of the theory is a quasitriangular Hopf algebra. Then we show that this is a specific case of a more general structure utilising the Drinfel'd twist. We go on to present reasons as to why we feel that the low-energy effective action, when written in terms of the original commuting coordinates, should explicitly exhibit this twisting.
arxiv:hep-th/0003234
[...] defined the minmin coalition number $c_{\min}(G)$ of $G$ to equal the minimum order of a minimal $c$-partition of $G$. We show that $2 \le c_{\min}(G) \le n$, and we characterize graphs $G$ of order $n$ satisfying $c_{\min}(G) = n$. A polynomial-time algorithm is given to determine whether $c_{\min}(G) = 2$ for a given graph $G$. A necessary and sufficient condition for a graph $G$ to satisfy $c_{\min}(G) \ge 3$ is given, and a characterization of graphs $G$ with minimum degree~$2$ and $c_{\min}(G) = 4$ is provided.
arxiv:2307.01222
First-principles density-functional calculations are performed to investigate the thermal transport properties of graphene nanoribbons (GNRs). The dimensional crossover of thermal conductance from one to two dimensions (2D) is clearly demonstrated with increasing ribbon width. The thermal conductance of GNRs only a few nanometers in width already exhibits an approximate low-temperature dependence of $T^{1.5}$, like that of a 2D graphene sheet, which is attributed to the quadratic dispersion relation of the out-of-plane acoustic phonon modes. Using a zone-folding method, we heuristically derive the dimensional crossover of thermal conductance with increasing ribbon width. Combining our calculations with the experimental phonon mean free path, typical values of the thermal conductivity at room temperature are estimated for GNRs and for the 2D graphene sheet, respectively. Our findings clarify the issue of the low-temperature dependence of thermal transport in GNRs and suggest a calibration range of thermal conductivity for experimental measurements in graphene-based materials.
arxiv:1203.2819
Differential flows among different ion species are often observed in the solar wind, and such ion differential flows can provide the free energy to drive Alfvén/ion-cyclotron and fast-magnetosonic/whistler instabilities. Previous works mainly focused on ion beam instability under parameters representative of the solar wind near 1 au. In this paper we further study the proton beam instability using radial models of the magnetic field and plasma parameters in the inner heliosphere. We explore a comprehensive distribution of the proton beam instability as a function of the heliocentric distance and the beam speed. We also perform a detailed analysis of the energy transfer between unstable waves and particles, and quantify how much of the free energy of the proton beam flows into unstable waves and into the other particle species (i.e., proton core, alpha particles, and electrons). This work clarifies that both parallel and perpendicular electric fields are responsible for the excitation of the oblique Alfvén/ion-cyclotron and oblique fast-magnetosonic/whistler instabilities. Moreover, this work proposes an effective growth length to estimate whether an instability is efficiently excited or not. It shows that the oblique Alfvén/ion-cyclotron instability, the oblique fast-magnetosonic/whistler instability, and the oblique Alfvén/ion-beam instability can be efficiently driven by proton beams drifting at speeds of $\sim$600-1300 km s$^{-1}$ in the solar atmosphere. In particular, oblique Alfvén/ion-cyclotron waves driven in the solar atmosphere can be significantly damped therein, contributing to the heating of the solar corona. These results are helpful for understanding proton beam dynamics in the inner heliosphere and can be verified through in situ satellite measurements.
arxiv:2107.12883
Next-generation large-scale structure surveys will deliver a significant increase in the precision of growth data, allowing us to use 'agnostic' methods to study the evolution of perturbations without assuming a cosmological model. We focus on a particular machine learning tool, Gaussian processes, to reconstruct the growth rate $f$, the root mean square of matter fluctuations $\sigma_8$, and their product $f\sigma_8$. We apply this method to simulated data representing the precision of upcoming Stage IV galaxy surveys. We extend the standard single-task approach to a multi-task approach that reconstructs the three functions simultaneously, thereby taking into account their interdependence. We find that this multi-task approach outperforms the single-task approach for future surveys and will allow us to detect departures from the standard model with higher significance. By contrast, the limited sensitivity of current data severely hinders the use of agnostic methods, since the Gaussian process parameters need to be fine-tuned in order to obtain robust reconstructions.
arxiv:2105.01613
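At its core, a Gaussian-process reconstruction like the one above is kernel regression. The sketch below shows a single-task GP posterior mean with an RBF kernel in NumPy; the hyperparameters and toy data are illustrative, and the paper's multi-task extension couples several such kernels rather than using one.

```python
import numpy as np

def gp_posterior_mean(x_train, y_train, x_test, lengthscale=1.0, noise=1e-6):
    """Posterior mean of a zero-mean Gaussian process with an RBF kernel."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / lengthscale ** 2)
    K = k(x_train, x_train) + noise * np.eye(len(x_train))
    alpha = np.linalg.solve(K, y_train)   # weights for the training targets
    return k(x_test, x_train) @ alpha

# Reconstruct a smooth function from three samples; with tiny noise the
# posterior mean interpolates the training points.
x = np.array([0.0, 1.0, 2.0])
y = np.sin(x)
mean = gp_posterior_mean(x, y, np.array([1.0]))
print(mean)  # ≈ [0.841]
```
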
The significant properties and phase transitions of charged anti-de Sitter (AdS) black holes have been extensively studied in a variety of modified theories of gravity in the presence of numerous matter fields. The goal of our current research is to investigate the thermodynamics of the AdS black hole under the impact of $f(Q)$ gravity. Additionally, this paper explores the black hole's local stability and phase structure under the relevant gravity. Besides, we use Ruppeiner geometry to look into the AdS black hole's microscopic structure. We have numerically computed the Ricci curvature scalar $R$ to explain the interactions between the AdS black hole's microscopic particles under the influence of $f(Q)$ gravity.
arxiv:2311.02145
Using the author's methods of 1980 and 1981, some explicit finite sets of number fields containing the ground fields of arithmetic hyperbolic reflection groups are defined, and good bounds on their degrees (over $\mathbb{Q}$) are obtained. For example, the degree of the ground field of any arithmetic hyperbolic reflection group in dimension at least 6 is bounded by 56. These results could be important for further classification. We also formulate a mirror-symmetric conjecture to the finiteness of the number of arithmetic hyperbolic reflection groups, which was established in full generality recently.
arxiv:0708.3991
A large population of extended substructures generates a stochastic gravitational field that is fully specified by the function $p({\bf F})$, which defines the probability that a tracer particle experiences a force ${\bf F}$ within the interval $[{\bf F}, {\bf F} + d{\bf F}]$. This paper presents a statistical technique for deriving the spectrum of random fluctuations directly from the number density of substructures with known mass and size functions. Application to the subhalo population found in cold dark matter simulations of Milky Way-sized haloes shows that, while the combined force distribution is governed by the most massive satellites, the fluctuations of the tidal field are completely dominated by the smallest and most abundant subhaloes. In light of this result we discuss observational experiments that may be sufficiently sensitive to galactic tidal fluctuations to probe the "dark" low end of the subhalo mass function and constrain the particle mass of warm and ultra-light axion dark matter models.
arxiv:1710.06443
Singular-value decomposition (SVD) is a ubiquitous data analysis method in engineering, science, and statistics. Singular-value estimation, in particular, is of critical importance in an array of engineering applications, such as channel estimation in communication systems, electromyography signal analysis, and image compression, to name just a few. Conventional SVD of a data matrix coincides with standard principal-component analysis (PCA). The L2-norm (sum of squared values) formulation of PCA promotes peripheral data points and thus makes PCA sensitive to outliers; naturally, SVD inherits this outlier sensitivity. In this work, we present a novel robust non-parametric method for SVD and singular-value estimation based on an L1-norm (sum of absolute values) formulation, which we name L1-cSVD. Accordingly, the proposed method demonstrates sturdy resistance against outliers and can facilitate more reliable data analysis and processing in a wide range of engineering applications.
arxiv:2210.12097
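The outlier-sensitivity contrast motivating the L1 formulation is already visible in one dimension: the L2-norm minimizer of deviations is the mean, which a single outlier drags arbitrarily far, while the L1-norm minimizer is the median, which barely moves. This is only an analogy to the matrix setting, not the paper's algorithm.

```python
def l2_center(xs):
    # Minimizes the sum of squared deviations -> the mean.
    return sum(xs) / len(xs)

def l1_center(xs):
    # Minimizes the sum of absolute deviations -> the median.
    s = sorted(xs)
    mid = len(s) // 2
    return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2

clean = [1.0, 2.0, 3.0, 4.0, 5.0]
dirty = clean[:-1] + [1000.0]      # one gross outlier
print(l2_center(clean), l2_center(dirty))  # → 3.0 202.0
print(l1_center(clean), l1_center(dirty))  # → 3.0 3.0
```
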
What will 5G be? What it will not be is an incremental advance on 4G. The previous four generations of cellular technology have each been a major paradigm shift that has broken backward compatibility. And indeed, 5G will need to be a paradigm shift that includes very high carrier frequencies with massive bandwidths, extreme base station and device densities, and unprecedented numbers of antennas. But unlike the previous four generations, it will also be highly integrative: tying any new 5G air interface and spectrum together with LTE and WiFi to provide universal high-rate coverage and a seamless user experience. To support this, the core network will also have to reach unprecedented levels of flexibility and intelligence, spectrum regulation will need to be rethought and improved, and energy and cost efficiencies will become even more critical considerations. This paper discusses all of these topics, identifying key challenges for future research and preliminary 5G standardization activities, while providing a comprehensive overview of the current literature, and in particular of the papers appearing in this special issue.
arxiv:1405.2957
A modal stability analysis shows that plane Poiseuille flow of an Oldroyd-B fluid becomes unstable to a 'center mode' with phase speed close to the maximum base-flow velocity, $U_{max}$. The governing dimensionless groups are the Reynolds number $Re = \rho U_{max} H/\eta$, the elasticity number $E = \lambda \eta/(H^2 \rho)$, and the ratio of solvent to solution viscosity $\beta = \eta_s/\eta$; here, $\lambda$ is the polymer relaxation time, $H$ is the channel half-width, and $\rho$ is the fluid density. For experimentally relevant values (e.g., $E \sim 0.1$ and $\beta \sim 0.9$), the predicted critical Reynolds number, $Re_c$, for the center-mode instability is around $200$, with the associated eigenmodes being spread out across the channel. In the asymptotic limit of $E(1-\beta) \ll 1$, with $E$ fixed, corresponding to strongly elastic dilute polymer solutions, $Re_c \propto (E(1-\beta))^{-\frac{3}{2}}$ and the critical wavenumber $k_c \propto (E(1-\beta))^{-\frac{1}{2}}$. The unstable eigenmode in this limit is confined in a thin layer near the channel centerline. The above features are largely analogous to the center-mode instability in viscoelastic pipe flow (Garg et al., Phys. Rev. Lett. 121, 024502 (2018)), and suggest a universal linear mechanism underlying the onset of turbulence in both channel and pipe flows of sufficiently elastic dilute polymer solutions.
arxiv:2008.00231
We report on the Stark deceleration and electrostatic trapping of $^{14}$NH ($a^1\Delta$) radicals. In the trap, the molecules are excited on the spin-forbidden $A^3\Pi \leftarrow a^1\Delta$ transition and detected via their subsequent fluorescence to the $X^3\Sigma^-$ ground state. The $1/e$ trapping time is 1.4 $\pm$ 0.1 s, from which a lower limit of 2.7 s for the radiative lifetime of the $a^1\Delta, v=0, J=2$ state is deduced. The spectral profile of the molecules in the trapping field is measured to probe their spatial distribution. Electrostatic trapping of metastable NH followed by optical pumping of the trapped molecules to the electronic ground state is an important step towards the accumulation of these radicals in a magnetic trap.
arxiv:0709.3212
Linked Open Data exhibits growth in both the volume and variety of published data. Due to this variety, instances of many different types (e.g., Person) can be found in published datasets. Type alignment is the problem of automatically matching types (in a possibly many-to-many fashion) between two such datasets. Type alignment is an important preprocessing step in instance matching, which concerns identifying pairs of instances that refer to the same underlying entity. By performing type alignment a priori, only instances conforming to aligned types are processed together, leading to significant savings. This article describes a type alignment experience with two large-scale cross-domain RDF knowledge graphs, DBpedia and Freebase, which contain hundreds, or even thousands, of unique types. Specifically, we present a MapReduce-based type alignment algorithm and show that there are at least three reasonable ways of evaluating type alignment within the larger context of instance matching. We comment on the consistency of those results, and note some general observations for researchers evaluating similar algorithms on cross-domain graphs.
arxiv:1608.04442
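The MapReduce flavor of such an algorithm can be sketched in miniature: mappers emit a candidate (type_A, type_B) key for each matched instance pair, and reducers aggregate the counts into alignment evidence. The type names, data, and support threshold below are hypothetical; the article's actual algorithm and features are richer.

```python
from collections import defaultdict

def map_phase(matched_pairs):
    """Emit one ((typeA, typeB), 1) record per matched instance pair."""
    for type_a, type_b in matched_pairs:
        yield ((type_a, type_b), 1)

def reduce_phase(emitted, min_support=2):
    """Sum the counts per type pair; keep pairs with enough evidence."""
    counts = defaultdict(int)
    for key, value in emitted:
        counts[key] += value
    return {pair: n for pair, n in counts.items() if n >= min_support}

# Hypothetical matched-instance evidence between two knowledge graphs:
pairs = [("dbo:Person", "fb:people.person")] * 3 + [("dbo:Person", "fb:music.artist")]
print(reduce_phase(map_phase(pairs)))
# → {('dbo:Person', 'fb:people.person'): 3}
```
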
Detecting TeV-PeV cosmic neutrinos provides crucial tests of neutrino physics and astrophysics. The statistics of IceCube and of the larger proposed IceCube-Gen2 demand calculations of neutrino-nucleus interactions subdominant to deep-inelastic scattering, which is mediated by weak-boson couplings to nuclei. The largest such interactions are W-boson and trident production, which are mediated instead through photon couplings to nuclei. In a companion paper [1], we make the most comprehensive and precise calculations of those interactions at high energies. In this paper, we study their phenomenological consequences. We find that: (1) these interactions are dominated by the production of on-shell W-bosons, which carry most of the neutrino energy; (2) the cross section on water/iron can be as large as 7.5%/14% of that of charged-current deep-inelastic scattering, much larger than the quoted uncertainty on the latter; (3) attenuation in Earth is increased by as much as 15%; (4) W-boson production on nuclei exceeds that through the Glashow resonance on electrons by a factor of $\simeq 20$ for the best-fit IceCube spectrum; (5) the primary signals are showers that will significantly affect the detection rate in IceCube-Gen2; a small fraction of events give unique signatures that may be detected sooner.
arxiv:1910.10720
We study the effect that non-equilibrium chemistry in dynamical models of collapsing molecular cloud cores has on measurements of the magnetic field in these cores, the degree of ionization, and the mean molecular weight of the ions. We find that OH and CN, usually used in Zeeman observations of the line-of-sight magnetic field, have an abundance that decreases toward the center of the core much faster than the density increases. As a result, Zeeman observations tend to sample the outer layers of the core and consistently underestimate the core magnetic field. The degree of ionization follows a complicated dependence on the number density at central densities up to $10^5$ cm$^{-3}$ for magnetic models and $10^6$ cm$^{-3}$ in non-magnetic models. At higher central densities the scaling approaches a power law with a slope of $-0.6$ and a normalization that depends on the cosmic-ray ionization rate $\zeta$ and the temperature $T$ as $(\zeta T)^{1/2}$. The mean molecular weight of the ions is systematically lower than the usually assumed value of 20-30 and, at high densities, approaches a value of 3 due to the asymptotic dominance of the H$_3^+$ ion. This significantly lower value implies that ambipolar diffusion operates faster.
arxiv:1111.4218
the multiplicity of inclusive photons has been measured on an event - by - event basis for 158 agev pb induced reactions on ni, nb, and pb targets. the systematics of the pseudorapidity densities at midrapidity ( rho _ max ) and the width of the pseudorapidity distributions have been studied for varying centralities for these collisions. a power law fit to the photon yield as a function of the number of participating nucleons gives a value of 1. 13 + - 0. 03 for the exponent. the mean transverse momentum, < p _ t >, of photons determined from the ratio of the measured electromagnetic transverse energy and photon multiplicity, remains almost constant with increasing rho _ max. results are compared with model predictions.
arxiv:nucl-ex/9903006
space borne nulling interferometry in the mid - infrared waveband is one of the most promising techniques for characterizing the atmospheres of extra - solar planets orbiting in the habitable zone of their parent star, and possibly discovering life markers. one of its most difficult challenges is the control of free - flying telescope spacecraft moving around a central combiner in order to modulate the planet signal, to an accuracy better than one micrometer. moreover, the whole array must be reconfigured regularly in order to observe different celestial targets, thus increasing the risk of losing one or more spacecraft and aborting the mission before its normal end. in this paper a simplified optical configuration is described where the telescopes do not need to be rotated, and the number of necessary array reconfigurations is minimized. it allows efficient modulation of the planet signal, making use only of rotating prisms or mirrors located in the central combiner. the general principle of a nulling interferometer with a fixed telescope array is explained. mathematical relations are established in order to determine the planet modulation signal. numerical simulations are carried out for three different arrangements of the collecting telescopes. they confirm that nulling interferometry in space does not require a rotating telescope array.
arxiv:2208.13470
the escape from a given domain is one of the fundamental problems in statistical physics and the theory of stochastic processes. here, we explore properties of the escape of an inertial particle driven by lévy noise from a bounded domain, restricted by two absorbing boundaries. the presence of two absorbing boundaries assures that the escape process can be characterized by a finite mean first passage time. the detailed analysis of escape kinetics shows that properties of the mean first passage time for the integrated ornstein - uhlenbeck process driven by lévy noise are closely related to properties of the integrated lévy motions which, in turn, are close to properties of the integrated wiener process. the extensive studies of the mean first passage time were complemented by examination of the escape velocity and energy along with their sensitivity to initial conditions.
arxiv:2104.10185
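the escape problem described above lends itself to a direct monte carlo sketch. the snippet below estimates the mean first passage time of an underdamped ( inertial ) particle between two absorbing boundaries at +/- l, using gaussian white noise as the alpha = 2 special case of lévy noise ; all parameter values are illustrative assumptions, not the ones used in the paper.

```python
import math
import random

def mean_first_passage_time(gamma=1.0, sigma=1.0, L=0.5, dt=1e-3, n_traj=100, seed=0):
    """estimate the mean first passage time of an inertial particle started
    at rest at the origin, absorbed at x = +/- L. the velocity follows an
    ornstein-uhlenbeck process (gaussian noise, the alpha = 2 limit of levy
    noise), so x is the integrated ornstein-uhlenbeck process of the abstract."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_traj):
        x, v, t = 0.0, 0.0, 0.0
        while abs(x) < L:
            # euler-maruyama step for dv = -gamma v dt + sigma dW
            v += -gamma * v * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
            x += v * dt
            t += dt
        total += t
    return total / n_traj
```

the same loop with heavy - tailed increments ( a stable random variate in place of the gaussian one ) would give the lévy - driven case studied in the paper.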
we present a novel online unsupervised method for face identity learning from video streams. the method exploits deep face descriptors together with a memory based learning mechanism that takes advantage of the temporal coherence of visual data. specifically, we introduce a discriminative feature matching solution based on reverse nearest neighbour and a feature forgetting strategy that detects redundant features and discards them appropriately as time progresses. it is shown that the proposed learning procedure is asymptotically stable and can be effectively used in relevant applications like multiple face identification and tracking from unconstrained video streams. experimental results show that, compared with offline approaches exploiting future information, the proposed method achieves comparable results in the task of multiple face tracking and better performance in face identification. code will be publicly available.
arxiv:1711.07368
in order to determine a suitable automobile insurance policy premium one needs to take into account three factors : the risk associated with the drivers and cars on the policy, the operational costs associated with management of the policy, and the desired profit margin. the premium should then be some function of these three values. we focus on risk assessment using a data science approach. instead of using the traditional frequency and severity metrics, we predict the total claims that will be made by a new customer using historical data of current and past policies. given multiple features of the policy ( age and gender of drivers, value of car, previous accidents, etc. ) one can potentially provide personalized insurance policies based specifically on these features as follows. we can compute the average claims made per year over all past and current policies with identical features and then take the average of these claim rates. unfortunately there may not be sufficient samples to obtain a robust average. we can instead include policies that are " similar " to obtain sufficient samples for a robust average. we therefore face a trade - off between personalization ( only using closely similar policies ) and robustness ( extending the domain far enough to capture sufficient samples ). this is known as the bias - variance trade - off. we model this problem and determine the optimal trade - off between the two ( i. e. the balance that provides the highest prediction accuracy ) and apply it to the claim rate prediction problem. we demonstrate our approach using real data.
arxiv:2209.02762
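the personalization / robustness trade - off described above can be sketched as a radius - based similarity average. the code below is a hypothetical minimal version : policies are feature vectors with an observed annual claim rate, the prediction for a new policy averages the claim rates of all historical policies within a chosen radius, and the radius is selected on held - out data. the feature encoding and the euclidean metric are illustrative assumptions.

```python
def claim_rate_estimate(features, history, radius):
    """average annual claim rate over historical policies whose feature
    vector lies within `radius` (euclidean distance) of `features`.
    a small radius means high personalization, a large radius means
    robustness from more samples -- the bias-variance trade-off."""
    rates = [rate for f, rate in history
             if sum((a - b) ** 2 for a, b in zip(features, f)) ** 0.5 <= radius]
    if not rates:
        return None  # no sufficiently similar policies
    return sum(rates) / len(rates)

def pick_radius(history, validation, radii):
    """choose the radius minimizing squared prediction error on held-out
    (features, claim rate) pairs, i.e. the optimal point of the trade-off."""
    def error(r):
        errs = [(claim_rate_estimate(f, history, r) - rate) ** 2
                for f, rate in validation
                if claim_rate_estimate(f, history, r) is not None]
        return sum(errs) / len(errs) if errs else float("inf")
    return min(radii, key=error)
```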
rydberg atom array experiments have demonstrated the ability to act as powerful quantum simulators, preparing strongly - correlated phases of matter which are challenging to study for conventional computer simulations. a key direction has been the implementation of interactions on frustrated geometries, in an effort to prepare exotic many - body states such as spin liquids and glasses. in this paper, we apply two - dimensional recurrent neural network ( rnn ) wave functions to study the ground states of rydberg atom arrays on the kagome lattice. we implement an annealing scheme to find the rnn variational parameters in regions of the phase diagram where exotic phases may occur, corresponding to rough optimization landscapes. for rydberg atom array hamiltonians studied previously on the kagome lattice, our rnn ground states show no evidence of exotic spin liquid or emergent glassy behavior. in the latter case, we argue that the presence of a non - zero edwards - anderson order parameter is an artifact of the long autocorrelation times experienced in quantum monte carlo simulations. this result emphasizes the utility of autoregressive models, such as rnns, to explore rydberg atom array physics on frustrated lattices and beyond.
arxiv:2405.20384
although the phenomenon of chirality appears in many investigations of maps and hypermaps, no detailed study of chirality seems to have been carried out. chirality of maps and hypermaps is not merely a binary invariant but can be quantified by two new invariants - - the chirality group and the chirality index, the latter being the size of the chirality group. a detailed investigation of the chirality groups of maps and hypermaps will be the main objective of this paper. the most extreme type of chirality arises when the chirality group coincides with the monodromy group. such hypermaps are called totally chiral. examples of them are constructed by considering appropriate " asymmetric " pairs of generators for some non - abelian simple groups. we also show that every finite abelian group is the chirality group of some hypermap, whereas many non - abelian groups, including symmetric and dihedral groups, cannot arise as chirality groups.
arxiv:math/0609070
the matrix sturm - liouville operator on a finite interval with singular potential of class $ w _ 2 ^ { - 1 } $ and the general self - adjoint boundary conditions is studied. this operator generalizes the sturm - liouville operators on geometrical graphs. we investigate the inverse problem that consists in recovering the considered operator from the spectral data ( eigenvalues and weight matrices ). the inverse problem is reduced to a linear equation in a suitable banach space, and a constructive algorithm for the inverse problem solution is developed. moreover, we obtain the spectral data characterization for the studied operator.
arxiv:2007.07299
we study non - archimedean $ \ mu $ - entropy for toric varieties as a further exploration of $ \ mu $ k - stability. we show the existence of an optimizer of the toric non - archimedean $ \ mu ^ \ lambda $ - entropy for $ \ lambda \ in \ mathbb { r } $ and its uniqueness for $ \ lambda \ le 0 $. for the proof of existence, we establish a rellich type compactness result for convex functions on a simple polytope. we also reveal a thermodynamical structure on the toric non - archimedean $ \ mu $ - entropy. this observation allows us to interpret the enigmatic parameter $ t = - \ frac { \ lambda } { 2 \ pi } $ as temperature and the non - archimedean $ \ mu $ - entropy as the entropy of an infinite dimensional composite system.
arxiv:2303.09090
we present quantum simulations of carbon nanotube field - effect transistors ( cnt - fets ) based on top - gated architectures and compare to electrical characterization on devices with 15 nm channel lengths. a non - equilibrium green ' s function ( negf ) quantum transport method coupled with a $ \ vec { k } \ cdot \ vec { p } $ description of the electronic structure is demonstrated to achieve excellent agreement with the reported experimental data. factors influencing the electrostatic control of the channel are investigated and reveal that detailed modeling of the electrostatics and the electronic band structure of the cnt is required to achieve quantitative agreement with experiment.
arxiv:2108.07013
recent results of a high - statistics study of tau lepton properties and decays at b factories are reviewed. we discuss measurements of tau lifetime, branching fractions, and spectral functions for several hadronic tau decay modes with $ k ^ 0 _ s $. results of a search for lepton flavor violating tau decays as well as cp symmetry violation are briefly discussed.
arxiv:1407.7196
in this paper, we propose a novel method to make distance predictions in real - world social networks. as predicting missing distances is a difficult problem, we take a two - stage approach. structural parameters for families of synthetic networks are first estimated from a small set of measurements of a real - world network and these synthetic networks are then used to pre - train the predictive neural networks. since our model first searches for the most suitable synthetic graph parameters which can be used as an " oracle " to create arbitrarily large training data sets, we call our approach " oracle search pre - training " ( osp ). for example, many real - world networks exhibit a power law structure in their node degree distribution, so a power law model can provide a foundation for the desired oracle to generate synthetic pre - training networks, if the appropriate power law graph parameters can be estimated. accordingly, we conduct experiments on real - world facebook, email, and train bombing networks and show that osp outperforms models without pre - training, models pre - trained with inaccurate parameters, and other distance prediction schemes such as low - rank matrix completion. in particular, we achieve a prediction error of less than one hop with only 1 % of sampled distances from the social network. osp can be easily extended to other domains such as random networks by choosing an appropriate model to generate synthetic training data, and therefore promises to impact many different network learning problems.
arxiv:2106.03233
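the " oracle " step of osp can be sketched with the power - law example from the abstract : estimate a preferential - attachment parameter from the real network, then generate arbitrarily many synthetic graphs and label node pairs with their hop distance for pre - training. the generator below is a bare - bones barabási - albert sketch using only the standard library ; the parameters n and m stand in for whatever the estimation step returns.

```python
import random
from collections import deque

def barabasi_albert(n, m, seed=0):
    """synthetic power-law graph via preferential attachment: each new node
    links to m existing nodes sampled proportionally to their degree
    (implemented with the usual repeated-endpoint list)."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    repeated = list(range(m))            # seed nodes for the first attachments
    for v in range(m, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(repeated))
        for u in chosen:
            adj[u].add(v)
            adj[v].add(u)
            repeated.extend([u, v])      # degree-proportional sampling pool
    return adj

def bfs_distances(adj, src):
    """hop distances from src; (pair, distance) tuples over many synthetic
    graphs form the pre-training set for the distance-prediction network."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist
```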
accurate phase diagrams of multicomponent plasmas are required for the modeling of dense stellar plasmas, such as those found in the cores of white dwarf stars and the crusts of neutron stars. those phase diagrams have been computed using a variety of standard techniques, which suffer from physical and computational limitations. here, we present an efficient and accurate method that overcomes the drawbacks of previously used approaches. in particular, finite - size effects are avoided as each phase is calculated separately ; the plasma electrons and volume changes are explicitly taken into account ; and arbitrary analytic fits to simulation data are avoided. furthermore, no simulations at uninteresting state conditions, i. e., away from the phase coexistence curves, are required, which improves the efficiency of the technique. the method consists of an adaptation of the so - called gibbs - duhem integration approach to electron - ion plasmas, where the coexistence curve is determined by direct numerical integration of its underlying clapeyron equation. the thermodynamic properties of the coexisting phases are evaluated separately using monte carlo simulations in the isobaric semi - grand canonical ensemble. we describe this monte carlo - based clapeyron integration method, including its basic principles, our extension to electron - ion plasmas, and our numerical implementation. we illustrate its applicability and benefits with the calculation of the melting curve of dense c / o plasmas under conditions relevant for white dwarf cores and provide analytic fits to implement this new melting curve in white dwarf models. while this work focuses on the liquid - solid phase boundary of dense two - component plasmas, a wider range of physical systems and phase boundaries are within the scope of the clapeyron integration method, which had until now only been applied to simple model systems of neutral particles.
arxiv:2104.00599
the scaling relations between the gas content and star formation rate of galaxies provide useful insights into processes governing their formation and evolution. we investigate the emergence and the physical drivers of the global kennicutt - schmidt ( ks ) relation at $ 0. 25 \ leq z \ leq 4 $ in the cosmological hydrodynamic simulation newhorizon capturing the evolution of a few hundred galaxies with a resolution of $ \ sim $ 40 pc. the details of this relation vary strongly with the stellar mass of galaxies and the redshift. a power - law relation $ \ sigma _ { \ rm sfr } \ propto \ sigma _ { \ rm gas } ^ { a } $ with $ a \ approx 1. 4 $, like that found empirically, emerges at $ z \ approx 2 - 3 $ for the most massive half of the galaxy population. however, no such convergence is found in the lower - mass galaxies, for which the relation gets shallower with decreasing redshift. at the galactic scale, the star formation activity correlates with the level of turbulence of the interstellar medium, quantified by the mach number, rather than with the gas fraction ( neutral or molecular ), confirming previous works. with decreasing redshift, the number of outliers with short depletion times diminishes, reducing the scatter of the ks relation, while the overall population of galaxies shifts toward low densities. using pc - scale star formation models calibrated with local universe physics, our results demonstrate that the cosmological evolution of the environmental and intrinsic conditions conspire to converge towards a significant and detectable imprint in galactic - scale observables, in their scaling relations, and in their reduced scatter.
arxiv:2309.06485
this contribution to the xxvii symposium on multiparticle dynamics held in frascati, italy, september, 1997 consists of the following subject matter : ( 1 ) introductory generalities. ( 2 ) brief mention of some of the contributions to the meeting. ( 3 ) more extended discussion of a few specialized topics. ( 4 ) discussion of the felix initiative for a qcd detector at the lhc.
arxiv:hep-ph/9712240
we investigate aging in glassy systems based on a simple model, where a point in configuration space performs thermally activated jumps between the minima of a random energy landscape. the model allows us to show explicitly a subaging behavior and multiple scaling regimes for the correlation function. both the exponents characterizing the scaling of the different relaxation times with the waiting time and those characterizing the asymptotic decay of the scaling functions are obtained analytically by invoking a ` partial equilibrium ' concept.
arxiv:cond-mat/0001161
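the simplest model of this kind is bouchaud's trap model, which the description above closely matches : trap depths are exponentially distributed and escape is thermally activated. the sketch below measures a persistence probability ( no jump between the waiting time t_w and 2 t_w ) ; at low temperature the relevant relaxation time scales with t_w itself, the hallmark of aging. all parameters are illustrative assumptions, not the calculation of the paper.

```python
import math
import random

def trap_persistence(T, t_w, n_runs=2000, seed=0):
    """probability that a trap-model walker does not jump between t_w and
    2*t_w. trap depths E ~ Exp(1), activated escape times tau = exp(E/T);
    for T < 1 the mean of tau diverges and the deepest trap visited so far
    dominates the dynamics."""
    rng = random.Random(seed)
    stayed = 0
    for _ in range(n_runs):
        t = 0.0
        while True:
            tau = math.exp(rng.expovariate(1.0) / T)
            if t + tau > t_w:                  # trap occupied at time t_w
                stayed += t + tau > 2.0 * t_w  # ...and still occupied at 2*t_w
                break
            t += tau
    return stayed / n_runs
```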
in this paper, i propose a general procedure for multivariate distribution - free nonparametric testing derived from the concept of ranks based upon measure transportation, in the context of multiple change point analysis. i will use this algorithm to estimate both the number of change points and their locations within an observed multivariate time series. the change point problem is considered here in a general setting in which both the underlying distribution and the number of change points are unknown, rather than assuming, as many works in this area do, that the observed time series follows a specific distribution or contains only one change point. the intention is to develop a technique for accurately identifying the changes in a distribution while making as few suppositions as possible. the rank energy statistic used here is based on energy statistics and has the potential to detect any change in a distribution. i present the properties of this new algorithm, which can be used in various applications, including hierarchical clustering, testing multivariate normality, gene selection, and microarray data analysis. this algorithm has also been implemented in the r package recp, which is available on github.
arxiv:2108.04903
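the core idea of detecting a distributional change with an energy statistic can be sketched in a few lines : scan candidate split points and keep the one maximizing the between - segment energy distance. this toy version is univariate and uses raw values rather than the measure - transportation ranks of the paper, which is what makes the actual procedure multivariate and distribution - free.

```python
def energy_distance(xs, ys):
    """(biased, v-statistic) energy distance between two 1-d samples."""
    def mean_abs(a, b):
        return sum(abs(u - v) for u in a for v in b) / (len(a) * len(b))
    return 2.0 * mean_abs(xs, ys) - mean_abs(xs, xs) - mean_abs(ys, ys)

def best_change_point(series, min_seg=5):
    """index of the single most likely change point: the split maximizing
    the size-weighted energy distance between the two induced segments."""
    n = len(series)
    best_k, best_stat = None, -1.0
    for k in range(min_seg, n - min_seg + 1):
        left, right = series[:k], series[k:]
        stat = (len(left) * len(right) / n) * energy_distance(left, right)
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k
```

a multiple - change - point version would apply this recursively to the segments it finds, with a permutation test deciding when to stop splitting.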
solar neutrinos are discussed in the light of the new data and of recent progress in helioseismology. most attention is given to the new status of standard solar models due to seismically measured density and sound speed in the inner solar core. the elementary particle solutions to the solar neutrino problem and their observational signatures are discussed.
arxiv:astro-ph/9710126
in this paper, calculated energies of the lowest bound state of coulomb three - body systems containing an electron ( $ e ^ - $ ), a negatively charged muon ( $ \ mu ^ - $ ) and a nucleus ( $ n ^ { z + } $ ) of charge number z are reported. the 3 - body relative wave function in the resulting schrödinger equation is expanded in the complete set of hyperspherical harmonics ( hh ). use of the orthonormality of hh leads to an infinite set of coupled differential equations ( cde ) which are solved numerically to get the energy e.
arxiv:1510.06831
fluorescence spectroscopy is a fundamental tool in life sciences and chemistry, widely used for applications such as environmental monitoring, food quality control, and biomedical diagnostics. however, analysis of spectroscopic data with deep learning, in particular of fluorescence excitation - emission matrices ( eems ), presents significant challenges due to the typically small and sparse datasets available. furthermore, the analysis of eems is difficult due to their high dimensionality and overlapping spectral features. this study proposes a new approach that exploits domain adaptation with pretrained vision models, alongside a novel interpretability algorithm to address these challenges. thanks to specialised feature engineering of the neural networks described in this work, we are now able to provide deeper insights into the physico - chemical processes underlying the data. the proposed approach is demonstrated through the analysis of the oxidation process in extra virgin olive oil ( evoo ) during ageing, showing its effectiveness in predicting quality indicators and identifying the spectral bands, and thus the molecules involved in the process. this work describes a significantly innovative approach in the use of deep learning for spectroscopy, transforming it from a black box into a tool for understanding complex biological and chemical processes.
arxiv:2406.10031
we study the formation of ( quasi - ) coherent matter waves emerging from a mott insulator for strongly interacting bosons on a one - dimensional lattice. it has been shown previously that a quasi - condensate emerges at momentum k = \ pi / 2a, where a is the lattice constant, in the limit of infinitely strong repulsion ( hard - core bosons ). here we show that this phenomenon persists for all values of the repulsive interaction that lead to a mott insulator at a commensurate filling. the non - equilibrium dynamics of hard - core bosons is treated exactly by means of a jordan - wigner transformation, and the generic case is studied using a time - dependent density matrix renormalization group technique. different methods for controlling the emerging matter wave are discussed.
arxiv:cond-mat/0606155
we search the lambda = 1. 1 mm bolocam galactic plane survey for clumps containing sufficient mass to form ~ 10 ^ 4 m \ odot star clusters. 18 candidate massive proto - clusters are identified in the first galactic quadrant outside of the central kiloparsec. this sample is complete to clumps with mass m ( clump ) > 10 ^ 4 m _ sun and radius r < 2. 5 pc. the overall galactic massive cluster formation rate is cfr ( m _ cluster > 10 ^ 4 ) ~ 5 myr ^ - 1, which is in agreement with the rates inferred from galactic open clusters and m31 massive clusters. we find that all massive proto - clusters in the first quadrant are actively forming massive stars and place an upper limit of t _ starless < 0. 5 myr on the lifetime of the starless phase of massive cluster formation. if massive clusters go through a starless phase with all of their mass in a single clump, the lifetime of this phase is very short.
arxiv:1208.4097
self - regulation of living tissue as an example of self - organization phenomena in hierarchical systems of biological, ecological, and social nature is under consideration. the characteristic feature of these systems is the absence of any governing center and, thereby, their self - regulation is based on a cooperative interaction of all the elements. the work develops a mathematical theory of a vascular network response to local effects on scales of individual units of peripheral circulation.
arxiv:0911.5131
this paper describes the accuracy and the errors of water vapour content measurements in the atmosphere using optical methods, in particular the star photometer. after the general explanation of the expressions used for star - magnitude observations of water vapour absorption, in section 3 the absorption model for the water vapour band is discussed. sections 4 and 5 give an overview of the technique to determine the model parameters both from spectroscopic laboratory data and from radiosonde observation data. finally, sections 6 and 7 deal with the details of the errors ; that is, errors of the observed magnitude, of the instrumental extraterrestrial magnitude, of the atmospheric extinction determination, and of the water vapour content determination by radiosonde humidity measurements. the main conclusion is that, because of the high precision of the results, the optical methods for water vapour observation are suited to validate and calibrate alternative methods ( gps, lidar, microwave ) which are making constant progress world - wide these days.
arxiv:1010.3669
this short note gives a sufficient condition for the class of polynomials to be dense in the space of square integrable functions with respect to a finite measure dominated by the lebesgue measure on the real line, here denoted by $ l ^ 2 $. it is shown that if the laplace transform of the measure in play is bounded in a neighbourhood of the origin, then the moments of all orders are finite and the class of polynomials is dense in $ l ^ 2 $. the existence of the moments of all orders is well known for the case where the measure is concentrated on the positive real line ( see feller, 1966 ), but the result concerning the polynomial approximation is original, even though the proof is relatively simple. additionally, an alternative, stronger condition that is easier to verify and does not involve calculating the laplace transform is given. the condition essentially says that the density of the measure should have exponentially decaying tails. the tools presented are of interest for constructing semiparametric extensions of classic parametric models.
arxiv:1603.03473
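the exponential - tail condition can be checked numerically on a concrete example. for the two - sided exponential density ( 1 / 2 ) e ^ { - | x | }, the laplace transform is finite for | s | < 1, so all moments exist ; the even moments equal n !. the quadrature below is a simple trapezoid sketch confirming this ; grid and truncation parameters are arbitrary choices.

```python
import math

def moment_two_sided_exp(n, half_width=60.0, steps=200000):
    """trapezoid approximation of the n-th moment of the density
    0.5 * exp(-|x|): integral of x**n * 0.5 * exp(-|x|) over the real
    line, truncated at +/- half_width (the exponential tail makes the
    truncation error negligible)."""
    h = 2.0 * half_width / steps
    total = 0.0
    for i in range(steps + 1):
        x = -half_width + i * h
        weight = 0.5 if i in (0, steps) else 1.0  # trapezoid end weights
        total += weight * (x ** n) * 0.5 * math.exp(-abs(x)) * h
    return total
```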
we present a study of carbon radio recombination lines towards cassiopeia a using lofar observations in the frequency range 10 - 33 mhz. individual carbon $ \ alpha $ lines are detected in absorption against the continuum at frequencies as low as 16 mhz. stacking several c $ \ alpha $ lines we obtain detections in the 11 - 16 mhz range. these are the highest signal - to - noise measurements at these frequencies. the peak optical depth of the c $ \ alpha $ lines changes considerably over the 11 - 33 mhz range, with the peak optical depth decreasing from 4 $ \ times10 ^ { - 3 } $ at 33 mhz to 2 $ \ times10 ^ { - 3 } $ at 11 mhz, while the line width increases from 20 km s $ ^ { - 1 } $ to 150 km s $ ^ { - 1 } $. the combined change in peak optical depth and line width results in a roughly constant integrated optical depth. we interpret this as carbon atoms close to local thermodynamic equilibrium. in this work we focus on how the 11 - 33 mhz carbon radio recombination lines can be used to determine the gas physical conditions. we find that the ratio of the carbon radio recombination lines to that of the 158 $ \ mu $ m [ cii ] fine - structure line is a good thermometer, while the ratio between low frequency carbon radio recombination lines provides a good barometer. by combining the temperature and pressure constraints with those derived from the line width we are able to constrain the gas properties ( electron temperature and density ) and radiation field intensity. given the 1 $ \ sigma $ uncertainties in our measurements these are : $ t _ { e } \ approx68 $ - $ 98 $ k, $ n _ { e } \ approx0. 02 $ - $ 0. 035 $ cm $ ^ { - 3 } $ and $ t _ { r, 100 } \ approx1500 $ - $ 1650 $ k. despite challenging rfi and ionospheric conditions, our work demonstrates that observations of carbon radio recombination lines in the 10 - 33 mhz range can provide insight into the gas conditions.
arxiv:1701.08802
providing wellbeing for all while safeguarding planetary boundaries may require governments to pursue post - growth policies. previous empirical studies of sustainable wellbeing initiatives investigating enablers of and barriers to post - growth policymaking are either based on a small number of empirical cases or lack an explicit analytical framework. to better understand how post - growth policymaking could be fostered, we investigate 29 initiatives across governance scales in europe, new zealand, and canada. we apply a framework that distinguishes polity, politics, and policy to analyze the data. we find that the main enablers and barriers relate to the economic growth paradigm, the organization of government, attitudes towards policymaking, political strategies, and policy tools and outcomes. engaging in positive framings of post - growth visions to change narratives and building broad - based alliances could act as drivers. however, initiatives face a tension between the need to connect to broad audiences and the risk of co - optation by depoliticization.
arxiv:2501.17600
modifications of general relativity often involve coupling additional scalar fields to the ricci scalar, leading to scalar - tensor theories of brans - dicke type. if the additional scalar fields are light, they can give rise to long - range fifth forces, which are subject to stringent constraints from local tests of gravity. in this talk, we show that yukawa - like fifth forces only arise for the standard model ( sm ) due to a mass mixing of the additional scalar with the higgs field, and we emphasise the pivotal role played by discrete and continuous symmetry breaking. quite remarkably, if one assumes that sufficiently light, non - minimally coupled scalar fields exist in nature, the non - observation of fifth forces has the potential to tell us about the structure of the sm higgs sector and the origin of its symmetry breaking. moreover, with these observations, we argue that certain classes of scalar - tensor theories are, up to and including their dimension - four operators, equivalent to higgs - portal theories. in this way, ultra - light dark matter models may also exhibit fifth - force phenomenology, and we consider the impact on the dynamics of disk galaxies as an example.
arxiv:1903.09603
the pressing game on black - and - white graphs is the following : given a graph $ g ( v, e ) $ with its vertices colored black and white, any black vertex $ v $ can be pressed, which has the following effect : ( a ) all neighbors of $ v $ change color, i. e. white neighbors become black and vice versa, ( b ) all pairs of neighbors of $ v $ change connectivity, i. e. connected pairs become unconnected, unconnected ones become connected, ( c ) and finally, $ v $ becomes a separated white vertex. the aim of the game is to transform $ g $ into an all white, empty graph. it is a known result that the all white empty graph is reachable in the pressing game if each component of $ g $ contains at least one black vertex, and for a fixed graph, any successful transformation has the same number of pressed vertices. the pressing game conjecture is that any successful pressing path can be transformed into any other successful pressing path with small alterations. here we prove the conjecture for linear graphs. the connection to genome rearrangement and sorting signed permutations with reversals is also discussed.
arxiv:1303.6799
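the three effects of a press are easy to implement directly, which makes the game concrete. below is a small sketch of one move on an adjacency - set representation ; vertex names and the 'b' / 'w' colour encoding are arbitrary choices.

```python
def press(adj, colors, v):
    """one move of the pressing game. adj maps each vertex to its set of
    neighbours, colors maps each vertex to 'b' (black) or 'w' (white).
    pressing the black vertex v: (a) flips the colour of every neighbour,
    (b) complements adjacency among the neighbours, (c) leaves v as a
    separated white vertex."""
    assert colors[v] == 'b', "only black vertices may be pressed"
    nbrs = sorted(adj[v])
    for u in nbrs:                            # (a) flip neighbour colours
        colors[u] = 'w' if colors[u] == 'b' else 'b'
    for i in range(len(nbrs)):                # (b) complement neighbour pairs
        for j in range(i + 1, len(nbrs)):
            a, b = nbrs[i], nbrs[j]
            if b in adj[a]:
                adj[a].discard(b)
                adj[b].discard(a)
            else:
                adj[a].add(b)
                adj[b].add(a)
    for u in nbrs:                            # (c) isolate v, make it white
        adj[u].discard(v)
    adj[v] = set()
    colors[v] = 'w'
```

pressing vertex 2 on the path 1 - 2 - 3 ( with 2 black, others white ) turns 1 and 3 black, joins them by a new edge, and leaves 2 white and isolated.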
we use a series of statistical techniques to compare the clustering of samples of iras galaxies selected on the basis of their far - infrared emission temperature, to see whether a temperature - dependent effect, such as might be produced by interaction - induced star formation, could be responsible for the increase in clustering strength with redshift in the qdot redshift survey that has been reported by several authors. the temperature - luminosity relation for iras galaxies means that warm and cool samples drawn from a flux - limited sample like qdot will sample quite different volumes of space. to overcome this problem, and to distinguish truly temperature - dependent results from those depending directly on the volume of space sampled, we consider a pair of samples of warmer and cooler galaxies with matched redshift distributions, as well as pairs of samples selected using a simple temperature cut.... ( abstract shortened ).... we conclude that there may be a temperature - dependent component to the observed increase in the clustering strength of qdot galaxies with redshift, but that it is less important than a sampling effect, which reflects the local cosmography, rather than the physical properties of the galaxies and their environment. we discuss the implications of this work for the use of iras galaxies as probes of large - scale structure and for models accounting for their far - infrared emission by interaction - induced star formation.
arxiv:astro-ph/9511028
entanglement - based networks ( ebns ) enable general - purpose quantum communication by combining entanglement and entanglement swapping in a sequence that addresses the challenge, inherent to quantum technologies, of achieving long - distance communication with high fidelity. in this context, entanglement distribution refers to the process by which two nodes in a quantum network share an entangled state, serving as a fundamental resource for communication. in this paper, we study the performance of entanglement distribution mechanisms over a physical topology comprising end nodes and quantum switches, which are crucial for constructing large - scale links. to this end, we implemented a switch - based topology in netsquid and conducted a series of simulation experiments to gain insight into practical and realistic quantum network engineering challenges. these challenges include, on the one hand, aspects related to quantum technology, such as memory technology, gate durations, and noise ; and, on the other hand, factors associated with the distribution process, such as the number of switches, distances, purification, and error correction. all these factors significantly impact the end - to - end fidelity across a path, which supports communication between two quantum nodes. we use these experiments to derive some guidelines towards the design and configuration of future ebns.
arxiv:2501.03210
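the impact of path length on end - to - end fidelity can be illustrated with a toy model ( a minimal sketch, not the netsquid setup used in the paper ) : if every elementary link is a werner state, entanglement swapping multiplies the werner parameters, so fidelity decays with the number of links.

```python
def end_to_end_werner_fidelity(link_fidelities):
    """Toy swapping model: each elementary link is a Werner state
    with fidelity F; swapping multiplies the Werner parameters
    w = (4F - 1) / 3, so end-to-end fidelity decays with path length."""
    w = 1.0
    for f in link_fidelities:
        w *= (4.0 * f - 1.0) / 3.0
    return (3.0 * w + 1.0) / 4.0
```

for example, two 0.95 - fidelity links already drop the end - to - end fidelity to about 0.90, before any purification or error correction is applied.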
the most important part of model selection and hyperparameter tuning is the evaluation of model performance. the most popular measures, such as auc, f1, acc for binary classification, or rmse, mad for regression, or cross - entropy for multilabel classification, share two common weaknesses. the first is that they are not on an interval scale : the difference in performance between two models has no direct interpretation, and it makes no sense to compare such differences between datasets. the second is that for k - fold cross - validation, the model performance is in most cases calculated as an average performance over the folds, which neglects the information about how stable the performance is across folds. in this talk, we introduce a new epp rating system for predictive models. we also demonstrate numerous advantages of this system. first, differences in epp scores have a probabilistic interpretation ; based on it we can assess the probability that one model will achieve better performance than another. second, epp scores can be directly compared between datasets. third, they can be used for navigated hyperparameter tuning and model selection. fourth, we can create embeddings for datasets based on epp scores.
arxiv:1908.09213
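the probabilistic reading of epp differences can be sketched as follows ( a minimal sketch assuming, as in elo - style rating systems, that the score difference acts as log - odds ; the exact link function used by epp may differ ) :

```python
import math

def prob_better(epp_a: float, epp_b: float) -> float:
    """Probability that model A beats model B on a randomly chosen
    fold, treating the EPP score difference as log-odds (Elo-style)."""
    return 1.0 / (1.0 + math.exp(epp_b - epp_a))
```

equal scores give 0.5, and a one - point epp lead gives roughly a 0.73 chance of winning a fold ; by construction the two win probabilities sum to one.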
imaging using interferometer arrays based on the van cittert - zernike theorem has been widely used in astronomical observation. recently it was shown that superresolution can be achieved in this system for imaging two weak thermal point sources. using quantum estimation theory, we consider the fundamental quantum limit of resolving the transverse separation of two strong thermal point sources using interferometer arrays, and show that the resolution is not limited by the longest baseline. we propose measurement techniques using linear beam splitters and photon - number - resolving detection to achieve our bound. our results demonstrate that superresolution for resolving two thermal point sources of any strength can be achieved in interferometer arrays.
arxiv:2012.14026
we investigate a family of integrals involving modified bessel functions that arise in the context of neutrino scattering. recursive formulas are derived for evaluating these integrals and their asymptotic expansions are computed. we prove in certain cases that the asymptotic expansion yields the exact result after a finite number of terms. in each of these cases we derive a formula that bounds the order at which the expansion terminates. the method of calculation developed in this paper is applicable to similar families of integrals that involve bessel or modified bessel functions.
arxiv:1509.06308
the national institutes of health ' s ( nih ) human biomolecular atlas program ( hubmap ) aims to create a comprehensive high - resolution atlas of all the cells in the healthy human body. multiple laboratories across the united states are collecting tissue specimens from different organs of donors who vary in sex, age, and body size. integrating and harmonizing the data derived from these samples and ' mapping ' them into a common three - dimensional ( 3d ) space is a major challenge. the key to making this possible is a ' common coordinate framework ' ( ccf ), which provides a semantically annotated, 3d reference system for the entire body. the ccf enables contributors to hubmap to ' register ' specimens and datasets within a common spatial reference system, and it supports a standardized way to query and ' explore ' data in a spatially and semantically explicit manner. [... ] this paper describes the construction and usage of a ccf for the human body and its reference implementation in hubmap. the ccf consists of ( 1 ) a ccf clinical ontology, which provides metadata about the specimen and donor ( the ' who ' ) ; ( 2 ) a ccf semantic ontology, which describes ' what ' part of the body a sample came from and details anatomical structures, cell types, and biomarkers ( asct + b ) ; and ( 3 ) a ccf spatial ontology, which indicates ' where ' a tissue sample is located in a 3d coordinate system. an initial version of all three ccf ontologies has been implemented for the first hubmap portal release. it was successfully used by tissue mapping centers to semantically annotate and spatially register 48 kidney and spleen tissue blocks. the blocks can be queried and explored in their clinical, semantic, and spatial context via the ccf user interface in the hubmap portal.
arxiv:2007.14474
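the who / what / where split across the three ccf ontologies maps naturally onto a simple record type ; the field names below are hypothetical illustrations, not the actual ccf schema :

```python
from dataclasses import dataclass

@dataclass
class TissueBlockRegistration:
    # clinical ontology -- the "who" (donor metadata)
    donor_sex: str
    donor_age: int
    # semantic ontology -- the "what" (ASCT+B terms)
    organ: str
    anatomical_structure: str
    # spatial ontology -- the "where" (mm, in a reference-organ frame)
    x_mm: float
    y_mm: float
    z_mm: float
```

a registered tissue block then carries all three annotation layers in one record, which is what allows the hubmap portal to query specimens clinically, semantically, and spatially at once.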
we present examples of bicontinuous interfacially jammed emulsion gels ( " bijels " ) with a designed gradient in the channel size along the sample. these samples are created by quenching binary fluids which have a gradient in particle concentration along the sample, since the channel size is determined by the local particle concentration. a gradient in local particle concentration is achieved using a two - stage loading process, with different particle volume fractions in each stage. confocal microscopy and image analysis were used to quantitatively measure the channel size of the bijels. bijels with a gradient in channel size of up to 2. 8 % / mm have been created. such tailored soft materials could act as templates for energy materials optimised for both high ionic transport rates ( high power ) and high interfacial area ( high energy density ), potentially making them useful in novel energy applications.
arxiv:2110.11988
let $ m $ be a cartan - hadamard manifold with sectional curvature satisfying $ - b ^ 2 \ leq k \ leq - a ^ 2 < 0 $, $ b \ geq a > 0. $ denote by $ \ partial _ { \ infty } m $ the asymptotic boundary of $ m $ and by $ \ bar m : = m \ cup \ partial _ \ infty m $ the geometric compactification of $ m $ with the cone topology. we investigate here the following question : given a finite number of points $ p _ { 1 },..., p _ { k } \ in \ partial _ \ infty m, $ if $ u \ in c ^ { \ infty } ( m ) \ cap c ^ { 0 } \ left ( \ bar { m } \ backslash \ left \ { p _ { 1 },..., p _ { k } \ right \ } \ right ) $ satisfies a pde $ \ mathcal q ( u ) = 0 $ in $ m $ and if $ u | _ { \ partial _ \ infty m \ setminus \ left \ { p _ { 1 },..., p _ { k } \ right \ } } $ extends continuously to $ p _ { i }, $ $ i = 1,..., k, $ can one conclude that $ u \ in c ^ { 0 } \ left ( \ bar { m } \ right )? $ when $ \ dim m = 2 $, for $ \ mathcal q $ belonging to a linearly convex space of quasi - linear elliptic operators $ \ mathcal { s } $ of the form $ $ \ mathcal { q } ( u ) = \ operatorname { div } \ left ( \ frac { \ mathcal { a } ( | \ nabla u | ) } { | \ nabla u | } \ nabla u \ right ) = 0, $ $ where $ \ mathcal { a } $ satisfies some structural conditions, then the answer is yes provided that $ \ mathcal { a } $ has a certain asymptotic growth. this condition includes, besides the minimal graph pde, a class of minimal type pdes. in the hyperbolic space $ \ mathbb { h } ^ n $, $ n \ ge 2, $ we are able to give a complete answer :
arxiv:1601.00361
we prove a rank 1 version of the hanna neumann theorem. this shows that every one - relator 2 - complex without torsion has the nonpositive immersion property. the proof generalizes to staggered and reducible 2 - complexes.
arxiv:1410.2579
we study the phase diagram of a one - dimensional balls - in - boxes ( bib ) model that has been proposed as an effective model for the spatial - volume dynamics of ( 2 + 1 ) - dimensional causal dynamical triangulations ( cdt ). the latter is a statistical model of random geometries and a candidate for a nonperturbative formulation of quantum gravity, and it is known to have an interesting phase diagram, in particular including a phase of extended geometry with classical properties. our results corroborate a previous analysis suggesting that a particular type of potential is needed in the bib model in order to reproduce the droplet condensation typical of the extended phase of cdt. since such a potential can be obtained by a minisuperspace reduction of a ( 2 + 1 ) - dimensional gravity theory of the ho \ v { r } ava - lifshitz type, our result strengthens the link between cdt and ho \ v { r } ava - lifshitz gravity.
arxiv:1612.09533
electronic correlations arise from the competition between the electrons ' kinetic and coulomb interaction energy and give rise to a rich phase diagram and many emergent quasiparticles. the binding of doubly - occupied and empty sites into a doublon - holon exciton is an example of this in the hubbard model. unlike traditional excitons in semiconductors, in the hubbard model it is the kinetic energy which provides the binding energy. upon doping, we find the emergence of exciton complexes, such as a holon - doublon - holon trion. the appearance of these low - lying collective excitations make screening more effective in the doped system. as a result, hubbard - based modelling of correlated materials should use different values of $ u $ for the doped system and the insulating parent compound, which we illustrate using the cuprates as an example.
arxiv:2409.05640
in this paper we establish an attainability result for the minimum time function of a control problem in the space of probability measures endowed with wasserstein distance. the dynamics is provided by a suitable controlled continuity equation, where we impose a nonlocal nonholonomic constraint on the driving vector field, which is assumed to be a borel selection of a given set - valued map. this model can be used to describe at a macroscopic level a so - called \ emph { multiagent system } made of several possible interacting agents.
arxiv:1904.10933
we propose a focusing method of intense midair ultrasound out of ultrasonic emission from a single flexurally vibrating square plate partially covered with a purposely designed amplitude mask. many applications relying on nonlinear acoustic effects, such as the radiation force employed in acoustic levitation, have been devised. for those applications, focused intense airborne ultrasound is conventionally formed using phased arrays of transducers or sound sources with specific fabricated shapes. however, the former strategies are considerably costly, and the latter may require minute three - dimensional fabrication processes, which both hinder their utility, especially for the construction of a large ultrasound emitting aperture. our method offers a possible solution for this, where the amplitude masks are designed in a fashion similar to fresnel - zone - plate design, but according to the positions of nodes and antinodes of the vibrating plate that are measured beforehand. we experimentally demonstrate the successful formation of a midair ultrasound focus at a desired position. our method only requires a monolithic plate, a driving transducer under the plate, and an amplitude mask fabricated out of laser machining processes of an acrylic plate. magnification of the spatial scale of ultrasound apertures based on our method is much more readily and affordably achieved than with conventional methods, which will lead to new midair ultrasound applications with a whole - room workspace.
arxiv:2406.00996
some elaborations regarding the hilbert and fourier transforms of the fermi function are presented. the main result shows that the hilbert transform of the difference of two fermi functions has an analytical expression in terms of the $ \ psi $ ( digamma ) function, while its fourier transform is expressed by means of elementary functions. moreover, an integral involving the product of the difference of two fermi functions with its hilbert transform is evaluated analytically. these findings are of fundamental importance in discussing the transport properties of electronic systems.
arxiv:1303.6206
interpolation inequalities in triebel - lizorkin - lorentz spaces and besov - lorentz spaces are studied for both inhomogeneous and homogeneous cases. first we establish interpolation inequalities under quite general assumptions on the parameters of the function spaces. several results on necessary conditions are also provided. next, utilizing the interpolation inequalities together with some embedding results, we prove gagliardo - nirenberg inequalities for fractional derivatives in lorentz spaces, which do hold even for the limiting case when one of the parameters is equal to 1 or $ \ infty $.
arxiv:2109.07518
the 2 + 1 dimensional quantum lifshitz model can be generalised to a class of higher dimensional free field theories that exhibit lifshitz scaling. when the dynamical critical exponent equals the number of spatial dimensions, equal time correlation functions of scaling operators in the generalised quantum lifshitz model are given by a d - dimensional higher - derivative conformal field theory. autocorrelation functions in the generalised quantum lifshitz model in any number of dimensions can on the other hand be expressed in terms of autocorrelation functions of a two - dimensional conformal field theory. this also holds for autocorrelation functions in a strongly coupled lifshitz field theory with a holographic dual of einstein - maxwell - dilaton type. the map to a two - dimensional conformal field theory extends to autocorrelation functions in thermal states and out - of - equilibrium states preserving symmetry under spatial translations and rotations in both types of lifshitz models. furthermore, the spectrum of quasinormal modes of scalar field perturbations in lifshitz black hole backgrounds can be obtained analytically at low spatial momenta and exhibits a linear dispersion relation at z = d. at high momentum, the mode spectrum can be obtained in a wkb approximation and displays very different behaviour compared to holographic duals of conformal field theories. this has implications for thermalisation in strongly coupled lifshitz field theories with z > 1.
arxiv:1611.09371
in this paper, we present the directly denoising diffusion model ( dddm ) : a simple and generic approach for generating realistic images with few - step sampling, while multistep sampling is still preserved for better performance. dddms require neither delicately designed samplers nor distillation from pre - trained models. dddms train the diffusion model conditioned on an estimated target that was generated from previous training iterations of its own. to generate images, samples generated from the previous time step are also taken into consideration, guiding the generation process iteratively. we further propose pseudo - lpips, a novel metric loss that is more robust to various hyperparameter values. despite its simplicity, the proposed approach can achieve strong performance on benchmark datasets. our model achieves fid scores of 2. 57 and 2. 33 on cifar - 10 in one - step and two - step sampling respectively, surpassing those obtained from gans and distillation - based models. by extending the sampling to 1000 steps, we further reduce the fid score to 1. 79, aligning with state - of - the - art methods in the literature. for imagenet 64x64, our approach stands as a competitive contender against leading models.
arxiv:2405.13540
initial development and subsequent calibration of discrete event simulation models for complex systems require accurate identification of dynamically changing process characteristics. existing data driven change point methods ( dd - cpd ) assume changes are extraneous to the system, thus cannot utilize available process knowledge. this work proposes a unified framework for process - driven multi - variate change point detection ( pd - cpd ) by combining change point detection models with machine learning and process - driven simulation modeling. the pd - cpd, after initializing with dd - cpd ' s change point ( s ), uses simulation models to generate system level outputs as time - series data streams which are then used to train neural network models to predict system characteristics and change points. the accuracy of the predictive models measures the likelihood that the actual process data conforms to the simulated change points in system characteristics. pd - cpd iteratively optimizes change points by repeating simulation and predictive model building steps until the set of change point ( s ) with the maximum likelihood is identified. using an emergency department case study, we show that pd - cpd significantly improves change point detection accuracy over dd - cpd estimates and is able to detect actual change points.
arxiv:2005.05385
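the data - driven seeding step can be illustrated with a minimal single - change - point scan ( a least - squares mean - shift detector ; the actual dd - cpd and pd - cpd models are considerably richer ) :

```python
def mean_shift_changepoint(xs):
    """Least-squares scan for a single mean-shift change point:
    return the split index k minimizing the total within-segment
    squared error over the two segments xs[:k] and xs[k:]."""
    best_k, best_cost = 1, float("inf")
    for k in range(1, len(xs)):
        left, right = xs[:k], xs[k:]
        m_l = sum(left) / len(left)
        m_r = sum(right) / len(right)
        cost = (sum((x - m_l) ** 2 for x in left)
                + sum((x - m_r) ** 2 for x in right))
        if cost < best_cost:
            best_cost, best_k = cost, k
    return best_k
```

pd - cpd would then take such an estimate as its starting point and iterate simulation and predictive - model building to refine it.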
we have performed first - principles calculations using density functional theory on a kagome lattice model with a chiral spin state, as a representative example demonstrating significant longitudinal and transverse thermoelectric properties. the results revealed that the saddle - point - type van hove singularity ( vhs ) enhances thermoelectric effects. the longitudinal thermoelectric conductivity $ \ alpha _ { xx } $ was large at the chemical potentials tuned close to the band at the symmetry points, k ( lower band edge ), $ \ gamma $ ( upper band edge ), and m ( saddle point ), where the vhss of the density of states ( dos ) were at the corresponding band energies. the transverse thermoelectric conductivity $ \ alpha _ { xy } $ was large at the chemical potential of saddle - point - type vhs. a large anomalous nernst coefficient of about 10 $ \ mu $ v / k at 50 k was expected.
arxiv:2309.11728
we show that instanton calculations in qcd become theoretically well defined in the gluon saturation environment which suppresses large size instantons. the effective cutoff scale is determined by the inverse of the saturation scale. we concentrate on two most important cases : the small - x tail of a gluon distribution of a high energy hadron or a large nucleus and the central rapidity region in a high energy hadronic or heavy ion collision. in the saturation regime the gluon density in a single large ultrarelativistic nucleus is high and gluonic fields are given by the classical solutions of the equations of motion. we show that these strong classical fields do not affect the density of instantons in the nuclear wave function compared to the instanton density in the vacuum. a classical solution with non - trivial topological charge is found for the gluon field of a single nucleus at the lowest order in the instanton perturbation theory. in the case of ultrarelativistic heavy ion collisions a strong classical gluonic field is produced in the central rapidity region. we demonstrate that this field introduces a suppression factor of exp { - c \ rho ^ 4 q _ s ^ 4 / [ 8 \ alpha ^ 2 n _ c ( q _ s \ tau ) ^ 2 ] } in the instanton size distribution, where q _ s is the saturation scale of both ( identical ) nuclei, \ tau is the proper time and c = 1 is the gluon liberation coefficient. this factor suggests that gluonic saturation effects at the early stages of nuclear collisions regulate the instanton size distribution in the infrared region and make the instanton density finite by suppressing large size instantons.
arxiv:hep-ph/0106248
we compute the entanglement cost of several families of bipartite mixed states, including arbitrary mixtures of two bell states. this is achieved by developing a technique that allows us to ascertain the additivity of the entanglement of formation for any state supported on specific subspaces. as a side result, the proof of the irreversibility in asymptotic local manipulations of entanglement is extended to two - qubit systems.
arxiv:quant-ph/0112131
magnetic properties of the $ s = 1 / 2 $ antiferromagnet $ \ alpha $ - cu $ _ { 2 } $ v $ _ { 2 } $ o $ _ { 7 } $ have been studied using magnetization, quantum monte carlo ( qmc ) simulations, and neutron diffraction. magnetic susceptibility shows a broad peak at $ \ sim50 $ ~ k followed by an abrupt increase indicative of a phase transition to a magnetically ordered state at $ t _ { n } $ = 33. 4 ( 1 ) k. above $ t _ n $, a fit to the curie - weiss law gives a curie - weiss temperature of $ \ theta = - 73 ( 1 ) $ ~ k suggesting the dominant antiferromagnetic coupling. the result of the qmc calculations on the helical - honeycomb spin network with two antiferromagnetic exchange interactions $ j _ 1 $ and $ j _ 2 $ provides a better fit to the susceptibility than the previously proposed spin - chain model. two sets of the coupling parameters $ j _ 1 : j _ 2 = 1 : 0. 45 $ with $ j _ 1 = 5. 79 ( 1 ) $ ~ mev and $ 0. 65 : 1 $ with $ j _ 2 = 6. 31 ( 1 ) $ ~ mev yield equally good fits down to $ \ sim t _ n $. below $ t _ { n } $, weak ferromagnetism due to spin canting is observed. the canting is caused by the dzyaloshinskii - moriya interaction with an estimated $ bc $ - plane component $ \ left | d _ p \ right | $ $ \ simeq0. 14j _ 1 $. neutron diffraction reveals that the $ s = 1 / 2 $ cu $ ^ { 2 + } $ spins antiferromagnetically align in the $ fd ' d ' 2 $ magnetic space group. the ordered moment of 0. 93 ( 9 ) ~ $ \ mu _ b $ is predominantly along the crystallographic $ a $ - axis.
arxiv:1502.02769
non - autoregressive ( nar ) text generation, which greatly reduces inference latency but has to sacrifice generation accuracy, has attracted much attention in the field of natural language processing. recently, diffusion models, a class of latent variable generative models, have been introduced into nar text generation, showing an improved text generation quality. in this survey, we review the recent progress in diffusion models for nar text generation. as background, we first present the general definition of diffusion models and the text diffusion models, and then discuss their merits for nar generation. as the core content, we further introduce two mainstream diffusion models in existing work on text diffusion, and review the key designs of the diffusion process. moreover, we discuss the utilization of pre - trained language models ( plms ) for text diffusion models and introduce optimization techniques for text data. finally, we discuss several promising directions and conclude this paper. our survey aims to provide researchers with a systematic reference of related research on text diffusion models for nar generation. we present our collection of text diffusion models at https://github.com/rucaibox/awesome-text-diffusion-models.
arxiv:2303.06574
superlattice ( sl ) thin films composed of refractory ceramics unite extremely high hardness and enhanced fracture toughness, a combination of properties that is often mutually exclusive. while the hardness enhancement obtained when two materials form a superlattice is well described by existing models based on dislocation mobility, the underlying mechanisms behind the increase in fracture toughness are yet to be unraveled. here we provide a model based on linear elasticity theory to predict the fracture toughness enhancement in ( semi - ) epitaxial nanolayers due to coherency stresses and the formation of misfit dislocations. we exemplarily study a superlattice structure composed of two cubic transition metal nitrides ( tin, crn ) on a mgo ( 100 ) single - crystal substrate. minimization of the overall strain energy, each time a new layer is added to the nanolayered stack, allows estimating the density of misfit dislocations formed at the interfaces. the evolving coherency stresses, which are partly relaxed by the misfit dislocations, are then used to calculate the apparent fracture toughness of the respective sl architectures by applying the weight function method. the results show that the critical stress intensity increases steeply with increasing bilayer period for very thin ( essentially dislocation - free ) sls, before the k _ ic values decline more gently along with the formation of misfit dislocations. the characteristic k _ ic vs. bilayer - period dependence nicely matches experimental trends. importantly, all critical stress intensity values of the superlattice films clearly exceed the intrinsic fracture toughness of the constituting layer materials, evincing the importance of coherency stresses for increasing the crack growth resistance.
arxiv:2008.13652
we present a simple physically motivated picture for the mildly non - linear regime of structure formation, which captures the effects of the bulk flows. we apply this picture to develop a method to significantly reduce the sample variance in cosmological n - body simulations at the scales relevant to the baryon acoustic oscillations ( bao ). the results presented in this paper will allow for a speed - up of an order of magnitude ( or more ) in the scanning of the cosmological parameter space using n - body simulations for studies which require a good handle of the mildly non - linear regime, such as those targeting the bao. using this physical picture we develop a simple formula, which allows for the rapid calculation of the mildly non - linear matter power spectrum to percent level accuracy, and for robust estimation of the bao scale.
arxiv:1109.4939
we present measurements of elliptic flow ( $ v _ { 2 } $ ) of $ k _ { s } ^ { 0 } $, $ \ lambda $, $ \ bar { \ lambda } $, $ \ phi $, $ \ xi ^ { - } $, $ \ overline { \ xi } ^ { + } $, and $ \ omega ^ { - } $ + $ \ overline { \ omega } ^ { + } $ at mid - rapidity ( $ | \ eta | < $ 1. 0 ) in isobar collisions ( $ ^ { 96 } _ { 44 } $ ru + $ ^ { 96 } _ { 44 } $ ru and $ ^ { 96 } _ { 40 } $ zr + $ ^ { 96 } _ { 40 } $ zr ) at $ \ sqrt { s _ { \ mathrm { nn } } } $ = 200 gev. the centrality and transverse momentum ( $ p _ { \ mathrm { t } } $ ) dependence of elliptic flow is presented. the number of constituent quark ( ncq ) scaling of $ v _ { 2 } $ in isobar collisions is discussed. $ p _ { t } $ - integrated elliptic flow ( $ \ left \ langle v _ { 2 } \ right \ rangle $ ) is observed to increase from central to peripheral collisions. the ratio of $ \ left \ langle v _ { 2 } \ right \ rangle $ between the two isobars shows a deviation from unity for strange hadrons ( $ k _ { s } ^ { 0 } $, $ \ lambda $ and $ \ bar { \ lambda } $ ) indicating a difference in nuclear structure and deformation. a system size dependence of strange hadron $ v _ { 2 } $ at high $ p _ { t } $ is observed among ru + ru, zr + zr, cu + cu, au + au, and u + u systems. a multi - phase transport ( ampt ) model with string melting ( sm ) describes the experimental data well in the measured $ p _ { \ mathrm { t } } $ range for isobar collisions at $ \ sqrt { s _ { \ mathrm { nn } } } $ = 200 gev.
arxiv:2311.09698
single - molecule fluorescence techniques have revolutionized our ability to study proteins. however, the presence of a fluorescent label can alter the protein structure and / or modify its reaction with other species. to avoid the need for a fluorescent label, the intrinsic autofluorescence of proteins in the ultraviolet offers the benefits of fluorescence techniques without introducing the labelling drawbacks. unfortunately, the low autofluorescence brightness of proteins has greatly challenged single molecule detection so far. here we introduce optical horn antennas, a dedicated nanophotonic platform enabling the label - free detection of single proteins in the uv. this design combines fluorescence plasmonic enhancement, efficient collection up to 85 { \ textdegree } angle and background screening. we detect the uv autofluorescence from immobilized and diffusing single proteins, and monitor protein unfolding and dissociation upon denaturation. optical horn antennas open up a unique and promising form of fluorescence spectroscopy to investigate single proteins in their native states in real time.
arxiv:2204.02807
detecting out - of - distribution ( ood ) inputs is a principal task for ensuring the safety of deploying deep - neural - network classifiers in open - set scenarios. ood samples can be drawn from arbitrary distributions and exhibit deviations from in - distribution ( id ) data in various dimensions, such as foreground features ( e. g., objects in cifar100 images vs. those in cifar10 images ) and background features ( e. g., textural images vs. objects in cifar10 ). existing methods can confound foreground and background features in training, failing to utilize the background features for ood detection. this paper considers the importance of feature disentanglement in out - of - distribution detection and proposes the simultaneous exploitation of both foreground and background features to support the detection of ood inputs. to this end, we propose a novel framework that first disentangles foreground and background features from id training samples via a dense prediction approach, and then learns a new classifier that can evaluate the ood scores of test images from both foreground and background features. it is a generic framework that allows for a seamless combination with various existing ood detection methods. extensive experiments show that our approach 1 ) can substantially enhance the performance of four different state - of - the - art ( sota ) ood detection methods on multiple widely - used ood datasets with diverse background features, and 2 ) achieves new sota performance on these benchmarks.
arxiv:2303.08727
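the two - view scoring idea can be sketched as follows ( hypothetical score fusion and threshold calibration ; the paper's dense - prediction disentanglement and classifier are not reproduced here ) :

```python
def fused_ood_score(fg_score: float, bg_score: float) -> float:
    """A sample is suspicious if EITHER the foreground or the
    background view looks anomalous (higher score = more OOD)."""
    return max(fg_score, bg_score)

def threshold_at_fpr(id_scores, fpr=0.05):
    """Pick the detection threshold so that at most a fraction `fpr`
    of the in-distribution calibration scores exceed it."""
    s = sorted(id_scores)
    k = int((1.0 - fpr) * len(s))
    return s[min(k, len(s) - 1)]
```

at test time, a sample whose fused score exceeds the calibrated threshold is flagged as ood ; any existing single - view ood scorer can be plugged into either view.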
the reactions pp - > pf ( x0 ) ps, where x0 is observed decaying to eta etaprime and etaprime etaprime, have been studied at 450 gev / c. this is the first time that these channels have been observed in central production and only the second time that the etaprime etaprime channel has been observed in any production mechanism. in the eta etaprime channel there is evidence for the f0 ( 1500 ) and a peak at 1. 95 gev. the etaprime etaprime channel shows a peak at threshold which is compatible with having jpc = 2 + + and spin projection jz = 0.
arxiv:hep-ex/9911041
in this article, we consider the number of collisions of three independent simple random walks on a subgraph of the two - dimensional square lattice obtained by removing all horizontal edges with vertical coordinate not equal to 0 and then, for $ n \ in \ mathbb { z } $, restricting the vertical segment of the graph located at horizontal coordinate $ n $ to the interval $ \ { 0, 1, \ dots, \ log ^ { \ alpha } ( | n | \ vee 1 ) \ } $. specifically, we show the following phase transition : when $ \ alpha \ leq 1 $, the three random walks collide infinitely many times almost - surely, whereas when $ \ alpha > 1 $, they collide only finitely many times almost - surely. this is a variation of a result of barlow, peres and sousi, who showed a similar phase transition for two random walks when the vertical segments are truncated at height $ | n | ^ { \ alpha } $.
arxiv:2410.04882
an integral homology theory on the category of undirected reflexive graphs was constructed in [ 2 ]. a geometrical method to understand the behavior of $ 1 $ - and $ 2 $ - simplices under differential maps of the theory was developed in [ 3 ] and led us to an independent proof that the first homology group of any cycle graph is $ \ mathbb { z } $, as was proved before by a version of the hurewicz theorem defined and shown in [ 1 ] and [ 2 ]. in this work, we use the method of [ 3 ] to study the behavior of the first homology group of hamiltonian graphs. we find that $ h _ 1 ( g ) $ is torsion - free for any hamiltonian graph $ g $.
arxiv:1912.06603
many real oscillators are coupled to other oscillators and the coupling can affect the response of the oscillators to stimuli. we investigate phase response curves ( prcs ) of coupled oscillators. the prcs for two weakly coupled phase - locked oscillators are analytically obtained in terms of the prc for uncoupled oscillators and the coupling function of the system. through simulation and analytic methods, the prcs for globally coupled oscillators are also discussed.
arxiv:0809.3371
abhyankar defined an ideal to be hilbertian if its hilbert polynomial coincides with its hilbert function for all nonnegative integers. in 1984, he proved that the ideal of ( r + 1 ) - order minors of a generic p x q matrix is hilbertian. we give a different proof and a generalization to the schubert determinantal ideals introduced by fulton in 1992. our proof reduces to a simple upper bound for the castelnuovo - mumford regularity of these ideals. we further indicate the pervasiveness of the hilbertian property in schubert geometry.
arxiv:2305.12558
We present the mixed QCD-EW two-loop virtual amplitudes for neutral-current Drell-Yan production, one of the bottlenecks for the complete calculation of the NNLO mixed QCD-EW corrections. We present the computational details and the first steps towards their automation. We describe the evaluation of all the relevant two-loop Feynman integrals using analytical and semi-analytical methods and the subtraction of the universal infrared singularities, and present the numerical evaluation of the finite remainder.
arxiv:2208.03510
In this paper, the ageing behavior of suspensions of Laponite with varying salt concentration is investigated using rheological tools. It is observed that ageing is accompanied by an increase in the complex viscosity. Subsequent creep experiments performed at various ages showed damped oscillations in the strain. The characteristic timescale of the damped oscillations, the retardation time, showed a prominent decrease with the age of the system. However, this dependence weakens with an increase in the salt concentration, which is known to change the microstructure of the system from glass-like to gel-like. We postulate that a decrease in the retardation time can be represented as a decrease in the viscosity (friction) of the dissipative environment surrounding the arrested entities, which opposes elastic deformation of the system. We believe that ageing in a colloidal glass leads to greater ordering that enhances the relative spacing between the constituents, thereby reducing the frictional resistance. However, since a gel state is inherently different in structure (a fractal network) from a glass (disordered), ageing in the gel does not induce ordering. Consequently, we observe the inverse dependence of the retardation time on age becoming weaker with an increase in the salt concentration. We analyze these results from the perspective of the ageing dynamics of both the glass and gel states of Laponite suspensions.
arxiv:0708.0456
Stochastic gradient descent (SGD) is a well-known method for regression and classification tasks. However, it is an inherently sequential algorithm: at each step, the processing of the current example depends on the parameters learned from the previous examples. Prior approaches to parallelizing linear learners using SGD, such as Hogwild! and AllReduce, do not honor these dependencies across threads and thus can potentially suffer poor convergence rates and/or poor scalability. This paper proposes SymSGD, a parallel SGD algorithm that, to a first-order approximation, retains the sequential semantics of SGD. Each thread learns a local model in addition to a model combiner, which allows local models to be combined to produce the same result as what a sequential SGD would have produced. This paper evaluates SymSGD's accuracy and performance on 6 datasets on a shared-memory machine and shows up to an 11x speedup over our heavily optimized sequential baseline on 16 cores, and is, on average, 2.2x faster than Hogwild!.
arxiv:1705.08030
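The model-combiner idea can be sketched for least-squares SGD, where each update is affine in the weights: a thread can summarize its whole pass as a single affine map $w \mapsto Mw + c$ and later compose it with whatever model the previous thread produced. This is a minimal sketch under that linear-learner assumption (where the combination is exact); the paper's actual combiner approximates the combiner matrix with a randomized low-dimensional projection rather than storing it exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
n, dim, lr = 40, 3, 0.05                     # illustrative sizes (assumptions)
Xs = rng.normal(size=(n, dim))
ys = Xs @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=n)
data = list(zip(Xs, ys))

def sgd_pass(w, examples):
    # Plain sequential SGD on squared loss: w <- w - lr (x.w - y) x
    for x, y in examples:
        w = w - lr * (x @ w - y) * x
    return w

def combiner(examples):
    # Each update w <- (I - lr x x^T) w + lr y x is affine in w, so the
    # whole pass composes into one affine map w -> M w + c.
    M, c = np.eye(dim), np.zeros(dim)
    for x, y in examples:
        A = np.eye(dim) - lr * np.outer(x, x)
        M, c = A @ M, A @ c + lr * y * x
    return M, c

half = n // 2
w0 = np.zeros(dim)

w_seq = sgd_pass(w0, data)                   # sequential reference

# "Parallel": thread 1 runs the first half; thread 2, working from w0,
# summarizes the second half as (M2, c2) and is patched onto thread 1's result.
w1 = sgd_pass(w0, data[:half])
M2, c2 = combiner(data[half:])
w_par = M2 @ w1 + c2                         # matches the sequential result
```

Here the combination is exact because the learner is linear; SymSGD's contribution is making this cheap and approximate (to first order) for the general case.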
In this paper we give an exact relation between the Green's function in a scattering problem for a wave equation and the correlation of scattered plane waves. This general relation was proved in a special case by Sanchez-Sesma et al.
arxiv:1103.4450
A generalization of the quantum XOR gate is presented which operates in arbitrary-dimensional Hilbert spaces. Together with one-particle Fourier transforms, this gate is capable of performing a variety of tasks which are important for quantum information processing in arbitrary-dimensional Hilbert spaces. Among these tasks are the preparation of Bell states, quantum teleportation and quantum state purification. A physical realization of this generalized XOR gate is proposed which is based on non-linear optical elements.
arxiv:quant-ph/0008022
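One common form of a generalized XOR gate on a pair of $d$-level systems (an assumption here, not necessarily the paper's exact convention) is the permutation $|a, b\rangle \mapsto |a, (a+b) \bmod d\rangle$; combined with the one-particle Fourier transform it prepares a generalized Bell state, as the abstract describes. A minimal numerical sketch, with dimension $d = 3$ chosen arbitrarily:

```python
import numpy as np

d = 3  # qudit dimension (any d >= 2 works)

# Generalized XOR on two qudits: |a, b> -> |a, (a + b) mod d>,
# built as a permutation matrix on the d*d basis states.
X = np.zeros((d * d, d * d))
for a in range(d):
    for b in range(d):
        X[a * d + ((a + b) % d), a * d + b] = 1.0

# One-particle Fourier transform F[j, k] = exp(2*pi*i*j*k/d) / sqrt(d),
# applied to the first qudit only.
F = np.array([[np.exp(2j * np.pi * j * k / d) / np.sqrt(d)
               for k in range(d)] for j in range(d)])
F1 = np.kron(F, np.eye(d))

# Bell-state preparation: Fourier on qudit 1 of |0, 0>, then XOR.
psi0 = np.zeros(d * d)
psi0[0] = 1.0
bell = X @ (F1 @ psi0)   # (1/sqrt(d)) * sum_k |k, k>
```

The XOR gate is a permutation matrix, hence unitary, and the circuit yields the maximally entangled state $(1/\sqrt{d})\sum_k |k, k\rangle$, mirroring the qubit case of Hadamard followed by CNOT.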
A differential structure on lattices can be defined if the lattices are treated as models of noncommutative geometry. The detailed construction consists of specifying a generalized Dirac operator and a wedge product. A gauge potential and field strength tensor can be defined based on this differential structure. When an inner product is specified for differential forms, a classical action can be deduced for lattice gauge fields. Besides the familiar Wilson action being recovered, an additional term emerges, related to the non-unitarity of link variables and to loops spanning no area.
arxiv:hep-th/0101184
Large language models (LLMs) excel in natural language understanding, but their capability for complex mathematical reasoning over an amalgamation of structured tables and unstructured text is uncertain. This study explores LLMs' mathematical reasoning on four financial tabular question-answering datasets: TAT-QA, FinQA, ConvFinQA, and MultiHiertt. Through extensive experiments with various models and prompting techniques, we assess how LLMs adapt to complex tables and mathematical tasks. We focus on sensitivity to table complexity and on performance variations with an increasing number of arithmetic reasoning steps. The results provide insights into LLMs' capabilities and limitations in handling complex mathematical scenarios for semi-structured tables. Ultimately, we introduce a novel prompting technique tailored to semi-structured documents, matching or outperforming other baselines in performance while providing a nuanced understanding of LLMs' abilities for such a task.
arxiv:2402.11194
Dependent pattern matching is a key feature in dependently typed programming. However, there is a theory-practice disconnect: while many proof assistants implement pattern matching as primitive, theoretical presentations give semantics to pattern matching by elaborating to eliminators. Though theoretically convenient, eliminators can be awkward and verbose, particularly for complex combinations of patterns. This work aims to bridge the theory-practice gap by presenting a direct categorical semantics for pattern matching which does not elaborate to eliminators. This is achieved using sheaf theory to describe when sets of arrows (terms) can be amalgamated into a single arrow. We present a language with top-level dependent pattern matching, without specifying which sets of patterns are considered covering for a match. Then, we give a sufficient criterion for which pattern sets admit a sound model: patterns should be in the canonical coverage for the category of contexts. Finally, we use sheaf-theoretic saturation conditions to devise some allowable sets of patterns. We are able to express and exceed the status quo, giving semantics for datatype constructors, nested patterns, absurd patterns, propositional equality, and dot patterns.
arxiv:2501.18087
Spins in quantum dots can act as qubits for quantum computation. In this context we point out that spins on neighboring dots will experience an anisotropic form of the exchange coupling, called the Dzyaloshinskii-Moriya (DM) interaction, which mixes the spin singlet and triplet states. This will have an important effect on both qubit interactions and spin-dependent tunneling. We show that the interaction depends strongly on the direction of the external field, which gives an unambiguous signature of this effect. We further propose a new experiment using coupled quantum dots to detect and characterize the DM interaction.
arxiv:cond-mat/0601098
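The singlet-triplet mixing can be illustrated on two spin-1/2 sites with $H = J\,\mathbf{S}_1\!\cdot\!\mathbf{S}_2 + \mathbf{D}\cdot(\mathbf{S}_1\times\mathbf{S}_2)$. The coupling values below are arbitrary assumptions for illustration, not taken from the paper; the point is that with $D \neq 0$ the ground state acquires weight in both the singlet and the $S_z = 0$ triplet:

```python
import numpy as np

# Spin-1/2 operators and two-site embedding.
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

S1 = [np.kron(s, I2) for s in (sx, sy, sz)]
S2 = [np.kron(I2, s) for s in (sx, sy, sz)]

J, Dz = 1.0, 0.3   # illustrative couplings (assumptions), D along z

H = J * sum(S1[i] @ S2[i] for i in range(3))
H = H + Dz * (S1[0] @ S2[1] - S1[1] @ S2[0])   # Dz * (S1 x S2)_z

# Singlet and Sz=0 triplet states.
up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
singlet = (np.kron(up, dn) - np.kron(dn, up)) / np.sqrt(2)
triplet0 = (np.kron(up, dn) + np.kron(dn, up)) / np.sqrt(2)

vals, vecs = np.linalg.eigh(H)   # eigh sorts eigenvalues ascending
ground = vecs[:, 0]

w_s = abs(singlet @ ground) ** 2
w_t = abs(triplet0 @ ground) ** 2
# With Dz != 0 the ground state overlaps BOTH states (mostly singlet,
# with a small triplet admixture); at Dz = 0 the triplet weight vanishes.
print(w_s, w_t)
```

Diagonalizing the two-dimensional $\{|\!\uparrow\downarrow\rangle, |\!\downarrow\uparrow\rangle\}$ block shows the admixture angle is set by $\arctan(D_z/J)$, so the mixing grows with the DM strength.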
This study probes the phonetic and phonological knowledge of lexical tones in TTS models through two experiments. Controlled stimuli for testing tonal coarticulation and tone sandhi in Mandarin were fed into Tacotron 2 and WaveGlow to generate speech samples, which were subject to acoustic analysis and human evaluation. Results show that both the baseline Tacotron 2 and Tacotron 2 with BERT embeddings capture the surface tonal coarticulation patterns well but fail to consistently apply the tone-3 sandhi rule to novel sentences. Incorporating pre-trained BERT embeddings into Tacotron 2 improves naturalness and prosody performance and yields better generalization of tone-3 sandhi rules to novel complex sentences, although the overall accuracy for tone-3 sandhi was still low. Given that TTS models do capture some linguistic phenomena, it is argued that they can be used to generate and validate certain linguistic hypotheses. On the other hand, it is also suggested that linguistically informed stimuli should be included in the training and evaluation of TTS models.
arxiv:1912.10915