text | source |
|---|---|
We present a new method for inferring hidden Markov models from noisy time sequences without the necessity of assuming a model architecture, thus allowing for the detection of degenerate states. This is based on the statistical prediction techniques developed by Crutchfield et al., and generates so-called causal state models, equivalent to hidden Markov models. This method is applicable to any continuous data which clusters around discrete values and exhibits multiple transitions between these values, such as tethered particle motion data or fluorescence resonance energy transfer (FRET) spectra. The algorithms developed have been shown to perform well on simulated data, demonstrating the ability to recover the model used to generate the data under high-noise, sparse-data conditions and the ability to infer the existence of degenerate states. They have also been applied to new experimental FRET data of Holliday junction dynamics, extracting the expected two-state model and providing values for the transition rates in good agreement with previous results and with results obtained using existing maximum-likelihood-based methods. | arxiv:1011.2969 |
In the present technological era, healthcare providers generate huge amounts of clinical data on a daily basis. The generated clinical data are stored digitally in the form of electronic health records (EHR) as a central data repository of hospitals. Data contained in EHR are used not only for the patients' primary care but also for various secondary purposes such as clinical research, automated disease surveillance, and clinical audits for quality enhancement. Using EHR data for secondary purposes without consent, or in some cases even with consent, creates privacy issues for individuals. Secondly, EHR data are also made accessible to various stakeholders, including different government agencies at various geographical sites, through wired or wireless networks. Sharing of EHR across multiple agencies makes it vulnerable to cyber attacks and also makes it difficult to implement strict privacy laws, as in some cases data are shared with an organization that is governed by a specific regional law. The privacy of an individual could be severely affected when their sensitive private information contained in EHR is leaked or exposed to the public. A data leak can cause financial losses, or individuals may encounter social boycott if their medical condition is exposed in public. To protect patients' personal data from such threats, there exist different privacy regulations such as GDPR, HIPAA, and MHR. However, continually evolving state-of-the-art techniques in machine learning, data analytics, and hacking are making it even more difficult to completely protect an individual's/patient's privacy. In this article, we systematically examine various secondary uses of EHR with the aim of highlighting how these secondary uses affect patients' privacy. Secondly, we critically analyze GDPR and highlight possible areas of improvement, considering the escalating use of technology and the different secondary uses of EHR. | arxiv:2001.09479 |
It is commonly stated that the acceleration sensitivity of an atom interferometer is proportional to the space-time area enclosed between the two interfering arms. Here we derive the interferometric phase shift for an extensive class of interferometers, and explore the circumstances in which only the inertial terms contribute. We then analyse various configurations in light of this geometric interpretation of the interferometric phase shift. | arxiv:1312.2713 |
We show that if (u; K) is a minimizer of the Mumford-Shah functional in an open set of R^3, and if x, K and r > 0 are such that K is close enough to a minimal cone of type P (a plane), Y (three half planes meeting with 120 degree angles) or T (a cone over a regular tetrahedron centered at the origin) in terms of Hausdorff distance in B(x; r), then K is C^{1,alpha} equivalent to the minimal cone in B(x; cr), where c < 1 is a universal constant. | arxiv:0806.2994 |
First observed in 1995, the top quark is one of a pair of third-generation quarks in the Standard Model of particle physics. It has charge +2/3 e and a mass of 171.4 GeV, about 40 times heavier than its partner, the bottom quark. The CDF and D0 collaborations have identified several hundred events containing the decays of top-antitop pairs in the large dataset collected at the Tevatron proton-antiproton collider over the last four years. They have used these events to measure the top quark's mass to nearly 1% precision and to study other top quark properties. The mass of the top quark is a fundamental parameter of the Standard Model, and knowledge of its value with small uncertainty allows us to predict properties of the as-yet-unobserved Higgs boson. This paper presents the status of the measurements of the top quark mass. | arxiv:hep-ex/0609028 |
Using optical measurements, we demonstrate that the rotation of micron-scale graphene nanoplatelets levitated in a quadrupole ion trap in high vacuum can be frequency locked to an applied radio frequency (RF) electric field. Over time, frequency locking stabilizes the nanoplatelet so that its axis of rotation is normal to the nanoplatelet and perpendicular to the RF electric field. We observe that residual slow dynamics of the direction of the axis of rotation in the plane normal to the RF electric field are determined by an applied magnetic field. We present a simple model that accurately describes our observations. From our data and model we can infer both a diamagnetic polarizability and a magnetic moment proportional to the frequency of rotation, which we compare to theoretical values. Our results establish that trapping technologies have applications for materials measurements at the nanoscale. | arxiv:1612.05928 |
We performed an experimental observation of the spontaneous imbibition of water in a porous medium in a radial Hele-Shaw cell and confirmed Washburn's law, r ∝ t^{1/2}, where r is distance and t is time. Spontaneous imbibition with a radial interface window followed scaling dynamics when the front invaded the porous medium. We found a growth exponent (β = 0.6) that was independent of the pressure applied at the liquid inlet. The roughness exponent decreased with an increase in pressure. The roughening dynamics of two-dimensional spontaneous radial imbibition obey Family-Vicsek scaling, which is different from that with a one-dimensional planar interface window. | arxiv:1503.03943 |
In the present paper, we study the relationships of $n$-cotorsion pairs among three abelian categories in a recollement. Under certain conditions, we present an explicit construction of gluing of $n$-cotorsion pairs in an abelian category $\mathcal{D}$ with respect to $n$-cotorsion pairs in abelian categories $\mathcal{D}'$ and $\mathcal{D}''$ respectively. On the other hand, we study the construction of $n$-cotorsion pairs in abelian categories $\mathcal{D}'$ and $\mathcal{D}''$ obtained from $n$-cotorsion pairs in an abelian category $\mathcal{D}$. | arxiv:2403.04220 |
The ability to efficiently evolve hydrogen via electrocatalysis at low overpotentials holds tremendous promise for clean energy. The hydrogen evolution reaction (HER) can be easily achieved from water if a voltage above the thermodynamic potential of the HER is applied. Large overpotentials are energetically inefficient but can be lowered with expensive platinum-based catalysts. Replacement of Pt with inexpensive, earth-abundant electrocatalysts would be significantly beneficial for clean and efficient hydrogen evolution. Towards this end, promising HER characteristics have been reported using 2H (trigonal prismatic) XS2 (where X = Mo or W) nanoparticles with a high concentration of metallic edges as electrocatalysts. The key challenges for HER with XS2 are increasing the number and catalytic activity of active sites. Here we report atomically thin nanosheets of chemically exfoliated WS2 as efficient catalysts for hydrogen evolution with very low overpotentials. Atomic-resolution transmission electron microscopy and spectroscopy analyses indicate that the enhanced electrocatalytic activity of WS2 is associated with a high concentration of the strained metallic 1T (octahedral) phase in the as-exfoliated nanosheets. Density functional theory calculations reveal that the presence of strain in the 1T phase leads to an enhancement of the density of states at the Fermi level and increases the catalytic activity of the WS2 nanosheet. Our results suggest that chemically exfoliated WS2 nanosheets could be interesting catalysts for hydrogen evolution. | arxiv:1212.1513 |
In this paper we report further progress towards a complete theory of state-independent expected utility maximization with semimartingale price processes for an arbitrary utility function. Without any technical assumptions we establish a surprising Fenchel duality result on conjugate Orlicz spaces, offering a new economic insight into the nature of primal optima and providing a fresh perspective on the classical papers of Kramkov and Schachermayer (1999, 2003). The analysis points to an intriguing interplay between no-arbitrage conditions and standard convex optimization and motivates the study of the fundamental theorem of asset pricing (FTAP) for Orlicz tame strategies. | arxiv:1711.09121 |
Geocasting is a special variant of multicasting, where a data packet or message is transmitted to a predefined geographical location known as the geocast region. The applications of geocasting in VANET are to disseminate information such as collision warnings, advertising, alert messages, etc. In this paper, we propose a model for a highway scenario where the highway is divided into a number of cells. The intersection area between two successive cells is computed to find the number of common nodes. A probabilistic analysis of the nodes present and of void occurrence in the intersection area is then carried out. Further, we define different forwarding zones to restrict the number of participating nodes for data delivery. The number of nodes present and void occurrence in the different forwarding zones have also been analysed for various node densities in the network to determine the successful delivery of data. Our analytical results show that in a densely populated network, data can be transmitted with a low radio transmission range, and smaller forwarding zones will be selected for data delivery. | arxiv:1203.1981 |
We propose a customized convolutional neural network based autoencoder called a hierarchical autoencoder, which allows us to extract nonlinear autoencoder modes of flow fields while preserving the contribution order of the latent vectors. As preliminary tests, the proposed method is first applied to a cylinder wake at $Re_D = 100$ and its transient process. It is found that the proposed method can extract the features of these laminar flow fields as the latent vectors while keeping the order of their energy content. The present hierarchical autoencoder is further assessed with a two-dimensional $y$-$z$ cross-sectional velocity field of turbulent channel flow at $Re_{\tau} = 180$ in order to examine its applicability to turbulent flows. It is demonstrated that the turbulent flow field can be efficiently mapped into the latent space by utilizing the hierarchical model with a concept of an ordered autoencoder mode family. The present results suggest that the proposed concept can be extended to meet various demands in fluid dynamics, including reduced-order modeling and its combination with linear theory-based methods, by using its ability to arrange the order of the extracted nonlinear modes. | arxiv:2006.06977 |
The surface component of the IceCube Neutrino Observatory, IceTop, consists of an array of ice-Cherenkov tanks measuring the electromagnetic signal as well as low-energy ($\sim\rm{GeV}$) muons from cosmic-ray air showers. In addition, accompanying high-energy (above a few $100\,\rm{GeV}$) muons can be observed in coincidence in the deep in-ice detector. A combined measurement of the low- and high-energy muon content is of particular interest for tests of hadronic interaction models as well as for cosmic-ray mass discrimination. However, since IceTop does not feature dedicated muon detectors, an estimation of the low-energy muon component of individual air showers is challenging. In this work, a two-component lateral distribution function (LDF), using separate descriptions for the electromagnetic and muon lateral distributions of the detector signals, is introduced as a new approach for the estimation of low-energy muons in air showers on an event-by-event basis. The principle of the air-shower reconstruction using the two-component LDF, as well as its reconstruction performance with respect to primary energy and number of low-energy muons, will be discussed. | arxiv:2309.00741 |
Young people are increasingly exposed to the adverse effects of data-driven profiling, recommending, and manipulation on social media platforms, most of them without an adequate understanding of the mechanisms that drive these platforms. In the context of computing education, educating learners about the mechanisms and data practices of social media may improve young learners' data agency, digital literacy, and understanding of how their digital services work. A four-hour technology-supported intervention was designed and implemented in 12 schools involving 209 5th and 8th grade learners. Two new classroom apps were developed to support the classroom activities. Using Likert-scale questions borrowed from a data agency questionnaire and open-ended questions that mapped learners' data-driven reasoning on social media phenomena, this article shows significant improvement between pre- and post-tests in learners' data agency and data-driven explanations of social media mechanisms. The results present an example of improving young learners' understanding of social media mechanisms. | arxiv:2501.16494 |
We present preliminary results for the masses and decay constants of the $\eta$ and $\eta^\prime$ mesons using CLS $N_f = 2+1$ ensembles. One of the major challenges in these calculations is the large statistical fluctuations due to disconnected quark loops. We tackle these by employing a combination of noise reduction techniques which are tuned to minimize the statistical error at a fixed cost. On the analysis side we carefully assess excited-state contributions by using a direct fit approach. | arxiv:1710.06733 |
In this article, we discuss subspace duals of a frame of translates by an action of a closed abelian subgroup $\Gamma$ of a locally compact group $\mathscr{G}$. These subspace duals are not required to lie in the space generated by the frame. We characterise translation-generated subspace duals of a frame/Riesz basis involving the Zak transform for the pair $(\mathscr{G}, \Gamma)$. We continue our discussion on the orthogonality of two translation-generated Bessel pairs using the Zak transform, which allows us to explore the dual of super-frames. As examples, we extend our findings to splines, Gabor systems, the $p$-adic fields $\mathbb{Q}_p$, and locally compact abelian groups using the fiberization map. | arxiv:2309.09066 |
We discuss a system of stochastic differential equations with a stiff linear term and additive noise driven by fractional Brownian motions (fBms) with Hurst parameter H > 1/2, which arise, e.g., from spatial approximations of stochastic partial differential equations. For their numerical approximation, we present an exponential Euler scheme and show that it converges in the strong sense with an exact rate close to the Hurst parameter H. Further, based on (E. Buckwar, M. G. Riedler, and P. E. Kloeden 2011), we conclude the existence of a unique stationary solution of the exponential Euler scheme that is pathwise asymptotically stable. | arxiv:2308.13224 |
We determine the $p$-Kazhdan-Lusztig bases for antispherical (co)minuscule Hecke categories in all characteristics, and for spherical (co)minuscule Hecke categories in good characteristic. This is achieved using geometric and diagrammatic methods. The 2-Kazhdan-Lusztig bases of antispherical cominuscule Hecke categories exhibit extremely pathological behaviour. The notions of $p$-small resolutions and $p$-tight elements are introduced and conjecturally explain this behaviour. | arxiv:2409.16131 |
We give a characterization of the validity of the distributive law in a solid. This characterization is equivalent to a modified axiom of distributivity valid in a solid. | arxiv:1510.08722 |
This paper presents Surena-V, a humanoid robot designed to enhance human-robot collaboration capabilities. The robot features a range of sensors, including barometric tactile sensors in its hands, to facilitate precise environmental interaction. This is demonstrated through an experiment showcasing the robot's ability to control a medical needle's movement through soft material. Surena-V's operational framework emphasizes stability and collaboration, employing various optimization-based control strategies such as zero moment point (ZMP) modification through upper body movement and stepping. Notably, the robot's interaction with the environment is improved by detecting and interpreting external forces at their point of effect, allowing for more agile responses compared to methods that control overall balance based on external forces. The efficacy of this architecture is substantiated through an experiment illustrating the robot's collaboration with a human in moving a bar. This work contributes to the field of humanoid robotics by presenting a comprehensive system design and control architecture focused on human-robot collaboration and environmental adaptability. | arxiv:2501.17313 |
Let $\widetilde{\cal J}(S^{2n})$ be the set of orthogonal complex structures on $TS^{2n}$. We show that the twistor space $\widetilde{\cal J}(S^{2n})$ is a Kaehler manifold. Then we show that an orthogonal almost complex structure $J_f$ on $S^{2n}$ is integrable if and only if the corresponding section $f\colon\; S^{2n}\to\widetilde{\cal J}(S^{2n})$ is holomorphic. This shows that there is no integrable orthogonal complex structure on the sphere $S^{2n}$ for $n > 1$. We also show that there is no complex structure in a neighborhood of the space $\widetilde{\cal J}(S^{2n})$. The method is to study the first Chern class of $T^{(1,0)}S^{2n}$. | arxiv:math/0608368 |
takes place in a vacuum and produces a thin film of solar cells by depositing thin layers of metals onto a backing structure. Electron-beam evaporation uses thermionic emission to create a stream of electrons that are accelerated by a high-voltage cathode and anode arrangement. Electrostatic and magnetic fields focus and direct the electrons to strike a target. The kinetic energy is transformed into thermal energy at or near the surface of the material. The resulting heating causes the material to melt and then evaporate. Temperatures in excess of 3500 degrees Celsius can be reached. The vapor from the source condenses onto a substrate, creating a thin film of high-purity material. Film thicknesses from a single atomic layer to many micrometers can be achieved. This technique is used in microelectronics, optics, and materials research, and to produce solar cells and many other products. === Curing and sterilization === Electron-beam curing is a method of curing paints and inks without the need for traditional solvents. Electron-beam curing produces a finish similar to that of traditional solvent-evaporation processes, but achieves that finish through a polymerization process. E-beam processing is also used to cross-link polymers to make them more resistant to thermal, mechanical or chemical stresses. E-beam processing has been used for the sterilization of medical products and aseptic packaging materials for foods, as well as disinfestation, the elimination of live insects from grain, tobacco, and other unprocessed bulk crops. === Electron microscopes === An electron microscope uses a controlled beam of electrons to illuminate a specimen and produce a magnified image. Two common types are the scanning electron microscope (SEM) and the transmission electron microscope (TEM). === Medical radiation therapy === Electron beams impinging on metal produce X-rays. The X-rays may be diagnostic, e.g., dental or limb images. Often in these X-ray tubes the metal is a spinning disk so that it doesn't melt; the disk is spun in vacuum via a magnetic motor. The X-rays may also be used to kill cancerous tissue. The Therac-25 machine is an infamous example of this. == History == Electron beam technology ultimately derives from work that led to the discovery of the electron, at a time when electron beams were called cathode rays. Key advances in | https://en.wikipedia.org/wiki/Electron-beam_technology |
Highly nonlinear optical phenomena can provide access to properties of electronic systems which are otherwise difficult to access through conventional linear optical spectroscopies. In particular, high harmonic generation (HHG) in crystalline solids is strikingly different from that in atomic gases, and it enables us to access electronic properties such as the band structure, Berry curvature, and valence electron density. Here, we show that polarization-resolved HHG measurements can be used to probe the transition dipole moment (TDM) texture in momentum space in two-dimensional semiconductors. The TDM is directly related to the internal structure of the electronic system and governs the optical properties. We study HHG in black phosphorus, which offers a simple two-band system, with bandgap-resonant excitation. We observed a unique crystal-orientation dependence of the HHG yields and polarizations and succeeded in reconstructing the TDM texture related to the inter-atomic bonding structure. Our results demonstrate the potential of high harmonic spectroscopy for probing electronic wavefunctions in crystalline solids. | arxiv:2006.09376 |
We prove the analogue of the Riemann-Roch formula for the noncommutative two torus $A_\theta = C(\mathbb{T}_\theta^2)$ equipped with an arbitrary translation invariant complex structure and a Weyl factor represented by a positive element $k \in C^\infty(\mathbb{T}_\theta^2)$. We consider a topologically trivial line bundle equipped with a general holomorphic structure and the corresponding twisted Dolbeault Laplacians. We define a spectral triple $(A_\theta, \mathcal{H}, D)$ that encodes the twisted Dolbeault complex of $A_\theta$ and whose index gives the left hand side of the Riemann-Roch formula. Using Connes' pseudodifferential calculus and heat equation techniques, we explicitly compute the $b_2$ terms of the asymptotic expansion of $\text{Tr}(e^{-tD^2})$. We find that the curvature term on the right hand side of the Riemann-Roch formula coincides with the scalar curvature of the noncommutative torus recently defined and computed in \cite{cm1} and \cite{fk2}. | arxiv:1307.5367 |
The detection of contextual anomalies is a challenging task for surveillance, since an observation can be considered anomalous or normal in a specific environmental context. An unmanned aerial vehicle (UAV) can utilize its aerial monitoring capability and employ multiple sensors to gather contextual information about the environment and perform contextual anomaly detection. In this work, we introduce a deep neural network-based method (CADNet) to find point anomalies (i.e., single-instance anomalous data) and contextual anomalies (i.e., context-specific abnormality) in an environment using a UAV. The method is based on a variational autoencoder (VAE) with a context sub-network. The context sub-network extracts contextual information regarding the environment using GPS and time data, then feeds it to the VAE to predict anomalies conditioned on the context. To the best of our knowledge, our method is the first contextual anomaly detection method for UAV-assisted aerial surveillance. We evaluate our method on the AU-AIR dataset in a traffic surveillance scenario. Quantitative comparisons against several baselines demonstrate the superiority of our approach in the anomaly detection tasks. The codes and data will be available at https://bozcani.github.io/cadnet. | arxiv:2104.06781 |
We study the phases of a spin system on the kagome lattice with nearest-neighbor $XXZ$ interactions with anisotropy ratio $\Delta$ and Dzyaloshinsky-Moriya interactions with strength $D$. In the classical limit where the spin $S$ at each site is very large, we find a rich phase diagram of the ground state as a function of $\Delta$ and $D$. There are five distinct phases which correspond to different ground state spin configurations in the classical limit. We use spin wave theory to find the bulk energy bands of the magnons in some of these phases. We also study a strip of the system which has infinite length and finite width; we find modes which are localized on one of the edges of the strip, with energies which lie in the gaps of the bulk modes. In the ferromagnetic phase in which all the spins point along the $+\hat z$ or $-\hat z$ direction, the bulk bands are separated from each other by finite energy gaps. This makes it possible to calculate the Berry curvature at all momenta, and hence the Chern numbers for every band; the number of edge states is related to the Chern numbers. Interestingly, we find that there are four different regions in this phase where the Chern numbers are different. Hence there are four distinct topological phases even though the ground state spin configuration is identical in all these phases. We calculate the thermal Hall conductivity of the magnons as a function of the temperature in the above ferromagnetic phase; we find that this can distinguish between the various topological phases. These results are valid for all values of $S$. In the other phases, there are no gaps between the different bands; hence the edge states are not topologically protected. | arxiv:1711.11232 |
Discriminative models for object classification typically learn image-based representations that do not capture the compositional and 3D nature of objects. In this work, we show that explicitly integrating 3D compositional object representations into deep networks for image classification leads to a largely enhanced generalization in out-of-distribution scenarios. In particular, we introduce a novel architecture, referred to as NOVUM, that consists of a feature extractor and a neural object volume for every target object class. Each neural object volume is a composition of 3D Gaussians that emit feature vectors. This compositional object representation allows for a highly robust and fast estimation of the object class by independently matching the features of the 3D Gaussians of each category to features extracted from an input image. Additionally, the object pose can be estimated via inverse rendering of the corresponding neural object volume. To enable the classification of objects, the neural features at each 3D Gaussian are trained discriminatively to be distinct from (i) the features of 3D Gaussians in other categories, (ii) features of other 3D Gaussians of the same object, and (iii) the background features. Our experiments show that NOVUM offers intriguing advantages over standard architectures due to the 3D compositional structure of the object representation, namely: (1) an exceptional robustness across a spectrum of real-world and synthetic out-of-distribution shifts and (2) an enhanced human interpretability compared to standard models, all while maintaining real-time inference and a competitive accuracy on in-distribution data. | arxiv:2305.14668 |
A Hamiltonian approach to the solution of the Vlasov-Poisson equations has been developed. Based on a nonlinear canonical transformation, the rapidly oscillating terms in the original Hamiltonian are transformed away, yielding a new Hamiltonian that contains slowly varying terms only. The formalism has been applied to the dynamics of an intense beam propagating through a periodic focusing lattice, and to the coherent beam-beam interaction. A stationary solution to the transformed Vlasov equation has been obtained. | arxiv:physics/0110014 |
We present the theoretical status of the lifetimes of weakly decaying heavy hadrons containing a bottom or a charm quark, and discuss the current predictions, based on the framework of the heavy quark expansion (HQE), for both mesons and baryons. Potential improvements to reduce the theoretical uncertainties are also highlighted. | arxiv:2302.14590 |
This article has been withdrawn. | arxiv:gr-qc/0312111 |
We present exact results for the periodic Anderson model for finite Hubbard interaction 0 <= U < +infinity on certain restricted domains of the model's phase diagram, in d = 1 dimension. Decomposing the Hamiltonian into positive semidefinite terms, we find two quantum states to be ground states, an insulating and a metallic one. The ground state energy and several ground state expectation values are calculated. | arxiv:cond-mat/9906129 |
In this work we study both the index coding with side information (ICSI) problem, introduced by Birk and Kol in 1998, and the more general problem of index coding with coded side information (ICCSI), described by Shum et al. in 2012. We estimate the optimal rate of an instance of the index coding problem. In the ICSI problem case, we characterize those digraphs having min-rank one less than their order, and we give an upper bound on the min-rank of a hypergraph whose incidence matrix can be associated with that of a 2-design. Security aspects are discussed in the particular case when the design is a projective plane. For the coded side information case, we extend the graph-theoretic upper bounds given by Shanmugam et al. in 2014 on the optimal rate of an index code. | arxiv:1604.05991 |
This paper presents a novel optimization framework for formulating the three-phase optimal power flow that involves uncertainty. The proposed uncertainty-aware optimization (UAO) framework is: 1) a deterministic framework that is less complex than the existing optimization frameworks involving uncertainty, and 2) convex, such that it admits polynomial-time algorithms and mature distributed optimization methods. To construct this UAO framework, a methodology of learning-aided uncertainty-aware modeling, with prediction errors of stochastic variables as the measurement of uncertainty, and a theory of data-driven convexification are proposed. Theoretically, the UAO framework is applicable for modeling general optimization problems under uncertainty. | arxiv:2005.13075 |
On June 26th, 2004, central bank governors and the heads of bank supervisory authorities in the Group of Ten (G10) countries issued a press release and endorsed the publication of "International Convergence of Capital Measurement and Capital Standards: A Revised Framework", the new capital adequacy framework commonly known as Basel II. According to Jean-Claude Trichet, chairman of the G10 group of central bank governors and heads of bank supervisory authorities and president of the European Central Bank: "Basel II embraces a comprehensive approach to risk management and bank supervision. It will enhance banks' safety and soundness, strengthen the stability of the financial system as a whole, and improve the financial sector's ability to serve as a source for sustainable growth for the broader economy." The negotiation process is likely to lead to the adoption of the new rules within 2007. In 1996, after the "Amendment to the Capital Accord to Incorporate Market Risks", a new wave of physicists entered the risk management offices of large banks, which had to develop internal models of market risk. What will be the challenges and opportunities for physicists in the financial sector in the years to come? This paper is a first modest contribution towards starting a debate within the econophysics community. | arxiv:cond-mat/0501320 |
gaas-based photocathodes are the only viable source capable of providing spin-polarized electrons for accelerator applications. this type of photocathode requires a thin surface layer in order to achieve negative electron affinity (nea) for efficient photo-emission. however, this layer is vulnerable to environmental and operational effects, leading to a decay of the quantum efficiency $\eta$ characterized by a decay constant or lifetime $\tau$. in order to increase $\tau$, additional agents can be introduced during the activation procedure to improve the chemical robustness of the surface layer. this paper presents the results of recent research on li as an enhancement agent for photocathode activation using cs and o$_2$, forming cs-o$_2$-li as an enhanced nea layer. measurements yielded an increase in lifetime by a factor of up to 19 $\pm$ 2 and an increase in extracted charge by a factor of up to 16.5 $\pm$ 2.4, without significant reduction of $\eta$. this performance is equal to or better than that reported for other enhanced nea layers so far. | arxiv:2409.04319 |
the broad motivation of this work is a rigorous understanding of reversible, local markov dynamics of interfaces, and in particular their speed of convergence to equilibrium, measured via the mixing time $t_{mix}$. in the $(d+1)$-dimensional setting, $d \ge 2$, this is to a large extent mathematically unexplored territory, especially for discrete interfaces. on the other hand, on the basis of a mean-curvature motion heuristic and simulations, one expects convergence to equilibrium to occur on time-scales of order $\approx \delta^{-2}$ in any dimension, with $\delta \to 0$ the lattice mesh. we study the single-flip glauber dynamics for lozenge tilings of a finite domain of the plane, viewed as $(2+1)$-dimensional surfaces. the stationary measure is the uniform measure on admissible tilings. at equilibrium, by the limit shape theorem, the height function concentrates as $\delta \to 0$ around a deterministic profile $\phi$, the unique minimizer of a surface tension functional. despite some partial mathematical results, the conjecture $t_{mix} = \delta^{-2+o(1)}$ has been proven, so far, only in the situation where $\phi$ is an affine function. in this work, we prove the conjecture under the sole assumption that the limit shape $\phi$ contains no frozen regions (facets). | arxiv:2207.01444 |
we probe the phase structure of regular ads black holes using null geodesics. the radius of the photon orbit and the minimum impact parameter show a non-monotonous behaviour below the critical values of the temperature and the pressure, corresponding to the phase transition in extended phase space. the respective differences of the radius of the unstable circular orbit and the minimum impact parameter can be seen as order parameters for the small-large black hole phase transition, with a critical exponent $1/2$. our study shows that there exists a close relationship between gravity and thermodynamics for regular ads black holes. | arxiv:1912.11909 |
the last decade has seen an enormous increase of activity in the field of gravitational lensing, mainly driven by improvements in observational capabilities. i will review the basics of gravitational lens theory, just enough to understand the rest of this contribution, and will then concentrate on several of the main applications in cosmology. cluster lensing and weak lensing will constitute the main part of this review. | arxiv:astro-ph/9512047 |
we suggest that the broad distribution of time scales in financial markets could be a crucial ingredient to reproduce realistic price dynamics in stylised agent-based models. we propose a fractional reaction-diffusion model for the dynamics of latent liquidity in financial markets, where agents are very heterogeneous in terms of their characteristic frequencies. several features of our model are amenable to an exact analytical treatment. we find in particular that the impact is a concave function of the transacted volume (aka the "square-root impact law"), as in the normal diffusion limit. however, the impact kernel decays as $t^{-\beta}$ with $\beta = 1/2$ in the diffusive case, which is inconsistent with market efficiency. in the sub-diffusive case the decay exponent $\beta$ takes any value in $[0, 1/2]$, and can be tuned to match the empirical value $\beta \approx 1/4$. numerical simulations confirm our theoretical results. several extensions of the model are suggested. | arxiv:1704.02638 |
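The square-root impact law mentioned above is easy to illustrate with a minimal sketch; the prefactor `y` here is a hypothetical stand-in for a liquidity-dependent constant, not a value taken from the paper.

```python
import math

def sqrt_impact(q, y=1.0):
    """Stylised square-root impact law: the average price impact of a
    transaction grows like the square root of the transacted volume q.
    The prefactor y is a hypothetical liquidity constant."""
    return y * math.sqrt(q)

# Concavity: doubling the volume increases the impact by only sqrt(2),
# i.e. trading twice as much costs less than twice the impact.
small, large = sqrt_impact(9.0), sqrt_impact(18.0)
```

The concavity of this function in the traded volume is exactly the "square-root impact law" the abstract refers to.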
a novel thomas-fermi (tf) approach to inhomogeneous superfluid fermi systems is presented, and it is shown that it works well also in cases where the local density approximation (lda) breaks down. the novelty lies in the fact that the semiclassical approximation is applied to the pairing matrix elements, not implying a local version of the chemical potential as with the lda. applications will be given to the generic fact that if a fermionic superfluid in the bcs regime overflows from a narrow container into a much wider one, pairing is substantially reduced at the overflow point. two examples pertinent to the physics of the outer crust of neutron stars and to superfluid fermionic atoms in traps will be presented. the tf results will be compared to quantal and lda ones. | arxiv:1204.3429 |
the effect of a magnetic field on the energy spectrum and on the wave functions of an electron in spherical nanostructures, such as a single quantum dot and a spherical layer, is investigated. it is shown that the magnetic field removes the degeneracy of the spectrum with respect to the magnetic quantum number. with increasing magnetic field induction, the electron energy varies monotonically for the states with $m \geqslant 0$ and non-monotonically for the states with $m < 0$. the electron wave functions of the ground state and several excited states are studied considering the effect of the magnetic field. it is shown that the $1s$ and $1p$ states become degenerate in the spherical layer driven by a strong magnetic field. in the limit case, the series of size-quantized levels produces the landau levels which are typical of bulk crystals. | arxiv:1403.1685 |
we have been studying the index theory for some special infinite-dimensional manifolds with a "proper cocompact" action of the loop group lt of the circle t, from the viewpoint of noncommutative geometry. in this paper, we will introduce the lt-equivariant kk-theory and we will construct three kk-elements: the index element, the clifford symbol element and the dirac element. these elements satisfy a certain relation, which should be called the (kk-theoretical) index theorem, or the kk-theoretical poincar\'e duality for infinite-dimensional manifolds. we will also discuss the assembly maps. | arxiv:1811.06811 |
accurate origin-destination (od) flow prediction is of great importance to developing cities, as it can contribute to optimizing urban structures and layouts. however, with the common issues of missing regional features and a lack of od flow data, it is quite daunting to predict od flow in developing cities. to address this challenge, we propose causality-enhanced od flow prediction (ce-ofp), a unified framework that aims to transfer urban knowledge between cities and achieve accuracy improvements in od flow predictions across data-scarce cities. specifically, we propose a novel reinforcement learning model to discover universal causalities among urban features in data-rich cities and build corresponding causal graphs. then, we further build a causality-enhanced variational auto-encoder (ce-vae) to incorporate causal graphs for effective feature reconstruction in data-scarce cities. finally, with the reconstructed features, we devise a knowledge distillation method with a graph attention network to migrate the od prediction model from data-rich cities to data-scarce cities. extensive experiments on two pairs of real-world datasets validate that the proposed ce-ofp remarkably outperforms state-of-the-art baselines, and can reduce the rmse of od flow prediction for data-scarce cities by up to 11%. | arxiv:2503.06398 |
the purpose of the present paper is to show that in certain classes of real (or complex) functions, the bernoulli polynomials are essentially the only ones satisfying the raabe functional equation. for the class of real $1$-periodic functions which are expandable as fourier series, we point out new solutions of the raabe functional equation not related to the bernoulli polynomials. furthermore, we will give for the considered classes various proofs, making the mathematical content of the paper quite rich. | arxiv:2303.14492 |
in this paper, we propose a novel sparse learning based feature selection method that directly optimizes the sparsity of a large margin linear classification model with the $l_{2,p}$-norm ($0 < p < 1$) subject to data-fitting constraints, rather than using the sparsity as a regularization term. to solve the direct sparsity optimization problem, which is non-smooth and non-convex when $0 < p < 1$, we provide an efficient iterative algorithm with proven convergence by converting it to a convex and smooth optimization problem at every iteration step. the proposed algorithm has been evaluated on publicly available datasets, and extensive comparison experiments have demonstrated that our algorithm achieves feature selection performance competitive with state-of-the-art algorithms. | arxiv:1504.00430 |
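As a rough sketch of the sparsity measure involved (not the paper's full algorithm), the row-wise quantity behind an $l_{2,p}$ penalty can be computed directly; whether the paper takes the $1/p$-th root afterwards is an assumption left out here, since only the $p$-th-power sum matters for sparsity.

```python
def l2p_norm_pth_power(W, p):
    """Sum over rows of the row-wise l2 norm raised to the power p.
    For 0 < p < 1 this is the nonconvex, sparsity-inducing quantity:
    rows that are entirely zero contribute nothing, so minimising it
    drives whole feature rows to zero."""
    assert 0 < p < 1
    row_norms = (sum(x * x for x in row) ** 0.5 for row in W)
    return sum(r ** p for r in row_norms)

# A matrix with one nonzero row: only that row contributes (norm 5).
value = l2p_norm_pth_power([[3.0, 4.0], [0.0, 0.0]], 0.5)
```

With `p = 0.5` the contribution of a row shrinks sub-linearly in its norm, which is why values of `p` below 1 favour sparser row supports than the convex `p = 1` case.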
for efficient neural network inference, it is desirable to achieve state-of-the-art accuracy with the simplest networks requiring the least computation, memory, and power. quantizing networks to lower precision is a powerful technique for simplifying networks. as each layer of a network may have a different sensitivity to quantization, mixed precision quantization methods selectively tune the precision of individual layers to achieve a minimum drop in task performance (e.g., accuracy). to estimate the impact of layer precision choice on task performance, two methods are introduced: i) entropy approximation guided layer selection (eagl), which is fast and uses the entropy of the weight distribution, and ii) accuracy-aware layer precision selection (alps), which is straightforward and relies on single epoch fine-tuning after layer precision reduction. using eagl and alps for layer precision selection, full-precision accuracy is recovered with a mix of 4-bit and 2-bit layers for resnet-50, resnet-101 and bert-base transformer networks, demonstrating enhanced performance across the entire accuracy-throughput frontier. the techniques demonstrate better performance than existing techniques in several commensurate comparisons. notably, this is accomplished with significantly less computational time required to reach a solution. | arxiv:2301.13330 |
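The entropy statistic underlying an EAGL-style selection can be sketched as follows; the histogram estimator and the bin count are assumptions made for illustration, not the paper's exact recipe.

```python
import math

def weight_entropy(weights, bins=16):
    """Shannon entropy (in bits) of a histogram of a layer's weights --
    the kind of statistic an entropy-guided layer selection could use
    to rank layers by their sensitivity to quantization."""
    lo, hi = min(weights), max(weights)
    width = (hi - lo) / bins or 1.0  # guard against a constant layer
    counts = [0] * bins
    for w in weights:
        idx = min(int((w - lo) / width), bins - 1)
        counts[idx] += 1
    n = len(weights)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

# A uniformly spread layer carries more entropy than a constant one.
spread = weight_entropy([i / 100 for i in range(100)])
flat = weight_entropy([0.5] * 100)
```

Ranking layers by such a score is cheap because it needs only the weights, not any fine-tuning, which matches the abstract's claim that EAGL is fast.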
we discuss a general framework for the realization of a family of abelian lattice gauge theories, i.e., link models or gauge magnets, in optical lattices. we analyze the properties of these models that make them suitable for quantum simulations. within this class, we study in detail the phases of a u(1)-invariant lattice gauge theory in 2+1 dimensions originally proposed by orland. by using exact diagonalization, we extract the low-energy states for small lattices, up to 4x4. we confirm that the model has two phases, with the confined entangled one characterized by strings wrapping around the whole lattice. we explain how to study larger lattices by using either tensor network techniques or digital quantum simulations with rydberg atoms loaded in optical lattices, for which we discuss in detail a protocol for the preparation of the ground state. we also comment on the relation between the standard compact u(1) lgt and the model considered. | arxiv:1205.0496 |
research on leveraging big artificial intelligence model (baim) technology to drive the intelligent evolution of wireless networks is emerging. however, since the breakthrough in generalization brought about by baim techniques has mainly occurred in natural language processing, there is still a lack of a clear technical roadmap on how to efficiently apply baim techniques to wireless systems with their many additional peculiarities. to this end, this paper first reviews recent research works on baim for wireless and assesses the current research situation. then, this paper analyzes and compares the differences between language intelligence and wireless intelligence on multiple levels, including scientific foundations, core usages, and technical details. it highlights the necessity and scientific significance of developing baim technology in a wireless-native way, as well as new issues that need to be considered in specific technical implementations. finally, by synthesizing the evolutionary laws of language models with the particularities of wireless systems, this paper provides several instructive methodologies for how to develop wireless-native baim. | arxiv:2412.09041 |
today, almost all banks have adopted ict as a means of enhancing their banking service quality. these banks provide ict-based electronic services, also called electronic banking, internet banking or online banking, to their customers. despite the increasing adoption of electronic banking and its relevance to end users' satisfaction, few investigations have been conducted on the factors that enhance end users' satisfaction perception. in this research, an empirical analysis has been conducted on the factors that influence electronic banking users' satisfaction perception and the relationship between these factors and customer satisfaction. the study will help the banking industry in improving the level of customer satisfaction and increase the bond between a bank and its customers. | arxiv:2105.11184 |
we clarify the confusion, misunderstanding and misconception that the physical finiteness of the universe, if the universe is indeed finite, would rule out all hypercomputation, the kind of computation that exceeds turing computability, while maintaining and defending the validity of turing computation and the church-turing thesis. | arxiv:quant-ph/0403045 |
we propose a three-dimensional generalization of the geometric mckay correspondence described by gonzalez-sprinberg and verdier in dimension two. we work it out in detail when g is abelian and c^3/g has a single isolated singularity. more precisely, we show that the bridgeland-king-reid derived category equivalence induces a natural geometric correspondence between irreducible representations of g and subschemes of the exceptional set of g-hilb(c^3). this correspondence appears to be related to reid's recipe. | arxiv:0803.2990 |
ontology matching (om) plays an important role in many domains such as bioinformatics and the semantic web, and its research is becoming increasingly popular, especially with the application of machine learning (ml) techniques. although the ontology alignment evaluation initiative (oaei) represents an impressive effort for the systematic evaluation of om systems, it still suffers from several limitations, including limited evaluation of subsumption mappings, suboptimal reference mappings, and limited support for the evaluation of ml-based systems. to tackle these limitations, we introduce five new biomedical om tasks involving ontologies extracted from mondo and umls. each task includes both equivalence and subsumption matching; the quality of reference mappings is ensured by human curation, ontology pruning, etc.; and a comprehensive evaluation framework is proposed to measure om performance from various perspectives for both ml-based and non-ml-based om systems. we report evaluation results for om systems of different types to demonstrate the usage of these resources, all of which are publicly available as part of the new bioml track at oaei 2022. | arxiv:2205.03447 |
we study superdense coding with a uniformly accelerated particle, both in the single mode approximation and beyond it. we use four different functions to evaluate the final results: the capacity of superdense coding, negativity, discord and the probability of success. in the single mode approximation, all four functions behave as expected; however, beyond the single mode approximation, all but the probability of success exhibit peculiar behaviours, at least in the ranges where the departure from the single mode approximation is strong. | arxiv:1611.07775 |
theoretical predictions show that at low values of bjorken $x$ the spin structure function $g_1$ is influenced by large logarithmic corrections, $\ln^2(1/x)$, which may be predominant in this region. these corrections are also partially contained in the nlo part of the standard dglap evolution. here we calculate the non-singlet component of the nucleon structure function, $g_1^{ns} = g_1^p - g_1^n$, and its first moment, using a unified evolution equation. this equation incorporates the terms describing the nlo dglap evolution and the terms contributing to the $\ln^2(1/x)$ resummation. in order to avoid double counting in the overlapping regions of phase space, a unique way of including the nlo terms into the unified evolution equation is proposed. the scheme-independent results obtained from this unified evolution are compared to the nlo fit to experimental data, grsv'2000. analysis of the first moments of $g_1^{ns}$ shows that the unified evolution including the $\ln^2(1/x)$ resummation goes beyond the nlo dglap analysis. corrections generated by double logarithms at low $x$ strongly influence the $q^2$-dependence of the first moments. | arxiv:hep-ph/0206303 |
while the strategy for the first applications of weak lensing has been to "go deep", it is equally interesting to use one's telescope time to instead "go wide". the sloan survey (sdss) provides a natural framework for a very wide area weak lensing survey. | arxiv:astro-ph/9510012 |
we present a method to compute thermodynamic quantities within functional continuum frameworks that is independent of the employed truncation. as a proof of principle, we first apply it to a nambu-jona-lasinio model in the mean-field approximation. then, we use the method with solutions obtained from a coupled set of truncated dyson-schwinger equations for the quark and gluon propagators of (2+1)-flavor quantum chromodynamics in landau gauge to obtain the pressure, entropy density, energy density, and interaction measure across the phase diagram of strong-interaction matter. we also discuss the limitation of the proposed method. | arxiv:2012.04991 |
improvements in the performance of computing systems, driven by moore's law, have transformed society. as such hardware-driven gains slow down, it becomes even more important for software developers to focus on performance and efficiency during development. while several studies have demonstrated the potential from such improved code efficiency (e.g., 2x better generational improvements compared to hardware), unlocking these gains in practice has been challenging. reasoning about algorithmic complexity and the interaction of coding patterns on hardware can be challenging for the average programmer, especially when combined with pragmatic constraints around development velocity and multi-person development. this paper seeks to address this problem. we analyze a large competitive programming dataset from the google code jam competition and find that efficient code is indeed rare, with a 2x runtime difference between the median and the 90th percentile of solutions. we propose using machine learning to automatically provide prescriptive feedback in the form of hints, to guide programmers towards writing high-performance code. to automatically learn these hints from the dataset, we propose a novel discrete variational auto-encoder, where each discrete latent variable represents a different learned category of code-edit that increases performance. we show that this method represents the multi-modal space of code efficiency edits better than a sequence-to-sequence baseline and generates a distribution of more efficient solutions. | arxiv:2208.05297 |
the semantic web effort has steadily been gaining traction in recent years. in particular, web search companies are recently realizing that their products need to evolve towards having richer semantic search capabilities. description logics (dls) have been adopted as the formal underpinnings for semantic web languages used in describing ontologies. reasoning under uncertainty has recently taken a leading role in this arena, given the nature of data found on the web. in this paper, we present a probabilistic extension of the dl el++ (which underlies the owl2 el profile) using markov logic networks (mlns) as probabilistic semantics. this extension is tightly coupled, meaning that probabilistic annotations in formulas can refer to objects in the ontology. we show that, even though the tightly coupled nature of our language means that many basic operations are data-intractable, we can leverage a sublanguage of mlns that allows us to rank the atomic consequences of an ontology relative to their probability values (called ranking queries) even when these values are not fully computed. we present an anytime algorithm to answer ranking queries, and provide an upper bound on the error that it incurs, as well as a criterion to decide when results are guaranteed to be correct. | arxiv:1210.4894 |
much is known about the adele ring of an algebraic number field from the perspective of harmonic analysis and class field theory. however, its ring-theoretical aspects are often ignored. here we present a description of the prime spectrum of this ring and study some of the algebraic and topological properties of these prime ideals. we also study how they behave under separable extensions of the base field and give an indication of how this study can be applied to adele rings not of number fields. | arxiv:2110.15736 |
drug repositioning (dr) refers to the identification of novel indications for approved drugs. the requirement of a huge investment of time as well as money and the risk of failure in clinical trials have led to a surge of interest in drug repositioning. dr exploits two major aspects associated with drugs and diseases: the existence of similarity among drugs and among diseases due to their shared involved genes or pathways or common biological effects. existing methods of identifying drug-disease associations rely mainly on the information available in structured databases only. on the other hand, the abundant information available in the form of free text in biomedical research articles is not being fully exploited. word embedding, or obtaining vector representations of words from a large corpus of free text using neural network methods, has been shown to give significant performance for several natural language processing tasks. in this work we propose a novel way of representation learning to obtain features of drugs and diseases by combining the complementary information available in unstructured texts and structured datasets. next we use a matrix completion approach on these feature vectors to learn a projection matrix between the drug and disease vector spaces. the proposed method has shown competitive performance with state-of-the-art methods. further, the case studies on alzheimer's disease and hypertension have shown that the predicted associations agree with existing knowledge. | arxiv:1705.05183 |
the osp(2|2)-invariant planar dynamics of a d = 4 superparticle near the horizon of a large mass extreme black hole is described by an n = 2 superconformal mechanics, with the so(2) charge being the superparticle's angular momentum. the {\it non-manifest} superconformal invariance of the superpotential term is shown to lead to a shift in the so(2) charge by the value of its coefficient, which we identify as the orbital angular momentum. the full su(1,1|2)-invariant dynamics is found from an extension to n = 4 superconformal mechanics. | arxiv:hep-th/9810230 |
the manipulation of the personality traits of large language models (llms) has emerged as a key area of research. methods like prompt-based in-context knowledge editing (ike) and gradient-based model editor networks (mend) have been explored but show irregularity and variability; ike depends on the prompt, leading to variability and sensitivity, while mend yields inconsistent and gibberish outputs. to address this, we employed opinion-qa-based parameter-efficient fine-tuning (peft), specifically quantized low-rank adaptation (qlora), to manipulate the big five personality traits: openness, conscientiousness, extraversion, agreeableness, and neuroticism. after peft, models such as mistral-7b-instruct and llama-2-7b-chat showed a latent behaviour by generating emojis for certain traits, despite no emojis being present in the peft data. for instance, llama-2-7b-chat generated emojis in 99.5% of extraversion-related test instances, while mistral-7b-instruct did so in 92.5% of openness-related test instances. icl explainability analysis indicated that the llms used emojis intentionally to express these traits. mechanistic interpretability analysis showed that this latent behaviour of llms could be traced to specific neurons that became activated or amplified after peft. this paper provides a number of novel contributions: first, introducing an opinion qa dataset for peft-driven personality manipulation; second, developing metric models to benchmark llm personality traits; third, demonstrating peft's superiority over ike in personality manipulation; and finally, analysing and validating emoji usage through explainability methods such as mechanistic interpretability and in-context learning explainability methods. | arxiv:2409.10245 |
many data sets (e.g., reviews, forums, news, etc.) exist in parallel in multiple languages. they all cover the same content, but the linguistic differences make it impossible to use traditional, bag-of-words-based topic models. models have to be either single-language or suffer from a huge, but extremely sparse vocabulary. both issues can be addressed by transfer learning. in this paper, we introduce a zero-shot cross-lingual topic model. our model learns topics in one language (here, english), and predicts them for unseen documents in different languages (here, italian, french, german, and portuguese). we evaluate the quality of the topic predictions for the same document in different languages. our results show that the transferred topics are coherent and stable across languages, which suggests exciting future research directions. | arxiv:2004.07737 |
this work presents a mathematical model that establishes an interesting connection between nucleotide frequencies in human single-stranded dna and the famous fibonacci numbers. the model relies on two assumptions. first, chargaff's second parity rule should be valid, and, second, the nucleotide frequencies should approach limit values when the number of bases is sufficiently large. under these two hypotheses, it is possible to predict the human nucleotide frequencies with accuracy. it is noteworthy that the predicted values are solutions of an optimization problem, which is commonplace in many of nature's phenomena. | arxiv:q-bio/0611041 |
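The abstract does not give the model's equations, but the characteristic limit behaviour of fibonacci numbers it invokes is easy to demonstrate: ratios of consecutive terms converge to the golden ratio, the kind of limit value the second assumption refers to.

```python
def fib(n):
    """n-th Fibonacci number, computed iteratively (fib(1) == fib(2) == 1)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Ratios of consecutive Fibonacci numbers converge to the golden ratio.
phi = (1 + 5 ** 0.5) / 2
ratio = fib(15) / fib(14)  # 610 / 377, already close to phi
```

This convergence is generic: it holds regardless of the two starting values, which is one reason fibonacci-like limits appear in optimization problems in nature.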
nearly all autonomous robotic systems use some form of motion planning to compute reference motions through their environment. the increasing use of autonomous robots in a broad range of applications creates a need for efficient, general purpose motion planning algorithms that are applicable in any of these new application domains. this thesis presents a resolution complete optimal kinodynamic motion planning algorithm based on a direct forward search of the set of admissible input signals to a dynamical model. the advantage of this generalized label correcting method is that it does not require a local planning subroutine as in the case of related methods. preliminary material focuses on new topological properties of the canonical problem formulation that are used to show continuity of the performance objective. these observations are used to derive a generalization of bellman's principle of optimality in the context of kinodynamic motion planning. a generalized label correcting algorithm is then proposed which leverages these results to prune candidate input signals from the search when their cost is greater than that of related signals. the second part of this thesis addresses admissible heuristics for kinodynamic motion planning. an admissibility condition is derived that can be used to verify the admissibility of candidate heuristics for a particular problem. this condition also characterizes a convex set of admissible heuristics. a linear program is formulated to obtain a heuristic which is as close to the optimal cost-to-go as possible while remaining admissible. this optimization is justified by showing that its solution coincides with the solution to the hamilton-jacobi-bellman equation. lastly, a sum-of-squares relaxation of this infinite-dimensional linear program is proposed for obtaining provably admissible approximate solutions. | arxiv:1705.04721 |
high pressure nuclear magnetic resonance is among the most challenging fields of research for every nmr spectroscopist, due to inherently low signal intensities, inaccessible and ultra-small samples, and overall extremely harsh conditions in the sample cavity of modern high pressure vessels. this review aims to provide a comprehensive overview of the topic of high pressure research and its fairly young and brief relationship with nmr. | arxiv:1803.04643 |
the frequencies and damping times of the non-radial oscillations of neutron stars are computed for a set of recently proposed equations of state (eos) which describe matter at supranuclear densities. these eos are obtained within two different approaches, the nonrelativistic nuclear many-body theory and the relativistic mean field theory, which model hadronic interactions in different ways, leading to different composition and dynamics. since the non-radial oscillations are associated with the emission of gravitational waves, we fit the eigenfrequencies of the fundamental mode and of the first pressure and gravitational-wave modes (polar and axial) with appropriate functions of the mass and radius of the star, comparing the fits, when available, with those obtained by andersson and kokkotas in 1998. we show that the identification, in the spectrum of a detected gravitational signal, of a sharp pulse corresponding to the excitation of the fundamental mode or of the first p-mode, combined with the knowledge of the mass of the star - the only observable on which we may have reliable information - would allow us to gain interesting information on the composition of the inner core. we further discuss the detectability of these signals by gravitational detectors. | arxiv:astro-ph/0407529 |
we study the following combinatorial counting and sampling problems: can we efficiently sample from the erd\H{o}s-r\'{e}nyi random graph $g(n,p)$ conditioned on triangle-freeness? can we efficiently approximate the probability that $g(n,p)$ is triangle-free? these are prototypical instances of forbidden substructure problems ubiquitous in combinatorics. the algorithmic questions are instances of approximate counting and sampling for a hypergraph hard-core model. estimating the probability that $g(n,p)$ has no triangles is a fundamental question in probabilistic combinatorics and one that has led to the development of many important tools in the field. through the work of several authors, the asymptotics of the logarithm of this probability are known if $p = o(n^{-1/2})$ or if $p = \omega(n^{-1/2})$. the regime $p = \theta(n^{-1/2})$ is more mysterious, as this range witnesses a dramatic change in the typical structural properties of $g(n,p)$ conditioned on triangle-freeness. as we show, this change in structure has a profound impact on the performance of sampling algorithms. we give two different efficient sampling algorithms for triangle-free graphs (and complementary algorithms to approximate the triangle-freeness large deviation probability), one that is efficient when $p < c/\sqrt{n}$ and one that is efficient when $p > C/\sqrt{n}$, for constants $c, C > 0$. the latter algorithm involves a new approach for dealing with large defects in the setting of sampling from low-temperature spin models. | arxiv:2410.22951 |
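For intuition only, conditioning $g(n,p)$ on triangle-freeness can be done by naive rejection sampling on small instances. This baseline is not one of the paper's efficient algorithms; its rejection rate blows up in exactly the regimes the paper targets.

```python
import itertools
import random

def sample_triangle_free(n, p, rng=None):
    """Draw G(n, p) repeatedly until the sample contains no triangle.
    Exactly correct, but the expected number of retries grows rapidly
    once triangles become likely, so this is only viable for small n*p."""
    rng = rng or random.Random()
    while True:
        edges = {(i, j) for i, j in itertools.combinations(range(n), 2)
                 if rng.random() < p}
        has_triangle = any({(a, b), (b, c), (a, c)} <= edges
                           for a, b, c in itertools.combinations(range(n), 3))
        if not has_triangle:
            return edges

g = sample_triangle_free(8, 0.15, random.Random(0))
```

Rejection sampling also yields a naive estimator of the triangle-freeness probability (the acceptance rate), again only practical well below the $\theta(n^{-1/2})$ threshold discussed above.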
this paper applies the pareto - optimal concept to lc ( lane - changing ) motion planning in the presence of mixed traffic including manual and autonomous vehicles. firstly, a multiobjective optimization problem is presented, in which the comfort, efficiency and safety of the lc vehicle and the surrounding vehicles are jointly modelled. thereafter, the pareto - optimal solutions are obtained through employing the nsga - ii ( non - dominated sorting genetic - ii ) algorithm. finally, the experiment section analyzes the ( macroscopic and microscopic ) lane - changing impact from a pareto - optimal perspective. also, a comprehensive sensitivity analysis is conducted. our results demonstrate that our algorithm could significantly reduce the lane - changing impact within its region, and the total costs are reduced in the range of 10. 94 % to 48. 66 %. this paper could be considered as a preliminary research framework for the application of the pareto - optimal concept. we hope this research will provide valuable insights into autonomous driving technology. | arxiv:2109.06080 |
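The Pareto-optimal machinery can be illustrated with the elementary non-dominated filter at the core of NSGA-II's sorting step. The full algorithm additionally uses crowding distances and genetic operators, and the cost tuples below (comfort, efficiency, safety, all to be minimized) are invented purely for illustration.

```python
def pareto_front(points):
    """Return the non-dominated subset, minimising every objective.

    A point p is dominated if some q is no worse in all objectives
    and strictly better in at least one. This filter is the basic
    step underlying NSGA-II's non-dominated sorting.
    """
    front = []
    for p in points:
        dominated = any(
            all(q[i] <= p[i] for i in range(len(p))) and
            any(q[i] < p[i] for i in range(len(p)))
            for q in points if q is not p)
        if not dominated:
            front.append(p)
    return front

# illustrative (comfort, efficiency, safety) costs for candidate manoeuvres
costs = [(1.0, 3.0, 2.0), (2.0, 1.0, 3.0), (2.5, 3.5, 3.5), (1.5, 2.0, 1.0)]
print(pareto_front(costs))
```

Here the third candidate is dominated by the fourth (worse in every objective), so the front retains the other three trade-off points; a planner would then pick one of them according to scenario-specific weights.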
the behaviour of a particle with a spin 1/2 and a dipole magnetic moment in a time-varying magnetic field of the form $(h_0 cn(\omega t, k), h_0 sn(\omega t, k), h_0 dn(\omega t, k))$, where $\omega$ is the driving field frequency, $t$ is the time, $h_0$ and $h_0$ are the field amplitudes, and $cn$, $sn$, $dn$ are jacobi elliptic functions with modulus $k$, has been considered. varying the parameter $k$ from zero to 1 gives rise to a wide set of modulating functions, from trigonometric shapes to exponential pulse shapes. the problem was reduced to the solution of the general heun equation. the exact solution for the wave function was found at resonance for any $k$. it has been shown that the transition probability in this case does not depend on $k$. the present study may be useful for the analysis of interference experiments, for improving magnetic spectrometers, and in the field of quantum computing. | arxiv:quant-ph/0404114 |
electron-phase modulation in magnetic and electric fields will be presented in In$_{0.75}$Ga$_{0.25}$As aharonov-bohm (ab) rings. the zero schottky barrier of this material made it possible to nanofabricate devices with radii down to below 200 nm without carrier depletion. we shall present a fabrication scheme based on wet and dry etching that yielded excellent reproducibility, very high contrast of the oscillations and good electrical gating. the operation of these structures is compatible with closed-cycle refrigeration and suggests that this process can yield coherent electronic circuits that do not require cryogenic liquids. the ingaas/alinas heterostructure was grown by mbe on a gaas substrate [1], and in light of the large effective g-factor and the absence of the schottky barrier it is a material system of interest for the investigation of spin-related effects [2-4] and the realization of hybrid superconductor/semiconductor devices [5]. | arxiv:cond-mat/0510139 |
metric learning aims to learn a distance metric such that semantically similar instances are pulled together while dissimilar instances are pushed away. many existing methods consider maximizing or at least constraining a distance margin in the feature space that separates similar and dissimilar pairs of instances to guarantee their generalization ability. in this paper, we advocate imposing an adversarial margin in the input space so as to improve the generalization and robustness of metric learning algorithms. we first show that, the adversarial margin, defined as the distance between training instances and their closest adversarial examples in the input space, takes account of both the distance margin in the feature space and the correlation between the metric and triplet constraints. next, to enhance robustness to instance perturbation, we propose to enlarge the adversarial margin through minimizing a derived novel loss function termed the perturbation loss. the proposed loss can be viewed as a data - dependent regularizer and easily plugged into any existing metric learning methods. finally, we show that the enlarged margin is beneficial to the generalization ability by using the theoretical technique of algorithmic robustness. experimental results on 16 datasets demonstrate the superiority of the proposed method over existing state - of - the - art methods in both discrimination accuracy and robustness against possible noise. | arxiv:2006.05945 |
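For context, here is a minimal sketch of the feature-space triplet margin constraint that such metric learning methods build on. The paper's perturbation loss is a distinct, input-space regularizer added on top of constraints like this; its exact form is given in the paper, so this shows only the standard base term.

```python
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=1.0):
    """Hinge loss enforcing a feature-space distance margin:
    the anchor-positive squared distance must be smaller than the
    anchor-negative one by at least `margin`, otherwise a penalty
    proportional to the violation is incurred.
    """
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(0.0, d_pos - d_neg + margin)
```

A satisfied triplet contributes zero loss, so only violating (or barely satisfied) triplets shape the learned metric; the adversarial-margin idea in the paper additionally asks how far each training instance can be perturbed in the input space before such a constraint flips.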
we consider the long standing problem in field theories of bosons that the boson vacuum does not consist of a ` sea ', unlike the fermion vacuum. we show with the help of supersymmetry considerations that the boson vacuum indeed does also consist of a sea in which the negative energy states are all " filled ", analogous to the dirac sea of the fermion vacuum, and that a hole produced by the annihilation of one negative energy boson is an anti - particle. here, we must admit that it is only possible if we allow - - as occurs in the usual formalism anyway - - that the " hilbert space " for the single particle bosons is not positive definite. this might be formally coped with by introducing the notion of a double harmonic oscillator, which is obtained by extending the condition imposed on the wave function. this double harmonic oscillator includes not only positive energy states but also negative energy states. we utilize this method to construct a general formalism for a boson sea analogous to the dirac sea, irrespective of the existence of supersymmetry. the physical result is consistent with that of the ordinary second quantization formalism. we finally suggest applications of our method to the string theories. | arxiv:hep-th/0312302 |
using a tadpole improved su ( 2 ) gluodynamics action, the nonabelian potential and the abelian potential after the abelian projection are computed. rotational invariance is found restored at coarse lattices both in the nonabelian theory and in the effective abelian theory resulting from maximal abelian projection. asymptotic scaling is tested for the su ( 2 ) string tension. deviation of the order of $ 6 % $ is found, for lattice spacings between 0. 27 and 0. 06 fm. evidence for asymptotic scaling and scaling of the monopole density in maximal abelian projection is also seen, but not at coarse lattices. the scaling behavior is compared with analyses of wilson action results, using bare and renormalized coupling schemes. using extended monopoles, evidence is found that the gauge dependence of the abelian projection reflects short distance fluctuations, and may thus disappear at large scales. | arxiv:hep-lat/9704006 |
photons are the elementary quantum excitations of the electromagnetic field. quantization is usually constructed on the basis of an expansion in eigenmodes, in the form of plane waves. since they form a basis, other electromagnetic configurations can be constructed by linear combinations. in this presentation we discuss a formulation constructed in the general formalism of bosonic fock space, in which the quantum excitation can be constructed directly on localized pulses of arbitrary shape. although the two formulations are essentially equivalent, the direct formulation in terms of pulses has some conceptual and practical advantages, which we illustrate with some examples. the first one is the passage of a single photon pulse through a beam splitter. the analysis of this formulation in terms of pulses in fock space shows that there is no need to introduce " vacuum fluctuations entering through the unused port ", as is often done in the literature. another example is the hong - ou - mandel effect. it is described as a time dependent process in the schr \ " odinger representation in fock space. the analysis shows explicitly how the two essential ingredients of the hong - ou - mandel effect are the same shape of the pulses and the bosonic nature of photons. this formulation shows that all the phenomena involving linear quantum optical devices can be described and calculated on the basis of the time dependent solution of the corresponding classical maxwell ' s equations for pulses, from which the quantum dynamics in fock space can be immediately constructed. | arxiv:2212.03203 |
recent studies of o-type stars demonstrated that discrepant mass-loss rates are obtained when different diagnostic methods are employed - fitting the unsaturated uv resonance lines (e.g. p v) gives drastically lower values than obtained from the h{\alpha} emission. wind clumping may be the main cause for this discrepancy. in a previous paper, we have presented 3-d monte-carlo calculations for the formation of scattering lines in a clumped stellar wind. in the present paper we select five o-type supergiants (from o4 to o7) and test whether the reported discrepancies can be resolved this way. in the first step, the analyses start with simulating the observed spectra with potsdam wolf-rayet (powr) non-lte model atmospheres. the mass-loss rates are adjusted to fit best to the observed h{\alpha} emission lines. for the unsaturated uv resonance lines (i.e. p v) we then apply our 3-d monte-carlo code, which can account for wind clumps of any optical depths, a non-void inter-clump medium, and a velocity dispersion inside the clumps. the ionization stratifications and underlying photospheric spectra are adopted from the powr models. from fitting the observed resonance line profiles, the properties of the wind clumps are constrained. our results show that with the mass-loss rates that fit h{\alpha} (and other balmer and he ii lines), the uv resonance lines (especially the unsaturated doublet of p v) can also be reproduced without problem when macroclumping is taken into account. there is no need to artificially reduce the mass-loss rates, nor to assume a sub-solar phosphorus abundance or an extremely high clumping factor, contrary to what was claimed by other authors. these consistent mass-loss rates are lower by a factor of 1.3 to 2.6, compared to the mass-loss rate recipe from vink et al. macroclumping resolves the previously reported discrepancy between h{\alpha} and p v mass-loss diagnostics. | arxiv:1310.0449 |
in a light - pulse atom interferometer, we use a tip - tilt mirror to remove the influence of the coriolis force from earth ' s rotation and to characterize configuration space wave packets. for interferometers with large momentum transfer and large pulse separation time, we improve the contrast by up to 350 % and suppress systematic effects. we also reach what is to our knowledge the largest spacetime area enclosed in any atom interferometer to date. we discuss implications for future high performance instruments. | arxiv:1110.6910 |
it has been an open question in deep learning if fault - tolerant computation is possible : can arbitrarily reliable computation be achieved using only unreliable neurons? in the grid cells of the mammalian cortex, analog error correction codes have been observed to protect states against neural spiking noise, but their role in information processing is unclear. here, we use these biological error correction codes to develop a universal fault - tolerant neural network that achieves reliable computation if the faultiness of each neuron lies below a sharp threshold ; remarkably, we find that noisy biological neurons fall below this threshold. the discovery of a phase transition from faulty to fault - tolerant neural computation suggests a mechanism for reliable computation in the cortex and opens a path towards understanding noisy analog systems relevant to artificial intelligence and neuromorphic computing. | arxiv:2202.12887 |
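The threshold phenomenon described above has a classical discrete analogue: majority voting over replicated noisy components (von Neumann's argument). The sketch below computes the post-vote error rate and shows it falls below the per-neuron fault rate only when that rate is under 1/2; the grid-cell-inspired analog codes in the paper are a far more efficient realization of the same fault-tolerance idea, not this simple repetition scheme.

```python
from math import comb

def majority_error(eps, n):
    """Probability that a majority vote over n independent copies of a
    binary neuron, each flipping its output with probability eps,
    returns the wrong bit. The vote is wrong iff a majority of the
    n copies (n odd) fail simultaneously.
    """
    assert n % 2 == 1, "use an odd number of copies to avoid ties"
    return sum(comb(n, k) * eps**k * (1 - eps)**(n - k)
               for k in range((n + 1) // 2, n + 1))
```

For eps = 0.1 and five copies the error drops below 1%, while for eps = 0.6 redundancy makes things worse: eps = 1/2 is the sharp threshold separating the two regimes, mirroring the faulty-to-fault-tolerant phase transition reported in the abstract.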
for a generic dynamical decoupling sequence employing a single - axis control, we study its efficiency in the presence of small errors in direction of the controlling - pulses. in the case that the corresponding ideal dynamical - decoupling sequence produces good results, the impact of the errors is found to scale as $ \ xi ^ 2 $, with negligible first - order effect, where $ \ xi $ is the dispersion of the random errors. this analytical prediction is numerically tested in a model, in which the environment is modeled by one qubit coupled to a quantum kicked rotator in chaotic motion. in this model, with periodic pulses applied to the qubit in the environment, it is shown numerically that uhrig dynamical decoupling is not necessarily better than the bang - bang control. | arxiv:1101.5430 |
in this paper, a continuous hybrid differentiator is presented based on a strong lyapunov function. the design not only sufficiently reduces the chattering phenomenon in derivative estimation by introducing a perturbation parameter, but also improves the dynamical performance by adding linear correction terms to the nonlinear ones. moreover, strong robustness is obtained by integrating sliding mode terms and a linear filter. frequency analysis is applied to compare the hybrid continuous differentiator with a sliding mode differentiator. the merits of the continuous hybrid differentiator include excellent dynamical performance, sufficient noise suppression, and avoidance of the chattering phenomenon. | arxiv:1103.4311 |
we discuss the weak gravitational field created by isolated matter sources in the randall - sundrum brane - world. in the case of two branes of opposite tension, linearized brans - dicke ( bd ) gravity is recovered on either wall, with different bd parameters. on the wall with positive tension the bd parameter is larger than 3000 provided that the separation between walls is larger than 4 times the ads radius. for the wall of negative tension, the bd parameter is always negative but greater than - 3 / 2. in either case, shadow matter from the other wall gravitates upon us. for equal newtonian mass, light deflection from shadow matter is 25 % weaker than from ordinary matter. hence, the effective mass of a clustered object containing shadow dark matter would be underestimated if naively measured through its lensing effect. for the case of a single wall of positive tension, einstein gravity is recovered on the wall to leading order, and if the source is stationary the field stays localized near the wall. we calculate the leading kaluza - klein corrections to the linearized gravitational field of a non - relativistic spherical object and find that the metric is different from the schwarzschild solution at large distances. we believe that our linearized solution corresponds to the field far from the horizon after gravitational collapse of matter on the brane. | arxiv:hep-th/9911055 |
a body $ \ mathscr b $ moves in an unbounded navier - stokes liquid by time - independent translatory motion. suppose that at time $ t = 0 $, $ \ mathscr b $ smoothly changes its motion to an arbitrary rigid motion, reached at time $ t = 1 $. we then show that the associated navier - stokes problem has a unique solution connecting the two steady - states generated by the motion of $ \ mathscr b $, provided all the involved velocities of $ \ mathscr b $ are sufficiently small. | arxiv:2503.00571 |
we consider a finite quantum system s coupled to two environments of different nature. one is a heat reservoir r ( continuous interaction ) and the other one is a chain c of independent quantum systems e ( repeated interaction ). the interactions of s with r and c lead to two simultaneous dynamical processes. we show that for generic such systems, any initial state approaches an asymptotic state in the limit of large times. we express the latter in terms of the resonance data of a reduced propagator of s + r and show that it satisfies a second law of thermodynamics. we analyze a model where both s and e are two - level systems and obtain the asymptotic state explicitly ( lowest order in the interaction strength ). even though r and c are not direcly coupled, we show that they exchange energy, and we find the dependence of this exchange in terms of the thermodynamic parameters. we formulate the problem in the framework of w * - dynamical systems and base the analysis on a combination of spectral deformation methods and repeated interaction model techniques. we do not use master equation approximations. | arxiv:0905.2558 |
theoretical datum and is maintained under the deformation ), ( c ) the associator. it is shown that $ c ^ \ bullet ( c, d ) ( f, f ) ( \ mathrm { id }, \ mathrm { id } ) $ is a homotopy $ e _ 2 $ - algebra. conjecturally, $ c ^ \ bullet ( c, c ) ( \ mathrm { id }, \ mathrm { id } ) ( \ mathrm { id }, \ mathrm { id } ) $ is a homotopy $ e _ 3 $ - algebra ; however the proof requires more sophisticated methods and we hope to complete it in our next paper. | arxiv:2210.01664 |
we calculate magnetic anisotropy energy of fe and ni by taking into account the effects of strong electronic correlations, spin - orbit coupling, and non - collinearity of intra - atomic magnetization. the lda + u method is used and its equivalence to dynamical mean - field theory in the static limit is emphasized. both experimental magnitude of mae and direction of magnetization are predicted correctly near u = 4 ev for ni and u = 3. 5 ev for fe. correlations modify one - electron spectra which are now in better agreement with experiments. | arxiv:cond-mat/0006385 |
objective : organ deformation models have the potential to improve delivery and reduce toxicity of radiotherapy, but existing data - driven motion models are based on either patient - specific or population data. we propose to combine population and patient - specific data using a bayesian framework. our goal is to accurately predict individual motion patterns while using fewer scans than previous models. approach : we have derived and evaluated two bayesian deformation models. the models were applied retrospectively to the rectal wall from a cohort of prostate cancer patients. these patients had repeat ct scans evenly acquired throughout radiotherapy. each model was used to create coverage probability matrices ( cpms ). the spatial correlations between these cpms and ` ` true ' ' cpms, derived from independent scans of the same patient, were calculated. main results : spatial correlation with ground truth were significantly higher for the bayesian deformation models than both patient - specific and population - derived models with 1, 2 or 3 patient - specific scans as input. statistical motion simulations indicate that this result will also hold for more than 3 scans. significance : the improvement over known models means that fewer scans per patient are needed to achieve accurate deformation predictions. the models have applications in robust radiotherapy planning and evaluation, among others. | arxiv:2210.15296 |
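The population-plus-patient combination can be sketched in the simplest conjugate-normal setting: the population cohort supplies a prior and each patient scan contributes a noisy measurement. The actual models operate on full deformation fields and coverage probability matrices; the scalar version below, with invented parameter names, only shows how the posterior interpolates between the two data sources as patient scans accumulate.

```python
def posterior_motion_estimate(population_mean, population_var,
                              patient_scans, scan_noise_var):
    """Conjugate normal-normal update: combine a population prior
    N(population_mean, population_var) with n patient measurements,
    each observed with variance scan_noise_var. Returns the posterior
    mean and variance of the patient's motion amplitude.
    """
    n = len(patient_scans)
    precision = 1.0 / population_var + n / scan_noise_var
    post_var = 1.0 / precision
    post_mean = post_var * (population_mean / population_var
                            + sum(patient_scans) / scan_noise_var)
    return post_mean, post_var
```

With no scans the estimate is purely the population prior; with many scans it converges to the patient's own sample mean, which is exactly why a Bayesian model can match patient-specific accuracy with fewer scans.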
let $\mathcal{M}$ be a type ${\rm II_1}$ factor and let $\tau$ be the faithful normal tracial state on $\mathcal{M}$. in this paper, we prove that given elements $x_1, \cdots, x_n \in \mathcal{M}$, there is a finite decomposition of the identity into $n \in \mathbb{N}$ mutually orthogonal nonzero projections $e_j \in \mathcal{M}$, $I = \sum_{j=1}^n e_j$, such that $e_j x_i e_j = \tau(x_i) e_j$ for all $j = 1, \cdots, n$ and $i = 1, \cdots, n$. equivalently, there is a unitary operator $u \in \mathcal{M}$ such that $\frac{1}{n} \sum_{j=0}^{n-1} {u^*}^j x_i u^j = \tau(x_i) I$ for $i = 1, \cdots, n$. this result is a stronger version of dixmier's averaging theorem for type ${\rm II}_1$ factors. as the first application, we show that all elements of trace zero in a type ${\rm II}_1$ factor are single commutators and any self-adjoint elements of trace zero are single self-commutators. this result answers affirmatively question 1.1 in [10]. as the second application, we prove that any self-adjoint element in a type ${\rm II}_1$ factor can be written as a linear combination of 4 projections. this result answers affirmatively question 6(2) in [15]. as the third application, we show that if $(\mathcal{M}, \tau)$ is a finite factor and $x \in \mathcal{M}$, then there exist a normal operator $n \in \mathcal{M}$ and a nilpotent operator $k$ such that $x = n + k$. this result answers affirmatively question 1.1 in [9]. | arxiv:2303.10602 |
the teichm \ " uller harmonic map flow, introduced in [ 9 ], evolves both a map from a closed riemann surface to an arbitrary compact riemannian manifold, and a constant curvature metric on the domain, in order to reduce its harmonic map energy as quickly as possible. in this paper, we develop the geometric analysis of holomorphic quadratic differentials in order to explain what happens in the case that the domain metric of the flow degenerates at infinite time. we obtain a branched minimal immersion from the degenerate domain. | arxiv:1209.3783 |
we study the transport, decoherence and dissipation of an impurity interacting with a bath of free fermions in a one - dimensional lattice. numerical simulations are made with the time - evolving block decimation method. we introduce a mass imbalance between the impurity and bath particles and find that the fastest decoherence occurs for a light impurity in a bath of heavy particles. by contrast, the fastest dissipation of energy occurs when the masses are equal. we present a simple model for decoherence in the heavy bath limit, and a linear density response description of the interaction which predicts maximum dissipation for equal masses. | arxiv:1604.06638 |
we study weil - petersson ( wp ) geodesics with narrow end invariant and develop techniques to control length - functions and twist parameters along them and prescribe their itinerary in the moduli space of riemann surfaces. this class of geodesics is rich enough to provide for examples of closed wp geodesics in the thin part of the moduli space, as well as divergent wp geodesic rays with minimal filling ending lamination. some ingredients of independent interest are the following : a strength version of wolpert ' s geodesic limit theorem proved in sec. 4. the stability of hierarchy resolution paths between narrow pairs of partial markings or laminations in the pants graph proved in sec. 5. a kind of symbolic coding for laminations in terms of subsurface coefficients presented in sec. 7. | arxiv:1212.0051 |
this paper addresses a factorization method for imaging the support of a wave - number - dependent source function from multi - frequency data measured at a finite pair of symmetric receivers in opposite directions. the source function is given by the inverse fourier transform of a compactly supported time - dependent source whose initial moment or terminal moment for radiating is unknown. using the multi - frequency far - field data at two opposite observation directions, we provide a computational criterion for characterizing the smallest strip containing the support and perpendicular to the directions. a new parameter is incorporated into the design of test functions for indicating the unknown moment. the data from a finite pair of opposite directions can be used to recover the $ \ theta $ - convex polygon of the support. uniqueness in recovering the convex hull of the support is obtained as a by - product of our analysis using all observation directions. similar results are also discussed with the multi - frequency near - field data from a finite pair of observation positions in three dimensions. we further comment on possible extensions to source functions with two disconnected supports. extensive numerical tests in both two and three dimensions are implemented to show effectiveness and feasibility of the approach. the theoretical framework explored here should be seen as the frequency - domain analysis for inverse source problems in the time domain. | arxiv:2401.07193 |
despite substantial advances in scaling test-time compute, an ongoing debate in the community is how it should be scaled up to enable continued and efficient improvements with scaling. there are largely two approaches: first, distilling successful search or thinking traces; and second, using verification (e.g., 0/1 outcome rewards, reward models, or verifiers) to guide reinforcement learning (rl) and search algorithms. in this paper, we prove that finetuning llms with verifier-based (vb) methods based on rl or search is far superior to verifier-free (vf) approaches based on distilling or cloning search traces, given a fixed amount of compute/data budget. further, we show that as we scale test-time compute (measured as the output token length) and training data, suboptimality of vf methods scales poorly compared to vb when the base pre-trained llm presents a heterogeneous distribution over correct solution traces (e.g., different lengths, styles, etc.) and admits a non-sharp distribution over rewards on traces sampled from it. we formalize this condition using anti-concentration [erd\H{o}s, 1945]. this implies a stronger result that vb methods scale better asymptotically, with the performance gap between vb and vf methods widening as test-time budget grows. we corroborate our theory empirically on both didactic and math reasoning problems with 3/8/32b-sized pre-trained llms, where we find verification is crucial for scaling test-time compute. | arxiv:2502.12118 |
we study active run-and-tumble particles with an additional two-state internal variable characterizing their motile or non-motile state. motile particles change irreversibly into non-motile ones upon collision with a non-motile particle. the system evolves towards an absorbing state where all particles are non-motile. we initialize the system with one non-motile particle in a bath of motile ones and study numerically the kinetics of relaxation to the absorbing state and its structure as a function of the density of the initial bath of motile particles and of their tumbling rate. we find a crossover from fractal aggregates at low density to homogeneous ones at high density. the persistence of single-particle dynamics, as quantified by the tumbling rate, pushes this crossover to higher density and can be used to tune the porosity of the aggregate. at the lowest density the fractal dimension of the aggregate approaches that obtained in single-particle diffusion-limited aggregation. our results could be exploited for the design of structures of desired porosity. the model is a first step towards the study of the collective dynamics of active particles that can exchange biological information. | arxiv:1807.02928 |
in graphene nanoribbon junctions, the nearly perfect transmission occurs in some junctions while the zero conductance dips due to anti - resonance appear in others. we have classified the appearance of zero conductance dips for all combinations of ribbon and junction edge structures. these transport properties do not attribute to the whole junction structure but the partial corner edge structure, which indicates that one can control the electric current simply by cutting a part of nanoribbon edge. the ribbon width is expected to be narrower than 10 nm in order to observe the zero conductance dips at room temperature. | arxiv:0908.0176 |
we present godot reinforcement learning ( rl ) agents, an open - source interface for developing environments and agents in the godot game engine. the godot rl agents interface allows the design, creation and learning of agent behaviors in challenging 2d and 3d environments with various on - policy and off - policy deep rl algorithms. we provide a standard gym interface, with wrappers for learning in the ray rllib and stable baselines rl frameworks. this allows users access to over 20 state of the art on - policy, off - policy and multi - agent rl algorithms. the framework is a versatile tool that allows researchers and game designers the ability to create environments with discrete, continuous and mixed action spaces. the interface is relatively performant, with 12k interactions per second on a high end laptop computer, when parallized on 4 cpu cores. an overview video is available here : https : / / youtu. be / g1mlzsfqij4 | arxiv:2112.03636 |
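For readers unfamiliar with the standard Gym contract that such interfaces wrap, here is a toy environment exposing the usual reset/step shape. This is NOT the Godot RL Agents API; it is only the generic pattern that Gym-compatible wrappers conform to, with invented dynamics.

```python
class TinyGymLikeEnv:
    """Minimal illustration of the Gym-style contract: reset() returns
    an initial observation; step(action) returns (observation, reward,
    done, info). Dynamics here are a trivial 1-d walk toward a goal.
    """

    def __init__(self, goal=5):
        self.goal = goal
        self.state = 0

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):          # action in {-1, +1}
        self.state += action
        done = self.state >= self.goal
        reward = 1.0 if done else 0.0
        return self.state, reward, done, {}

env = TinyGymLikeEnv()
obs = env.reset()
done = False
while not done:
    obs, reward, done, info = env.step(+1)
```

Because any environment exposing this contract can be driven by off-the-shelf RL libraries, wrapping a game engine behind it is what unlocks the on-policy, off-policy and multi-agent algorithms mentioned in the abstract.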
we study what happens to d and d _ s mesons as the temperature increases, using lattice qcd simulations with n _ f = 2 + 1 dynamical flavours on anistropic lattices. we have access to five temperatures in the hadronic phase. using the determined groundstate mass at the lowest temperature, we investigate the effect of rising temperature by analysing ratios of mesonic correlators, without the need for further fitting or spectral reconstruction. in the pseudoscalar and vector channels, we demonstrate that temperature effects are at the percent level and can be captured by a reduction of the groundstate mass as the thermal crossover is approached. in the axial - vector and scalar channels on the other hand, temperature effects are prominent throughout the hadronic phase. | arxiv:2209.14681 |
a novel approach to design the feedback control based on past states is proposed for hybrid stochastic differential equations ( hsdes ). this new theorem builds up the connection between the delay feedback control and the control function without delay terms, which enables one to construct the delay feedback control using the existing results on stabilities of hsdes. methods to find the upper bound of the length of the time delay are also investigated. numerical simulations are presented to demonstrate the new theorem. | arxiv:1907.12080 |
crystallization of supersaturated liquids usually starts by heterogeneous nucleation. mounting evidence shows that even homogeneous nucleation in simple liquids takes place in two steps ; first a dense amorphous precursor forms, and the crystalline phase appears via heterogeneous nucleation in / on the precursor cluster. herein, we review recent results by a simple dynamical density functional theory, the phase - field crystal model, for ( precursor - mediated ) homogeneous and heterogeneous nucleation of nanocrystals. it will be shown that the mismatch between the lattice constants of the nucleating crystal and the substrate plays a decisive role in determining the contact angle and nucleation barrier, which were found to be non - monotonic functions of the lattice mismatch. time dependent studies are essential as investigations based on equilibrium properties often cannot identify the preferred nucleation pathways. modeling of these phenomena is essential for designing materials on the basis of controlled nucleation and / or nano - patterning. | arxiv:1407.3627 |
currently, depression treatment relies on closely monitoring patients response to treatment and adjusting the treatment as needed. using self - reported or physician - administrated questionnaires to monitor treatment response is, however, burdensome, costly and suffers from recall bias. in this paper, we explore using location sensory data collected passively on smartphones to predict treatment outcome. to address heterogeneous data collection on android and ios phones, the two predominant smartphone platforms, we explore using domain adaptation techniques to map their data to a common feature space, and then use the data jointly to train machine learning models. our results show that this domain adaptation approach can lead to significantly better prediction than that with no domain adaptation. in addition, our results show that using location features and baseline self - reported questionnaire score can lead to f1 score up to 0. 67, comparable to that obtained using periodic self - reported questionnaires, indicating that using location data is a promising direction for predicting depression treatment outcome. | arxiv:2503.07883 |
the resurgence and asymptotic resurgence of an ideal in a polynomial ring are two statistics which measure the relationship between its regular and symbolic powers. we address two aspects of resurgence which can be studied via asymptotic resurgence. first, we show that if an ideal has noetherian symbolic rees algebra then its resurgence is rational. second, we derive two bounds on asymptotic resurgence given a single known containment between a symbolic and regular power. from these bounds we recover and extend criteria for the resurgence of an ideal to be strictly less than its big height recently derived by grifo, huneke, and mukundan. we achieve the reduction to asymptotic resurgence by showing that if the asymptotic resurgence and resurgence are different, then resurgence is a maximum instead of a supremum. | arxiv:2003.06980 |