Stochastic PDE eigenvalue problems often arise in the field of uncertainty quantification, whereby one seeks to quantify the uncertainty in an eigenvalue or its eigenfunction. In this paper we present an efficient multilevel quasi-Monte Carlo (MLQMC) algorithm for computing the expectation of the smallest eigenvalue of an elliptic eigenvalue problem with stochastic coefficients. Each sample evaluation requires the solution of a PDE eigenvalue problem, and so tackling this problem in practice is notoriously computationally difficult. We speed up the approximation of this expectation in four ways: we use a multilevel variance reduction scheme to spread the work over a hierarchy of FE meshes and truncation dimensions; we use QMC methods to efficiently compute the expectations on each level; we exploit the smoothness in parameter space and reuse the eigenvector from a nearby QMC point to reduce the number of iterations of the eigensolver; and we utilise a two-grid discretisation scheme to obtain the eigenvalue on the fine mesh with a single linear solve. The full error analysis of a basic MLQMC algorithm is given in the companion paper [Gilbert and Scheichl, 2022], and so in this paper we focus on how to further improve the efficiency and provide theoretical justification for using nearby QMC points and two-grid methods. Numerical results are presented that show the efficiency of our algorithm, and also show that the four strategies we employ are complementary.
arxiv:2103.03407
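The multilevel idea in the abstract above can be illustrated with a minimal sketch: the expectation of the fine-level quantity is written as a telescoping sum of level differences, each estimated with its own sample budget. Everything below (the toy "eigenvalue" function with an $O(h^2)$ discretisation-error model, the parameter distribution, the sample counts) is an illustrative assumption, not the paper's actual algorithm, which uses QMC points, FE meshes and eigensolvers.

```python
import numpy as np

rng = np.random.default_rng(0)

def eigenvalue_approx(y, level):
    # Toy stand-in for the smallest eigenvalue computed on mesh level `level`:
    # the "exact" value 1 + y is perturbed by an O(h^2) discretisation error,
    # with mesh width h = 2**-level. (Purely illustrative error model.)
    h = 2.0 ** -level
    return 1.0 + y + h**2 * np.cos(y)

def mlmc_expectation(n_samples_per_level, max_level):
    # Telescoping multilevel estimator:
    #   E[lam_L] = E[lam_0] + sum_{l=1}^{L} E[lam_l - lam_{l-1}],
    # with most samples spent on the cheap coarse level and few on the
    # expensive fine corrections, whose variance is small.
    total = 0.0
    for level in range(max_level + 1):
        n = n_samples_per_level[level]
        y = rng.uniform(-0.5, 0.5, size=n)   # stochastic parameter samples
        fine = eigenvalue_approx(y, level)
        coarse = eigenvalue_approx(y, level - 1) if level > 0 else 0.0
        total += np.mean(fine - coarse)
    return total

est = mlmc_expectation([4000, 1000, 250], max_level=2)
```

In a QMC variant, the `rng.uniform` draws would be replaced by deterministic low-discrepancy points, which is where the algorithm's higher convergence rate comes from.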
The quasi-classical limit of the scalar nonlocal dbar-problem is derived and a quasi-classical version of the dbar-dressing method is presented. Dispersionless KP, mKP and 2DTL hierarchies are discussed as illustrative examples. It is shown that the universal Whitham hierarchy is nothing but the ring of symmetries for the quasi-classical dbar-problem. The reduction problem is discussed and, in particular, the d2DTL equation of B type is derived.
arxiv:nlin/0105071
Over the projective plane and at most two-step blowups of Hirzebruch surfaces, where there are strong full exceptional sequences of line bundles, we obtain foundational results about Gaeta resolutions of coherent sheaves by these line bundles. Under appropriate conditions, we show that the locus of semistable sheaves not admitting Gaeta resolutions has codimension at least 2. We then study Le Potier's strange duality conjecture. Over these surfaces, for two orthogonal numerical classes where one has rank one and the other has sufficiently positive first Chern class, we show that the strange morphism is injective. The main step in the proof is to use Gaeta resolutions to show that certain relevant Quot schemes are finite and reduced, allowing them to be enumerated using the authors' previous paper.
arxiv:2205.14827
A model is developed to explain the rise time of a spherical intruder placed in a granular bed, which is treated as a fluid. The phenomenon of an intruder rising in a granular bed is well known as the Brazil nut effect. The radius of the intruder is varied as $R_n = R_0 n$, $n = 1, \ldots, 10$. An approximation for $t \ll \tau$ is chosen in order to simplify the solution of the second-order differential equation for the intruder's vertical position and obtain the rise time $T$. A non-physical parameter $\alpha$, together with a transformation from $t$ to $t'$, must be introduced in order to make the results mimic the reported experiment qualitatively. Several forms of the rise time $T_n$ and of the transformed rise time $T_n'$ against $n$ are presented and discussed.
arxiv:1105.2987
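The setup above (an intruder rising through a granular bed treated as a fluid, with radius varied as $R_n = R_0 n$) can be caricatured with a simple buoyancy-plus-drag integration. This is a generic sketch under assumed placeholder parameters, not the authors' model or their $t \ll \tau$ approximation:

```python
import numpy as np

def rise_time(radius, depth=0.05, rho_bed=1500.0, rho_intruder=500.0,
              viscosity=10.0, g=9.81, dt=1e-4, t_max=60.0):
    # Explicit Euler integration of a sphere rising under buoyancy against
    # a Stokes-like drag, with the granular bed treated as a fluid.
    # All parameter values are illustrative placeholders.
    volume = 4.0 / 3.0 * np.pi * radius**3
    mass = rho_intruder * volume
    z, v, t = 0.0, 0.0, 0.0
    while z < depth and t < t_max:
        buoyancy = (rho_bed - rho_intruder) * volume * g
        drag = 6.0 * np.pi * viscosity * radius * v
        v += (buoyancy - drag) / mass * dt
        z += v * dt
        t += dt
    return t

# Radius varied as R_n = R_0 * n, echoing the paper's parametrisation.
times = [rise_time(0.005 * n) for n in (1, 2, 3)]
```

In this crude picture the terminal velocity grows like $r^2$, so larger intruders surface faster; the paper's point is that such a naive fluid model needs extra ingredients (the parameter $\alpha$ and the time transformation) to match experiment.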
Let X be an irreducible symplectic variety defined over a number field K. Assume either that X has Picard number at least two or that X has even second Betti number. We prove that there exist a finite algebraic field extension L/K and a density 1 set S of non-archimedean places of L such that the reduction of X at any place in S has nonzero Hasse-Witt invariant.
arxiv:1001.2929
In industry as well as in education and academia, we see a growing need for knowledge on how to apply machine learning in software applications. With the educational programme ICT & AI at Fontys UAS we had to find an answer to the question: "How should we educate software engineers to become AI engineers?" This paper describes our educational programme, the open source tools we use, and the literature it is based on. After three years of experience, we present our lessons learned for both educational institutions and software engineers in practice.
arxiv:2011.01590
Immersive virtual- and augmented-reality headsets can overlay a flat image against any surface or hang virtual objects in the space around the user. The technology is rapidly improving and may, in the long term, replace traditional flat panel displays in many situations. When displays are no longer intrinsically flat, how should we use the space around the user for abstract data visualisation? In this paper, we ask this question with respect to origin-destination flow data in a global geographic context. We report on the findings of three studies exploring different spatial encodings for flow maps. The first experiment focuses on different 2D and 3D encodings for flows on flat maps. We find that participants are significantly more accurate with raised flow paths whose height is proportional to flow distance, but fastest with traditional straight-line 2D flows. In our second and third experiments, we compared flat maps, 3D globes and a novel interactive design we call MapsLink, involving a pair of linked flat maps. We find that participants took significantly more time with MapsLink than with other flow maps, while the 3D globe with raised flows was the fastest, most accurate, and most preferred method. Our work suggests that careful use of the third spatial dimension can resolve visual clutter in complex flow maps.
arxiv:1908.02089
We discuss the possibility of constructing an effective quantum field theory for an axial vector coupled to a Dirac spinor field. A massive axial vector describes antisymmetric torsion. The consistency conditions include unitarity and renormalizability in the low-energy region. The investigation of the Ward identities and the one- and two-loop divergences indicates serious problems arising in the theory. The final conclusion is that torsion may exist as a string excitation, but there are very severe restrictions for the existence of a propagating torsion field, subject to the quantization procedure, at low energies.
arxiv:hep-th/9910168
Existing few-shot learning (FSL) methods make the implicit assumption that the few target class samples are from the same domain as the source class samples. However, in practice this assumption is often invalid: the target classes could come from a different domain. This poses an additional challenge of domain adaptation (DA) with few training samples. In this paper, the problem of domain-adaptive few-shot learning (DA-FSL) is tackled, which requires solving FSL and DA in a unified framework. To this end, we propose a novel domain-adversarial prototypical network (DAPN) model. It is designed to address a specific challenge in DA-FSL: the DA objective means that the source and target data distributions need to be aligned, typically through a shared domain-adaptive feature embedding space; but the FSL objective dictates that the target domain per-class distribution must be different from that of any source domain class, meaning that aligning the distributions across domains may harm the FSL performance. How to achieve global domain distribution alignment whilst maintaining source/target per-class discriminativeness thus becomes the key. Our solution is to explicitly enhance the source/target per-class separation before domain-adaptive feature embedding learning in the DAPN, in order to alleviate the negative effect of domain alignment on FSL. Extensive experiments show that our DAPN outperforms the state-of-the-art FSL and DA models, as well as their naïve combinations. The code is available at https://github.com/dingmyu/dapn.
arxiv:2003.08626
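The prototypical-network building block that DAPN extends follows a standard recipe: each class prototype is the mean of its support embeddings, and a query is assigned to the nearest prototype. The sketch below shows only this generic classification step on toy 2D "embeddings" (the data, labels and dimensions are assumptions); the domain-adversarial alignment that is the paper's contribution is not modelled here.

```python
import numpy as np

def prototypical_predict(support, support_labels, queries):
    # Few-shot classification step of a prototypical network:
    # 1) each class prototype is the mean of its support embeddings;
    # 2) each query is assigned to the class with the nearest prototype
    #    (squared Euclidean distance).
    classes = np.unique(support_labels)
    prototypes = np.stack([support[support_labels == c].mean(axis=0)
                           for c in classes])
    d2 = ((queries[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    return classes[d2.argmin(axis=1)]

# Toy 2-way, 2-shot episode with hand-made 2D embeddings.
support = np.array([[0.0, 0.0], [0.2, 0.0], [2.0, 2.0], [2.2, 2.0]])
labels = np.array([0, 0, 1, 1])
queries = np.array([[0.1, 0.1], [2.1, 1.9]])
pred = prototypical_predict(support, labels, queries)  # → [0, 1]
```

In a real FSL pipeline the embeddings would come from a learned feature extractor; DA-FSL additionally requires that this embedding space be aligned across domains without collapsing per-class separation.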
experimentation. He proposed a method to distinguish between genuine empirical, nonempirical or even pseudoempirical methods. The latter case was exemplified by astrology, which appeals to observation and experimentation. While it had empirical evidence based on observation, on horoscopes and biographies, it crucially failed to use acceptable scientific standards. Popper proposed falsifiability as an important criterion in distinguishing science from pseudoscience. To demonstrate this point, Popper gave two cases of human behavior and typical explanations from Sigmund Freud and Alfred Adler's theories: "that of a man who pushes a child into the water with the intention of drowning it; and that of a man who sacrifices his life in an attempt to save the child." From Freud's perspective, the first man would have suffered from psychological repression, probably originating from an Oedipus complex, whereas the second man had attained sublimation. From Adler's perspective, the first and second man suffered from feelings of inferiority and had to prove himself, which drove him to commit the crime or, in the second case, drove him to rescue the child. Popper was not able to find any counterexamples of human behavior in which the behavior could not be explained in the terms of Adler's or Freud's theory. Popper argued that the fact that the observation always fitted or confirmed the theory, rather than being its strength, was actually its weakness. In contrast, Popper gave the example of Einstein's gravitational theory, which predicted "light must be attracted by heavy bodies (such as the Sun), precisely as material bodies were attracted." Following from this, stars closer to the Sun would appear to have moved a small distance away from the Sun, and away from each other. This prediction was particularly striking to Popper because it involved considerable risk.
The brightness of the Sun prevented this effect from being observed under normal circumstances, so photographs had to be taken during an eclipse and compared to photographs taken at night. Popper states, "if observation shows that the predicted effect is definitely absent, then the theory is simply refuted." Popper summed up his criterion for the scientific status of a theory as depending on its falsifiability, refutability, or testability. Paul R. Thagard used astrology as a case study to distinguish science from pseudoscience and proposed principles and criteria to delineate them. First, astrology has not progressed in that it has not been updated nor
https://en.wikipedia.org/wiki/Pseudoscience
Central jet vetoes are powerful tools for reducing QCD backgrounds in measurements and searches for electroweak and colorless new physics processes in hadron collisions. In this letter, we report the key findings of a new philosophy for designing searches for such phenomena at hadron colliders, one designed and centered around a dynamical jet veto instead of a static veto applied independently of other selection criteria. Specifically, we investigate the theoretical and phenomenological consequences of setting the jet veto scale to the transverse momentum $(p_T)$ of the leading charged lepton $\ell$ in multi-lepton processes on an event-by-event basis. We consider the case of a TeV-scale heavy neutrino $N$ decaying to the trilepton final state and find the following: (i) perturbative uncertainties associated with the veto are greatly reduced due to tying the veto scale to the hard process scale; (ii) the signal efficiency for passing the veto jumps to $\gtrsim 95\%$ and exhibits little-to-no dependence on the neutrino mass scale; (iii) top quark and `fake' lepton rejection capabilities also improve compared to only vetoing heavy-flavor-tagged jets above a fixed $p_T$. This results in an increased sensitivity to active-sterile neutrino mixing by approximately an order of magnitude over the LHC's lifetime. For a Dirac neutrino with mass $m_N = 150-1000$ GeV and the representative active-sterile mixing hypothesis $\vert V_{e4}\vert = \vert V_{\tau 4}\vert$ with $\vert V_{\mu 4}\vert = 0$, we find that LHC experiments can probe $\vert V_{e4}\vert^2, \vert V_{\tau 4}\vert^2 \lesssim 6\times10^{-4} - 8\times10^{-3}$, surpassing the global upper limit for $m_N < 450$ GeV, with $\mathcal{L} = 3$ ab$^{-1}$ of data at $\sqrt{s} = 14$ TeV.
Due to the color structures of the heavy $N$ production mechanisms considered, we argue that our results hold broadly for other color-singlet processes.
arxiv:1805.09335
This paper concerns elliptic systems of $p$-Laplace type with complex-valued coefficients and source terms. We extend the real-valued theory of the elliptic $p$-Laplace equation to the complex-valued case. We establish the existence and uniqueness of solutions to the Dirichlet problem and prove the Schauder estimate in the case of Hölder continuous coefficients and source terms. We also consider families of coefficient functions parametrized by a complex variable and prove a differentiability result for the map taking the complex parameter to the corresponding solution.
arxiv:2503.18932
The SuperNEMO experiment is being designed to search for neutrinoless double beta decay to test whether neutrinos are Majorana particles. The experimental technique follows that of the currently running NEMO-3 experiment, which successfully combines tracking and calorimetry to measure the topology and energy of the final state electrons. The unique particle identification capabilities of SuperNEMO will be employed with about 100 kg of $^{82}$Se and will reach sensitivity to a half-life of about $2\times10^{26}$ years, which corresponds to Majorana neutrino masses of about 50 meV, depending on the calculated value of the nuclear matrix element. In this poster, the current status of the SuperNEMO project is presented.
arxiv:0909.3167
We present a unified method of construction of surfaces associated with Grassmannian sigma models, expressed in terms of an orthogonal projector. This description leads to compact formulae for structural equations of two-dimensional surfaces immersed in the su(N) algebra. In the special case of the CP^1 sigma model we obtain constant negative Gaussian curvature surfaces. As a consequence, this leads us to an explicit relation between the CP^1 sigma model and the sine-Gordon equation.
arxiv:math/0601302
(15)$ and $d_{\rm B}(2{\rm D}) = 1.7321(4)$. The estimates of the universal wrapping probabilities for the 3D Ising model and of the geometric critical exponents $d_{\rm min}$ and $d_{\rm B}$ either improve over the existing results or have not been reported yet.
arxiv:1811.03358
Constraints on the geometry of a static spherically symmetric black hole are obtained by requiring the spacetime curvature to be analytic at the event horizon. For a zero temperature black hole, further constraints are obtained by also requiring that the semiclassical trace equation be satisfied when conformally invariant fields are present. It is found that zero temperature black holes whose sizes lie within a certain range do not exist. The range depends on the numbers and types of conformally invariant quantized fields that are present.
arxiv:gr-qc/9707026
A scaling theory of long-wavelength electrostatic turbulence in a magnetised, weakly collisional plasma (e.g., ITG turbulence) is proposed, with account taken both of the nonlinear advection of the perturbed particle distribution by fluctuating E×B flows and of its phase mixing, which is caused by the streaming of the particles along the mean magnetic field and, in a linear problem, would lead to Landau damping. It is found that it is possible to construct a consistent theory in which very little free energy leaks into high velocity moments of the distribution function, rendering the turbulent cascade in the energetically relevant part of the wave-number space essentially fluid-like. The velocity-space spectra of free energy expressed in terms of Hermite-moment orders are steep power laws, and so the free-energy content of the phase space does not diverge at infinitesimal collisionality (while it does for a linear problem); collisional heating due to long-wavelength perturbations vanishes in this limit (also in contrast with the linear problem, in which it occurs at a finite rate equal to the Landau-damping rate). The ability of the free energy to stay in the low velocity moments of the distribution function is facilitated by the "anti-phase-mixing" effect, whose presence in the nonlinear system is due to the stochastic version of the plasma echo (the advecting velocity couples the phase-mixing and anti-phase-mixing perturbations). The partitioning of the wave-number space between the (energetically dominant) region where this is the case and the region where linear phase mixing wins its competition with nonlinear advection is governed by the "critical balance" between linear and nonlinear timescales (which for high Hermite moments splits into two thresholds, one demarcating the wave-number region where phase mixing predominates, the other where plasma echo does).
arxiv:1508.05988
The smallest known example of a family of modular categories that is not determined by its modular data is the rank 49 family $\mathcal{Z}(\text{Vec}_G^{\omega})$ for $G = \mathbb{Z}_{11} \rtimes \mathbb{Z}_{5}$. However, these categories can be distinguished with the addition of a matrix of invariants called the $W$-matrix, which contains intrinsic information about punctured $S$-matrices. Here we show that it is a common occurrence for knot and link invariants to carry more information than the modular data. We present the results of a systematic investigation of the invariants for small knots and links. We find many small knots and links whose invariants are complete invariants of the $\mathcal{Z}(\text{Vec}_G^{\omega})$ when $G = \mathbb{Z}_{11} \rtimes \mathbb{Z}_{5}$, including the $5_2$ knot.
arxiv:1806.02843
We demonstrate highly transparent silicon-vanadium and silicon-aluminum tunnel junctions with relatively low sub-gap leakage current and discuss how a trade-off typically encountered between transparency and leakage affects their refrigeration performance. We theoretically investigate cascaded superconducting tunnel junction refrigerators with two or more refrigeration stages. In particular, we develop an approximate method that takes into account self-heating effects but still allows us to optimize the cascade a single stage at a time. We design a cascade consisting of energy-efficient refrigeration stages, which makes cooling of, e.g., quantum devices from above 1 K to below 100 mK a realistic experimental target.
arxiv:2009.14166
We consider a model of active Brownian particles with velocity alignment in two spatial dimensions with passive and active fluctuations. Here, active fluctuations refer to purely non-equilibrium stochastic forces correlated with the heading of an individual active particle. In the simplest case studied here, they are assumed to be independent stochastic forces parallel (speed noise) and perpendicular (angular noise) to the velocity of the particle. On the other hand, passive fluctuations are defined by a noise vector independent of the direction of motion of a particle, and may account, for example, for thermal fluctuations. We derive a macroscopic description of the active Brownian particle gas with velocity-alignment interaction. We start from the individual-based description in terms of stochastic differential equations (Langevin equations) and derive equations of motion for the coarse-grained kinetic variables (density, velocity and temperature) via a moment expansion of the corresponding probability density function. We focus here in particular on the different impact of active and passive fluctuations on the onset of collective motion and show how active fluctuations in the active Brownian dynamics can change the phase-transition behaviour of the system. In particular, we show that active angular fluctuations lead to an earlier breakdown of collective motion and to the emergence of a new bistable regime in the mean-field case.
arxiv:1204.4304
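The interplay sketched above (alignment driving collective motion, angular noise destroying it) can be illustrated with a mean-field Euler-Maruyama toy: headings relax towards the direction of the mean velocity while angular noise diffuses them. The drift form, coupling constant and noise strengths below are assumptions for illustration, not the paper's Langevin model, which also includes speed dynamics and passive noise.

```python
import numpy as np

rng = np.random.default_rng(1)

def polarisation(n=200, steps=2000, dt=0.01, align=2.0, angular_noise=0.2):
    # Mean-field Euler-Maruyama sketch of heading dynamics with velocity
    # alignment and angular (active) noise:
    #   d(theta_i) = align * sin(phi - theta_i) dt + sqrt(2 D) dW_i,
    # where phi is the direction of the instantaneous mean velocity.
    # Returns the polar order parameter |<(cos theta, sin theta)>|.
    theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
    for _ in range(steps):
        mean_dir = np.arctan2(np.sin(theta).mean(), np.cos(theta).mean())
        drift = align * np.sin(mean_dir - theta)
        theta += drift * dt + np.sqrt(2.0 * angular_noise * dt) * rng.normal(size=n)
    return np.hypot(np.cos(theta).mean(), np.sin(theta).mean())

ordered = polarisation(angular_noise=0.05)   # weak angular noise: high order
disordered = polarisation(angular_noise=5.0)  # strong angular noise: low order
```

Raising the angular-noise strength past the alignment strength suppresses the order parameter, the qualitative breakdown of collective motion discussed in the abstract.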
Systematic studies of the structure-property relationship of the unpoled and poled states of 0.67BiFeO3-0.33BaTiO3 (0.67BF-0.33BT) were conducted to understand the origin of the morphotropic phase boundary (MPB) in BF-BT. A typical relaxor-type dielectric anomaly was observed ($T_f \sim 627$ K). The remnant polarization ($P_r$) and the maximum value of the electro-strain ($S_m$) increase clearly during heating ($P_r \sim 40$ μC/cm²; $S_m = 0.191\%$ under 40 kV/cm at 453 K). The first-cycle electro-strain loops indicate the difference in the polar state between 0.67BF-0.33BT and 0.94BiNaTiO3-0.06BaTiO3. Both the unpoled and poled samples have similar frequency dispersion behaviours. Even in the poled samples, the transition between the ergodic relaxor state and the ferroelectric-like state does not involve a clear dielectric anomaly. Analyses based on the Rietveld refinement of XRD patterns, bright-field images and selected-area electron diffraction (SAED) demonstrated that the formation of long-range ferroelectric domains was difficult under the poling field.
arxiv:2002.05312
We show that the decay rates of the Higgs boson to a pseudoscalar quarkonium and a pair of leptons, $h \to P \ell^+ \ell^-$ ($P \in \{\eta_c, \eta_b\}$), can be substantially enhanced in a scenario with two Higgs doublets with a softly broken $\mathbb{Z}_2$ symmetry (2HDM) when the CP-odd Higgs $A$ is light, i.e. $m_A \lesssim m_h$. Depending on the type of 2HDM, the enhancement of $\mathcal{B}(h \to \eta_{c,b} \tau^+ \tau^-)$ with respect to its Standard Model value can be an order of magnitude larger, i.e. $\mathcal{O}(10^{-6} \div 10^{-5})$. The decays $h \to P \ell^+ \ell^-$ could therefore provide an efficient channel to investigate the presence of a light CP-odd Higgs $A$ and help to disentangle among various 2HDM scenarios.
arxiv:1705.01112
In this paper we give several independent extensions of the Karlsson-Minton summation formula for the generalized hypergeometric function with integral parameter differences. In particular, we examine the "prohibited" values for the integer top parameter in Minton's formula, extend the single unit negative difference in Karlsson's formula to a finite number of integer negative differences, and establish known and new summation and transformation formulas when the unit negative difference is allowed to take arbitrary values. We also present a recurrence relation reducing the case of an integer negative difference to the Karlsson-Minton case of a unit negative difference. Further, we explore some alternative forms of the first Miller-Paris transformation, including one expressed in terms of the Meijer-Nørlund $G$-function.
arxiv:1806.03434
Hydrodynamic models of RR Lyrae pulsation display very rich behaviour. Contrary to earlier expectations, high-order resonances play a crucial role in the nonlinear dynamics of the interacting modes. Chaotic attractors can be found at different time scales: both in the pulsation itself and in the amplitude equations shaping the possible modulation of the oscillations. Although there is no one-to-one connection between the nonlinear features found in the numerical models and the observed behaviour, the richness of the phenomena found suggests that the interaction of modes should be taken seriously in the study of the still unsolved puzzle of the Blazhko effect. One of the main lessons of this complex system is that we should rethink the simple interpretation of the observed effect of resonances.
arxiv:1601.06625
Direct shooting is an efficient method for solving numerical optimal control problems. It utilizes a Runge-Kutta scheme to discretize a continuous-time optimal control problem, making the problem solvable by nonlinear programming solvers. However, conventional direct shooting raises a contradictory-dynamics issue when using an augmented state to handle high-order systems. This paper fills the research gap by considering the direct shooting method for high-order systems. We derive modified Euler and Runge-Kutta-4 methods to transcribe the system dynamics constraint directly. Additionally, we provide global error upper bounds for our proposed methods. A set of benchmark optimal control problems shows that our methods provide more accurate solutions than existing approaches.
arxiv:2403.06167
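The transcription step mentioned above can be sketched with the classical Runge-Kutta-4 scheme: the continuous dynamics $x' = f(x, u)$ become a discrete constraint $x_{k+1} = F(x_k, u_k)$ that an NLP solver can enforce. This is the conventional RK4 transcription on a second-order system written in first-order form, shown only as context; it is not the authors' modified methods.

```python
import numpy as np

def rk4_step(f, x, u, dt):
    # Classical Runge-Kutta-4 step that transcribes x' = f(x, u) into the
    # discrete-time map x_{k+1} = F(x_k, u_k), with the control held
    # constant over the step (zero-order hold).
    k1 = f(x, u)
    k2 = f(x + 0.5 * dt * k1, u)
    k3 = f(x + 0.5 * dt * k2, u)
    k4 = f(x + dt * k3, u)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def double_integrator(x, u):
    # Second-order system in first-order state-space form:
    # position' = velocity, velocity' = u.
    return np.array([x[1], u])

# Shooting rollout: propagate the state through a fixed control sequence.
x = np.array([0.0, 0.0])
for u in [1.0, 1.0, -1.0, -1.0]:
    x = rk4_step(double_integrator, x, u, dt=0.5)
# For this piecewise-constant control, RK4 integrates the double
# integrator exactly: the state returns to velocity 0 at position 1.
```

In a direct-shooting NLP, each such step becomes an equality constraint linking consecutive decision variables $(x_k, u_k, x_{k+1})$.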
Flexible manufacturing processes demand robots to easily adapt to changes in the environment and interact with humans. In such dynamic scenarios, robotic tasks may be programmed through learning-from-demonstration (LfD) approaches, where a nominal plan of the task is learned by the robot. However, the learned plan may need to be adapted in order to fulfill additional requirements or overcome unexpected environment changes. When the required adaptation occurs at the end-effector trajectory level, a human operator may want to intuitively show the robot the desired changes by physically interacting with it. In this scenario, the robot needs to understand the human's intended changes from noisy haptic data, quickly adapt accordingly, and execute the nominal task plan when no further adaptation is needed. This paper addresses the aforementioned challenges by leveraging LfD and Bayesian optimization to endow the robot with data-efficient adaptation capabilities. Our approach exploits the sensed interaction forces to guide the robot's adaptation, and speeds up the optimization process by defining local search spaces extracted from the learned task model. We show how our framework quickly adapts the learned spatial-temporal patterns of the task, leading to deformed trajectory distributions that are consistent with the nominal plan and the changes introduced by the human.
arxiv:1908.07263
Majorana 'zero-modes' are expected to be immune to decoherence. The primary method for their characterization in a 1D topological superconductor is measuring the tunneling current into the edge of the superconductor. Presently, the hallmark of a localized Majorana edge state is an emergent quantized zero-bias conductance peak (ZBCP). However, such a conductance peak can also result from other mechanisms, e.g., crossing (and sticking) of two branches of a standard Andreev bound state, or a soft potential at the edge of the superconductor. Since the emergence of a 'Majorana ZBCP' must be accompanied by the opening of a topological gap in the bulk, we performed two simultaneous tunneling measurements: one in the bulk and another at the edge of the 1D superconductor. Measurements were performed with an InAs nanowire coated in situ with epitaxial aluminum. For a particular gate tuning of the chemical potential in the wire and a Zeeman field parallel to the wire, we observed a closing of the superconducting bulk gap followed by its reopening, concomitant with the appearance of a ZBCP at the edge. We note that a ZBCP could also be observed with different tuning parameters without an observed reopening of the bulk gap. This demonstrates the importance of simultaneously probing the bulk and the edge when searching for a Majorana edge state.
arxiv:1807.06632
The study of the Higgs boson properties is one of the main tasks of contemporary high-energy physics. Among Higgs properties, its interaction with gluons is interesting since it can be facilitated by yet unknown elementary particles. One of the major sources of uncertainty in the theoretical description of the $ggH$ coupling originates from mixed QCD-electroweak contributions. The NLO QCD corrections to these contributions were evaluated in the approximation where the electroweak boson masses were considered to be significantly larger than the mass of the Higgs boson, and it is desirable to compute these corrections for the physical masses of the gauge bosons and the Higgs boson. We present a major step towards this goal: we first describe the analytic evaluation of the NLO mixed QCD-EW three-loop virtual corrections to $gg \to H$, and then their implementation in the evaluation of the total cross section for $gg \to H$ in the soft-gluon approximation for the real corrections.
arxiv:1809.02450
The direct determination of the excited-level density and of the radiative strength functions of the corresponding exciting gamma transitions is impossible for the larger part of stable and long-lived radioactive target nuclei. This circumstance follows uniquely from the fact that the level spacing is much smaller than the resolution of existing spectrometers for gamma rays and charged particles. In this situation, these nuclear parameters can only be extracted by fitting them to the most probable values that reproduce the spectra and cross sections measured in nuclear reactions. This inverse problem of mathematical analysis is by its nature fundamentally ambiguous. Moreover, the systems of equations connecting the number of excitable levels and the probability of the emission of charged particles are usually set up within the framework of some assumptions about the mechanism of the nuclear reaction and the factors determining the dynamics of the studied process. These parameters can be partially verified by calculating total gamma spectra for different parameter sets. In particular, the results of this analysis show that a change in the structure of the excited levels must most likely be accompanied by a change in the form of the energy dependence of the radiative strength functions up to the neutron binding energy. It is not possible to exclude the possibility that both the radiative and the neutron strength functions depend on the structure of the neutron resonance, and likewise at higher excitation energies.
arxiv:0709.4302
We consider isolated compact remnants (ICoRs), i.e. neutron stars and black holes that do not reside in binary systems and therefore cannot be detected as X-ray binaries. ICoRs may represent $\sim 5$ percent of the stellar mass budget of the Galaxy, but they are very hard to detect. Here we explore the possibility of using microlensing to identify ICoRs. In a previous paper we described a simulation of neutron star evolution in phase space in the Galaxy, taking into account the distribution of the progenitors and the kick at formation. Here we first reconsider the evolution and distribution of neutron stars and black holes, adding a bulge component. From the new distributions we calculate the microlensing optical depth, event rate and distribution of event time scales, comparing and contrasting the case of ICoRs and "normal stars". We find that the contribution of remnants to the optical depth is slightly lower than without kinematics, owing to evaporation from the Galaxy. On the other hand, their relative contribution to the rate of events is a factor $\sim 5$ higher. In all, $\sim 6-7$ percent of the events are likely related to ICoRs. In particular, $\sim 30-40$ percent of the events with duration $> 100$ days are possibly related to black holes. It seems therefore that microlensing observations are a suitable tool to probe the population of Galactic ICoRs.
arxiv:1009.0005
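the long event durations attributed to black - hole lenses above follow from the scaling of the einstein - radius crossing time with lens mass, $ t _ e = r _ e / v _ \ perp $ with $ r _ e ^ 2 = 4 g m \, d _ l ( d _ s - d _ l ) / ( c ^ 2 d _ s ) $. the sketch below is a minimal illustration of this standard formula; the lens distances and transverse velocity are assumed values for a bulge - ward line of sight, not numbers from the abstract.

```python
import math

# physical constants (SI)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
KPC = 3.086e19       # kiloparsec in metres

def einstein_timescale(m_lens_msun, d_l_kpc, d_s_kpc, v_perp_kms):
    """Einstein-radius crossing time t_E = R_E / v_perp for a point lens."""
    m = m_lens_msun * M_SUN
    d_l, d_s = d_l_kpc * KPC, d_s_kpc * KPC
    # Einstein radius in the lens plane
    r_e = math.sqrt(4.0 * G * m * d_l * (d_s - d_l) / (C**2 * d_s))
    t_e_seconds = r_e / (v_perp_kms * 1e3)
    return t_e_seconds / 86400.0  # convert to days

# a ~10 solar-mass black-hole lens halfway to a bulge source produces
# much longer events than a ~0.3 solar-mass stellar lens, since t_E
# scales as the square root of the lens mass
t_bh = einstein_timescale(10.0, 4.0, 8.0, 100.0)    # ~ a few hundred days
t_star = einstein_timescale(0.3, 4.0, 8.0, 100.0)   # ~ tens of days
```

with these assumed distances the black - hole timescale already exceeds the $ 100 $ - day threshold quoted in the abstract, which is why long events preferentially select black - hole lenses.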
a few days ago v. d. efros submitted a preprint to nucl - th containing criticisms of our recent research activity on bound and scattering states of a = 3, 4 nucleons. as a consequence, we are forced to examine the essence of the controversy and of the comments contained in that preprint. these are the motivations of the present paper.
arxiv:nucl-th/9804073
we consider noncommutative gravity on a space with canonical noncommutativity that is based on the commutative macdowell - mansouri action. gravity is treated as a gauge theory of the noncommutative $ so ( 1, 3 ) _ \ star $ group, and the seiberg - witten ( sw ) map is used to express noncommutative fields in terms of the corresponding commutative fields. in the commutative limit the noncommutative action reduces to the einstein - hilbert action plus the cosmological term and the topological gauss - bonnet term. after the sw expansion in the noncommutativity parameter, the first - order correction to the action, as expected, vanishes. we calculate the second - order correction and write it in a manifestly gauge covariant way.
arxiv:1207.4675
we perform a general analysis of the cosmological viability of geometric inflation. we show that the evolution of the universe, from inflation to the present day, can be described by the addition of an infinite tower of curvature invariants to the hilbert - einstein action. the main epochs of the universe can be reproduced : inflation, big bang nucleosynthesis, and late - time acceleration driven by the cosmological constant. the slow - roll condition is a robust prediction of the theory. inflation possesses a graceful exit, with a sufficient number of $ e $ - folds between the limit imposed by the planck density and the end of the exponential expansion to solve the horizon problem and to account for the absence of topological defects. we also provide some scenarios in which the energy scale of the theory can be calibrated.
arxiv:2109.11681
let $ m $ be any $ n $ - dimensional smooth manifold and $ pm $ the space of all smooth paths in $ m $. we show that $ pm $ is a smooth manifold modelled on a complete normable space. we discuss many geometric structures on path spaces and their relation to the ambient space.
arxiv:1108.2101
3. 9 $ km s $ ^ { - 1 } $, and found a spin - dependent velocity modulation as well. the former is in perfect agreement with the mean velocity amplitude obtained by other researchers, confirming the published component masses $ m _ 1 \ simeq0. 79 m _ \ odot $ and $ m _ 2 \ simeq0. 11 m _ \ odot $.
arxiv:2402.13834
we investigate the production and detection prospects of quintuplet heavy leptons at the lhc in the context of a new model proposed as a viable and testable solution to the neutrino mass problem. we classify the signals and carry out a full simulation of the signals and the relevant backgrounds at the 14 tev lhc. after applying suitable kinematic cuts, the background events are substantially suppressed, and the signals of the heavy leptons might be detected at the 14 tev lhc.
arxiv:1502.02801
aerogel and water cerenkov detectors were employed to tag kaons for a lambda hypernuclear spectroscopy experiment which used the ( e, e ' k + ) reaction in experimental hall c at jefferson lab ( jlab e05 - 115 ). fringe fields from the kaon spectrometer magnet yielded ~ 5 gauss at the photomultiplier tubes ( pmts ) of these detectors, which could not easily be shielded. as this field results in a lowered kaon detection efficiency, we implemented a bucking coil on each photomultiplier tube to actively cancel the magnetic field, thus maximizing the kaon detection efficiency.
arxiv:1307.0896
we unify the resource - theoretic and the cohomological perspective on quantum contextuality. at the center of this unification stands the notion of the contextual fraction. for both symmetry and parity based contextuality proofs, we establish cohomological invariants which are witnesses of state - dependent contextuality. we provide two results invoking the contextual fraction, namely ( i ) refinements of logical contextuality inequalities, and ( ii ) upper bounds on the classical cost of boolean function evaluation, given the contextual fraction of the corresponding measurement - based quantum computation.
arxiv:1806.04657
issues of resonance that appear in non - standard random walk models are discussed. the first walk is called the repulsive delayed random walk, which is described in the context of a stick balancing experiment. it is shown that a type of " resonant " effect takes place : the stability of the fixed point is improved when the bias and the delay are suitably tuned. we also briefly discuss a second model, called the sticky random walk, which is introduced to model string entanglement. peculiar resonant effects with respect to these random walks are presented.
arxiv:cond-mat/0605682
optical properties of atomically thin transition metal dichalcogenides are controlled by robust excitons characterized by a very large oscillator strength. encapsulation of monolayers such as mose $ _ 2 $ in hexagonal boron nitride ( hbn ) yields narrow optical transitions approaching the homogeneous exciton linewidth. we demonstrate that the exciton radiative rate in these van der waals heterostructures can be tailored by a simple change of the hbn encapsulation layer thickness as a consequence of the purcell effect. time - resolved photoluminescence measurements, together with cw reflectivity and photoluminescence experiments, show that the neutral exciton spontaneous emission time can be tuned by one order of magnitude depending on the thickness of the surrounding hbn layers. the inhibition of the radiative recombination can yield spontaneous emission times up to $ 10 $ ~ ps. these results are in very good agreement with the calculated recombination rate in the weak exciton - photon coupling regime. the analysis shows that we are also able to observe a sizeable enhancement of the exciton radiative decay rate. understanding the role of these electrodynamical effects allows us to elucidate the complex dynamics of relaxation and recombination for both neutral and charged excitons.
arxiv:1902.00670
we constrain blastwave parameters and the circumburst media of a subsample of ten bepposax gamma - ray bursts. for this sample we derive the values of the injected electron energy distribution index, p, and the density structure index of the circumburst medium, k, from simultaneous spectral fits to their x - ray, optical and nir afterglow data. the spectral fits have been done in count space and include the effects of metallicity, and are compared with the previously reported optical and x - ray temporal behaviour. using the blastwave model and some assumptions which include on - axis viewing and standard jet structure, constant blastwave energy and no evolution of the microphysical parameters, we find a mean value of p for the sample as a whole of 2. 04 + 0. 02 / - 0. 03. a statistical analysis of the distribution demonstrates that the p values in this sample are inconsistent with a single universal value for p at the 3 - sigma level or greater, which has significant implications for particle acceleration models. this approach provides us with a measured distribution of circumburst density structures rather than considering only the cases of k = 0 ( homogeneous ) and k = 2 ( wind - like ). we find five grbs for which k can be well constrained, and in four of these cases the circumburst medium is clearly wind - like. the fifth source has a value of 0 < k < 1, consistent with a homogeneous circumburst medium.
arxiv:0704.3718
a model based on a convolutional neural network ( cnn ) is designed to reconstruct the three - dimensional turbulent flows beneath a free surface using surface measurements, including the surface elevation and surface velocity. trained on datasets obtained from the direct numerical simulation ( dns ) of turbulent open - channel flows with a deformable free surface, the proposed model can accurately reconstruct the near - surface flow field and capture the characteristic large - scale flow structures away from the surface. the reconstruction performance of the model, measured by metrics such as the normalised mean squared reconstruction errors and scale - specific errors, is considerably better than that of the traditional linear stochastic estimation ( lse ) method. we further analyse the saliency maps of the cnn model and the kernels of the lse model and obtain insights into how the two models utilise surface features to reconstruct subsurface flows. the importance of different surface variables is analysed based on the saliency map of the cnn, which reveals knowledge about the surface - subsurface relations. the cnn is also shown to have a good generalization capability with respect to the froude number : a model trained for a flow with a high froude number can be applied to predict flows with lower froude numbers. the results presented in this work indicate that the cnn is effective at detecting subsurface flow structures, and that, by interpreting the surface - subsurface relations underlying the reconstruction model, it can be a promising tool for assisting with the physical understanding of free - surface turbulence.
arxiv:2301.11710
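the lse baseline against which the cnn is compared amounts to a least - squares linear map from surface measurements to the subsurface field. the sketch below illustrates that idea on synthetic stand - in data ( no dns data are involved; all array sizes and noise levels are made up for the illustration ).

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic stand-in data: snapshots of surface measurements at n_surf
# points and a subsurface field at n_sub points, linearly related plus noise
n_samples, n_surf, n_sub = 500, 30, 60
A_true = rng.normal(size=(n_sub, n_surf))
surface = rng.normal(size=(n_samples, n_surf))
subsurface = surface @ A_true.T + 0.1 * rng.normal(size=(n_samples, n_sub))

# LSE: fit the linear kernel L so that surface @ L approximates subsurface
# in the least-squares sense over the training snapshots
L, *_ = np.linalg.lstsq(surface, subsurface, rcond=None)

# reconstruct a noise-free held-out snapshot and measure the normalised MSE
test_surface = rng.normal(size=(1, n_surf))
test_subsurface = test_surface @ A_true.T
recon = test_surface @ L
nmse = np.mean((recon - test_subsurface) ** 2) / np.mean(test_subsurface ** 2)
```

on truly linear data the lse kernel is near optimal; the paper's point is that real surface - subsurface relations are nonlinear, which is where the cnn gains its advantage.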
precise measurements of the branching ratios for the flavor - changing neutral current decays $ k \ to \ pi \ nu \ bar { \ nu } $ can provide unique constraints on ckm unitarity and, potentially, evidence for new physics. it is important to measure both decay modes, $ k ^ + \ to \ pi ^ + \ nu \ bar { \ nu } $ and $ k _ l \ to \ pi ^ 0 \ nu \ bar { \ nu } $, since different new physics models affect the rates for each channel differently. the na62 experiment at the cern sps will measure the br for the charged channel to better than 20 %. the br for the neutral channel has never been measured. we are designing the klever experiment to measure br ( $ k _ l \ to \ pi ^ 0 \ nu \ bar { \ nu } $ ) to $ \ sim $ 20 % using a high - energy neutral beam at the cern sps. the boost from the high - energy beam facilitates the rejection of background channels such as $ k _ l \ to \ pi ^ 0 \ pi ^ 0 $ by detection of the additional photons in the final state. on the other hand, the layout poses particular challenges for the design of the small - angle vetoes, which must reject photons from $ k _ l $ decays escaping through the beam exit amid an intense background from soft photons and neutrons in the beam. we present findings from our design studies, with an emphasis on the challenges faced and the potential sensitivity for the measurement of br ( $ k _ l \ to \ pi ^ 0 \ nu \ bar { \ nu } $ ).
arxiv:1912.10037
federated learning seeks to foster collaboration among distributed clients while preserving the privacy of their local data. traditionally, federated learning methods assume a fixed setting in which client data and learning objectives remain constant. however, in real - world scenarios, new clients may join, and existing clients may expand the segmentation label set as task requirements evolve. in such a dynamic federated analysis setup, the conventional federated communication strategy of model aggregation per communication round is suboptimal. as new clients join, this strategy requires retraining, linearly increasing communication and computation overhead. it also imposes requirements for synchronized communication, which is difficult to achieve among distributed clients. in this paper, we propose a federated continual learning strategy that employs a one - time model aggregation at the server through multi - model distillation. this approach builds and updates the global model while eliminating the need for frequent server communication. when integrating new data streams or onboarding new clients, this approach efficiently reuses previous client models, avoiding the need to retrain the global model across the entire federation. by minimizing communication load and bypassing the need to put unchanged clients online, our approach relaxes synchronization requirements among clients, providing an efficient and scalable federated analysis framework suited for real - world applications. using multi - class 3d abdominal ct segmentation as an application task, we demonstrate the effectiveness of the proposed approach.
arxiv:2503.15414
as control - flow protection gets widely deployed, it is difficult for attackers to corrupt control data and achieve control - flow hijacking. instead, data - oriented attacks, which manipulate non - control data, have been demonstrated to be feasible and powerful. in data - oriented attacks, a fundamental step is to identify non - control, security - critical data. however, critical data identification processes have not been scalable in previous works, because they mainly rely on tedious human effort to identify critical data. to address this issue, we propose a novel approach that combines traditional program analysis with deep learning. at a high level, by examining how analysts identify critical data, we first propose dynamic analysis algorithms to identify the program semantics ( and features ) that are correlated with the impact of critical data. then, motivated by the unique challenges in the critical data identification task, we formalize the distinguishing features and use customized program dependence graphs ( pdgs ) to embed the features. different from previous works using deep learning to learn basic program semantics, this paper adopts a special neural network architecture that can capture the long dependency paths ( in the pdg ) through which a critical variable propagates its impact. we have implemented a fully automatic toolchain and conducted comprehensive evaluations. according to the evaluations, our model can achieve 90 % accuracy. the toolchain uncovers 80 potential critical variables in google fuzzbench. in addition, we demonstrate the harmfulness of exploits using the identified critical variables by simulating 7 data - oriented attacks through gdb.
arxiv:2108.12071
we compute the tail contributions to the gravitational - wave mode amplitudes for compact binaries in eccentric orbits at the third post - newtonian order of general relativity. we combine them with the already available instantaneous pieces and include the post - adiabatic corrections required to fully account for the effects of radiation - reaction forces on the motion. we compare the resulting waveform in the small eccentricity limit to the circular one, finding perfect agreement.
arxiv:1904.11814
after completing their undergraduate studies, many computer science ( cs ) students apply for competitive graduate programs in north america. their long - term goal is often to be hired by one of the big five tech companies or to become a faculty member. therefore, being aware of the role of admission criteria may help them choose the best path towards their goals. in this paper, we analyze the influence of students ' previous universities on their chances of being accepted to prestigious north american universities and returning to academia as professors in the future. our findings demonstrate that the ranking of their prior universities is a significant factor in achieving their goals. we then illustrate that there is a bias in the undergraduate institutions of students admitted to the top 25 computer science programs. finally, we employ machine learning models to forecast the success of professors at these universities. we achieved an rmse of 7. 85 for this prediction task.
arxiv:2311.02476
we consider space - efficient implementations of some classical applications of dfs, including the problems of testing biconnectivity and $ 2 $ - edge connectivity, finding cut vertices and cut edges, and computing a chain decomposition and an $ st $ - numbering of a given undirected graph $ g $ on $ n $ vertices and $ m $ edges. classical algorithms for them typically use dfs and some $ \ omega ( \ lg n ) $ bits \ footnote { we use $ \ lg $ to denote logarithm to the base $ 2 $. } of information at each vertex. building on a recent $ o ( n ) $ - bits implementation of dfs due to elmasry et al. ( stacs 2015 ), we provide $ o ( n ) $ - bit implementations for all these applications of dfs. our algorithms take $ o ( m \ lg ^ c n \ lg \ lg n ) $ time for some small constant $ c $ ( where $ c \ leq 2 $ ). central to our implementation is a succinct representation of the dfs tree and a space - efficient partitioning of the dfs tree into connected subtrees, which may be of independent interest for designing other space - efficient graph algorithms.
arxiv:1606.08645
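for reference, the classical ( non - space - efficient ) dfs computation of cut vertices that the paper re - implements in $ o ( n ) $ bits is the textbook hopcroft - tarjan low - point method, sketched below; this is the standard algorithm, not the authors' succinct version.

```python
def cut_vertices(adj):
    """Articulation points of an undirected graph via one DFS
    (Hopcroft-Tarjan low-point computation). adj maps vertex -> neighbours."""
    n = len(adj)
    disc = [0] * n      # discovery times, 0 = unvisited
    low = [0] * n       # low-point: earliest discovery time reachable
    cuts = set()
    timer = 1

    def dfs(u, parent):
        nonlocal timer
        disc[u] = low[u] = timer
        timer += 1
        children = 0
        for v in adj[u]:
            if not disc[v]:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                # a non-root u is a cut vertex if some child subtree
                # cannot reach above u except through u itself
                if parent != -1 and low[v] >= disc[u]:
                    cuts.add(u)
            elif v != parent:
                low[u] = min(low[u], disc[v])
        # a DFS root is a cut vertex iff it has at least two children
        if parent == -1 and children >= 2:
            cuts.add(u)

    for s in range(n):
        if not disc[s]:
            dfs(s, -1)
    return cuts

# two triangles sharing vertex 2: vertex 2 is the only cut vertex
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3, 4], 3: [2, 4], 4: [2, 3]}
```

the classical version above stores a $ \ theta ( \ lg n ) $ - bit discovery time and low - point per vertex, which is exactly the per - vertex cost the paper's $ o ( n ) $ - bit data structures eliminate.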
we consider an asexual biological population of constant size $ n $ evolving in discrete time under the influence of selection and mutation. beneficial mutations appear at rate $ u $ and their selective effects $ s $ are drawn from a distribution $ g ( s ) $. after introducing the required models and concepts of mathematical population genetics, we review different approaches to computing the speed of logarithmic fitness increase as a function of $ n $, $ u $ and $ g ( s ) $. we present an exact solution of the infinite population size limit and provide an estimate of the population size beyond which it is valid. we then discuss approximate approaches to the finite population problem, distinguishing between the case of a single selection coefficient, $ g ( s ) = \ delta ( s - s _ b ) $, and a continuous distribution of selection coefficients. analytic estimates for the speed are compared to numerical simulations up to population sizes of order $ 10 ^ { 300 } $.
arxiv:0910.0219
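the single - selection - coefficient case $ g ( s ) = \ delta ( s - s _ b ) $ discussed above can be simulated directly with a wright - fisher model; the sketch below is a minimal illustration, with arbitrary parameter values that are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def adaptation_speed(n, u, s_b, generations=2000):
    """Mean rate of increase of log-fitness in a Wright-Fisher population
    of constant size n, with beneficial mutations of fixed effect s_b
    appearing at rate u per individual per generation."""
    # k[i] = number of beneficial mutations carried by individual i
    k = np.zeros(n, dtype=int)
    for _ in range(generations):
        # selection + drift: resample with weights proportional to (1+s_b)^k
        w = (1.0 + s_b) ** k
        k = rng.choice(k, size=n, p=w / w.sum())
        # mutation: Poisson number of new beneficial mutations per individual
        k = k + rng.poisson(u, size=n)
    # speed = increase of mean log-fitness per generation
    return np.mean(k) * np.log(1.0 + s_b) / generations

v = adaptation_speed(n=1000, u=1e-3, s_b=0.05)
```

with $ n u = 1 $ this toy run sits in the concurrent - mutations regime, where clonal interference makes the speed grow only logarithmically with $ n $, the behaviour the analytic estimates in the paper quantify.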
it is well known that load balancing and low delivery communication cost are two critical issues in mapping requests to servers in content delivery networks ( cdns ). however, the trade - off between these two performance metrics has not yet been quantitatively investigated in the design of efficient request mapping schemes. in this work, we formalize this trade - off through a stochastic optimization problem. while the solutions to the problem in the extreme cases of minimum communication cost and optimum load balancing can be derived in closed form, the general solution is hard to obtain. thus we propose three heuristic mapping schemes and compare their trade - off performance through extensive simulations. our simulation results show that at the expense of a high query cost, we can achieve a good trade - off curve. moreover, by benefiting from the power of multiple choices phenomenon, we can achieve almost the same performance with a much lower query cost. finally, we can handle requests with different delay requirements at the cost of degrading network performance.
arxiv:1610.04513
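the " power of multiple choices " phenomenon mentioned above can be illustrated with a minimal balls - into - bins sketch; this is a toy stand - in for the proposed mapping schemes, not the paper's heuristics, and the request and server counts are arbitrary.

```python
import random

random.seed(0)

def map_requests(n_requests, n_servers, d):
    """Assign each request to the least-loaded of d uniformly sampled
    candidate servers (d = 1 is purely random mapping; d = 2 already
    yields the 'power of two choices' improvement)."""
    load = [0] * n_servers
    for _ in range(n_requests):
        candidates = random.sample(range(n_servers), d)
        best = min(candidates, key=lambda s: load[s])
        load[best] += 1
    return max(load)

# with n requests on n servers, the expected max load drops from
# ~ ln n / ln ln n for one choice to ~ ln ln n for two choices
m1 = map_requests(10_000, 10_000, 1)
m2 = map_requests(10_000, 10_000, 2)
```

each extra choice costs an extra query, which is exactly the query - cost / balance trade - off the abstract describes: two probes buy an exponential improvement in the worst - case load.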
by numerically inverting the tolman - oppenheimer - volkov ( tov ) equation using an explicitly isospin - dependent parametric equation of state ( eos ) of dense neutron - rich nucleonic matter, a restricted eos parameter space is established using observational constraints on the radius, maximum mass, tidal polarizability and causality condition of neutron stars ( nss ). the constraining band obtained for the pressure as a function of energy ( baryon ) density is in good agreement with that extracted recently by the ligo + virgo collaborations from their improved analyses of the ns tidal polarizability in gw170817. rather robust upper and lower boundaries on nuclear symmetry energies are extracted from the observational constraints up to about twice the saturation density $ \ rho _ 0 $ of nuclear matter. more quantitatively, the symmetry energy at $ 2 \ rho _ 0 $ is constrained to $ e _ { \ rm { sym } } ( 2 \ rho _ 0 ) = 46. 9 \ pm10. 1 $ mev, excluding many existing theoretical predictions scattered between $ e _ { \ rm { sym } } ( 2 \ rho _ 0 ) = 15 $ and 100 mev. moreover, by studying variations of the causality surface where the speed of sound equals that of light at the central densities of the most massive neutron stars within the restricted eos parameter space, the absolutely maximum mass of neutron stars is found to be 2. 40 m $ _ { \ odot } $, approximately independent of the eoss used. this limiting mass is consistent with the findings of several recent analyses and numerical general relativity simulations about the maximum mass of the possible super - massive remnant produced in the immediate aftermath of gw170817.
arxiv:1807.07698
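for concreteness, a toy version of the tov integration underlying the analysis above, using a simple $ \ gamma = 2 $ polytrope in geometrized units ( $ g = c = m _ \ odot = 1 $ ). the paper's isospin - dependent parametric eos is not reproduced; the constants below are standard benchmark values commonly used to produce a roughly $ 1. 4 \, m _ \ odot $ model, not the paper's.

```python
import math

# polytropic EOS P = K rho^Gamma; the benchmark pair K = 100, Gamma = 2
# with central rest-mass density rho_c = 1.28e-3 (geometrized units)
K, GAMMA = 100.0, 2.0

def eos(rho):
    p = K * rho**GAMMA
    eps = rho + p / (GAMMA - 1.0)   # energy density incl. internal energy
    return p, eps

def rho_of_p(p):
    return (max(p, 0.0) / K) ** (1.0 / GAMMA)

def tov_mass_radius(rho_c, dr=1e-3):
    """Forward-Euler integration of dP/dr and dm/dr out to the surface."""
    r = dr
    p, eps = eos(rho_c)
    m = (4.0 / 3.0) * math.pi * r**3 * eps
    while p > 1e-12:
        rho = rho_of_p(p)
        _, eps = eos(rho)
        # TOV structure equations in geometrized units
        dpdr = -(eps + p) * (m + 4.0 * math.pi * r**3 * p) / (r * (r - 2.0 * m))
        dmdr = 4.0 * math.pi * r**2 * eps
        p += dpdr * dr
        m += dmdr * dr
        r += dr
    return m, r * 1.4766   # mass in M_sun, radius in km (1 unit ~ 1.4766 km)

mass, radius_km = tov_mass_radius(rho_c=1.28e-3)
```

the paper's procedure repeats such an integration over a grid of eos parameters and central densities, keeping only the parameter combinations whose mass - radius curves satisfy the observational and causality constraints.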
this paper aims to propose a new solution for failure recovery ( dead - ends ) in vehicle - to - vehicle ( v2v ) communications through lte - assisted device - to - device ( d2d ) communications. based on the enhanced networking capabilities offered by the intelligent transportation systems ( its ) architecture, our solution can efficiently assist v2v communications in failure recovery situations. we also derive an analytical model to evaluate generic v2v routing failure recovery. moreover, the proposed hybrid model is simulated and compared to the generic model under different constraints corresponding to the worst and best cases of d2d discovery and communication. according to our comparison and simulation results, the hybrid model decreases the delay for alarm message propagation to the destination ( typically the traffic control center, tcc ) through the road side unit ( rsu ).
arxiv:1502.01496
bulk rutile ruo $ _ 2 $ has long been considered a pauli paramagnet. here we report that ruo $ _ 2 $ exhibits a hitherto undetected lattice distortion below approximately 900 k. the distortion is accompanied by antiferromagnetic order up to at least 300 k with a small room - temperature magnetic moment of approximately 0. 05 $ \ mu _ b $, as evidenced by polarized neutron diffraction. density functional theory plus $ u $ ( dft + $ u $ ) calculations indicate that antiferromagnetism is favored even for small values of the hubbard $ u $ of the order of 1 ev. the antiferromagnetism may be traced to a fermi surface instability, lifting the band degeneracy imposed by the rutile crystal field. the combination of a high n \ ' eel temperature and small itinerant moments makes ruo $ _ 2 $ unique among ruthenate compounds and among oxide materials in general.
arxiv:1612.09589
, which would exclude fungi and some algae. plant cells were derived by endosymbiosis of a cyanobacterium into an early eukaryote about one billion years ago, which gave rise to chloroplasts. the first several clades that emerged following primary endosymbiosis were aquatic, and most of the aquatic photosynthetic eukaryotic organisms are collectively described as algae, a term of convenience, as not all algae are closely related. algae comprise several distinct clades such as the glaucophytes, microscopic freshwater algae that may have resembled in form the early unicellular ancestor of plantae. unlike glaucophytes, the other algal clades, such as red and green algae, are multicellular. green algae comprise three major clades : chlorophytes, coleochaetophytes, and stoneworts. fungi are eukaryotes that digest foods outside their bodies, secreting digestive enzymes that break down large food molecules before absorbing them through their cell membranes. many fungi are also saprobes, feeding on dead organic matter, making them important decomposers in ecological systems. animals are multicellular eukaryotes. with few exceptions, animals consume organic material, breathe oxygen, are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. over 1. 5 million living animal species have been described, of which around 1 million are insects, but it has been estimated there are over 7 million animal species in total. they have complex interactions with each other and their environments, forming intricate food webs. = = = viruses = = = viruses are submicroscopic infectious agents that replicate inside the cells of organisms. viruses infect all types of life forms, from animals and plants to microorganisms, including bacteria and archaea. more than 6, 000 virus species have been described in detail.
viruses are found in almost every ecosystem on earth and are the most numerous type of biological entity. the origins of viruses in the evolutionary history of life are unclear : some may have evolved from plasmids, pieces of dna that can move between cells, while others may have evolved from bacteria. in evolution, viruses are an important means of horizontal gene transfer, which increases genetic diversity in a way analogous to sexual reproduction. because viruses possess some but not all characteristics of life, they have been described as " organisms at the edge of life ",
https://en.wikipedia.org/wiki/Biology
quantum gravity is effective in domains where both quantum effects and gravity are essential, such as in the vicinity of space - time singularities. this paper investigates the quantization of black - hole gravity, particularly the region surrounding the singularity at the origin of the coordinate system. describing the system in a hamiltonian formalism, we apply the covariant integral quantization method to find the wheeler - dewitt equation of the model. we find that the quantized system has a discrete energy spectrum in the region inside the event horizon. through the kantowski - sachs metric, it is possible to relate the entropic time, which gives the dynamics of this model, to the cosmic time in a non - trivial way. different configurations for the phase space of a schwarzschild black hole are obtained in a semi - classical analysis. for lower - energy states, the quantum corrections result in singularity removal and wormhole formation.
arxiv:2111.13575
understanding pedestrian behavior patterns is a key component to building autonomous agents that can navigate among humans. we seek a learned dictionary of pedestrian behavior to obtain a semantic description of pedestrian trajectories. supervised methods for dictionary learning are impractical since pedestrian behaviors may be unknown a priori and the process of manually generating behavior labels is prohibitively time consuming. we instead utilize a novel, unsupervised framework to create a taxonomy of pedestrian behavior observed in a specific space. first, we learn a trajectory latent space that enables unsupervised clustering to create an interpretable pedestrian behavior dictionary. we show the utility of this dictionary for building pedestrian behavior maps to visualize space usage patterns and for computing the distributions of behaviors. we demonstrate a simple but effective trajectory prediction by conditioning on these behavior labels. while many trajectory analysis methods rely on rnns or transformers, we develop a lightweight, low - parameter approach and show results comparable to sota on the eth and ucy datasets.
arxiv:2212.01426
we present an efficient sampling method for computing a partition function and accelerating configuration sampling. the method performs a random walk in the $ \ lambda $ space, with $ \ lambda $ being any thermodynamic variable that characterizes a canonical ensemble, such as the reciprocal temperature $ \ beta $, or any variable that the hamiltonian explicitly depends on. the partition function is determined by minimizing the difference between the thermal conjugates of $ \ lambda $ ( the energy in the case of $ \ lambda = \ beta $ ), i. e. the difference between the value obtained from the dynamically updated derivatives of the partition function and the value directly measured in the simulation. higher - order derivatives of the partition function are included to enhance the brownian motion in the $ \ lambda $ space. the method is much less sensitive to the system size, and to the size of the $ \ lambda $ window, than other methods. for the two - dimensional ising model, it is shown that the method asymptotically converges to the partition function, and the error of the logarithm of the partition function is much smaller than for the algorithm using the wang - landau recursive scheme. the method is also applied to off - lattice model proteins, the $ ab $ models, for which many low - energy states are found in the different models.
arxiv:0903.2195
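the identity exploited above for $ \ lambda = \ beta $, namely that the thermal conjugate of $ \ beta $ is the energy, $ - \ partial \ ln z / \ partial \ beta = \ langle e \ rangle $, can be checked exactly on a tiny ising model by brute - force enumeration. this is only a toy consistency check, not the paper's sampling algorithm.

```python
import itertools
import math

# exact enumeration of a periodic 3x3 Ising model (512 configurations)
L_SIDE = 3

def energy(spins):
    """Nearest-neighbour Ising energy with periodic boundaries, J = 1."""
    e = 0
    for i in range(L_SIDE):
        for j in range(L_SIDE):
            s = spins[i * L_SIDE + j]
            e -= s * spins[((i + 1) % L_SIDE) * L_SIDE + j]
            e -= s * spins[i * L_SIDE + (j + 1) % L_SIDE]
    return e

def log_z_and_mean_e(beta):
    terms = [(energy(c), math.exp(-beta * energy(c)))
             for c in itertools.product((-1, 1), repeat=L_SIDE * L_SIDE)]
    z = sum(w for _, w in terms)
    mean_e = sum(e * w for e, w in terms) / z
    return math.log(z), mean_e

# central finite difference of ln Z in beta should reproduce <E>
beta, h = 0.4, 1e-5
lz_plus, _ = log_z_and_mean_e(beta + h)
lz_minus, _ = log_z_and_mean_e(beta - h)
_, mean_e = log_z_and_mean_e(beta)
finite_diff = -(lz_plus - lz_minus) / (2 * h)
```

the sampling method of the paper enforces exactly this matching condition on the fly: the derivative of its running estimate of $ \ ln z $ is pushed towards the energy measured in the simulation.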
vertically symmetric alternating sign matrices ( vsasms ) of order $ 2n + 1 $ are known to be equinumerous with lozenge tilings of a hexagon with side lengths $ 2n + 2, 2n, 2n + 2, 2n, 2n + 2, 2n $ and a central triangular hole of size $ 2 $ that exhibit a cyclical as well as a vertical symmetry, but no bijection between these two classes of objects has been constructed so far. in order to make progress towards finding such a bijection, we generalize this result by introducing certain natural extensions for both objects along with $ n + 3 $ parameters, and we show that the multivariate generating functions with respect to these parameters coincide. the equinumerosity of vsasms and the lozenge tilings is then an easy consequence of this result, which is obtained by specializing the generating functions to signed enumerations for both types of objects. in fact, we present several versions of such results ( one of which was independently conjectured by florian aigner ), but in all cases certain natural extensions of the original objects are necessary, which may hint at why it is so hard to come up with an explicit bijection for the original objects.
arxiv:2207.04469
the hilbert - smith conjecture states that if g is a locally compact group which acts effectively on a connected manifold as a topological transformation group, then g is a lie group. a rather straightforward proof of this conjecture is given. the motivation is work of cernavskii ( ` ` finite - to - one mappings of manifolds ' ', trans. of math. sk. 65 ( 107 ), 1964. ) his work is generalized to the orbit map of an effective action of a p - adic group on compact connected n - manifolds with the aid of some new ideas. there is no attempt to use smith theory even though there may be similarities.
arxiv:math/0103215
we propose a method for finding alternate features missing from the lasso optimal solution. in the ordinary lasso problem, one global optimum is obtained and the resulting features are interpreted as task - relevant features. however, this can overlook possibly relevant features that are not selected by the lasso. with the proposed method, we can provide not only the lasso optimal solution but also possible alternate features to the lasso solution. we show that such alternate features can be computed efficiently by avoiding redundant computations. we also demonstrate how the proposed method works on the 20 newsgroups data, which shows that reasonable features are found as alternate features.
arxiv:1611.05940
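a crude, hypothetical illustration of the idea ( the paper's actual algorithm and its redundancy - avoiding computations are not reproduced here ) : fit the lasso, then flag unselected features that are strongly correlated with selected ones as candidate alternates. the correlation threshold and all data below are made up for the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)

def lasso_cd(X, y, lam, n_iter=500):
    """Plain coordinate-descent lasso: 0.5*||y - Xb||^2 + lam*||b||_1."""
    n, d = X.shape
    beta = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(d):
            r = y - X @ beta + X[:, j] * beta[j]   # residual excluding j
            rho = X[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return beta

# two nearly duplicated predictors: the lasso tends to keep only one,
# and the alternate-feature search should surface the twin
n = 200
x0 = rng.normal(size=n)
X = np.column_stack([x0, x0 + 0.01 * rng.normal(size=n),
                     rng.normal(size=(n, 3))])
y = 2.0 * x0 + 0.1 * rng.normal(size=n)

beta = lasso_cd(X, y, lam=20.0)
selected = set(np.flatnonzero(np.abs(beta) > 1e-8))

# crude stand-in for the alternate search: an unselected feature is an
# alternate if it correlates strongly with some selected feature
corr = np.corrcoef(X, rowvar=False)
alternates = {j for j in range(X.shape[1]) if j not in selected
              and any(abs(corr[j, k]) > 0.95 for k in selected)}
```

the twin of the informative feature ends up either selected or flagged as an alternate, while the pure - noise features appear in neither set, which is the qualitative behaviour the paper's method formalises.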
the space mission plato will usher in a new era of exoplanetary science by expanding our current inventory of transiting systems and constraining host star ages, which are currently highly uncertain. this capability might allow plato to detect changes in planetary system architecture with time, particularly because planetary scattering due to lagrange instability may be triggered long after the system was formed. here, we utilize previously published instability timescale prescriptions to determine plato ' s capability to detect a trend of decreasing planet frequency with age for systems with equal - mass planets. for two - planet systems, our results demonstrate that plato may detect a trend for planet masses which are at least as massive as super - earths. for systems with three or more planets, we link their initial compactness to potentially detectable frequency trends in order to aid future investigations when these populations will be better characterized.
arxiv:1507.04272
the theory of interfacial properties in liquid - liquid or liquid - vapour systems is nearly 200 years old. the advent of computational tools has greatly advanced the field, mainly through the use of molecular dynamics simulations. despite the successes and advances in the theory of interfacial phenomena for liquid - liquid systems, the study of solid - liquid interfaces remains a challenge both theoretically and experimentally. the main reason why the treatment of solid - liquid systems has fallen behind that of liquid - liquid systems is that complications arise whenever an interface involving a solid is considered, both in the theory of the solid - liquid interface and in the calculations using md simulations. an example of the former is that, contrary to the liquid - liquid case, the interfacial properties of solids depend on the lattice orientation. the main complications in the calculations arise from the fact that for solids the ` ` mechanical route ' ' cannot be used. to overcome this problem, several numerical approaches have been proposed. the main purpose of this review is to provide an overview of these different methodologies and to discuss their strengths and weaknesses. we classify the methodologies into two main groups : direct and indirect methods. direct methods are those that calculate the properties of interfaces directly, while in indirect approaches the properties of the interface are not the primary result of the simulations. we also include a discussion of the origin of the difficulties in treating solid interfaces from a thermodynamic point of view. in the second part of the review, we discuss two key related topics : nucleation theory and curved interfaces. both represent an important problem in the study of interfaces, and in the context of solid - liquid ones the research is still extremely active.
arxiv:2411.06231
we study the action of the bms group in critical, bosonic string theory living on a target space of the form $ \ mathbb { m } ^ { d } \ times c $. here $ \ mathbb { m } ^ { d } $ is $ d $ - dimensional ( asymptotically ) flat spacetime and $ c $ is an arbitrary compactification. we provide a treatment of generalized ward - takahashi identities and derive consistent boundary conditions for any $ d $ from string theory considerations. finally, we derive bms transformations in higher dimensional spacetimes and show that the generalized ward - takahashi identity of bms produces weinberg ' s soft theorem in string theory.
arxiv:1506.05789
in this paper we introduce the notion of near semiring with involution. generalizing the theory of semirings, we aim at representing quantum structures, such as basic algebras and orthomodular lattices, in terms of near semirings with involution. in particular, after discussing several properties of near semirings, we introduce the so - called łukasiewicz near semirings, as a particular case of near semirings, and we show that every basic algebra is representable as ( precisely, it is term equivalent to ) a near semiring. in the particular case in which a łukasiewicz near semiring is also a semiring, we obtain as a corollary a representation of mv - algebras as semirings. analogously, by introducing a particular subclass of łukasiewicz near semirings, which we term orthomodular near semirings, we obtain a representation of orthomodular lattices. in the second part of the paper, we discuss several universal algebraic properties of łukasiewicz near semirings and we show that the variety of involutive integral near semirings is a church variety. this yields a neat equational characterization of the central elements of this variety. as a byproduct, we obtain several direct decomposition theorems for this class of algebras.
arxiv:1810.09345
we report on the discovery and the timing analysis of the first eclipsing accretion - powered millisecond x - ray pulsar ( amxp ) : swift j1749. 4 - 2807. the neutron star rotates at a frequency of ~ 517. 9 hz and is in a binary system with an orbital period of 8. 8 hrs and a projected semi - major axis of ~ 1. 90 lt - s. assuming a neutron star between 0. 8 and 2. 2 m _ o and using the mass function of the system and the eclipse half - angle, we constrain the mass of the companion and the inclination of the system to be in the ~ 0. 46 - 0. 81 m _ o and $ \ sim 74. 4 ^ o - 77. 3 ^ o $ range, respectively. to date, this is the tightest constraint on the orbital inclination of any amxp. as in other amxps, the pulse profile shows harmonic content up to the 3rd overtone. however, this is the first amxp to show a 1st overtone with rms amplitudes between ~ 6 % and ~ 23 %, which is the strongest ever seen, and which can be more than two times stronger than the fundamental. the fact that swift j1749. 4 - 2807 is an eclipsing system which shows uncommonly strong harmonic content suggests that it might be the best source to date to set constraints on neutron star properties including compactness and geometry.
arxiv:1005.3527
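the companion - mass constraint quoted above follows from the binary mass function f = ( m2 sin i )^3 / ( m1 + m2 )^2 = 4 pi^2 ( a1 sin i )^3 / ( g p^2 ). a minimal numerical sketch ( the fiducial neutron - star mass and inclination chosen below are assumptions within the quoted ranges ) :

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m / s
M_sun = 1.989e30       # solar mass, kg

a_sini = 1.90 * c      # projected semi-major axis, 1.90 lt-s -> metres
P = 8.8 * 3600.0       # orbital period, 8.8 hr -> seconds

# binary mass function, in solar masses
f = 4 * math.pi**2 * a_sini**3 / (G * P**2) / M_sun

def companion_mass(m1, incl_deg, f):
    """solve (m2 sin i)^3 / (m1 + m2)^2 = f for m2 by bisection;
    the left-hand side is strictly increasing in m2, so the root is unique."""
    s = math.sin(math.radians(incl_deg))
    g = lambda m2: (m2 * s)**3 / (m1 + m2)**2 - f
    lo, hi = 1e-6, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# assumed fiducial values: m1 = 1.4 m_sun and i = 76 deg, inside the quoted ranges
m2 = companion_mass(1.4, 76.0, f)
```

with these assumed fiducial values the solver lands near 0. 6 m _ o, inside the quoted 0. 46 - 0. 81 m _ o interval.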
release. over time, the level of maintenance becomes increasingly restricted before being cut off entirely when the product is withdrawn from the market. as software ages, it becomes known as legacy software and can remain in use for decades, even if there is no one left who knows how to fix it. over the lifetime of the product, software maintenance is estimated to comprise 75 percent or more of the total development cost. completing a software project involves various forms of expertise, not just in programming but also testing, documentation writing, project management, graphic design, user experience, user support, marketing, and fundraising. = = quality and security = = software quality is defined as meeting the stated requirements as well as customer expectations. quality is an overarching term that can refer to a code ' s correct and efficient behavior, its reusability and portability, or the ease of modification. it is usually more cost - effective to build quality into the product from the beginning rather than try to add it later in the development process. higher quality code will reduce lifetime cost to both suppliers and customers as it is more reliable and easier to maintain. software failures in safety - critical systems can have very serious consequences, including death. by some estimates, the cost of poor quality software can be as high as 20 to 40 percent of sales. despite developers ' goal of delivering a product that works entirely as intended, virtually all software contains bugs. the rise of the internet also greatly increased the need for computer security as it enabled malicious actors to conduct cyberattacks remotely. if a bug creates a security risk, it is called a vulnerability. software patches are often released to fix identified vulnerabilities, but those that remain unknown ( zero days ) as well as those that have not been patched remain liable to exploitation.
vulnerabilities vary in their ability to be exploited by malicious actors, and the actual risk is dependent on the nature of the vulnerability as well as the value of the surrounding system. although some vulnerabilities can only be used for denial of service attacks that compromise a system ' s availability, others allow the attacker to inject and run their own code ( called malware ), without the user being aware of it. to thwart cyberattacks, all software in the system must be designed to withstand and recover from external attack. despite efforts to ensure security, a significant fraction of computers are infected with malware. = = encoding and execution = = = = = programming languages = = = programming languages are the format in which software is written
https://en.wikipedia.org/wiki/Software
in many interacting particle systems, tagged particles move diffusively upon subtracting a drift. general techniques to prove such ' invariance principles ' are available for reversible processes ( kipnis - varadhan ) and for non - reversible processes in dimension $ d > 2 $. the interest of our paper is that it considers a non - reversible one - dimensional process : the toom model. the reason that we can prove the invariance principle is that in this model, push - tagged particles move manifestly slower than second - class particles.
arxiv:1610.07765
the parallel sum $ a : b $ of two bounded positive linear operators $ a, b $ on a hilbert space $ h $ is defined to be the positive operator having the quadratic form \ begin { equation * } \ inf \ { ( a ( x - y ) \, | \, x - y ) + ( by \, | \, y ) \, | \, y \ in h \ } \ end { equation * } for fixed $ x \ in h $. the purpose of this paper is to provide a factorization of the parallel sum of the form $ j _ a p j _ a ^ * $ where $ j _ a $ is the embedding operator of an auxiliary hilbert space associated with $ a $ and $ b $, and $ p $ is an orthogonal projection onto a certain linear subspace of that hilbert space. we give similar factorizations of the parallel sum of nonnegative hermitian forms, positive operators of a complex banach space $ e $ into its topological anti - dual $ \ bar { e } ' $, and of representable positive functionals on a $ ^ * $ - algebra.
arxiv:1501.01922
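for invertible positive operators the parallel sum has the well - known closed form a : b = a ( a + b )^{-1} b, and the infimum defining its quadratic form is attained at y* = ( a + b )^{-1} a x. a finite - dimensional numerical check ( a sketch with random positive definite matrices, not part of the paper ) :

```python
import numpy as np

rng = np.random.default_rng(0)

def random_spd(n):
    # random symmetric positive definite matrix
    m = rng.standard_normal((n, n))
    return m @ m.T + n * np.eye(n)

n = 4
A, B = random_spd(n), random_spd(n)

# closed form for invertible positive operators: a : b = a (a + b)^{-1} b
par = A @ np.linalg.solve(A + B, B)

# the infimum over y of (a(x - y) | x - y) + (by | y) is attained
# at y* = (a + b)^{-1} a x
x = rng.standard_normal(n)
y_star = np.linalg.solve(A + B, A @ x)
inf_val = (A @ (x - y_star)) @ (x - y_star) + (B @ y_star) @ y_star

assert np.isclose((par @ x) @ x, inf_val)
# the parallel sum is symmetric in its arguments: a(a+b)^{-1}b = b(a+b)^{-1}a
par_ba = B @ np.linalg.solve(A + B, A)
assert np.allclose(par, par_ba)
```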
adversarial learning is one of the most successful approaches to modelling high - dimensional probability distributions from data. the quantum computing community has recently begun to generalize this idea and to look for potential applications. in this work, we derive an adversarial algorithm for the problem of approximating an unknown quantum pure state. although this could be done on universal quantum computers, the adversarial formulation enables us to execute the algorithm on near - term quantum computers. two parametrized circuits are optimized in tandem : one tries to approximate the target state, the other tries to distinguish between target and approximated state. supported by numerical simulations, we show that resilient backpropagation algorithms perform remarkably well in optimizing the two circuits. we use the bipartite entanglement entropy to design an efficient heuristic for the stopping criterion. our approach may find application in quantum state tomography.
arxiv:1806.00463
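a toy, classically simulated version of the adversarial loop for a single qubit can illustrate the idea ( everything below - the bloch - sphere parametrization, the closed - form discriminator, the learning rate - is an illustrative assumption, not the paper's circuits ) : the generator state chases the target while the discriminator picks the observable that best separates them.

```python
import math

def bloch(theta, phi):
    # bloch vector of the pure state |psi(theta, phi)>
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# fixed target state (unknown to the generator)
target = bloch(2.0, 1.0)

theta, phi, lr = 0.5, 0.3, 0.05
for _ in range(3000):
    g = bloch(theta, phi)
    # discriminator: observable n . sigma maximizing <O>_target - <O>_gen,
    # which in bloch coordinates is the unit vector along target - g
    n = tuple(t - gi for t, gi in zip(target, g))
    norm = math.sqrt(dot(n, n)) or 1.0
    n = tuple(c / norm for c in n)
    # generator: gradient ascent on n . r_g, shrinking the discriminator's gain
    dtheta = (math.cos(theta) * math.cos(phi) * n[0]
              + math.cos(theta) * math.sin(phi) * n[1]
              - math.sin(theta) * n[2])
    dphi = (-math.sin(theta) * math.sin(phi) * n[0]
            + math.sin(theta) * math.cos(phi) * n[1])
    theta += lr * dtheta
    phi += lr * dphi

# fidelity of two pure qubit states via their bloch vectors
fidelity = 0.5 * (1.0 + dot(bloch(theta, phi), target))
```

the alternating updates drive the generator's bloch vector onto the target's, so the fidelity approaches one; the paper's actual scheme optimizes two parametrized circuits instead.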
in this paper, we provide an overview of the wnut - 2020 shared task on the identification of informative covid - 19 english tweets. we describe how we construct a corpus of 10k tweets and organize the development and evaluation phases for this task. in addition, we also present a brief summary of results obtained from the final system evaluation submissions of 55 teams, finding that ( i ) many systems obtain very high performance, up to 0. 91 f1 score, ( ii ) the majority of the submissions achieve substantially higher results than the baseline fasttext ( joulin et al., 2017 ), and ( iii ) fine - tuning pre - trained language models on relevant language data followed by supervised training performs well in this task.
arxiv:2010.08232
energetics and conductance in jellium modelled nanowires are investigated using the local - density - functional - based shell correction method. in analogy with studies of other finite - size fermion systems, e. g., simple - metal clusters or he - 3 clusters, we find that the energetics of the wire as a function of its radius ( transverse reduced dimension ) leads to formation of self - selecting magic wire configurations ( mwc ' s, i. e., discrete sequence of wire radii with enhanced stability ), originating from quantization of the electronic spectrum, namely formation of subbands which are the analogs of electronic shells in clusters. these variations in the energy result in oscillations in the force required to effect a transition from one mwc of the nanowire to another, and are correlated directly with step - wise variations of the quantized conductance of the nanowire in units of 2 * e ^ 2 / h.
arxiv:cond-mat/9906018
scanning transmission electron microscopy ( stem ) combined with electron energy loss spectroscopy ( eels ) has become, over the last 15 years, a standard technique to map localized surface plasmon resonances with nanometer spatial resolution and sufficient energy resolution. however, no experimental work discussing the influence of experimental conditions during the measurement has been published up to now. we present an experimental study of the influence of the primary beam energy and the collection semi - angle on the plasmon resonance measurement by stem - eels. to explore the influence of these two experimental parameters we study a series of gold rods and gold bow - tie and diabolo antennas. we discuss the impact on experimental characteristics which are important for successful detection of the plasmon peak in eels, namely : the intensity of the plasmonic signal, the signal to background ratio, and the signal to zero - loss peak ratio. we show that the best results are obtained using a medium primary beam energy, in our case 120 kev, and an arbitrary collection semi - angle, as it is not a critical parameter at this primary beam energy. our instructive overview will help microscopists in the field of plasmonics to arrange their experiments.
arxiv:2002.04260
the traditional paradigm of applying deep learning ( collect, annotate and train on data ) is not applicable to image - based plant phenotyping as almost 400, 000 different plant species exist. data costs include growing physical samples, imaging and labelling them. model performance is impacted by the species gap between the domain of each plant species ; it is not generalisable and may not transfer to unseen plant species. in this paper, we investigate the use of synthetic data for leaf instance segmentation. we study multiple synthetic data training regimes using mask - rcnn when few or no annotated real data is available. we also present upgen : a universal plant generator for bridging the species gap. upgen leverages domain randomisation to produce widely distributed data samples and models stochastic biological variation. our methods outperform standard practices, such as transfer learning from publicly available plant data, by 26. 6 % and 51. 46 % on two unseen plant species respectively. we benchmark upgen by competing in the cvppp leaf segmentation challenge and set a new state - of - the - art, a mean of 88 % across the a1 - 4 test datasets. this study is applicable to the use of synthetic data for automating the measurement of phenotypic traits. our synthetic dataset and pretrained model are available at https : / / csiro - robotics. github. io / upgen _ webpage /.
arxiv:2003.10757
the higher - power derivative terms involved in both faddeev and skyrme energy functionals correspond to $ \ sigma _ 2 $ - energy, introduced by eells and sampson. the paper provides a detailed study of the first and second variation formulae associated to this energy. some classes of ( stable ) critical maps are outlined.
arxiv:0809.4864
the fluctuating gunn - peterson approximation ( fgpa ) is a commonly - used method to generate mock lyman - $ \ alpha $ ( ly $ \ alpha $ ) forest absorption skewers at cosmic noon ( $ z \ gtrsim 2 $ ) from the matter - density field of $ n $ - body simulations without running expensive hydrodynamical simulations. motivated by recent developments in 3d igm tomography observations as well as matter density field reconstruction techniques applied to galaxy redshift samples at $ z \ sim 2 $, we examine the possibility of observationally testing fgpa by directly examining the relationship between the ly $ \ alpha $ transmission and the underlying matter density field. specifically, we analyze the eagle, illustris, illustristng and nyx cosmological hydrodynamic simulations, that were run with different codes and sub - grid models. while the fgpa is an excellent description of the igm in lower - density regions, the slope of the transmission - density distribution at higher densities is significantly affected by feedback processes causing the fgpa to break down in that regime. even without added feedback, we find significant deviations caused by hydrodynamical effects arising from non - linear structure growth. we then proceed to make comparisons using realistic mock data assuming the sightline sampling and spectral properties of the recent clamato survey, and find that it would be challenging to discern between the fgpa and hydrodynamical models with current data sets. however, the improved sightline sampling from future extremely large telescopes or large volumes from multiplexed spectroscopic surveys such as subaru pfs should allow for stringent tests of the fgpa, and make it possible to detect the effect of galaxy feedback on the igm.
arxiv:2201.10169
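the fgpa itself is a one - line mapping from matter overdensity to transmitted flux : for a power - law temperature - density relation t = t0 delta^{gamma - 1}, the optical depth scales as tau = a delta^{2 - 0. 7 ( gamma - 1 )}. a minimal sketch ( the amplitude a and slope gamma below are illustrative ; in practice a is tuned to reproduce the observed mean flux ) :

```python
import numpy as np

def fgpa_transmission(delta, A=0.3, gamma=1.5):
    """map a matter overdensity field delta = rho / rho_bar to lyman-alpha
    transmitted flux with the fluctuating gunn-peterson approximation.
    the amplitude A is illustrative; it is normally calibrated to the
    observed mean flux at the redshift of interest."""
    beta = 2.0 - 0.7 * (gamma - 1.0)
    tau = A * delta**beta        # optical depth, power law in overdensity
    return np.exp(-tau)          # transmitted flux f = exp(-tau)

delta = np.linspace(0.1, 10.0, 50)   # toy overdensity skewer
flux = fgpa_transmission(delta)
```

denser gas absorbs more, so the flux decreases monotonically with overdensity; the paper's point is that feedback and non - linear hydrodynamics bend this relation at high density.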
in this paper, we obtain a new generalization of chebyshev ' s inequality for random elements taking values in a separable banach space.
arxiv:1106.0955
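in the vector - valued setting the inequality typically takes the norm form p ( || x - e x || >= eps ) <= e || x - e x ||^2 / eps^2, which follows from markov's inequality applied to || x - e x ||^2. a monte - carlo sanity check in r^2 with the euclidean norm ( an illustration, not the paper's generalization ) :

```python
import numpy as np

rng = np.random.default_rng(1)

# sample a correlated random vector x in r^2 (a stand-in for a
# banach-space-valued random element)
n = 100_000
x = rng.standard_normal((n, 2)) @ np.array([[1.0, 0.5], [0.0, 2.0]])

# norm of the centred samples, and its empirical second moment
dev = np.linalg.norm(x - x.mean(axis=0), axis=1)
second_moment = np.mean(dev**2)

# the chebyshev bound holds exactly for the empirical distribution
for eps in (1.0, 2.0, 4.0):
    empirical = np.mean(dev >= eps)
    bound = second_moment / eps**2
    assert empirical <= bound
```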
the milagro telescope monitors the northern sky for 100 gev - 100 tev transient emission through continuous very high energy wide - field observations. the large effective area and low energy threshold of milagro allow it to detect very high energy gamma - ray burst emission with much higher sensitivity than previous instruments, and a fluence sensitivity at tev energies comparable to dedicated gamma - ray burst satellites at kev - mev energies. observation of gamma - ray burst emission at tev energies could place important constraints on gamma - ray burst progenitor and emission models. this study details the development of a weighted analysis technique ; the implementation of this technique to perform a real time search for tev transients of 40 seconds to 3 hours duration in the milagro data ; and the results from more than one year of observation. between may 2nd, 2001, and may 22nd, 2002, no tev transients of 40 seconds to 3 hours duration were observed. upper limits on both observed and emitted high energy gamma - ray burst emission are presented.
arxiv:astro-ph/0308100
in narrow spaces, motion planning based on the traditional hierarchical autonomous system could cause collisions due to mapping, localization, and control noises, especially for car - like ackermann - steering robots which suffer from non - convex and non - holonomic kinematics. to tackle these problems, we leverage deep reinforcement learning, which has been verified to be effective in self - decision - making, to self - explore in narrow spaces without a given map and destination while avoiding collisions. specifically, based on our ackermann - steering rectangular - shaped zebrat robot and its gazebo simulator, we propose the rectangular safety region to represent states and detect collisions for rectangular - shaped robots, and a carefully crafted reward function for reinforcement learning that does not require waypoint guidance. for validation, the robot was first trained in a simulated narrow track. then, the well - trained model was transferred to other simulation tracks and could outperform other traditional methods, including classical and learning methods. finally, the trained model is demonstrated in the real world with our zebrat robot.
arxiv:2209.08349
there is a well - known discrepancy in the distance estimation for m60, a giant elliptical galaxy in virgo : the planetary nebula luminosity function ( pnlf ) distance moduli for this galaxy are, on average, $ ~ 0. 4 $ mag smaller than the values based on the surface brightness fluctuation ( sbf ) in the literature. we present photometry of the resolved stars in an outer field of m60 based on deep f775w and f850lp images in the hubble space telescope obtained as part of the pure parallel program in the archive. detected stars are mostly old red giants in the halo of m60. with this photometry we determine a distance to m60 using the tip of the red giant branch ( trgb ). a trgb is detected at $ f850lp _ { \ rm trgb } = 26. 70 \ pm0. 06 $ mag, in the luminosity function of the red giants. this value corresponds to $ f814w _ { 0, \ rm trgb } = 27. 13 \ pm0. 06 $ mag and $ qt _ { \ rm trgb } = 27. 04 \ pm0. 07 $ mag, where $ qt $ is a color - corrected f814w magnitude. from this we derive a distance modulus, $ ( m - m ) _ 0 = 31. 05 \ pm0. 07 { \ rm ( ran ) } \ pm0. 06 { \ rm ( sys ) } $ ( $ d = 16. 23 \ pm0. 50 { \ rm ( ran ) } \ pm0. 42 { \ rm ( sys ) } $ mpc ). this value is $ 0. 3 $ mag larger than the pnlf distances and $ 0. 1 $ mag smaller than the sbf distances in the previous studies, indicating that the pnlf distances to m60 in the literature have larger uncertainties than the suggested values.
arxiv:1705.02389
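the quoted distance follows from the distance modulus via d = 10^{( m - m ) _ 0 / 5 + 1} pc ; a quick check :

```python
def modulus_to_mpc(mu):
    # distance modulus (m - M)_0  ->  distance in Mpc
    return 10.0**(mu / 5.0 + 1.0) / 1.0e6

d = modulus_to_mpc(31.05)            # the trgb modulus quoted above, ~16.2 Mpc
d_pnlf = modulus_to_mpc(31.05 - 0.3) # a modulus ~0.3 mag smaller (pnlf-like)
```

a 0. 3 mag smaller modulus shrinks the inferred distance by roughly 13 percent, which is the size of the pnlf / trgb discrepancy discussed above.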
we extract ratios of $ b \ to k ^ * $ form factors at low hadronic recoil from recent data on $ b \ to k ^ * \ mu ^ + \ mu ^ - $ decays in a model - independent way. the presented method will improve in the future with further ( angular ) studies in semileptonic rare b - decays and advance our understanding of form factors, which are important inputs in precision tests of the standard model.
arxiv:1204.4444
this paper develops robust test procedures for testing the intercept of a simple regression model when it is \ textit { apriori } suspected that the slope has a specified value. defining unrestricted test ( ut ), restricted test ( rt ) and pre - test test ( ptt ) corresponding to the unrestricted ( ue ), restricted ( re ), and preliminary test estimators ( pte ) in the estimation case, the m - estimation methodology is used to formulate the m - tests and derive their asymptotic power functions. analytical and graphical comparisons of the three tests are obtained by studying the power functions with respect to size and power of the tests. it is shown that ptt achieves a reasonable dominance over the others asymptotically.
arxiv:0710.1919
in this paper, we propose and analyze a numerically stable and convergent scheme for a convection - diffusion - reaction equation in the convection - dominated regime. discontinuous galerkin ( dg ) methods are considered since standard finite element methods for the convection - dominated equation cause spurious oscillations. we choose to follow a novel dg finite element differential calculus framework introduced in feng et al. ( 2016 ) and approximate the infinite - dimensional operators in the equation with the finite - dimensional dg differential operators. specifically, we construct the numerical method by using the dual - wind discontinuous galerkin ( dwdg ) formulation for the diffusive term and the average discrete gradient operator for the convective term along with standard dg stabilization. we prove that the method converges optimally in the convection - dominated regime. numerical results are provided to support the theoretical findings.
arxiv:2404.06490
one - shot medical image segmentation ( mis ) is crucial for medical analysis due to the burden on medical experts of manual annotation. the recent emergence of the segment anything model ( sam ) has demonstrated remarkable adaptation in mis, but it cannot be directly applied to one - shot mis due to its reliance on labor - intensive user interactions and its high computational cost. to cope with these limitations, we propose a novel sam - guided robust representation learning framework, named rrl - medsam, to adapt sam to one - shot 3d mis, which exploits the strong generalization capabilities of the sam encoder to learn better feature representations. we devise a dual - stage knowledge distillation ( dskd ) strategy to distill general knowledge between natural and medical images from the foundation model to train a lightweight encoder, and then adopt a mutual exponential moving average ( mutual - ema ) to update the weights of the general lightweight encoder and the medical - specific encoder. specifically, pseudo labels from the registration network are used to perform mutual supervision for these two encoders. moreover, we introduce an auto - prompting ( ap ) segmentation decoder which adopts the mask generated from the general lightweight model as a prompt to assist the medical - specific model in boosting the final segmentation performance. extensive experiments conducted on three public datasets, i. e., oasis, ct - lung, demonstrate that the proposed rrl - medsam outperforms state - of - the - art one - shot mis methods for both segmentation and registration tasks. notably, our lightweight encoder uses only 3 \ % of the parameters of the encoder of sam - base.
arxiv:2504.20501
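the mutual - ema step amounts to each encoder's weights being an exponential moving average that mixes in the other encoder ; a minimal sketch ( the momentum value and the symmetric form are assumptions, not the paper's exact rule ) :

```python
import numpy as np

def mutual_ema(w_general, w_medical, momentum=0.99):
    """update each encoder's weights as an ema that mixes in the other
    encoder; a sketch of the mutual-ema idea, not the paper's exact rule."""
    new_general = momentum * w_general + (1.0 - momentum) * w_medical
    new_medical = momentum * w_medical + (1.0 - momentum) * w_general
    return new_general, new_medical

# toy weight vectors standing in for the two encoders' parameters
wg = np.zeros(4)
wm = np.ones(4)
for _ in range(500):
    wg, wm = mutual_ema(wg, wm)
gap = np.abs(wg - wm).max()   # the two parameter sets contract toward each other
```

the difference between the two weight vectors shrinks geometrically by a factor 2 * momentum - 1 per step, so the encoders gradually align while each still moves slowly.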
we numerically study the relaxation dynamics of a single, heavy impurity atom interacting with a finite one - or two - dimensional, ultracold bose - gas. while there is a clear separation of time scales between processes resulting from single - and two - phonon scattering in three spatial dimensions, the thermalization in lower dimensions is dominated by two - phonon processes. this is due to infrared divergencies in the corresponding scattering rates in the thermodynamic limit, which are a manifestation of the mermin - wagner - hohenberg theorem. it makes it necessary to include second - order phonon scattering in one - dimensional systems even at $ t = 0 $ and above a crossover temperature $ t _ \ textrm { 2ph } $ in two spatial dimensions. $ t _ \ textrm { 2ph } $ scales inversely with the system size and is much smaller than currently experimentally accessible.
arxiv:1712.07912
forced detachment of a single polymer chain, strongly adsorbed on a solid substrate, is investigated by two complementary methods : a coarse - grained analytical dynamical model, based on the onsager stochastic equation, and molecular dynamics ( md ) simulations with a langevin thermostat. the suggested approach makes it possible to go beyond the limitations of the conventional bell - evans model. we observe a series of characteristic force spikes when the pulling force is measured against the cantilever displacement during detachment at constant velocity $ v _ c $ ( displacement control mode ) and find that the average magnitude of this force increases as $ v _ c $ grows. the probability distributions of the pulling force and the end - monomer distance from the surface at the moment of final detachment are investigated for different adsorption energies $ \ epsilon $ and pulling velocities $ v _ c $. our extensive md simulations validate and support the main theoretical findings. moreover, the simulations reveal a novel behavior : for a strong - friction and massive cantilever, the force - spike pattern is smeared out at large $ v _ c $. as a challenging task for experimental bio - polymer sequencing in the future, we suggest the fabrication of stiff, super - light, nanometer - sized afm probes.
arxiv:1310.3876
we have measured the magneto - resistance of a two - dimensional electron gas ( 2deg ) under continuous microwave irradiation as a function of electron density and mobility tuned with a metallic top - gate. in the entire range of density and mobility we have investigated, we observe microwave induced oscillations of large amplitude that are b - periodic. these b - periodic oscillations are reminiscent of the ones reported by kukushkin \ textit { et al } [ 1 ] and which were attributed to the presence of edge - magneto - plasmons. we have found that the b - periodicity does not increase linearly with the density in our sample but shows a plateau in the range $ ( 2. 4 - 3 ) \ times 10 ^ { 11 } \ rm cm ^ { - 2 } $. in this regime, the phase of the b - periodic oscillations is found to shift continuously by two periods.
arxiv:cond-mat/0502350
thermoelectric effects are more sensitive and promising probes of topological properties of emergent materials, but much less addressed compared to other physical properties. zirconium pentatelluride ( zrte $ _ { 5 } $ ) has inspired active investigations recently because of its multiple topological nature. we study the thermoelectric effects of zrte $ _ { 5 } $ in a magnetic field and find several anomalous behaviors. the nernst response has a steplike profile near zero field when the charge carriers are electrons only, suggesting the anomalous nernst effect arising from a nontrivial profile of berry curvature. both the thermopower and nernst signal exhibit exotic peaks in the strong - field quantum limit. at higher magnetic fields, the nernst signal has a sign reversal at a critical field where the thermopower approaches zero. we propose that these anomalous behaviors can be attributed to the landau index inversion, which results from the competition of the $ \ sqrt { b } $ dependence of the dirac - type landau bands and the linear - $ b $ dependence of the zeeman energy ( $ b $ is the magnetic field ). our understanding of the anomalous thermoelectric properties in zrte $ _ { 5 } $ opens a new avenue for exploring dirac physics in topological materials.
arxiv:1904.00417
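the proposed mechanism can be illustrated in toy units : dirac - type landau levels scale as sqrt ( n b ) while the zeeman energy grows linearly in b, so the energetic ordering of two levels can invert above a crossover field b* where the two scales meet. all coefficients below are illustrative, not fitted to zrte $ _ { 5 } $ :

```python
import math

def dirac_ll(n, B, c=1.0):
    # dirac-type landau level, e ~ sqrt(n b)  (toy units)
    return c * math.sqrt(n * B)

def zeeman(B, g=0.3):
    # zeeman energy, linear in b  (toy units)
    return g * B

# at low field the sqrt(b) orbital scale dominates ...
low = dirac_ll(1, 1.0) > zeeman(1.0)
# ... while at high field the linear zeeman scale overtakes it
high = dirac_ll(1, 20.0) > zeeman(20.0)

# crossover field where the two scales meet: sqrt(b) = g b  ->  b* = 1 / g^2
b_star = 1.0 / 0.3**2
```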
recent simulations indicate that streamwise - preferential porous materials have the potential to reduce drag in wall - bounded turbulent flows ( gomez - de - segura & garcia - mayoral 2019 ). this paper extends the resolvent formulation to study the effect of such anisotropic permeable substrates on turbulent channel flow. under the resolvent formulation, the fourier - transformed navier - stokes equations are interpreted as a linear forcing - response system. the nonlinear terms are considered the endogenous forcing in the system that gives rise to a velocity and pressure response. a gain - based decomposition of the forcing - response transfer function, the resolvent operator, identifies response modes ( resolvent modes ) that are known to reproduce important structural and statistical features of wall - bounded turbulent flows. the effect of permeable substrates is introduced in this framework using the volume - averaged navier - stokes equations and a generalized form of darcy ' s law. substrates with high streamwise permeability and low spanwise permeability are found to suppress the forcing - response gain for the resolvent mode that serves as a surrogate for the energetic near - wall cycle. this reduction in mode gain is shown to be consistent with the drag reduction trends predicted by theory and observed in numerical simulations. simulation results indicate that drag reduction is limited by the emergence of spanwise rollers resembling kelvin - helmholtz vortices beyond a threshold value of wall - normal permeability. the resolvent framework also predicts the conditions in which such energetic spanwise - coherent rollers emerge. these findings suggest that a limited set of resolvent modes can serve as the building blocks for computationally - efficient models that enable the design and optimization of permeable substrates for passive turbulence control.
arxiv:2006.01378
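the gain - based decomposition can be demonstrated on a toy linear system : form the resolvent ( i omega i - a )^{-1} at each frequency and take its largest singular value as the optimal forcing - response gain. the 2 x 2 operator below is a damped oscillator chosen for illustration, not a navier - stokes operator :

```python
import numpy as np

# toy stable linear operator: lightly damped oscillator with
# eigenvalues -0.1 +/- 1j (illustrative, not a navier-stokes operator)
A = np.array([[-0.1, 1.0],
              [-1.0, -0.1]])

def resolvent_gain(omega):
    # largest singular value of the resolvent (i omega I - A)^{-1};
    # the corresponding singular vectors are the optimal forcing and
    # response modes at that frequency
    R = np.linalg.inv(1j * omega * np.eye(2) - A)
    return np.linalg.svd(R, compute_uv=False)[0]

gains = {w: resolvent_gain(w) for w in (0.0, 1.0, 5.0)}
```

the gain peaks sharply near the oscillator's natural frequency omega = 1, which is the mechanism by which a few resolvent modes capture the energetic structures of the flow.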
497 – 503. doi : 10. 1145 / 1135777. 1135850. isbn 978 - 1 - 59593 - 323 - 2. s2cid 14184354. mustafa jarrar and robert meersman ( 2008 ). " ontology engineering - the dogma approach ". book chapter ( chapter 3 ). in advances in web semantics i. volume lncs 4891, springer. riichiro mizoguchi ( 2004 ). " tutorial on ontological engineering : part 3 : advanced course of ontological engineering " archived 2013 - 03 - 09 at the wayback machine. in : new generation computing. ohmsha & springer - verlag, 22 ( 2 ) : 198 - 220. elena paslaru bontas simperl and christoph tempich ( 2006 ). " ontology engineering : a reality check " devedzic, vladan ( 2002 ). " understanding ontological engineering ". communications of the acm. 45 ( 4 ) : 136 – 144. citeseerx 10. 1. 1. 218. 7546. doi : 10. 1145 / 505248. 506002. s2cid 5352880. sure, york, staab, steffen and studer, rudi ( 2009 ). ontology engineering methodology. in staab, steffen & studer, rudi ( eds. ) handbook on ontologies ( 2nd edition ), springer - verlag, heidelberg. isbn 978 - 3 - 540 - 70999 - 2 = = external links = = ontopia. net : metadata? thesauri? taxonomies? topic maps! making sense of it all, by lars marius garshol, 2004. ontologyengineering. org : ontology engineering with diagrams
https://en.wikipedia.org/wiki/Ontology_engineering
single - image depth estimation is essential for endoscopy tasks such as localization, reconstruction, and augmented reality. most existing methods in surgical scenes focus on in - domain depth estimation, limiting their real - world applicability. this constraint stems from the scarcity and inferior labeling quality of medical data for training. in this work, we present endoomni, the first foundation model for zero - shot cross - domain depth estimation for endoscopy. to harness the potential of diverse training data, we refine the advanced self - learning paradigm that employs a teacher model to generate pseudo - labels, guiding a student model trained on large - scale labeled and unlabeled data. to address training disturbance caused by inherent noise in depth labels, we propose a robust training framework that leverages both depth labels and estimated confidence from the teacher model to jointly guide the student model training. moreover, we propose a weighted scale - and - shift invariant loss to adaptively adjust learning weights based on label confidence, thus imposing a learning bias towards cleaner label pixels while reducing the influence of highly noisy pixels. experiments on zero - shot relative depth estimation show that our endoomni improves on state - of - the - art methods in medical imaging by 33 \ % and on existing foundation models by 34 \ % in terms of absolute relative error on specific datasets. furthermore, our model provides strong initialization for fine - tuning metric depth estimation, maintaining superior performance in both in - domain and out - of - domain scenarios. the source code is publicly available at https : / / github. com / tiancuteqy / endoomni.
arxiv:2409.05442
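a scale - and - shift - invariant depth loss first aligns the prediction to the label with a least - squares scale and shift, and a weighted variant lets per - pixel label confidence modulate both the alignment and the residual. a sketch ( the weighting scheme here is illustrative, not the paper's exact formula ) :

```python
import numpy as np

def weighted_ssi_loss(pred, target, conf):
    """weighted scale-and-shift-invariant loss: solve the weighted least
    squares for scale s and shift t minimizing
    sum conf * (s * pred + t - target)^2, then return the mean weighted
    residual. conf plays the role of per-pixel label confidence; the
    exact weighting rule in the paper may differ."""
    w = conf / conf.sum()
    # weighted normal equations for [s, t]
    mp, mt = (w * pred).sum(), (w * target).sum()
    cov = (w * (pred - mp) * (target - mt)).sum()
    var = (w * (pred - mp) ** 2).sum()
    s = cov / var
    t = mt - s * mp
    return (w * (s * pred + t - target) ** 2).sum()

rng = np.random.default_rng(0)
d = rng.uniform(1.0, 5.0, 1000)       # toy per-pixel depth prediction
conf = rng.uniform(0.5, 1.0, 1000)    # toy per-pixel label confidence
# the loss is invariant to an affine re-scaling between prediction and label
l1 = weighted_ssi_loss(d, 2.0 * d + 3.0, conf)
```

because the alignment absorbs any global scale and shift, only the remaining shape error is penalized, which is what makes such losses suitable for mixing datasets with inconsistent depth scales.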
in which plants were cultivated for suspected medicinal uses. they supported the growth of botany as an academic subject. lectures were given about the plants grown in the gardens. botanical gardens came much later to northern europe ; the first in england was the university of oxford botanic garden in 1621. german physician leonhart fuchs ( 1501 – 1566 ) was one of " the three german fathers of botany ", along with theologian otto brunfels ( 1489 – 1534 ) and physician hieronymus bock ( 1498 – 1554 ) ( also called hieronymus tragus ). fuchs and brunfels broke away from the tradition of copying earlier works to make original observations of their own. bock created his own system of plant classification. physician valerius cordus ( 1515 – 1544 ) authored a botanically and pharmacologically important herbal historia plantarum in 1544 and a pharmacopoeia of lasting importance, the dispensatorium in 1546. naturalist conrad von gesner ( 1516 – 1565 ) and herbalist john gerard ( 1545 – c. 1611 ) published herbals covering the supposed medicinal uses of plants. naturalist ulisse aldrovandi ( 1522 – 1605 ) was considered the father of natural history, which included the study of plants. in 1665, using an early microscope, polymath robert hooke discovered cells ( a term he coined ) in cork, and a short time later in living plant tissue. = = = early modern botany = = = during the 18th century, systems of plant identification were developed comparable to dichotomous keys, where unidentified plants are placed into taxonomic groups ( e. g. family, genus and species ) by making a series of choices between pairs of characters. the choice and sequence of the characters may be artificial in keys designed purely for identification ( diagnostic keys ) or more closely related to the natural or phyletic order of the taxa in synoptic keys. 
by the 18th century, new plants for study were arriving in europe in increasing numbers from newly discovered countries and the european colonies worldwide. in 1753, carl linnaeus published his species plantarum, a hierarchical classification of plant species that remains the reference point for modern botanical nomenclature. this established a standardised binomial or two - part naming scheme where the first name represented the genus and the second identified the species within the genus. for the purposes of identification, linnaeus ' s systema sexuale classified plants into 24 groups according to the
https://en.wikipedia.org/wiki/Botany
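the dichotomous keys described above amount to a small decision tree: each step chooses between a pair of characters until a taxon is reached. a toy sketch, with invented characters and taxa ( not a real key ):

```python
# A dichotomous key as a nested decision structure: at each step the
# user answers a yes/no question about a character until a taxon name
# (a string) is reached. Characters and taxa are illustrative only.
KEY = {
    "question": "leaves needle-like?",
    "yes": "Pinaceae (pines)",
    "no": {
        "question": "flower parts in threes?",
        "yes": "a monocot (e.g. Liliaceae)",
        "no": "a eudicot (e.g. Rosaceae)",
    },
}

def identify(key, answers):
    """Walk the key with a sequence of yes/no answers."""
    node = key
    for ans in answers:
        node = node["yes" if ans else "no"]
        if isinstance(node, str):  # reached a taxon
            return node
    raise ValueError("key exhausted before reaching a taxon")
```

a diagnostic key optimises the questions for quick identification; a synoptic key would instead order them to follow the natural ( phyletic ) grouping of the taxa.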
exact ( hartree fock ) exchange is needed to overcome some of the limitations of local and semilocal approximations of density functional theory ( dft ). so far, however, computational cost has limited the use of exact exchange in plane wave calculations for extended systems. we show that this difficulty can be overcome by performing a unitary transformation from bloch to maximally localized wannier functions in combination with an efficient technique to compute real space coulomb integrals. the resulting scheme scales linearly with system size. we validate the scheme with representative applications.
arxiv:0812.1322
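in schematic form ( notation is mine, not taken from the paper ), the exact - exchange energy in a basis $\{w_i\}$ of maximally localized wannier functions reads

```latex
E_x \;=\; -\frac{1}{2} \sum_{i,j} \iint
\frac{w_i^*(\mathbf r)\, w_j(\mathbf r)\, w_j^*(\mathbf r')\, w_i(\mathbf r')}
     {|\mathbf r - \mathbf r'|}\, d\mathbf r\, d\mathbf r' .
```

because each $w_i$ is exponentially localized, pairs $( i, j )$ whose supports do not overlap contribute negligibly and can be screened out in advance, which is what makes the linear scaling with system size plausible.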
everyday mathematics is a pre - k and elementary school mathematics curriculum, developed by the university of chicago school mathematics project ( not to be confused with the university of chicago school of mathematics ). the program, now published by mcgraw - hill education, has sparked debate. = = company history = = everyday mathematics curriculum was developed by the university of chicago school math project ( or ucsmp ) which was founded in 1983. work on it started in the summer of 1985. the 1st edition was released in 1998 and the 2nd in 2002. a third edition was released in 2007 and a fourth in 2014 - 2015. a new one was released in 2020, dropping pre - k. for pre - k, schools use a 2012 pre - k version. = = curriculum structure = = below is an outline of the components of em as they are generally seen throughout the curriculum. lessons a typical lesson outlined in one of the teacher ' s manuals includes three parts teaching the lesson — provides main instructional activities for the lesson. ongoing learning and practice — supports previously introduced concepts and skills ; essential for maintaining skills. differentiation options — includes options for supporting the needs of all students ; usually an extension of part 1, teaching the lesson. daily routines every day, there are certain things that each em lesson requires the student to do routinely. these components can be dispersed throughout the day or they can be part of the main math lesson. math messages — these are problems, displayed in a manner chosen by the teacher, that students complete before the lesson and then discuss as an opener to the main lesson. mental math and reflexes — these are brief ( no longer than 5 min ) sessions " … designed to strengthen children ' s number sense and to review and advance essential basic skills … " ( program components 2003 ). math boxes — these are pages intended to have students routinely practice problems independently.
home links — everyday homework is sent home. they are called home links. they are meant to reinforce instruction as well as connect home to the work at school. supplemental aspects beyond the components already listed, there are supplemental resources to the program. the two most common are games and explorations. games — " … everyday mathematics sees games as enjoyable ways to practice number skills, especially those that help children develop fact power … " ( program components 2003 ). therefore, authors of the series have interwoven games throughout daily lessons and activities. = = scientific support for the curriculum = = what works clearinghouse ( or wwc ) reviewed the evidence in support of the everyday mathematics program. of the 61 pieces
https://en.wikipedia.org/wiki/Everyday_Mathematics
a graph g is $\xi$ - nearly planar if it can be embedded in the sphere so that each of its edges is crossed at most $\xi$ times. the family of $\xi$ - nearly planar graphs widely extends the notion of planarity. we introduce an alternative parameterized graph family extending the notion of planarity, the $\lambda$ - flat graphs, this time defined as powers of plane graphs in regard to a novel notion of distance, the wall - by - wall distance. we show that the two parameterized graph classes are parametrically equivalent.
arxiv:1311.0137
several benchmark datasets for visual tracking research have been proposed in recent years. despite their usefulness, whether they are sufficient for understanding and diagnosing the strengths and weaknesses of different trackers remains questionable. to address this issue, we propose a framework by breaking a tracker down into five constituent parts, namely, motion model, feature extractor, observation model, model updater, and ensemble post - processor. we then conduct ablative experiments on each component to study how it affects the overall result. surprisingly, our findings are at odds with some common beliefs in the visual tracking research community. we find that the feature extractor plays the most important role in a tracker. on the other hand, although the observation model is the focus of many studies, we find that it often brings no significant improvement. moreover, the motion model and model updater contain many details that could affect the result. also, the ensemble post - processor can improve the result substantially when the constituent trackers have high diversity. based on our findings, we put together some very elementary building blocks to give a basic tracker which is competitive in performance to the state - of - the - art trackers. we believe our framework can provide a solid baseline when conducting controlled experiments for visual tracking research.
arxiv:1504.06055
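the five - part decomposition above can be expressed as a plug - in interface, so that each component can be swapped independently in an ablation. the component names follow the abstract; the class and its callable signatures are a hypothetical sketch, not the paper's actual api.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModularTracker:
    """A tracker split into the five constituent parts named above.

    Each field is a plain callable so components can be replaced one at
    a time for controlled experiments. Signatures are illustrative.
    """
    motion_model: Callable       # (frame, prev_box) -> candidate boxes
    feature_extractor: Callable  # (frame, box) -> feature vector
    observation_model: Callable  # (features) -> score
    model_updater: Callable      # (obs_model, features, score) -> None
    ensemble_post: Callable      # (list of boxes) -> final box

    def track(self, frame, prev_box):
        candidates = self.motion_model(frame, prev_box)
        scored = [(self.observation_model(self.feature_extractor(frame, b)), b)
                  for b in candidates]
        score, best = max(scored)
        # Online update of the observation model with the chosen sample.
        self.model_updater(self.observation_model,
                           self.feature_extractor(frame, best), score)
        return self.ensemble_post([best])
```

with this shape, an ablation on ( say ) the feature extractor amounts to constructing two trackers that differ in that one field only.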
natural language generation ( nlg ) models have emerged as a focal point of research within natural language processing ( nlp ), exhibiting remarkable performance in tasks such as text composition and dialogue generation. however, their intricate architectures and extensive model parameters pose significant challenges to interpretability, limiting their applicability in high - stakes decision - making scenarios. to address this issue, human - computer interaction ( hci ) and visualization techniques offer promising avenues to enhance the transparency and usability of nlg models by making their decision - making processes more interpretable. in this paper, we provide a comprehensive investigation into the roles, limitations, and impact of hci and visualization in facilitating human understanding and control over nlg systems. we introduce a taxonomy of interaction methods and visualization techniques, categorizing three major research domains and their corresponding six key tasks in the application of nlg models. finally, we summarize the shortcomings in the existing work and investigate the key challenges and emerging opportunities in the era of large language models ( llms ).
arxiv:2410.08723
maintaining situational awareness of what is happening within a network is challenging, not least because the behaviour happens within computers and communications networks, but also because data traffic speeds and volumes are beyond human ability to process. visualisation is widely used to present information about the dynamics of network traffic. although it provides operators with an overall view and specific information about particular traffic or attacks on the network, it often fails to represent the events in an understandable way. visualisations require visual attention and so are not well suited to continuous monitoring scenarios in which network administrators must carry out other tasks. situational awareness is critical and essential for decision - making in the domain of computer network monitoring, where it is vital to be able to identify and recognize network environment behaviours. here we present sonstar ( sonification of networks for situational awareness ), a real - time sonification system to be used in the monitoring of computer networks to support the situational awareness of network administrators. sonstar provides an auditory representation of all the tcp / ip protocol traffic within a network based on the different traffic flows between network hosts. sonstar raises situational awareness levels for computer network defence by allowing operators to achieve better understanding and performance while imposing less workload compared to visual techniques. sonstar identifies the features of network traffic flows by inspecting the status flags of tcp / ip packet headers and mapping traffic events to recorded sounds to generate a soundscape representing the real - time status of the network traffic environment. listening to the soundscape allows the administrator to recognise anomalous behaviour quickly and without having to continuously watch a computer screen.
arxiv:1712.07029
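the flag - inspection step described above can be sketched as a lookup from a set of tcp flags to a named sound event. the flag combinations and sound - file names here are illustrative assumptions in the spirit of sonstar, not the system's actual mapping.

```python
# Hypothetical mapping from TCP header flag combinations to sound
# events. A real deployment would cover more states (retransmissions,
# half-open scans, etc.) and trigger audio playback per event.
SOUND_MAP = {
    frozenset({"SYN"}): "connection_attempt.wav",
    frozenset({"SYN", "ACK"}): "connection_accepted.wav",
    frozenset({"FIN", "ACK"}): "connection_closing.wav",
    frozenset({"RST"}): "connection_refused.wav",
}

def sound_for_packet(flags):
    """Return the sound event for a set of TCP flags (None if unmapped)."""
    return SOUND_MAP.get(frozenset(flags))
```

a burst of "connection_refused" sounds from many ports, for example, becomes audible as a distinctive texture in the soundscape, flagging a likely port scan without any screen - watching.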
the domains of mesh functions are strict subsets of the underlying space of continuous independent variables. spaces of partial maps between topological spaces admit topologies which do not depend on any metric. such topologies geometrically generalize the usual numerical analysis definitions of convergence.
arxiv:1909.11183
we study the b - model chiral ring of calabi - yau hypersurfaces in batyrev ' s mirror construction. the main result is an explicit description of a subring of the chiral ring of semiample regular ( transversal to torus orbits ) calabi - yau hypersurfaces. this subring includes the marginal operators and contains all information about the correlation functions used by physicists. computation of the chiral ring passes through a description of the cohomology of semiample hypersurfaces. here, we develop techniques for calculating the cohomology of resolutions.
arxiv:math/0010318
in the paper, we obtain an expression for a two - loop master - diagram by using the mellin - barnes transformation. in the two - dimensional case we managed to factorize the answer and write it as a bilinear combination of hypergeometric functions ${}_3F_2$.
arxiv:2303.09203
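for reference, ${}_3F_2$ here is the standard generalized hypergeometric series, with pochhammer symbol $(a)_n = a(a+1)\cdots(a+n-1)$:

```latex
{}_3F_2\!\left(\begin{matrix} a_1,\, a_2,\, a_3 \\ b_1,\, b_2 \end{matrix};\, z\right)
= \sum_{n=0}^{\infty} \frac{(a_1)_n (a_2)_n (a_3)_n}{(b_1)_n (b_2)_n}\, \frac{z^n}{n!} .
```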