| text | source |
|---|---|
Our work is part of the general problem of the stability of ad hoc networks. Several works have addressed this problem; among them is the modelling of an ad hoc network as a graph, by which the coherence problem of an ad hoc network can be reduced to a frequency-allocation problem. We study a new class of graphs, the fat-extended P4 graphs, and we give a polynomial-time algorithm to compute the Grundy number of the graphs in this class. This result implies that the Grundy number can be found in polynomial time for many graphs. | arxiv:1401.7826 |
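The Grundy number of a graph is the largest number of colors that greedy (first-fit) coloring can be forced to use over some vertex ordering. A brute-force sketch for intuition only (exponential in the number of vertices; the abstract's contribution is precisely a polynomial-time algorithm for its graph class):

```python
from itertools import permutations

def first_fit_colors(adj, order):
    """Greedy (first-fit) coloring along the given vertex order;
    returns the number of colors used."""
    color = {}
    for v in order:
        used = {color[u] for u in adj[v] if u in color}
        c = 1
        while c in used:
            c += 1
        color[v] = c
    return max(color.values())

def grundy_number(adj):
    """Maximum number of colors first-fit can be forced to use,
    taken over all vertex orderings (brute force)."""
    verts = list(adj)
    return max(first_fit_colors(adj, p) for p in permutations(verts))

# P4 (path a-b-c-d): the Grundy number is 3, achieved e.g. by the
# ordering a, d, c, b (a=1, d=1, c=2, b=3).
p4 = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd'], 'd': ['c']}
print(grundy_number(p4))  # 3
```

The example graph and the brute-force search are illustrative; they are not the paper's algorithm.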
We report computations of the production cross-section of longitudinal electroweak and Higgs boson pairs within effective field theory for the electroweak sector (including the Higgs). We have recently reported theoretical studies of gauge boson-gauge boson and Higgs-Higgs resonance production with various quantum numbers. We are now focusing on gauge boson-Higgs boson two-body axial-vector resonances and show a typical cross section. Finally, we point out that photon-photon production has also been studied (in $e^-e^+$ as well as $pp$ machines), as this is a very clean process allowing access to scalar and tensor resonances. | arxiv:1710.02481 |
One-stage object detectors such as SSD or YOLO have already shown promising accuracy with a small memory footprint and fast speed. However, it is widely recognized that one-stage detectors have difficulty in detecting small objects, while they are competitive with two-stage methods on large objects. In this paper, we investigate how to alleviate this problem, starting from the SSD framework. Due to its pyramidal design, the lower layer that is responsible for small objects lacks strong semantics (e.g. contextual information). We address this problem by introducing a feature-combining module that spreads out the strong semantics in a top-down manner. Our final model, the StairNet detector, unifies the multi-scale representations and semantic distribution effectively. Experiments on the PASCAL VOC 2007 and PASCAL VOC 2012 datasets demonstrate that StairNet significantly improves on the weakness of SSD and outperforms the other state-of-the-art one-stage detectors. | arxiv:1709.05788 |
1920: Thoralf Skolem corrected Leopold Löwenheim's proof of what is now called the downward Löwenheim–Skolem theorem, leading to Skolem's paradox discussed in 1922, namely the existence of countable models of ZF, making infinite cardinalities a relative property. 1922: Proof by Abraham Fraenkel that the axiom of choice cannot be proved from the axioms of Zermelo set theory with urelements. 1931: Publication of Gödel's incompleteness theorems, showing that essential aspects of Hilbert's program could not be attained. It showed how to construct, for any sufficiently powerful and consistent recursively axiomatizable system – such as necessary to axiomatize the elementary theory of arithmetic on the (infinite) set of natural numbers – a statement that formally expresses its own unprovability, which he then proved equivalent to the claim of consistency of the theory; so that (assuming the consistency as true), the system is not powerful enough for proving its own consistency, let alone that a simpler system could do the job. It thus became clear that the notion of mathematical truth cannot be completely determined and reduced to a purely formal system as envisaged in Hilbert's program. This dealt a final blow to the heart of Hilbert's program, the hope that consistency could be established by finitistic means (it was never made clear exactly which axioms were the "finitistic" ones, but whatever axiomatic system was being referred to, it was a 'weaker' system than the system whose consistency it was supposed to prove). 1936: Alfred Tarski proved his truth undefinability theorem. 1936: Alan Turing proved that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist. 1938: Gödel proved the consistency of the axiom of choice and of the generalized continuum hypothesis.
1936–1937: Alonzo Church and Alan Turing, respectively, published independent papers showing that a general solution to the Entscheidungsproblem is impossible: the universal validity of statements in first-order logic is not decidable (it is only semi-decidable, as given by the completeness theorem). 1955: Pyotr Novikov showed that there exists a finitely presented group G such that the word problem for G is undecidable. 1963: Paul Cohen showed that the continuum hypothesis is unprovable from ZFC. Cohen's proof developed the method of forcing, which is | https://en.wikipedia.org/wiki/Foundations_of_mathematics |
This paper proposes a low-cost and highly accurate ECG-monitoring system intended for personalized early arrhythmia detection with wearable mobile sensors. Earlier supervised approaches for personalized ECG monitoring require both abnormal and normal heartbeats for the training of the dedicated classifier. However, in a real-world scenario where the personalized algorithm is embedded in a wearable device, such training data is not available for healthy people with no cardiac disorder history. In this study, (i) we propose a null space analysis on the healthy signal space obtained via sparse dictionary learning, and investigate how a simple null space projection or, alternatively, regularized least squares-based classification methods can reduce the computational complexity without sacrificing the detection accuracy, when compared to sparse representation-based classification. (ii) Then we introduce a sparse representation-based domain adaptation technique in order to project other existing users' abnormal and normal signals onto the new user's signal space, enabling us to train the dedicated classifier without having any abnormal heartbeat of the new user. Therefore, zero-shot learning can be achieved without the need for synthetic abnormal heartbeat generation. An extensive set of experiments performed on the benchmark MIT-BIH ECG dataset shows that when this domain adaptation-based training data generator is used with a simple 1-D CNN classifier, the method outperforms the prior work by a significant margin. (iii) Then, by combining (i) and (ii), we propose an ensemble classifier that further improves the performance. This approach for zero-shot arrhythmia detection achieves an average accuracy level of 98.2% and an F1-score of 92.8%. Finally, a personalized energy-efficient ECG monitoring scheme is proposed using the above-mentioned innovations. | arxiv:2207.07089 |
We investigate the elliptic umbilic canonical integral with an approach based on a series expansion of its initial distribution shifted to the caustic points. An absolutely convergent integral representation for the elliptic umbilic is obtained. Using it, we find particular values of the elliptic umbilic in terms of 2F2 hypergeometric functions. We also derive an integral over the product of a Gaussian and two Airy functions in terms of Bessel functions of fractional orders. Some other corollaries, including special values of the 3F2 hypergeometric function and relations for the Airy polynomials, are also discussed. | arxiv:2011.03672 |
The possible contributions of the various classes of extragalactic sources (including, in addition to the canonical radio sources, GHz peaked spectrum sources, advection-dominated sources, starburst galaxies, and high-redshift proto-spheroidal galaxies) to the arcminute-scale fluctuations measured by the CBI, BIMA, and ACBAR experiments are discussed. At 30 GHz, fluctuations due to radio sources undetected by ancillary low-frequency surveys may be higher than estimated by the CBI and BIMA groups. High-redshift dusty galaxies, whose fluctuations may be strongly enhanced by the effect of clustering, could contribute to the BIMA excess signal, and dominate at 150 GHz (the ACBAR frequency). Moreover, in the present data situation, the dust emission of these high-redshift sources sets an unavoidable limit to the detection of primordial CMB anisotropies at high multipoles, even at frequencies as low as $\simeq 30$ GHz. It is concluded that the possibility that the excess power at high multipoles is dominated by unsubtracted extragalactic sources cannot be ruled out. On the other hand, there is room for a contribution from the Sunyaev-Zeldovich effect within clusters of galaxies, with a density fluctuation amplitude parameter $\sigma_8$ consistent with the values preferred by current data. | arxiv:astro-ph/0410605 |
Experiential learning (ExL) is the process of learning through experience, or more specifically "learning through reflection on doing". In this paper, we propose a simulation of these experiences in augmented reality (AR), addressing the problem of language learning. Such systems provide an excellent setting to support "adaptive guidance", in a digital form, within a real environment. Adaptive guidance allows the instructions and learning content to be customised for the individual learner, thus creating a unique learning experience. We developed an adaptive guidance AR system for language learning, which we call ARiGATō (Augmented Reality Instructional Guidance & Tailored Omniverse), offering immediate assistance, resources specific to the learner's needs, manipulation of these resources, and relevant feedback. Considering guidance, we employ this prototype to investigate the effect of the amount of guidance (fixed vs. adaptive-amount) and the type of guidance (fixed vs. adaptive-associations) on the engagement and consequently the learning outcomes of language learning in an AR environment. The results for the amount of guidance show that, compared to the adaptive-amount group, the fixed-amount of guidance group scored better in the immediate and delayed (after 7 days) recall tests. However, this group also invested significantly higher mental effort to complete the task. The results for the type of guidance show that the adaptive-associations group outperforms the fixed-associations group in the immediate and delayed (after 7 days) recall tests and in learning efficiency. The adaptive-associations group also showed significantly lower mental effort and spent less time completing the task. | arxiv:2207.00798 |
We prove that a general determinantal hypersurface of dimension 3 is nodal. Moreover, in terms of Chern classes associated with bundle morphisms, we derive a formula for the intersection homology Euler characteristic of a general determinantal hypersurface. | arxiv:1912.00929 |
Diamond is an excellent band insulator. However, boron (B) doping is known to induce superconductivity. We present two interesting effects in superconducting B-doped diamond (BDD) thin films: (i) the Wohlleben effect (paramagnetic Meissner effect, PME) and (ii) a low-field spin-glass-like susceptibility anomaly. We have performed electrical and magnetic measurements (under pressure in one sample) at dopings of (1.4, 2.6 and 3.6) x 10^21 cm^-3, in a temperature range of 2-10 K. PME, a low-field anomaly in inhomogeneous superconductors, could arise from flux trapping, flux compression, or for a non-trivial reason such as emergent Josephson pi junctions. The joint occurrence of PME and spin-glass-type anomalies points to the possible emergence of pi junctions. BDD is a disordered s-wave superconductor, and pi junctions could be produced by spin-flip scattering of spin-half moments when present at weak superconducting regions (Bulaevski et al. 1978). A frustrated network of 0 and pi junctions will result (Kusmartsev et al. 1992) in a distribution of spontaneous equilibrium supercurrents, a phase glass state. Anderson-localized spin-half spinons embedded in a metallic fluid (two-fluid model of Bhatt et al.) could create pi junctions by spin-flip scattering. Our findings are consistent with the presence of pi junctions, invoked to explain their (Bhattacharyya et al.) observation of a certain resistance anomaly in BDD. | arxiv:2006.12775 |
Reading comprehension is a challenging task in natural language processing and requires a set of skills to be solved. While current approaches focus on solving the task as a whole, in this paper we propose to use a neural network 'skill' transfer approach. We transfer knowledge from several lower-level language tasks (skills), including textual entailment, named entity recognition, paraphrase detection and question type classification, into the reading comprehension model. We conduct an empirical evaluation and show that transferring language skill knowledge leads to significant improvements for the task with much fewer steps compared to the baseline model. We also show that the skill transfer approach is effective even with small amounts of training data. Another finding of this work is that using token-wise deep label supervision for text classification improves the performance of transfer learning. | arxiv:1711.03754 |
A study by W. R. Magro and D. M. Ceperley [Phys. Rev. Lett. 73, 826 (1994)] has shown that the ground state of the two-dimensional fluid of charged bosons with logarithmic interactions is not Bose-condensed, but exhibits algebraic off-diagonal order in the single-particle density matrix $\rho(r)$. We use a hydrodynamic Hamiltonian expressed in terms of density and phase operators, in combination with an $f$-sum rule on the superfluid fraction, to reproduce these results and to extend the evaluation of the density matrix to finite temperature $T$. This approach allows us to treat the liquid as a superfluid in the absence of a condensate. We find that (i) the off-diagonal order arises from the correlations between phase fluctuations; and (ii) the exponent in the power-law decay of $\rho(r)$ is determined by the superfluid density $n_s(T)$. We also find that the plasmon gap in the single-particle energy spectrum at long wavelengths decreases with increasing $T$ and closes at the critical temperature for the onset of superfluidity. | arxiv:cond-mat/0203594 |
In this paper we give a short proof of the $\ell^p$-improving property of the average operator along the square integers and more general quadratic polynomials. Moreover, we obtain a similar result for some higher-degree polynomials. We also show an elementary proof of the $\ell^p$-improving property of the average operator along the primes. | arxiv:1910.12448 |
Modeling stochastic traffic behaviors at the microscopic level, such as car-following and lane-changing, is a crucial task for understanding the interactions between individual vehicles in traffic streams. Leveraging a recently developed theory named physics regularized Gaussian process (PRGP), this study presents a stochastic microscopic traffic model that can capture the randomness and measurement errors in the real world. Physical knowledge from classical car-following models is converted into physics regularizers, in the form of shadow Gaussian processes (GPs), of a multivariate PRGP for improving the modeling accuracy. More specifically, a Bayesian inference algorithm is developed to estimate the mean and kernel of the GPs, and an enhanced latent force model is formulated to encode physical knowledge into stochastic processes. Also, based on the posterior regularization inference framework, an efficient stochastic optimization algorithm is developed to maximize the evidence lower bound of the system likelihood. To evaluate the performance of the proposed models, this study conducts empirical studies on real-world vehicle trajectories from the NGSIM dataset. Since one unique feature of the proposed framework is the capability of capturing both car-following and lane-changing behaviors with one single model, numerical tests are carried out with two separate datasets, one of which contains lane-changing maneuvers and the other of which does not. The results show that the proposed method outperforms the previous influential methods in estimation precision. | arxiv:2007.10109 |
We address the general question of how the molecular weight dependence of chain dynamics in unentangled polymers is modified by blending. By dielectric spectroscopy we measure the normal mode relaxation of polyisoprene in blends with a slow matrix of poly(tert-butylstyrene). Unentangled polyisoprene in the blend exhibits strong deviations from Rouse scaling, approaching 'entangled-like' behavior at low temperatures in concomitance with the increase of the dynamic asymmetry in the blend. The obtained results are discussed in the framework of the generalized Langevin equation formalism. On this basis, a non-trivial relationship between the molecular weight dependence of the longest chain relaxation time and the nonexponentiality of the corresponding Rouse correlator is found. This result is confirmed by molecular dynamics simulations. | arxiv:1110.1247 |
The bilevel optimization problem is a hierarchical optimization problem with two agents, a leader and a follower. The leader makes their own decision first, and the follower makes the best choice accordingly. The leader knows the information of the follower, and the goal of the problem is to find the optimal solution by considering the reactions of the follower from the leader's point of view. For the bilevel optimization problem, there are no general and efficient algorithms or commercial solvers that obtain an optimal solution, and it is very difficult to get a good solution even for a simple problem. In this paper, we propose a deep learning approach using graph neural networks to solve the bilevel knapsack problem. We train the model to predict the leader's solution and use it to transform the hierarchical optimization problem into a single-level optimization problem to get the solution. Our model found feasible solutions about 500 times faster than the exact algorithm, with a $1.7\%$ optimality gap. Also, our model performed well on problems of different sizes from the size it was trained on. | arxiv:2211.13436 |
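The leader-then-follower structure can be made concrete with a toy interdiction-style bilevel knapsack, solved here by brute force. The instance and the enumeration are hypothetical illustrations of the hierarchy (the paper instead predicts the leader's solution with a GNN); both loops are exponential and only meant for tiny examples:

```python
from itertools import combinations

def follower_best(values, weights, capacity, blocked):
    """Follower's exact knapsack over the items the leader left available."""
    items = [i for i in range(len(values)) if i not in blocked]
    best = 0
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            if sum(weights[i] for i in subset) <= capacity:
                best = max(best, sum(values[i] for i in subset))
    return best

def leader_best(values, weights, capacity, budget):
    """Leader blocks at most `budget` items, anticipating the follower's
    optimal reaction, to minimize the follower's profit."""
    n = len(values)
    best = None
    for r in range(budget + 1):
        for blocked in combinations(range(n), r):
            profit = follower_best(values, weights, capacity, set(blocked))
            if best is None or profit < best:
                best = profit
    return best

values, weights = [5, 4, 3], [2, 3, 1]
# Unhindered, the follower packs items 0 and 2 for profit 8; blocking
# item 0 limits the follower to profit 4.
print(leader_best(values, weights, capacity=3, budget=1))  # 4
```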
This paper reviews the NTIRE 2022 challenge on super-resolution and quality enhancement of compressed video. In this challenge, we proposed the LDV 2.0 dataset, which includes the LDV dataset (240 videos) and 95 additional videos. This challenge includes three tracks. Track 1 aims at enhancing the videos compressed by HEVC at a fixed QP. Tracks 2 and 3 target both the super-resolution and quality enhancement of HEVC-compressed video; they require x2 and x4 super-resolution, respectively. The three tracks attracted more than 600 registrations in total. In the test phase, 8 teams, 8 teams and 12 teams submitted their final results to Tracks 1, 2 and 3, respectively. The proposed methods and solutions gauge the state of the art of super-resolution and quality enhancement of compressed video. The proposed LDV 2.0 dataset is available at https://github.com/renyang-home/ldv_dataset. The homepage of this challenge (including open-sourced codes) is at https://github.com/renyang-home/ntire22_venh_sr. | arxiv:2204.09314 |
The nonparametric problem of detecting the existence of an anomalous interval over a one-dimensional line network is studied. Nodes corresponding to an anomalous interval (if it exists) receive samples generated by a distribution q, which is different from the distribution p that generates samples for the other nodes. If an anomalous interval does not exist, then all nodes receive samples generated by p. It is assumed that the distributions p and q are arbitrary and unknown. In order to detect whether an anomalous interval exists, a test is built based on mean embeddings of distributions into a reproducing kernel Hilbert space (RKHS) and the metric of maximum mean discrepancy (MMD). It is shown that as the network size n goes to infinity, if the minimum length of candidate anomalous intervals is larger than a threshold of order O(log n), the proposed test is asymptotically successful, i.e., the probability of detection error approaches zero asymptotically. An efficient algorithm to perform the test with a substantial reduction in computational complexity is proposed, and is shown to be asymptotically successful if the condition on the minimum length of candidate anomalous intervals is satisfied. Numerical results are provided, which are consistent with the theoretical results. | arxiv:1404.0298 |
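The MMD statistic at the heart of such a test can be sketched with a Gaussian kernel and the standard unbiased estimator. This is a generic two-sample sketch, not the paper's interval-scanning procedure; the bandwidth and the toy distributions are assumptions:

```python
import math
import random

def gaussian_kernel(x, y, bandwidth=1.0):
    return math.exp(-(x - y) ** 2 / (2 * bandwidth ** 2))

def mmd_squared(xs, ys, bandwidth=1.0):
    """Unbiased estimate of MMD^2 between samples xs ~ p and ys ~ q."""
    m, n = len(xs), len(ys)
    k = gaussian_kernel
    xx = sum(k(a, b, bandwidth) for i, a in enumerate(xs)
             for j, b in enumerate(xs) if i != j) / (m * (m - 1))
    yy = sum(k(a, b, bandwidth) for i, a in enumerate(ys)
             for j, b in enumerate(ys) if i != j) / (n * (n - 1))
    xy = sum(k(a, b, bandwidth) for a in xs for b in ys) / (m * n)
    return xx + yy - 2 * xy

random.seed(0)
p_samples = [random.gauss(0, 1) for _ in range(200)]
q_samples = [random.gauss(2, 1) for _ in range(200)]
# MMD^2 is near zero for two samples from the same p, and clearly
# positive when one sample comes from a shifted q.
same = mmd_squared(p_samples[:100], p_samples[100:])
diff = mmd_squared(p_samples[:100], q_samples[:100])
print(same < diff)  # True
```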
Motivated by studies on fully discrete numerical schemes for linear hyperbolic conservation laws, we present a framework for analyzing the strong stability of explicit Runge-Kutta (RK) time discretizations for semi-negative autonomous linear systems. The analysis is based on the energy method and can be performed with the aid of a computer. Strong stability of various RK methods, including a sixteen-stage embedded pair of order nine and eight, has been examined under this framework. Based on numerous numerical observations, we further characterize the features of strongly stable schemes. A necessary and sufficient condition is given for the strong stability of RK methods of odd linear order. | arxiv:1811.10680 |
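For the scalar test problem $u' = \lambda u$ with $\mathrm{Re}\,\lambda \le 0$, strong stability of an explicit RK method reduces to checking $|R(h\lambda)| \le 1$ for its stability polynomial $R$. A sketch for the classical four-stage RK4 (chosen for familiarity; it is not one of the high-order methods examined in the paper):

```python
def rk4_stability(z):
    """Stability polynomial of classical RK4: R(z) = sum_{k=0..4} z^k / k!."""
    return 1 + z + z**2 / 2 + z**3 / 6 + z**4 / 24

# On the imaginary axis (the energy-conserving limit of a semi-negative
# system), |R(i*theta)|^2 = 1 - theta^6/72 + theta^8/576, so strong
# stability holds only for |theta| <= 2*sqrt(2) ~ 2.83: the step size
# must be limited accordingly.
print(abs(rk4_stability(0.5j)) <= 1.0)  # True: inside the stability region
print(abs(rk4_stability(3.0j)) <= 1.0)  # False: step too large
```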
A detailed study of the solar magnetic field is crucial to understand its generation, transport and reversals. The timing of the reversals may have implications for space weather, and thus identification of the temporal behavior of the critical surges that lead to the polar field reversals is important. We analyze the evolution of solar activity and magnetic flux transport in cycles 21--24. We identify critical surges of remnant flux that reach the Sun's poles and lead to the polar field reversals. We reexamine the polar field buildup and reversals in their causal relation to the Sun's low-latitude activity. We further identify the major remnant flux surges and their sources in the time-latitude aspect. We find that special characteristics of individual 11-year cycles are generally determined by the spatiotemporal organization of emergent magnetic flux and its unusual properties. We find a complicated restructuring of high-latitude magnetic fields in cycle 21. The global rearrangements of solar magnetic fields were caused by surges of trailing and leading polarities that occurred near the activity maximum. The decay of non-Joy and anti-Hale active regions resulted in the remnant flux surges that disturbed the usual order in magnetic flux transport. We finally show that the leading-polarity surges during cycle minima sometimes link consecutive cycles, and a collective effect of these surges may lead to secular changes in solar activity. The magnetic field from a Babcock--Leighton dynamo model generally agrees with these observations. | arxiv:2111.15585 |
(Abridged) Compact groups, with their high number densities, small velocity dispersions, and an interstellar medium that has not been fully processed, provide a local analog to the conditions of galaxy interactions in the earlier universe. The frequent and prolonged gravitational encounters that occur in compact groups affect the evolution of the constituent galaxies in a myriad of ways, for example gas processing and star formation. Recently, a statistically significant "gap" has been discovered in the mid-infrared IRAC colorspace of compact group galaxies. This gap is not seen in field samples and is a new example of how the compact group environment may affect the evolution of member galaxies. In order to investigate the origin and nature of this gap, we have compiled a sample of 49 compact groups. We find that a statistically significant deficit of galaxies in this gap region of IRAC colorspace persists in this sample, lending support to the hypothesis that the compact group environment inhibits moderate SSFRs. We note a curvature in the colorspace distribution, which is fully consistent with increasing dust temperature as the activity in a galaxy increases. This full sample of 49 compact groups allows us to subdivide the data according to the physical properties of the groups. An analysis of these subsamples indicates that neither projected physical diameter nor density shows a trend in colorspace within the values represented by this sample. We hypothesize that the apparent lack of a trend is due to the relatively small range of properties in this sample; thus, the relative influence of stochastic effects becomes dominant. We analyze spectral energy distributions of member galaxies as a function of their location in colorspace and find that galaxies in different regions of MIR colorspace contain dust with varying temperatures and/or PAH emission. | arxiv:1201.1287 |
We compute the spectral distortions of the cosmic microwave background (CMB) arising during the epoch of cosmological hydrogen recombination within the standard cosmological (concordance) model for frequencies in the range 1 GHz - 3500 GHz. We follow the evolution of the populations of the hydrogen levels, including states up to principal quantum number $n = 30$, in the redshift range $500 \leq z \leq 3500$. All angular momentum sub-states are treated individually, resulting in a total number of 465 hydrogen levels. The evolution of the matter temperature and the fraction of electrons coming from HeII are also included. We present a detailed discussion of the distortions arising from the main dipolar transitions, e.g. the Lyman and Balmer series, as well as the emission due to the two-photon decay of the hydrogen 2s level. Furthermore, we investigate the robustness of the results against changes in the number of shells considered. The resulting spectral distortions have a characteristic oscillatory behaviour, which might allow experimentalists to separate them from other backgrounds. The relative distortion of the spectrum exceeds a value of $10^{-7}$ at wavelengths longer than 21 cm. Our results also show the importance of a detailed follow-up of the angular momentum sub-states, and their effect on the amplitude of the lines. The effect on the residual electron fraction is only moderate, and mainly occurs at low redshifts. The CMB angular power spectrum is changed by less than 1%. Finally, our computations show that if the primordial radiation field is described by a pure blackbody, then there is no significant emission from any hydrogen transition at redshifts greater than $z \sim 2000$. This is in contrast to some earlier works, where the existence of a 'pre-recombination' peak was claimed. | arxiv:astro-ph/0607373 |
The acceleration theorem for Bloch electrons in a homogeneous external field is usually presented using quasiclassical arguments. In quantum mechanical versions, the Heisenberg equations of motion for an operator $\hat{\vec k}(t)$ are presented mostly without properly defining this operator. This leads to the surprising fact that the generally accepted version of the theorem is incorrect for the most natural definition of $\hat{\vec k}$. This operator is shown not to obey canonical commutation relations with the position operator. A similar result is shown for the phase operators defined via the Klein factors which take care of the change of particle number in the bosonization of the field operator in the description of interacting fermions in one dimension. The phase operators are also shown not to obey canonical commutation relations with the corresponding particle number operators. Implications of this fact are discussed for Tomonaga-Luttinger type models. | arxiv:cond-mat/0104360 |
Large language models (LLMs) excel in various tasks, including personalized recommendations. Existing evaluation methods often focus on rating prediction, relying on regression errors between actual and predicted ratings. However, user rating bias and item quality, two influential factors behind rating scores, can obscure personal preferences in user-item pair data. To address this, we introduce PerRecBench, disassociating the evaluation from these two factors and assessing recommendation techniques on capturing the personal preferences in a grouped ranking manner. We find that the LLM-based recommendation techniques that are generally good at rating prediction fail to identify users' favored and disfavored items when the user rating bias and item quality are eliminated by grouping users. With PerRecBench and 19 LLMs, we find that while larger models generally outperform smaller ones, they still struggle with personalized recommendation. Our findings reveal the superiority of pairwise and listwise ranking approaches over pointwise ranking, PerRecBench's low correlation with traditional regression metrics, the importance of user profiles, and the role of pretraining data distributions. We further explore three supervised fine-tuning strategies, finding that merging weights from single-format training is promising, but improving LLMs' understanding of user preferences remains an open research problem. Code and data are available at https://github.com/TamSiuhin/PerRecBench | arxiv:2501.13391 |
Let f(n) denote the smallest positive integer such that every set of $f(n)$ points in general position in the Euclidean plane contains a convex n-gon. In a seminal paper published in 1935, Erd\H{o}s and Szekeres proved that f(n) exists and provided an upper bound. In 1961, they also proved a lower bound, which they conjectured is optimal. Their bounds are: $2^{n-2} + 1 \leq f(n) \leq {2n-4 \choose n-2} + 1$. Since then, the upper bound has been improved by roughly a factor of 2, to $f(n) \leq {2n-5 \choose n-2} + 1$. In the current paper, we further improve the upper bound by proving that: $$\limsup\limits_{n \rightarrow \infty} \frac{f(n)}{{2n-5 \choose n-2}} \leq \frac{29}{32}$$ | arxiv:1505.07549 |
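The two quoted bounds can be tabulated directly for small n (the improved upper bound ${2n-5 \choose n-2} + 1$ is an asymptotic-regime bound; we start the table at n = 5, where it already exceeds the conjectured lower bound):

```python
from math import comb

def es_lower(n):
    """Erdos-Szekeres conjectured-optimal lower bound: 2^(n-2) + 1."""
    return 2 ** (n - 2) + 1

def es_upper(n):
    """Improved upper bound: C(2n-5, n-2) + 1."""
    return comb(2 * n - 5, n - 2) + 1

for n in range(5, 10):
    print(n, es_lower(n), es_upper(n))
# For n = 6: every set of 17 points is conjectured to contain a convex
# hexagon, and every set of 36 points provably does.
```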
Artificial intelligence and nanotechnology are promising areas for the future of humanity. While deep learning based computer vision has found applications in many fields, from medicine to automotive, its application in nanotechnology can open doors to new scientific discoveries. Can we apply AI to explore objects that our eyes can't see, such as nanoscale-sized objects? An AI platform to visualize nanoscale patterns learnt by a deep learning neural network can open new frontiers for nanotechnology. The objective of this paper is to develop a deep learning based visualization system for images of nanomaterials obtained by scanning electron microscope. This paper contributes an AI platform to enable any nanoscience researcher to use AI in the visual exploration of nanoscale morphologies of nanomaterials. This AI is developed by a technique of visualizing the intermediate activations of a convolutional autoencoder. In this method, a nanoscale specimen image is transformed into its feature representations by a convolutional neural network. The convolutional autoencoder is trained on a 100% SEM dataset, and then CNN visualization is applied. This AI generates various conceptual feature representations of the nanomaterial. While deep learning based image classification of SEM images is widely published in the literature, there are not many publications that have visualized deep neural networks of nanomaterials. There is a significant opportunity to gain insights from the learnings extracted by machine learning. This paper unlocks the potential of applying deep learning based visualization to electron microscopy to offer AI-extracted features and architectural patterns of various nanomaterials. This is a contribution to explainable AI for nanoscale objects. This paper contributes an open-source AI with reproducible results at https://sites.google.com/view/aifornanotechnology | arxiv:2201.00966 |
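The core of the visualization technique, reading out an intermediate feature map after a convolutional layer, can be sketched without any DL framework. This is a toy stand-in for the paper's convolutional autoencoder: the tiny image and hand-written edge filter are assumptions, not trained SEM filters:

```python
def conv2d_valid(image, kernel):
    """Single-channel 2-D valid cross-correlation followed by ReLU;
    the returned grid is the 'intermediate activation' (feature map)
    one would render as an image to visualize the layer."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            s = sum(image[r + i][c + j] * kernel[i][j]
                    for i in range(kh) for j in range(kw))
            row.append(max(0.0, s))  # ReLU, as in a CNN layer
        out.append(row)
    return out

# A vertical edge in the input lights up the matching column of the
# activation map, which is exactly what the visualization exposes.
image = [[0, 0, 1, 1]] * 4
kernel = [[-1, 1], [-1, 1]]
fmap = conv2d_valid(image, kernel)
print(fmap)
```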
three schemes, whose expressions are not too complex, are selected for the numerical integration of a system of stochastic differential equations in the stratonovich interpretation : the integration methods of heun, milstein, and derivative - free milstein. the strong ( path - wise ) convergence is studied for each method by comparing the final points after integrating with $ 2 ^ n $ and $ 2 ^ { n - 1 } $ time steps. we also compare the time that the computer takes to carry out the integration with each scheme. putting both things together, we conclude that, at least for our system, the heun method is by far the best performing one. | arxiv:1102.4401 |
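The stochastic Heun (predictor-corrector) scheme named above is short enough to sketch. This is an illustrative sketch rather than the authors' code: the function names and the scalar test equation are my own choices, and what is shown is the standard stochastic Heun method, which converges to the Stratonovich solution.

```python
import numpy as np

def heun_stratonovich(f, g, x0, t_end, n_steps, rng):
    """One sample path of dX = f(X) dt + g(X) o dW (Stratonovich)
    using the stochastic Heun predictor-corrector scheme."""
    dt = t_end / n_steps
    x = x0
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        # predictor: plain Euler step with the same Brownian increment
        xp = x + f(x) * dt + g(x) * dw
        # corrector: trapezoidal average of drift and diffusion
        x = x + 0.5 * (f(x) + f(xp)) * dt + 0.5 * (g(x) + g(xp)) * dw
    return x
```

The strong-convergence test described in the abstract would compare runs with $2^n$ and $2^{n-1}$ steps; note that such a comparison needs consistent Brownian increments (the coarse increments must be sums of pairs of fine ones), which this minimal sketch does not implement.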
intrusion detection systems ( ids ) have long been a hot topic in the cybersecurity community. in recent years, with the introduction of deep learning ( dl ) techniques, ids have made great progress due to their increasing generalizability. the rationale behind this is that by learning the underlying patterns of known system behaviors, ids detection can be generalized to intrusions that exploit zero - day vulnerabilities. in this survey, we refer to this type of ids as dl - based ids ( dl - ids ). from the perspective of dl, this survey systematically reviews all the stages of dl - ids, including data collection, log storage, log parsing, graph summarization, attack detection, and attack investigation. to accommodate current researchers, a section describing the publicly available benchmark datasets is included. this survey further discusses current challenges and potential future research directions, aiming to help researchers understand the basic ideas and visions of dl - ids research, as well as to motivate their research interests. | arxiv:2504.07839 |
rockets and satellites have previously observed small - scale alfv\'en waves inside large - scale downward field - aligned currents and numerical simulations have associated their formation with self - consistent magnetosphere - ionosphere coupling. the origin of these waves was previously attributed to ionospheric feedback instability ; however, we show that they arise in numerical experiments in which the instability is excluded. a new interpretation is proposed in which strong ionospheric depletion and associated current broadening ( a nonlinear steepening / wavebreaking process ) form magnetosphere - ionosphere waves inside a downward current region and these oscillations drive upgoing inertial alfv\'en waves in the overlying plasma. the resulting waves are governed by characteristic periods, which are a good match to previously observed periods for reasonable assumed conditions. meanwhile, wavelengths perpendicular to the magnetic field initially map to an ionospheric scale comparable to the electron inertial length for the low - altitude magnetosphere, but become shorter with time due to frequency - based phase mixing of boundary waves ( a new manifestation of phase mixing ). under suitable conditions, these could act as seeds for the ionospheric feedback instability. | arxiv:1302.3158 |
we show how the phenomenon of factorization in a quantum many body system is of collective nature. to this aim we study the quantum discord $ q $ in the one dimensional xy model in a transverse field. we analyze the behavior of $ q $ at both the critical point and at the non critical factorizing field. the factorization is found to be governed by an exponential scaling law for $ q $. we also address the thermal effects fanning out from the anomalies occurring at zero temperature. close to the quantum phase transition, $ q $ exhibits a finite - temperature crossover with universal scaling behavior, while the factorization phenomenon results in a non trivial pattern of correlations present at low temperature. | arxiv:1012.4270 |
point estimators for the shearing of galaxy images induced by gravitational lensing involve a complex inverse problem in the presence of noise, pixelization, and model uncertainties. we present a probabilistic forward modeling approach to gravitational lensing inference that has the potential to mitigate the biased inferences in most common point estimators and is practical for upcoming lensing surveys. the first part of our statistical framework requires specification of a likelihood function for the pixel data in an imaging survey given parameterized models for the galaxies in the images. we derive the lensing shear posterior by marginalizing over all intrinsic galaxy properties that contribute to the pixel data ( i. e., not limited to galaxy ellipticities ) and learn the distributions for the intrinsic galaxy properties via hierarchical inference with a suitably flexible conditional probability distribution specification. we use importance sampling to separate the modeling of small imaging areas from the global shear inference, thereby rendering our algorithm computationally tractable for large surveys. with simple numerical examples we demonstrate the improvements in accuracy from our importance sampling approach, as well as the significance of the conditional distribution specification for the intrinsic galaxy properties when the data are generated from an unknown number of distinct galaxy populations with different morphological characteristics. | arxiv:1411.2608 |
we examine the gravitational collapse of sphaleron type configurations in einstein - yang - mills - higgs theory. working in spherical symmetry, we investigate the critical behavior in this model. we provide evidence that for various initial configurations, there can be three different critical transitions between possible endstates with different critical solutions sitting on the threshold between these outcomes. in addition, we show that within the dispersive and black hole regimes, there are new possible endstates, namely a stable, regular sphaleron and a stable, hairy black hole. | arxiv:gr-qc/0212015 |
the last decade has seen major progress in studies of elementary mechanisms of deformation in amorphous materials. here, we start with a review of physically - based theories of plasticity, going back to the identification of " shear - transformations " as early as the 1970s. we show how constructive criticism of the theoretical models permits us to formulate questions concerning the role of structural disorder, mechanical noise, and long - ranged elastic interactions. these questions provide the necessary context to understand what has motivated recent numerical studies. we then summarize their results, show why they had to focus on athermal systems, and point out the outstanding questions. | arxiv:1009.5774 |
large language models ( llms ) offer a range of new possibilities, including adapting the text to different audiences and their reading needs. but how well do they adapt? we evaluate the readability of answers generated by four state - of - the - art llms ( commercial and open - source ) to science questions when prompted to target different age groups and education levels. to assess the adaptability of llms to diverse audiences, we compare the readability scores of the generated responses against the recommended comprehension level of each age and education group. we find large variations in the readability of the answers by different llms. our results suggest llm answers need to be better adapted to the intended audience demographics to be more comprehensible. they underline the importance of enhancing the adaptability of llms in education settings to cater to diverse age and education levels. overall, current llms have set readability ranges and do not adapt well to different audiences, even when prompted. that limits their potential for educational purposes. | arxiv:2312.02065 |
binary interactions have been proposed to explain a variety of circumstellar structures seen around evolved stars, including asymptotic giant branch ( agb ) stars and planetary nebulae. studies resolving the circumstellar envelopes of agb stars have revealed spirals, discs and bipolar outflows, with shaping attributed to interactions with a companion. for the first time, we have used a combined chemical and dynamical analysis to reveal a highly eccentric and long - period orbit for w aquilae, a binary system containing an agb star and a main sequence companion. our results are based on anisotropic SiN emission, the first detections of NS and SiC towards an s - type star, and density structures observed in the co emission. these features are all interpreted as having formed during periastron interactions. our astrochemistry - based method can yield stringent constraints on the orbital parameters of long - period binaries containing agb stars, and will be applicable to other systems. | arxiv:2407.16979 |
we provide sharp lower and upper bounds for the gelfand widths of $ \ ell _ p $ - balls in the $ n $ - dimensional $ \ ell _ q ^ n $ - space for $ 0 < p \ leq 1 $ and $ p < q \ leq 2 $. such estimates are highly relevant to the novel theory of compressive sensing, and our proofs rely on methods from this area. | arxiv:1002.0672 |
given a del pezzo surface of degree d between 1 and 6, possibly with rational double points, we construct a " tautological " holomorphic g - bundle over x, where g is a reductive group which is an appropriate conformal form of the simply connected complex linear group whose coroot lattice is isomorphic to the primitive cohomology of the minimal resolution of x. for example, in case d = 3 and x is a smooth cubic surface, the rank 27 vector bundle over x associated to the g - bundle constructed above and the standard 27 - dimensional representation of e _ 6 is a direct sum of the line bundles associated to the 27 lines on x. we also discuss the restriction of the g - bundle to smooth hyperplane sections. | arxiv:math/0009155 |
despite the long history of modelling human mobility, we continue to lack a highly accurate approach with low data requirements for predicting mobility patterns in cities. here, we present a population - weighted opportunities model without any adjustable parameters to capture the underlying driving force accounting for human mobility patterns at the city scale. we use various mobility data collected from a number of cities with different characteristics to demonstrate the predictive power of our model. we find that insofar as the spatial distribution of population is available, our model offers universal prediction of mobility patterns in good agreement with real observations, including distance distribution, destination travel constraints and flux. in contrast, the models that succeed in modelling mobility patterns in countries are not applicable in cities, which suggests that there is a diversity of human mobility at different spatial scales. our model has potential applications in many fields relevant to mobility behaviour in cities, without relying on previous mobility measurements. | arxiv:1307.7502 |
in algebraic geometry, the concept of a curve can be extended to discrete geometries by taking the spectra of polynomial rings over finite fields to be models of the affine spaces over that field, and letting subvarieties or spectra of other rings provide the curves that lie in that space. although the space in which the curves appear has a finite number of points, the curves are not so much sets of points as analogues of curves in continuous settings. for example, every point of the form $ v ( x - c ) \ subset \ operatorname { spec } k [ x ] = \ mathbb { a } ^ { 1 } $ for $ k $ a field can be studied either as $ \ operatorname { spec } k [ x ] / ( x - c ) \ cong \ operatorname { spec } k $, a point, or as the spectrum $ \ operatorname { spec } k [ x ] _ { ( x - c ) } $ of the local ring at ( x - c ), a point together with a neighborhood around it. algebraic varieties also have a well - defined notion of tangent space called the zariski tangent space, making many features of calculus applicable even in finite settings. = = = = discrete modelling = = = = in applied mathematics, discrete modelling is the discrete analogue of continuous modelling. in discrete modelling, discrete formulae are fit to data. a common method in this form of modelling is to use recurrence relations. discretization concerns the process of transferring continuous models and equations into discrete counterparts, often for the purposes of making calculations easier by using approximations. numerical analysis provides an important example. = = challenges = = the history of discrete mathematics has involved a number of challenging problems which have focused attention within areas of the field. 
in graph theory, much research was motivated by attempts to prove the four color theorem, first stated in 1852, but not proved until 1976 ( by kenneth appel and wolfgang haken, using substantial computer assistance ). in logic, the second problem on david hilbert ' s list of open problems presented in 1900 was to prove that the axioms of arithmetic are consistent. gödel ' s second incompleteness theorem, proved in 1931, showed that this was not possible | https://en.wikipedia.org/wiki/Discrete_mathematics |
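The remark above that discrete modelling commonly means fitting a recurrence relation to data can be made concrete. A minimal sketch, with an invented sequence and coefficients, that recovers the parameters of a linear recurrence by least squares:

```python
import numpy as np

def fit_linear_recurrence(seq):
    """Least-squares fit of x[n+1] = a*x[n] + b to an observed sequence."""
    x_now = np.asarray(seq[:-1], dtype=float)
    x_next = np.asarray(seq[1:], dtype=float)
    # design matrix [x_n, 1] for the unknowns (a, b)
    A = np.column_stack([x_now, np.ones_like(x_now)])
    (a, b), *_ = np.linalg.lstsq(A, x_next, rcond=None)
    return a, b

# generate noiseless data from x[n+1] = 0.8*x[n] + 3 and recover (a, b)
seq = [1.0]
for _ in range(20):
    seq.append(0.8 * seq[-1] + 3.0)
a, b = fit_linear_recurrence(seq)
```

With noisy observations the same least-squares fit returns the best-fitting coefficients rather than the exact ones; that is the usual situation in discrete modelling.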
microlensing events towards the large magellanic cloud entail that a sizable fraction of dark matter is in the form of machos ( massive astrophysical compact halo objects ), presumably located in the halo of the galaxy. within the present uncertainties, brown dwarfs are a viable candidate for machos. various reasons strongly suggest that a large amount of machos should actually consist of binary brown dwarfs. yet, this circumstance looks in flat contradiction with the fact that machos have been detected as unresolved objects so far. we show that such an apparent paradox does not exist within a model in which machos are clumped into dark clusters along with cold molecular clouds, since dynamical friction on these clouds makes binary brown dwarfs very close. moreover, we argue that future microlensing experiments with a more accurate photometric observation can resolve binary brown dwarfs. | arxiv:astro-ph/9801196 |
screened modified gravity theories evade the solar system tests that have proved prohibitive for classical alternative gravity theories. in many cases, they do not fit into the ppn formalism. the environmental dependence of the screening has motivated a concerted effort to find new and novel probes of gravity using objects that are well - studied but have hitherto not been used to test gravity. astrophysical objects ( stars, galaxies, clusters ) have proved competitive tools for this purpose since they occupy the partially - screened regime between the solar system and the hubble flow. in this article we review the current astrophysical tests of screened modified gravity theories. | arxiv:2002.04194 |
an algebra denoted $ m \ mathfrak { h } $ with three generators is introduced and shown to admit embeddings of the hahn algebra and the rational hahn algebra. it has a real version of the deformed jordan plane as a subalgebra whose connection with hahn polynomials is established. representation bases corresponding to eigenvalue or generalized eigenvalue problems involving the generators are considered. overlaps between these bases are shown to be bispectral orthogonal polynomials or biorthogonal rational functions thereby providing a unified description of these functions based on $ m \ mathfrak { h } $. models in terms of differential and difference operators are used to identify explicitly the underlying special functions as hahn polynomials and rational functions and to determine their characterizations. an embedding of $ m \ mathfrak { h } $ in $ \ mathcal { u } ( \ mathfrak { sl } _ 2 ) $ is presented. a pad\'e approximation table for the binomial function is obtained as a by - product. | arxiv:2009.05905 |
rapid detection and mitigation of issues that impact performance and reliability is paramount for large - scale online services. for real - time detection of such issues, datacenter operators use a stream processor and analyze streams of monitoring data collected from servers ( referred to as data source nodes ) and their hosted services. the timely processing of incoming streams requires the network to transfer massive amounts of data, and significant compute resources to process it. these factors often create bottlenecks for stream analytics. to help overcome these bottlenecks, current monitoring systems employ near - data processing by either computing an optimal query partition based on a cost model or using model - agnostic heuristics. optimal partitioning is computationally expensive, while model - agnostic heuristics are iterative and search over a large solution space. we combine these approaches by using model - agnostic heuristics to improve the partitioning solution from a model - based heuristic. moreover, current systems use operator - level partitioning : if a data source does not have sufficient resources to execute an operator on all records, the operator is executed only on the stream processor. instead, we perform data - level partitioning, i. e., we allow an operator to be executed both on a stream processor and data sources. we implement our algorithm in a system called jarvis, which enables quick adaptation to dynamic resource conditions. our evaluation on a diverse set of monitoring workloads suggests that jarvis converges to a stable query partition within seconds of a change in node resource conditions. compared to current partitioning strategies, jarvis handles up to 75 % more data sources while improving throughput in resource - constrained scenarios by 1. 2 - 4. 4x. | arxiv:2202.06021 |
with over 50 billion downloads and more than 1. 3 million apps in the google official market, android has continued to gain popularity amongst smartphone users worldwide. at the same time there has been a rise in malware targeting the platform, with more recent strains employing highly sophisticated detection avoidance techniques. as traditional signature based methods become less potent in detecting unknown malware, alternatives are needed for timely zero - day discovery. thus this paper proposes an approach that utilizes ensemble learning for android malware detection. it combines advantages of static analysis with the efficiency and performance of ensemble machine learning to improve android malware detection accuracy. the machine learning models are built using a large repository of malware samples and benign apps from a leading antivirus vendor. experimental results and analysis presented shows that the proposed method which uses a large feature space to leverage the power of ensemble learning is capable of 97. 3 to 99 percent detection accuracy with very low false positive rates. | arxiv:1608.00835 |
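As a toy illustration of the ensemble idea (not the paper's actual pipeline, features, or models), one can bag one-feature decision stumps over binary static features such as requested permissions and combine them by majority vote:

```python
import numpy as np

def train_bagged_stumps(X, y, n_estimators, rng):
    """Bagging with one-feature 'decision stumps' over binary static
    features. Each stump is trained on a bootstrap resample and picks
    the single feature (and polarity) that best predicts the label."""
    stumps = []
    n = len(y)
    for _ in range(n_estimators):
        idx = rng.integers(0, n, size=n)            # bootstrap resample
        Xb, yb = X[idx], y[idx]
        best_j, best_s, best_acc = 0, 1, -1.0
        for j in range(X.shape[1]):
            for s in (0, 1):                        # s sets the stump's polarity
                acc = np.mean((Xb[:, j] == s) == (yb == 1))
                if acc > best_acc:
                    best_j, best_s, best_acc = j, s, acc
        stumps.append((best_j, best_s))
    return stumps

def predict(stumps, X):
    votes = np.stack([(X[:, j] == s).astype(int) for j, s in stumps])
    return (votes.mean(axis=0) >= 0.5).astype(int)  # majority vote
```

Real systems replace the stumps with stronger base learners and far richer static features, but the aggregation step is the same.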
we present the first results from a study designed to test whether, given high - quality spectrophotometry spanning the mid - uv to optical wavelength regime, it is possible to distinguish the metal content ( z ) and star - formation history ( sfh ) of individual elliptical galaxies with sufficient accuracy to establish whether their formation history is linked to their detailed morphology and position on the fundamental plane. from a detailed analysis of uv - optical spectrophotometry of the ` cuspy ' elliptical galaxy ngc 3605 and the giant elliptical ngc 5018 we find that : 1 ) optical spectra with $ \ lambda > 3500 $ angstrom may not contain sufficient data to robustly uncover all the stellar populations present in individual galaxies, even in such relatively passive objects as ellipticals, 2 ) the addition of the uv data approaching $ \ lambda = 2500 $ angstrom holds the key to establishing well - constrained sfhs, from which we can infer a formation and evolution history which is consistent with their photometric properties, 3 ) despite the superficial similarity of their spectra, the two galaxies have very different ` recent ' sfhs : the smaller, cuspy elliptical ngc 3605 contains a high - z population of age $ \ simeq 1 $ gyr, and has a position on the fundamental plane typical of the product of a low - z gas - rich merger ( most likely at z ~ 0. 08 ), while the giant elliptical ngc 5018, with a sub - solar secondary population, appears to have gained its more recent stars via mass transfer / accretion of gas from its spiral companion, 4 ) despite these differences in detailed history, more than 85 % of the stellar mass of both galaxies is associated with an old ( 9 - 12 gyr ) stellar population of near - solar z. this pilot study provides strong motivation for the construction and analysis of high - quality uv - optical spectra for a substantial sample of ellipticals spanning the fundamental plane. | arxiv:astro-ph/0605417 |
we study some arithmetic properties of the mirror maps and the quantum yukawa coupling for some 1 - parameter deformations of calabi - yau manifolds. first we use the schwarzian differential equation, which we derived previously, to characterize the mirror map in each case. for algebraic k3 surfaces, we solve the equation in terms of the $ j $ - function. by deriving explicit modular relations we prove that some k3 mirror maps are algebraic over the genus zero function field $ { \ bf q } ( j ) $. this leads to a uniform proof that those mirror maps have integral fourier coefficients. regarding the maps as riemann mappings, we prove that they are genus zero functions. by virtue of the conway - norton conjecture ( proved by borcherds using frenkel - lepowsky - meurman ' s moonshine module ), we find that these maps are actually the reciprocals of the thompson series for certain conjugacy classes in the griess - fischer group. this also gives, as an immediate consequence, a second proof that those mirror maps are integral. we thus conjecture a surprising connection between k3 mirror maps and the thompson series. for threefolds, we construct a formal nonlinear ode for the quantum coupling reduced mod $ p $. under the mirror hypothesis and an integrality assumption, we derive mod $ p $ congruences for the fourier coefficients. for the quintics, we deduce ( at least for $ 5 \ nmid d $ ) that the degree $ d $ instanton numbers $ n _ d $ are divisible by $ 5 ^ 3 $, a fact first conjectured by clemens. | arxiv:hep-th/9411234 |
we review supersymmetric models where r - parity is broken either explicitly or spontaneously. the simplest unified extension of the mssm with explicit bilinear r - parity violation provides a predictive scheme for neutrino masses and mixings which can account for the observed atmospheric and solar neutrino anomalies. despite the smallness of neutrino masses, r - parity violation is observable at present and future high - energy colliders, providing an unambiguous cross - check of the model. this model can be shown to be an effective model for the, more theoretically satisfying, spontaneously broken theory. the main difference in this last case is the appearance of a massless particle, the majoron, that can modify the decay modes of the higgs boson, making it decay invisibly most of the time. | arxiv:hep-ph/0510411 |
multipartite quantum states may exhibit different types of quantum entanglement in that they cannot be converted into each other by local quantum operations only, and fully understanding mathematical structures of different types of multipartite entanglement is a very challenging task. in this paper, from the viewpoint of hardy ' s nonlocality, we compare w and ghz states and show a couple of crucial different behaviors between them. particularly, by developing a geometric model for the hardy ' s nonlocality problem of w states, we derive an upper bound for its maximal violation probability, which turns out to be strictly smaller than the corresponding probability of ghz state. this gives us a new comparison between these two quantum states, and the result is also consistent with our intuition that ghz states is more entangled. furthermore, we generalize our approach to obtain an asymptotic characterization for general $ n $ - qubit w states, revealing that when $ n $ goes up, the speed that the maximum violation probabilities decay is exponentially slower than that of general $ n $ - qubit ghz states. we provide some numerical simulations to verify our theoretical results. | arxiv:2001.02143 |
quantum imaginary time evolution ( qite ) is a recently proposed quantum - classical hybrid algorithm that is guaranteed to reach the lowest state of a system. in this study, we present several improvements on qite, mainly focusing on molecular applications. we analyze the derivation of the underlying qite equation order - by - order, and suggest a modification that is theoretically well founded. our results clearly indicate the soundness of the equation derived here, enabling a better approximation of the imaginary time propagation by a unitary. we also discuss how to accurately estimate the norm of an imaginary - time - evolved state, and apply it to excited state calculations using the quantum lanczos algorithm. finally, we propose the folded - spectrum qite scheme as a straightforward extension of qite for general excited state simulations. the effectiveness of all these developments is illustrated by noiseless simulations, offering further insights into quantum algorithms for imaginary time evolution. | arxiv:2205.01983 |
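The core intuition behind imaginary time evolution, namely that repeatedly applying exp(-Δβ·H) and renormalizing projects any state with nonzero overlap onto the ground state, can be checked classically. This sketch diagonalizes H directly, which is exactly what QITE avoids on hardware by approximating each step with a unitary; it is meant only to show why the iteration converges, not to reproduce the paper's algorithm.

```python
import numpy as np

def imaginary_time_evolve(H, psi0, dbeta, n_steps):
    """Classical emulation of imaginary-time evolution: apply
    exp(-dbeta*H), built here by exact diagonalization, and renormalize
    after every step (the imaginary-time flow does not preserve norm)."""
    evals, evecs = np.linalg.eigh(H)
    step = evecs @ np.diag(np.exp(-dbeta * evals)) @ evecs.conj().T
    psi = psi0 / np.linalg.norm(psi0)
    for _ in range(n_steps):
        psi = step @ psi
        psi /= np.linalg.norm(psi)
    return psi
```

Excited states require extra machinery, such as the quantum Lanczos or folded-spectrum ideas mentioned in the abstract, because this bare iteration always falls to the lowest state in the overlap.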
the strange vector form factors are evaluated for $ q ^ 2 = 0 $ and $ q ^ 2 = 1 \ \ mbox { gev } ^ 2 $ in the framework of the su ( 3 ) chiral quark - soliton model ( or semi - bosonized su ( 3 ) nambu - jona - lasinio model ). the rotational $ 1 / n _ c $ and $ m _ s $ corrections are taken into account up to linear order. the mean - square strange radius $ \ langle r ^ { 2 } \ rangle ^ { sachs } _ { s } = - 0. 35 \ ; \ mbox { fm } ^ 2 $ and the strange magnetic moment $ \ mu _ s = - 0. 44 \ ; \ mu _ n $ are obtained. the results are compared with several different models. | arxiv:hep-ph/9506344 |
the notion of the wave spectrum of a semi - bounded symmetric operator was introduced by one of the authors in 2013. the wave spectrum is a topological space determined by the operator in a canonical way. the definition uses a dynamical system associated with the operator : the wave spectrum is constructed from its reachable sets. in the paper we give a description of the wave spectrum of the operator $ l _ 0 = - \ frac { d ^ 2 } { dx ^ 2 } + q $ which acts in the space $ l _ 2 ( 0, \ infty ) $ and has defect indices $ ( 1, 1 ) $. we construct a functional ( wave ) model of the operator $ l _ 0 ^ * $ in which the elements of the original $ l _ 2 ( 0, \ infty ) $ are realized as functions on the wave spectrum. it turns out to be identical to the original $ l _ 0 ^ * $. the latter is fundamental in solving inverse problems : the wave model is determined by their data, which allows for reconstruction of the original. | arxiv:1703.00176 |
in this manuscript, we will discuss the construction of covariant derivative operator in quantum gravity. we will find it is more perceptive to use affine connections more general than metric compatible connections in quantum gravity. we will demonstrate this using the canonical quantization procedure. this is valid irrespective of the presence and nature of sources. the palatini and metric - affine formalisms, where metric and affine connections are the independent variables, are not sufficient to construct a source - free theory of gravity with affine connections more general than the metric compatible levi - civita connections. this is also valid for many minimally coupled interacting theories where sources only couple with metric by using the levi - civita connections exclusively. we will discuss potential formalism of affine connections to introduce affine connections more general than metric compatible connections in gravity. we will also discuss possible extensions of the actions for this purpose. general affine connections introduce new fields in gravity besides metric. in this article, we will consider a simple potential formalism with symmetric ricci tensor. corresponding affine connections introduce two massless scalar fields. one of these fields contributes a stress - tensor with opposite sign to the sources of einstein ' s equation when we state the equation using the levi - civita connections. this means we have a massless scalar field with negative stress - tensor in the familiar einstein equation. these scalar fields can be useful to explain dark energy and inflation. these fields bring us beyond strict local minkowski geometries. | arxiv:1702.02384 |
externally controlled motion of micro and nanomotors in a fluid environment constitutes a promising tool in biosensing, targeted delivery and environmental remediation. in particular, recent experiments have demonstrated that fuel - free propulsion can be achieved through the application of external magnetic fields on magnetic helically shaped structures. the magnetic interaction between helices and the rotating field induces a torque that rotates and propels them via the coupled rotational - translational motion. recent works have shown that there exist certain optimal geometries of helical shapes for propulsion. however, experiments show that controlled motion remains a challenge at the nanoscale due to brownian motion that interferes with the deterministic motion and makes it difficult to achieve controlled steering. in the present work we employ quantitatively accurate simulation methodology to design a setup for which magnetic nanohelices of 30 nm in radius, with and without cargo, can be accurately propelled and steered in the presence of thermal fluctuations. in particular, we demonstrate fast transport of such nanomotors and devise protocols in manipulating external fields to achieve directionally controlled steering at biologically relevant temperatures. | arxiv:1702.01989 |
we consider a game in which players are the vertices of a directed graph. initially, nature chooses one player according to some fixed distribution and gives her a buck, which represents the request to perform a chore. after completing the task, the player passes the buck to one of her out - neighbors in the graph. the procedure is repeated indefinitely and each player ' s cost is the asymptotic expected frequency of times that she receives the buck. we consider a deterministic and a stochastic version of the game depending on how players select the neighbor to pass the buck. in both cases we prove the existence of pure equilibria that do not depend on the initial distribution ; this is achieved by showing the existence of a generalized ordinal potential. we then use the price of anarchy and price of stability to measure fairness of these equilibria. we also study a buck - holding variant of the game in which players want to maximize the frequency of times they hold the buck, which includes the pagerank game as a special case. | arxiv:1808.03206 |
we present a method for mapping variations between probability distribution functions and apply this method within the context of measuring galaxy redshift distributions from imaging survey data. this method, which we name pitpz for the probability integral transformations it relies on, uses a difference in curves between distribution functions in an ensemble as a transformation to apply to another distribution function, thus transferring the variation in the ensemble to the latter distribution function. this procedure is broadly applicable to the problem of uncertainty propagation. in the context of redshift distributions, for example, the uncertainty contribution due to certain effects can be studied effectively only in simulations, thus necessitating a transfer of variation measured in simulations to the redshift distributions measured from data. we illustrate the use of pitpz by using the method to propagate photometric calibration uncertainty to redshift distributions of the dark energy survey year 3 weak lensing source galaxies. for this test case, we find that pitpz yields a lensing amplitude uncertainty estimate due to photometric calibration error within 1 per cent of the truth, compared to as much as a 30 per cent underestimate when using traditional methods. | arxiv:2210.03130 |
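A stripped-down version of the idea, measuring a quantile-by-quantile difference between two distributions and applying it to a third at matching probability-integral-transform levels, might look like the following. This is a simplified sketch under my own conventions, not the paper's PITPZ algorithm.

```python
import numpy as np

def transfer_variation(base, perturbed, target):
    """Transfer the variation between two sample sets onto a third:
    compute the quantile-by-quantile difference between 'base' and
    'perturbed', then add that difference to 'target' at the matching
    PIT levels of the (sorted) target sample."""
    base = np.sort(np.asarray(base, float))
    perturbed = np.sort(np.asarray(perturbed, float))
    target = np.sort(np.asarray(target, float))
    u = (np.arange(len(target)) + 0.5) / len(target)  # PIT levels
    q_base = np.quantile(base, u)
    q_pert = np.quantile(perturbed, u)
    return target + (q_pert - q_base)                 # shifted quantile curve
```

If the perturbed ensemble member is simply the base shifted by a constant, the transformation shifts the target by the same constant; more general ensemble-to-ensemble differences stretch or distort the target distribution accordingly.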
stochastic simulation aims to compute output performance for complex models that lack analytical tractability. to ensure accurate prediction, the model needs to be calibrated and validated against real data. conventional methods approach these tasks by assessing the model - data match via simple hypothesis tests or distance minimization in an ad hoc fashion, but they can encounter challenges arising from non - identifiability and high dimensionality. in this paper, we investigate a framework to develop calibration schemes that satisfy rigorous frequentist statistical guarantees, via a basic notion that we call eligibility set designed to bypass non - identifiability via a set - based estimation. we investigate a feature extraction - then - aggregation approach to construct these sets that target at multivariate outputs. we demonstrate our methodology on several numerical examples, including an application to calibration of a limit order book market simulator ( abides ). | arxiv:2105.12893 |
complex geometries can be easily treated using the well - known full - way and half - way bounce - back rules. however, the accuracy of the full - way bounce - back rule is one order lower than the half - way bounce - back rule. moreover, when the walls are not aligned with the lattices, the errors increase. including the collision operator on the walls with the full - way bounce - back rule leads to an improvement of the accuracy of the pressure - drop, but, a loss of momentum is observed on concave corners. we propose to improve the momentum conservation by adding an extrapolation of the density by the inverse distance weighting at concave corners. the technique is shown to give a second - order accuracy at a lower grid resolution, thus, the computational cost can be reduced. | arxiv:1806.03623 |
A precessing source frame, constructed using the Newtonian orbital angular momentum ${\bf L_{\rm N}}$, can be invoked to model inspiral gravitational waves from generic spinning compact binaries. An attractive feature of such a precessing convention is its ability to remove all spin-precession-induced modulations from the orbital phase evolution. However, this convention usually employs a post-Newtonian (PN) accurate precessional equation, appropriate for the PN-accurate orbital angular momentum ${\bf L}$, to evolve the ${\bf L_{\rm N}}$-based precessing source frame. This influenced us to develop inspiral waveforms for spinning compact binaries in a precessing convention that explicitly employs ${\bf L}$ to describe the binary orbits. Our approach introduces certain additional 3PN-order terms in the evolution equations for the orbital phase and frequency with respect to the usual ${\bf L_{\rm N}}$-based implementation of the precessing convention. We examine the practical implications of these additional terms by computing the match between inspiral waveforms that employ the ${\bf L}$- and ${\bf L_{\rm N}}$-based precessing conventions. The match estimates are found to be smaller than the optimal value, namely $0.97$, for a non-negligible fraction of unequal-mass spinning compact binaries. | arxiv:1507.00406 |
The study of the two-body photodisintegration of the deuteron in the few-GeV region is the ideal reaction to clarify the transition from nucleonic degrees of freedom to the QCD picture of hadrons. The CLAS large angle spectrometer of Hall B at JLab allowed for the first time the complete measurement of the angular distribution of the differential cross section at photon energies between 0.5 and 3 GeV. Preliminary results of the E93-017 experiment, from the analysis of 30% of the total statistics accumulated, show a persistent forward-backward asymmetry and are well described by the recent calculation of the deuteron photodisintegration cross section derived in the framework of the quark-gluon string model. | arxiv:hep-ex/0302029 |
we analyze two simple model planar molecules : an ionic molecule with d3 symmetry and a covalent molecule with d6 symmetry. both symmetries allow the existence of chiral molecular orbitals and normal modes that are coupled to each other in a jahn - teller manner, invariant under u ( 1 ) symmetry with generator a pseudo angular momentum. in the ionic molecule, the chiral mode possesses an electric dipole but lacks physical angular momentum, whereas, in the covalent molecule, the situation is reversed. in spite of that, we show that in both cases the chiral modes can be excited by a circularly polarized light and are subsequently able to induce rotational motion of the entire molecule. we further discuss the potential extension of our findings to the case of crystalline bulk samples. | arxiv:2504.01709 |
An SL-invariant extension of the concurrence to higher local Hilbert-space dimension owes to its relation with the determinant of the matrix of a $d\times d$ two-qudit state, which is the only SL-invariant of polynomial degree $d$. This determinant is written in terms of antilinear expectation values of the local $SL(d)$ operators. We use the permutation invariance of the comb condition to create further local antilinear operators which are orthogonal to the original operator. This means that the symmetric group acts transitively on the space of combs of a given order, which extends the mechanism for writing $SL(2)$-invariants for qubits to qudits. I outline the method, which in principle works for arbitrary dimension $d$, explicitly for spin 1 and spin 3/2. There is an odd-even discrepancy: whereas for half-odd-integer spin a situation similar to that observed for qubits is found, for integer spin the outcome is an asymmetric invariant of polynomial degree $2d$. | arxiv:1309.6235 |
although the assumption of elliptical symmetry is quite common in multivariate analysis and widespread in a number of applications, the problem of testing the null hypothesis of ellipticity so far has not been addressed in a fully satisfactory way. most of the literature in the area indeed addresses the null hypothesis of elliptical symmetry with specified location and actually addresses location rather than non - elliptical alternatives. in this paper, we are proposing new classes of testing procedures, both for specified and unspecified location. the backbone of our construction is le cam ' s asymptotic theory of statistical experiments, and optimality is to be understood locally and asymptotically within the family of generalized skew - elliptical distributions. the tests we are proposing are meeting all the desired properties of a ` ` good ' ' test of elliptical symmetry : they have a simple asymptotic distribution under the entire null hypothesis of elliptical symmetry with unspecified radial density and shape parameter ; they are affine - invariant, computationally fast, intuitively understandable, and not too demanding in terms of moments. while achieving optimality against generalized skew - elliptical alternatives, they remain quite powerful under a much broader class of non - elliptical distributions and significantly outperform the available competitors. | arxiv:1911.08171 |
A storm is a type of extreme weather; therefore, forecasting the path of a storm is extremely important for protecting human life and property. However, storm forecasting is very challenging because storm trajectories frequently change. In this study, we propose an improved deep learning method using a Transformer network to predict the movement trajectory of a storm over the next 6 hours. The storm data used to train the model were obtained from the National Oceanic and Atmospheric Administration (NOAA) [1]. Simulation results show that the proposed method is more accurate than traditional methods. Moreover, the proposed method is faster and more cost-effective. | arxiv:2505.00495 |
we propose a general reduction procedure for classical field theories provided with abelian gauge symmetries in a lagrangian setting. these ideas come from an axiomatic presentation of the general boundary formulation ( gbf ) of field theories, mostly inspired by topological quantum field theories ( tqft ). we construct abelian yang - mills theories using this framework. we treat the case for space - time manifolds with smooth boundary components as well as the case of manifolds with corners. this treatment is the gbf analogue of extended tqfts. the aim for developing this classical formalism is to accomplish, in a future work, geometric quantization at least for the abelian case. | arxiv:1407.4741 |
The large separations between the oscillation frequencies of solar-like stars are measures of stellar mean density. The separations have been thought to be mostly constant over the observed range of frequencies. However, detailed investigation shows that they are not constant, and their variations are not random but have very strong diagnostic potential for our understanding of stellar structure and evolution. In this regard, frequencies of the minimum large separation are very useful tools. From these frequencies, in addition to the large separation and the frequency of maximum amplitude, Yıldız et al. recently developed new methods to find almost all the fundamental stellar properties. In the present study, we aim to find metallicity and helium abundances from the frequencies, and to generalize the relations given by Yıldız et al. for a wider stellar mass range and arbitrary metallicity ($Z$) and helium abundance ($Y$). We show that the effect of metallicity is significant for most of the fundamental parameters. For stellar mass, for example, the expression must be multiplied by $(Z/Z_{\sun})^{0.12}$. For arbitrary helium abundance, $M \propto (Y/Y_{\sun})^{0.25}$. Methods for the determination of $Z$ and $Y$ from pure asteroseismic quantities are based on amplitudes (differences between the maximum and minimum values of $\Delta\nu$) in the oscillatory component of the spacing of oscillation frequencies. Additionally, we demonstrate that the difference between the first maximum and the second minimum is very sensitive to $Z$. It also depends on $\nu_{\rm min1}/\nu_{\rm max}$ and the small separation between the frequencies. Such a dependence leads us to develop a method to find $Z$ (and $Y$) from oscillation frequencies. The maximum difference between the estimated and model $Z$ values is about 14 per cent; it is 10 per cent for $Y$. | arxiv:1505.04063 |
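The composition scaling quoted in the row above is easy to evaluate numerically. A minimal sketch follows; the solar reference values $Z_\odot = 0.0134$ and $Y_\odot = 0.248$ used here are illustrative assumptions, not values taken from the paper.

```python
def mass_correction(z, y, z_sun=0.0134, y_sun=0.248):
    """Composition correction to the asteroseismic mass relation:
    M scales as (Z/Z_sun)**0.12 * (Y/Y_sun)**0.25.
    Exponents are from the abstract; the solar reference values
    z_sun and y_sun are assumed here for illustration."""
    return (z / z_sun) ** 0.12 * (y / y_sun) ** 0.25

# At the adopted solar composition the correction is unity by construction;
# a metal-rich star at fixed helium abundance gets a larger inferred mass.
print(mass_correction(0.0134, 0.248))            # -> 1.0
print(mass_correction(0.0268, 0.248) > 1.0)      # -> True
```

The weak exponents (0.12 and 0.25) mean the correction matters at the few-per-cent level for realistic composition spreads, consistent with the ~10-14 per cent recovery errors quoted in the abstract.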
Let $\Gamma$ denote the modular group $SL(2,\mathbb{Z})$ and $c_n(\Gamma)$ the number of congruence subgroups of $\Gamma$ of index at most $n$. We prove that $\lim\limits_{n\to\infty} \frac{\log c_n(\Gamma)}{(\log n)^2/\log\log n} = \frac{3-2\sqrt{2}}{4}$. We also present a very general conjecture giving an asymptotic estimate for $c_n(\Gamma)$ for general arithmetic groups. The lower bound of the conjecture is proved modulo the generalized Riemann hypothesis for Artin-Hecke L-functions, and in many cases is also proved unconditionally. | arxiv:math/0406249 |
the study of asteroid families has provided tremendous insight into the forces that sculpted the main belt and continue to drive the collisional and dynamical evolution of asteroids. the identification of asteroid families within the neo population could provide a similar boon to studies of their formation and interiors. in this study we examine the purported identification of neo families by drummond ( 2000 ) and conclude that it is unlikely that they are anything more than random fluctuations in the distribution of neo osculating orbital elements. we arrive at this conclusion after examining the expected formation rate of neo families, the identification of neo groups in synthetic populations that contain no genetically related neos, the orbital evolution of the largest association identified by drummond ( 2000 ), and the decoherence of synthetic neo families intended to reproduce the observed members of the same association. these studies allowed us to identify a new criterion that can be used to select real neo families for further study in future analyses, based on the ratio of the number of pairs and the size of strings to the number of objects in an identified association. | arxiv:astro-ph/0505270 |
Suppression of fluctuations of normally perturbed magnetic fields in dynamo waves and slow dynamos along curved (folded), torsioned (twisted), non-stretched, diffusive filaments is obtained. This form of fluctuation suppression was recently obtained by Vainshtein et al [PRE 56 (1997)] in nonlinear ABC and stretch-twist-fold (STF) dynamos, using a magnetic Reynolds number of the order of $Rm \approx 10^{4}$. Here, when torsion does not vanish, an expression relating the magnetic Reynolds number to the length scale $L$ and the constant torsion $\tau_{0}$ is obtained: $Rm \approx \frac{\tau_{0} L}{\eta}$. At coronal loops $Rm \approx 10^{12}$, and the torsion of the twisted structured loop, $\tau \approx 9.0\times 10^{-10}\,{\rm cm}^{-1}$ from the astronomical data of Lopez-Fuentes et al [Astron. and Astrophys. (2003)], is used to compute a very slow magnetic diffusion of $\eta \approx 10^{-8}$. The slow dynamo obtained here is in agreement with Vishik's argument that a fast dynamo cannot be obtained in non-stretched dynamo flows. When torsion vanishes, helical turbulence is quenched, and $\alpha$-dynamos cannot be maintained since exponential stretching depends on torsion. This is actually Zeldovich's antidynamo theorem for torsion-free or planar filaments, which has also been discussed recently in another context [Astr. Nach. (2008)]. The suppression of magnetic field fluctuations is actually a result of the coupling of the magnetic diffusion and the Frenet torsion of helical turbulence. | arxiv:0806.3476 |
An angular analysis of the $B^0 \rightarrow K^{*0} e^+ e^-$ decay is performed using a data sample, corresponding to an integrated luminosity of 3.0 ${\rm fb}^{-1}$, collected by the LHCb experiment in $pp$ collisions at centre-of-mass energies of 7 and 8 TeV during 2011 and 2012. For the first time, several observables are measured in the dielectron mass squared ($q^2$) interval between 0.002 and 1.120 ${\rm GeV}^2/c^4$. The angular observables $F_{\rm L}$ and $A_{\rm T}^{\rm Re}$, which are related to the $K^{*0}$ polarisation and to the lepton forward-backward asymmetry, are measured to be $F_{\rm L} = 0.16 \pm 0.06 \pm 0.03$ and $A_{\rm T}^{\rm Re} = 0.10 \pm 0.18 \pm 0.05$, where the first uncertainty is statistical and the second systematic. The angular observables $A_{\rm T}^{(2)}$ and $A_{\rm T}^{\rm Im}$, which are sensitive to the photon polarisation in this $q^2$ range, are found to be $A_{\rm T}^{(2)} = -0.23 \pm 0.23 \pm 0.05$ and $A_{\rm T}^{\rm Im} = 0.14 \pm 0.22 \pm 0.05$. The results are consistent with Standard Model predictions. | arxiv:1501.03038 |
We present a 74 MHz survey of a 165 square degree region located near the North Galactic Pole. This survey has an unprecedented combination of both resolution (25'' FWHM) and sensitivity (rms noise as low as 24 mJy/beam). We detect 949 sources at the 5-sigma level in this region, enough to begin exploring the nature of the 74 MHz source population. We present differential source counts, spectral index measurements, and the size distribution as determined from counterparts in the high-resolution FIRST 1.4 GHz survey. We find a trend of steeper spectral indices for the brighter sources. Further, there is a clear correlation between spectral index and median source size, with the flat-spectrum sources being much smaller on average. Ultra-steep-spectrum objects (alpha < -1.2) are identified, and we present high-resolution VLA follow-up observations of these sources which, identified at such a low frequency, are excellent candidates for high-redshift radio galaxies. | arxiv:astro-ph/0310521 |
understanding the duration of news events ' impact on the stock market is crucial for effective time - series forecasting, yet this facet is largely overlooked in current research. this paper addresses this research gap by introducing a novel dataset, the impact duration estimation dataset ( ided ), specifically designed to estimate impact duration based on investor opinions. our research establishes that pre - finetuning language models with ided can enhance performance in text - based stock movement predictions. in addition, we juxtapose our proposed pre - finetuning task with sentiment analysis pre - finetuning, further affirming the significance of learning impact duration. our findings highlight the promise of this novel research direction in stock movement prediction, offering a new avenue for financial forecasting. we also provide the ided and pre - finetuned language models under the cc by - nc - sa 4. 0 license for academic use, fostering further exploration in this field. | arxiv:2409.17419 |
this article generalizes a previous work in which the author obtained a large lower bound for the lifespan of the solutions to the primitive equations, and proved convergence to the 3d quasi - geostrophic system for general and ill - prepared ( possibly blowing - up ) initial data that are regularization of vortex patches related to the potential velocity. these results were obtained for a very particular case when the kinematic viscosity $ \ nu $ is equal to the heat diffusivity $ \ nu ' $, turning the diffusion operator into the classical laplacian. obtaining the same results without this assumption is much more difficult as it involves a non - local diffusion operator. the key to the main result is a family of a priori estimates for the 3d - qg system that we obtained in a companion paper. | arxiv:1411.6859 |
Suppose that a quantum source is known to have von Neumann entropy less than or equal to S but is otherwise completely unspecified. We describe a method of universal quantum data compression which will faithfully compress the quantum information of any such source to S qubits per signal (in the limit of large block lengths). | arxiv:quant-ph/9805017 |
an electron - ion collider ( eic ) with center - of - mass energies sqrt ( s _ { en } ) ~ 20 - 100 gev and luminosity l ~ 10 ^ { 34 } cm ^ { - 2 } s ^ { - 1 } would offer new opportunities to study heavy quark production in high - energy electron or photon scattering on protons and nuclei. we report about an r & d project exploring the feasibility of direct measurements of nuclear gluon densities at large x ( gluonic emc effect, antishadowing ) using open charm production at eic. we describe the charm production rates and angle - momentum distributions at large x and discuss methods of charm reconstruction using next - generation detector capabilities ( pi / k identification, vertex reconstruction ). the results can be used also for other physics applications of heavy quark production at eic ( fragmentation functions, jets, heavy quark propagation in nuclei ). | arxiv:1610.08536 |
the leading asymptotics of the truncation error for gauss ' s continued fraction is determined exactly. not only for this purpose but also for wider applicability elsewhere the discrete analogue of laplace ' s method for hypergeometric series containing a large parameter, which was developed in a previous paper, is generalized in two directions. | arxiv:1904.03350 |
learning in high dimensional continuous tasks is challenging, mainly when the experience replay memory is very limited. we introduce a simple yet effective experience sharing mechanism for deterministic policies in continuous action domains for the future off - policy deep reinforcement learning applications in which the allocated memory for the experience replay buffer is limited. to overcome the extrapolation error induced by learning from other agents ' experiences, we facilitate our algorithm with a novel off - policy correction technique without any action probability estimates. we test the effectiveness of our method in challenging openai gym continuous control tasks and conclude that it can achieve a safe experience sharing across multiple agents and exhibits a robust performance when the replay memory is strictly limited. | arxiv:2207.13453 |
In this paper we formulate and solve a mean-field game described by a linear stochastic dynamics and a quadratic or exponential-quadratic cost functional for each generic player. The optimal strategies for the players are given explicitly using a simple and direct method based on square completion and a Girsanov-type change of measure. This approach does not use the well-known solution methods such as the stochastic maximum principle and the dynamic programming principle with the Hamilton-Jacobi-Bellman-Isaacs equation and the Fokker-Planck-Kolmogorov equation. In the risk-neutral linear-quadratic mean-field game, we show that there is a unique best-response strategy to the mean of the state and provide a simple sufficient condition for the existence and uniqueness of a mean-field equilibrium. This approach gives a basic insight into the solution by providing a simple explanation for the additional term in the robust or risk-sensitive Riccati equation, compared to the risk-neutral Riccati equation. Sufficient conditions for the existence and uniqueness of mean-field equilibria are obtained when the horizon length and risk-sensitivity index are small enough. The method is then extended to linear-quadratic robust mean-field games under small disturbance, formulated as a minimax mean-field game. | arxiv:1412.0037 |
we prove contact big fiber theorems, analogous to the symplectic big fiber theorem by entov and polterovich, using symplectic cohomology with support. unlike in the symplectic case, the validity of the statements requires conditions on the closed contact manifold. one such condition is to admit a liouville filling with non - zero symplectic cohomology. in the case of boothby - wang contact manifolds, we prove the result under the condition that the euler class of the circle bundle, which is the negative of an integral lift of the symplectic class, is not an invertible element in the quantum cohomology of the base symplectic manifold. as applications, we obtain new examples of rigidity of intersections in contact manifolds and also of contact non - squeezing. | arxiv:2503.04277 |
Technology readiness levels (TRLs) are a method for estimating the maturity of technologies during the acquisition phase of a program. TRLs enable consistent and uniform discussions of technical maturity across different types of technology. TRL is determined during a technology readiness assessment (TRA) that examines program concepts, technology requirements, and demonstrated technology capabilities. TRLs are based on a scale from 1 to 9, with 9 being the most mature technology. TRL was developed at NASA during the 1970s. The US Department of Defense has used the scale for procurement since the early 2000s. By 2008 the scale was also in use at the European Space Agency (ESA). The European Commission advised EU-funded research and innovation projects to adopt the scale in 2010. TRLs were consequently used in 2014 in the EU Horizon 2020 program. In 2013, the TRL scale was further canonized by the International Organization for Standardization (ISO) with the publication of the ISO 16290:2013 standard. A comprehensive approach and discussion of TRLs has been published by the European Association of Research and Technology Organisations (EARTO). Extensive criticism of the adoption of the TRL scale by the European Union was published in The Innovation Journal, stating that the "concreteness and sophistication of the TRL scale gradually diminished as its usage spread outside its original context (space programs)". == Definitions == == Assessment tools == A technology readiness level calculator was developed by the United States Air Force. This tool is a standard set of questions implemented in Microsoft Excel that produces a graphical display of the TRLs achieved. This tool is intended to provide a snapshot of technology maturity at a given point in time. The Defense Acquisition University (DAU) Decision Point (DP) tool, originally named the Technology Program Management Model (TPMM), was developed by the United States Army and later adopted by the DAU.
The DP/TPMM is a TRL-gated high-fidelity activity model that provides a flexible management tool to assist technology managers in planning, managing, and assessing their technologies for successful technology transition. The model provides a core set of activities, including systems engineering and program management tasks, that are tailored to the technology development and management goals. This approach is comprehensive, yet it consolidates the complex activities that are relevant to the development and transition of a specific technology program into one integrated model. == Uses == The primary purpose of using technology readiness levels is to help management in making decisions concerning the development and transitioning of technology. It is one of several tools that | https://en.wikipedia.org/wiki/Technology_readiness_level |
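The 1-9 scale described above can be sketched as a simple lookup. The level descriptions below are paraphrased from common NASA/ESA usage and are an assumption of this sketch, not the normative ISO 16290:2013 wording.

```python
# Illustrative TRL lookup table; descriptions are paraphrases, not
# the official ISO 16290 definitions.
TRL_SCALE = {
    1: "basic principles observed",
    2: "technology concept formulated",
    3: "experimental proof of concept",
    4: "technology validated in lab",
    5: "technology validated in relevant environment",
    6: "prototype demonstrated in relevant environment",
    7: "prototype demonstrated in operational environment",
    8: "system complete and qualified",
    9: "system proven in operational environment",
}

def describe_trl(level: int) -> str:
    """Return the (paraphrased) description for a TRL, 1 through 9."""
    if level not in TRL_SCALE:
        raise ValueError("TRL must be an integer from 1 to 9")
    return TRL_SCALE[level]

print(describe_trl(9))   # -> system proven in operational environment
```

A real TRL calculator, like the Air Force spreadsheet mentioned above, additionally gates each level behind evidence questions rather than assigning it directly.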
Computation of document similarity is a critical task in various NLP domains that has applications in deduplication, matching, and recommendation. Traditional approaches for document similarity computation include learning representations of documents and employing a similarity or a distance function over the embeddings. However, pairwise similarities and differences are not efficiently captured by individual representations. Graph representations such as the Joint Concept Interaction Graph (JCIG) represent a pair of documents as a joint undirected weighted graph. JCIGs facilitate an interpretable representation of document pairs as a graph. However, JCIGs are undirected and don't consider the sequential flow of sentences in documents. We propose two approaches to model document similarity by representing document pairs as a directed and sparse JCIG that incorporates sequential information. We propose two algorithms, inspired by supergenome sorting and the Hamiltonian path, that replace the undirected edges with directed edges. Our approach also sparsifies the graph to $O(n)$ edges from the JCIG's worst case of $O(n^2)$. We show that our sparse directed graph model architecture, consisting of a Siamese encoder and GCN, achieves comparable results to the baseline on datasets not containing sequential information and beats the baseline by ten points on an instructional-documents dataset containing sequential information. | arxiv:2402.03957 |
we consider closed subschemes in the affine grassmannian obtained by degenerating $ e $ - fold products of flag varieties, embedded via a tuple of dominant cocharacters. for $ g = \ operatorname { gl } _ 2 $, and cocharacters small relative to the characteristic, we relate the cycles of these degenerations to the representation theory of $ g $. we then show that these degenerations smoothly model the geometry of ( the special fibre of ) low weight crystalline subspaces inside the emerton - - gee stack classifying $ p $ - adic representations of the galois group of a finite extension of $ \ mathbb { q } _ p $. as an application we prove new cases of the breuil - - m \ ' ezard conjecture in dimension two. | arxiv:2108.04094 |
blazars may accelerate protons and / or nuclei as well as electrons. the hadronic component of accelerated particles in blazars may constitute the bulk of their high - energy budget ; nevertheless, this component is elusive due to a high value of the energy threshold of proton interaction with photon fields inside the source. however, broad line regions ( blrs ) of some flat spectrum radio quasars ( fsrqs ) may contain a sufficient amount of matter to render primary protons " visible " in $ \ gamma $ rays via hadronuclear interactions. in the present paper we study the persistent $ \ gamma $ - ray emission of the fsrq pks 1510 - 089 in its low state utilizing the publicly - available fermi - lat data, as well as using the spectrum measured with the magic imaging atmospheric cherenkov telescopes. we find an indication for an excess of $ \ gamma $ rays at the energy range $ \ gtrsim 20 $ gev with respect to a simple baseline log - parabolic intrinsic spectral model. this excess could be explained in a scenario invoking hadronuclear interactions of primary protons on the blr material with the subsequent development of electromagnetic cascades in photon fields. we present a monte carlo calculation of the spectrum of this cascade component, taking as input the blr photon field spectrum calculated with the cloudy code. to our knowledge, this is the first calculation of electromagnetic cascade spectrum inside a blazar based on a direct calculation of the photon field spectrum with a spectral synthesis code. | arxiv:2111.07389 |
Extremophiles have gained prominence by providing an experimental approach to astrobiology. Extremophiles gain equal value by being part of a framework for high-level characterisation of the evolutionary mechanisms that must necessarily restrict or promote their emergence and presence on solar system bodies. Thus, extremophiles exist in extreme environments, and therein lies the paradox: extremophiles can only live in extreme environments and yet are not able to originate in such environments. Therefore, even though the range of extremophile capabilities in extreme environments is wider than that of mesophiles, the range of their emergence possibilities is still equally restricted. Therefore, even if one locates an extreme exoworld where terrestrial extremophiles could live here and now, it can be predicted that no extremophile analogues are present anyway. Furthermore, it is possible for a world to be uninhabited yet be habitable, and therein arises the extreme-environment paradox: an extreme environment can sustain chemical evolution as well as arriving non-native life, yet native life cannot be built up in that very environment. Thus, life may exist on an extraterrestrial extreme world (if imported there), and chemical evolution may be present on that world. However, it can be predicted that there is no native life anyway. This situation can be predicted to function as a chemosignature and eventually as a biosignature. However, the fact that a non-native extremophile can in principle exist in extreme environments may demonstrate that the intermediate step between chemical evolution and extremophiles can still occur in the form of a statistical deviation. | arxiv:2110.06144 |
we present a tensor - network approach for the strong - coupling expansion of two - dimensional qcd with staggered quarks at non - zero chemical potential. after expanding the boltzmann factor in the gauge and fermion actions, all gauge fields can be integrated out exactly and the partition function can be evaluated using the grassmann higher - order tensor renormalization group approach. the method is modified to compute the $ \ mu $ dependence of the quark number density and the chiral condensate up to order $ \ beta ^ 3 $ with complete absence of higher - order terms infiltrating the result. although the expansion itself is only a good approximation to the full theory at small $ \ beta < 0. 1 $, the range can be extended, by using judiciously chosen fits. moreover, these fits also yield a valuable expansion in $ \ beta $ for the critical chemical potential. | arxiv:2501.19192 |
we explicitly bound t - singularities on normal projective surfaces $ w $ with one singularity and $ k _ w $ ample. this bound depends only on $ k _ w ^ 2 $, and it is optimal when $ w $ is not rational. we classify and realize surfaces attaining the bound for each kodaira dimension of the minimal resolution of $ w $. this settles the effectiveness of the bounds ( see [ alexeev94 ], [ alexeev - mori04 ], [ lee99 ] ) for those surfaces. | arxiv:1708.02278 |
we introduce and physically motivate the following problem in geometric combinatorics, originally inspired by analysing bell inequalities. a grasshopper lands at a random point on a planar lawn of area one. it then jumps once, a fixed distance $ d $, in a random direction. what shape should the lawn be to maximise the chance that the grasshopper remains on the lawn after jumping? we show that, perhaps surprisingly, a disc shaped lawn is not optimal for any $ d > 0 $. we investigate further by introducing a spin model whose ground state corresponds to the solution of a discrete version of the grasshopper problem. simulated annealing and parallel tempering searches are consistent with the hypothesis that for $ d < \ pi ^ { - 1 / 2 } $ the optimal lawn resembles a cogwheel with $ n $ cogs, where the integer $ n $ is close to $ \ pi ( \ arcsin ( \ sqrt { \ pi } d / 2 ) ) ^ { - 1 } $. we find transitions to other shapes for $ d \ gtrsim \ pi ^ { - 1 / 2 } $. | arxiv:1705.07621 |
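a quick way to build intuition for the grasshopper problem is a monte carlo estimate of the retention probability for the ( provably suboptimal ) disc lawn. the sketch below is illustrative only -- the function name and parameters are our own, not the paper's :

```python
import math
import random

def stays_on_disc_lawn(d: float, trials: int = 100_000, seed: int = 0) -> float:
    """Monte Carlo estimate of the probability that a grasshopper starting
    uniformly on a unit-area disc lawn remains on it after one jump of length d."""
    rng = random.Random(seed)
    r = 1.0 / math.sqrt(math.pi)  # disc of area 1
    hits = 0
    for _ in range(trials):
        # uniform point in the disc (inverse-CDF sampling of the radius)
        rho = r * math.sqrt(rng.random())
        phi = rng.uniform(0.0, 2.0 * math.pi)
        x, y = rho * math.cos(phi), rho * math.sin(phi)
        # jump a fixed distance d in a uniformly random direction
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x2, y2 = x + d * math.cos(theta), y + d * math.sin(theta)
        if x2 * x2 + y2 * y2 <= r * r:
            hits += 1
    return hits / trials
```

running this for a grid of $ d $ values gives the disc baseline against which the cogwheel shapes found by annealing can be compared.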
the measurement of single transverse - spin asymmetries, $ a _ n $, for various quarkonium states and drell - yan lepton pairs can shed light on the orbital angular momentum of quarks and gluons, a fundamental ingredient of the proton - spin puzzle. the after @ lhc proposal combines a unique kinematic coverage and large luminosities thanks to the large hadron collider beams to deliver precise measurements, complementary to the knowledge provided by collider experiments such as at rhic. in this paper, we report on sensitivity studies for $ j / \ psi $, $ \ upsilon $ and drell - yan $ a _ n $ done using the performance of lhcb - like or alice - like detectors, combined with polarised gaseous hydrogen and helium - 3 targets. in particular, such analyses will provide us with new insights and knowledge about transverse - momentum - dependent parton distribution functions for quarks and gluons and on twist - 3 collinear matrix elements in the proton and the neutron. | arxiv:1702.01546 |
we introduce pattern injection local search ( pils ), an optimization strategy that uses pattern mining to explore high - order local - search neighborhoods, and illustrate its application on the vehicle routing problem. pils operates by storing a limited number of frequent patterns from elite solutions. during the local search, each pattern is used to define one move in which 1 ) incompatible edges are disconnected, 2 ) the edges defined by the pattern are reconnected, and 3 ) the remaining solution fragments are optimally reconnected. each such move is accepted only if it improves the solution. as our experiments show, this strategy results in a new paradigm of local search, which complements and enhances classical search approaches in a controllable amount of computational time. we demonstrate that pils identifies useful high - order moves ( e. g., 9 - opt and 10 - opt ) which would otherwise not be found by enumeration, and that it significantly improves the performance of state - of - the - art population - based and neighborhood - centered metaheuristics. | arxiv:1912.11462 |
this expository essay discusses a finite dimensional approach to dilation theory. how much of dilation theory can be worked out within the realm of linear algebra? it turns out that some interesting and simple results can be obtained. these results can be used to give very elementary proofs of sharpened versions of some von neumann type inequalities, as well as some other striking consequences about polynomials and matrices. exploring the limits of the finite dimensional approach sheds light on the difference between those techniques and phenomena in operator theory that are inherently infinite dimensional, and those that are not. | arxiv:1012.4514 |
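for reference, the classical statement behind these " von neumann type " inequalities ( standard textbook form, not quoted from the essay ) : for a contraction $ t $ ( $ \| t \| \le 1 $ ) on a complex hilbert space and any polynomial $ p $,

```latex
\| p(T) \| \;\le\; \max_{|z| \le 1} |p(z)|
```

the finite dimensional approach discussed in the essay asks how much of this, and of the dilation theory underlying it, survives when $ t $ is just a matrix.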
this paper presents an architecture and methodology to empower a service robot to navigate an indoor environment with semantic decision making, given an rgb ego view. this method leverages knowledge of the robot ' s actuation capability and that of scenes, objects and their relations - - represented in a semantic form. the robot navigates based on a geosem map - a relational combination of geometric and semantic maps. the goal given to the robot is to find an object in an unknown environment with no navigational map and only egocentric rgb camera perception. the approach is tested both in a simulation environment and in real - life indoor settings. the presented approach was found to outperform human users in gamified evaluations with respect to average completion time. | arxiv:2210.11543 |
the phase diagram and sound velocities of the fe - si binary alloy, crucial for understanding the earth ' s core, are determined at inner core boundary pressure with \ textit { ab - initio } accuracy through deep - learning - aided hybrid monte carlo simulations. a complex phase diagram emerges close to the melting temperature, where a re - entrance of the body - centered cubic ( bcc ) phase is observed. the bcc structure is stabilized by a pronounced short - range ordering of the si atoms. the miscibility gap between the short - range ordered bcc structure and the long - range ordered cubic b2 structure shrinks with increasing temperature and the transition becomes continuous above 6000 k. we find that a bcc fe - si solid solution reproduces crucial geophysical data such as the low shear sound velocity and the seismic anisotropy of the inner core much better than other structures. | arxiv:2409.08008 |
using high - quality hubble space telescope observations, we construct the near infra - red ( nir ) to far ultra - violet ( fuv ) spectral energy distribution ( sed ) of psr b0656 + 14. the sed is non - monotonic. fitting it with a simple combination of a rayleigh - jeans spectrum ( uv ) and non - thermal power - law ( optical / nir ) leaves significant residuals, strongly hinting at one or more spectral features. we consider various models ( combinations of continuum components, and absorption / emission lines ) with possible interpretations, and place them in the context of the broader spectral energy distribution. surprisingly, the extrapolation of the best - fit x - ray spectral model roughly matches the nir - fuv data, and the power - law component is also consistent with the gamma - ray fluxes. we compare the multiwavelength sed of b0656 + 14 with those of other optical, x - ray and gamma - ray detected pulsars, and notice that a simple power - law spectrum crudely accounts for most of the non - thermal emission. | arxiv:1109.1984 |
the ligo - virgo - kagra collaboration recently detected gravitational waves ( gws ) from the merger of black - hole - neutron - star ( bhns ) binary systems gw200105 and gw200115. no coincident electromagnetic ( em ) counterparts were detected. while the mass ratio and bh spin in both systems were not sufficient to tidally disrupt the ns outside of the bh event horizon, other, magnetospheric mechanisms for em emission exist in this regime and depend sensitively on the ns magnetic field strength. combining gw measurements with em flux upper limits, we place upper limits on the ns surface magnetic field strength above which magnetospheric emission models would have generated an observable em counterpart. we consider fireball models powered by the black - hole battery mechanism, where energy is output in gamma - rays over $ \ lesssim1 $ ~ second. consistency with no detection by fermi - gbm or integral spi - acs constrains the ns surface magnetic field to $ \ lesssim10 ^ { 15 } $ ~ g. hence, joint gw detection and em upper limits rule out the theoretical possibility that the nss in gw200105 and gw200115, and the putative ns in gw190814, retain $ \ gtrsim10 ^ { 15 } $ ~ g dipolar magnetic fields until merger. they also rule out formation scenarios where strongly magnetized magnetars quickly merge with bhs. we alternatively rule out operation of the bh - battery powered fireball mechanism in these systems. this is the first multi - messenger constraint on ns magnetic fields in bhns systems and a novel approach to probe fields at this point in ns evolution. this demonstrates the constraining power that multi - messenger analyses of bhns mergers have on bhns formation scenarios, the magnetic - field evolution in nss, and the physics of bhns magnetospheric interactions. | arxiv:2112.01979 |
the $ 2n $ dimensional manifold with two mutually commutative operators of differentiation is introduced. nontrivial multidimensional integrable systems connected with arbitrary graded ( semisimple ) algebras are constructed. their general solution is presented in explicit form. | arxiv:nlin/0101059 |
we study the phenomenon of cluster synchrony that occurs in ensembles of coupled phase oscillators when higher - order modes dominate the coupling between oscillators. for the first time, we develop a complete analytic description of the dynamics in the limit of a large number of oscillators and use it to quantify the degree of cluster synchrony, cluster asymmetry, and switching. we use a variation of the recent dimensionality - reduction technique of ott and antonsen [ chaos { \ bf 18 }, 037113 ( 2008 ) ] and find an analytic description of the degree of cluster synchrony valid on a globally attracting manifold. shaped by this manifold, there is an infinite family of steady - state distributions of oscillators, resulting in a high degree of multi - stability in the cluster asymmetry. we also show how through external forcing the degree of asymmetry can be controlled, and suggest that systems displaying cluster synchrony can be used to encode and store data. | arxiv:1107.1511 |
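the two - cluster states that arise when the second harmonic dominates the coupling can be reproduced in a few lines. the sketch below is illustrative ( it is a direct euler integration, not the ott - antonsen reduction, and all names are ours ) : identical oscillators coupled through $ \sin ( 2 ( \theta_j - \theta_i ) ) $ lock into two antipodal clusters, driving the second - order cluster parameter $ r_2 $ toward 1.

```python
import cmath
import math
import random

def simulate_two_cluster(n: int = 100, k: float = 2.0, steps: int = 2000,
                         dt: float = 0.05, seed: int = 1) -> float:
    """Euler-integrate n identical phase oscillators with pure second-harmonic
    coupling, dtheta_i/dt = (k/n) * sum_j sin(2*(theta_j - theta_i)),
    and return the second-order cluster parameter R2 = |<e^{2i theta}>|."""
    rng = random.Random(seed)
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    for _ in range(steps):
        # the mean field z2 = R2 * e^{i psi} closes the coupling sum:
        # (1/n) sum_j sin(2(theta_j - theta_i)) = R2 * sin(psi - 2*theta_i)
        z2 = sum(cmath.exp(2j * t) for t in theta) / n
        r2, psi = abs(z2), cmath.phase(z2)
        theta = [t + dt * k * r2 * math.sin(psi - 2.0 * t) for t in theta]
    return abs(sum(cmath.exp(2j * t) for t in theta) / n)
```

the final split of the population between the two clusters depends on the initial condition, which is the multi - stability in the cluster asymmetry described above.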
brain computer interface ( bci ) is the only way for some special patients to communicate with the outside world and provides a direct control channel between the brain and external devices. as a non - invasive interface, scalp electroencephalography ( eeg ) has significant potential to be a major input signal for future bci systems. traditional methods focus on only a particular feature in the eeg signal, which limits the practical applications of eeg - based bci. in this paper, we propose an algorithm for eeg classification with the ability to fuse multiple features. first, we use the common spatial pattern ( csp ) as the spatial feature and wavelet coefficients as the spectral feature. second, we fuse these features with a fusion algorithm in an orchestrated way to improve the classification accuracy. our algorithms are applied to dataset iva from bci competition iii. the experimental results suggest that our algorithm performs better than traditional methods. | arxiv:1808.04443 |
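the abstract does not spell out the fusion rule, so as a hedged illustration of the general idea only, one naive baseline is to z - score each feature family so neither dominates by scale and then concatenate into a single vector for a downstream classifier :

```python
import math

def fuse_features(spatial, spectral):
    """Naive fusion baseline (illustrative; not the paper's fusion algorithm):
    z-score each feature family separately, then concatenate."""
    def z(v):
        mu = sum(v) / len(v)
        sd = math.sqrt(sum((x - mu) ** 2 for x in v) / len(v)) or 1.0
        return [(x - mu) / sd for x in v]
    return z(spatial) + z(spectral)
```

any scheme that weights or selects among the families ( rather than plain concatenation ) would count as a more " orchestrated " fusion in the sense used above.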
[ pasj review paper ] rotation curves are the basic tool for deriving the distribution of mass in spiral galaxies. in this review, we describe various methods to measure rotation curves in the milky way and spiral galaxies. we then describe two major methods to calculate the mass distribution using the rotation curve. by the direct method, the mass is calculated from rotation velocities without employing mass models. by the decomposition method, the rotation curve is deconvolved into multiple mass components by model fitting assuming a black hole, bulge, exponential disk and dark halo. the decomposition is useful for statistical correlation analyses among the dynamical parameters of the mass components. we also review recent observations and derived results. ( full resolution copy is available at url : http : / / www. ioa. s. u - tokyo. ac. jp / ~ sofue / htdocs / pasjreview2016 / ) | arxiv:1608.08350 |
this study develops a water - level management model for the great lakes using a predictive control framework. requirement 1 : historical data ( pre - 2019 ) revealed consistent monthly water - level patterns. a simulated annealing algorithm optimized flow control via the moses - saunders dam and compensating works to align levels with multi - year benchmarks. requirement 2 : a water level predictive control model ( wlpcm ) integrated delayed differential equations ( ddes ) and model predictive control ( mpc ) to account for inflow / outflow dynamics and upstream time lags. natural variables ( e. g., precipitation ) were modeled via linear regression, while dam flow rates were optimized over 6 - month horizons with feedback adjustments for robustness. requirement 3 : testing wlpcm on 2017 data successfully mitigated ottawa river flooding, outperforming historical records. sensitivity analysis via the sobol method confirmed model resilience to parameter variations. requirement 4 : ice - clogging was identified as the most impactful natural variable ( via rmse - based sensitivity tests ), followed by snowpack and precipitation. requirement 5 : stakeholder demands ( e. g., flood prevention, ecological balance ) were incorporated into a fitness function. compared to plan 2014, wlpcm reduced catastrophic high levels in lake ontario and excessive st. lawrence river flows by prioritizing long - term optimization. key innovations include dde - based predictive regulation, real - time feedback loops, and adaptive control under extreme conditions. the framework balances hydrological dynamics, stakeholder needs, and uncertainty management, offering a scalable solution for large freshwater systems. | arxiv:2504.04761 |
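the simulated - annealing step of requirement 1 can be sketched on a toy monthly mass - balance model. everything below is a simplified illustration under our own assumptions -- the paper's hydrological model, decision variables and fitness function are far richer :

```python
import math
import random

def anneal_outflows(inflows, targets, level0=0.0, t0=1.0, alpha=0.995,
                    iters=5000, seed=0):
    """Toy simulated-annealing search for monthly dam outflows q so that the
    mass-balance levels level[m] = level[m-1] + inflows[m] - q[m]
    track the target (benchmark) levels."""
    rng = random.Random(seed)

    def cost(q):
        level, c = level0, 0.0
        for m, inflow in enumerate(inflows):
            level += inflow - q[m]
            c += (level - targets[m]) ** 2
        return c

    q = [0.0] * len(inflows)
    best, best_cost = list(q), cost(q)
    cur_cost, t = best_cost, t0
    for _ in range(iters):
        cand = list(q)
        m = rng.randrange(len(cand))
        cand[m] += rng.gauss(0.0, 0.1)  # perturb one month's release
        cc = cost(cand)
        # accept improvements always; accept uphill moves with Metropolis prob.
        if cc < cur_cost or rng.random() < math.exp((cur_cost - cc) / t):
            q, cur_cost = cand, cc
            if cc < best_cost:
                best, best_cost = list(q), cc
        t *= alpha  # geometric cooling schedule
    return best, best_cost
```

in the actual wlpcm setting the same loop would sit inside the mpc horizon, with the dde - based level predictions replacing the one - line mass balance.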
let $ n $ and $ k $ be integers. a set $ a \ subset \ mathbb { z } / n \ mathbb { z } $ is $ k $ - free if for all $ x $ in $ a $, $ kx \ notin a $. we determine the maximal cardinality of such a set when $ k $ and $ n $ are coprime. we also study several particular cases and we propose an efficient algorithm for solving the general case. we finally give the asymptotic behaviour of the minimal size of a $ k $ - free set in $ \ left [ 1, n \ right ] $ which is maximal for inclusion. | arxiv:1409.7294 |
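the extremal quantity can be cross - checked by brute force for small $ n $. the function below is an exhaustive baseline ( exponential in $ n $, and emphatically not the efficient algorithm the paper proposes ) :

```python
def max_k_free_size(n: int, k: int) -> int:
    """Largest A subset of Z/nZ with: x in A implies k*x mod n not in A.
    Exhaustive over all 2^n subsets -- only for checking small cases."""
    best = 0
    for mask in range(1 << n):
        a = [x for x in range(n) if mask >> x & 1]
        s = set(a)
        if all((k * x) % n not in s for x in a):
            best = max(best, len(a))
    return best
```

note that when $ \gcd ( k, n ) = 1 $, multiplication by $ k $ permutes $ \mathbb { z } / n \mathbb { z } $, so the problem decomposes over the cycles of that permutation, which is the structure the coprime case exploits.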
we consider a class of spatially flat cold dark matter ( cdm ) models, with a cosmological constant and a broken - scale - invariant ( bsi ) steplike primordial spectrum of adiabatic perturbations, previously found to be in very good agreement with observations. performing a fisher matrix analysis, we show that in the case of a large gravitational - wave ( gw ) contribution some free parameters ( defining the step ) of our bsi model can be extracted with remarkable accuracy by the planck satellite, thanks to the polarisation anisotropy measurements. furthermore, cosmological parameters can still be found with very good precision, despite a larger number of free parameters than in the simplest inflationary models. | arxiv:astro-ph/9807020 |