text | source |
|---|---|
Cross-domain object detection is challenging, because an object detection model is often vulnerable to data variance, especially to the considerable domain shift between two distinctive domains. In this paper, we propose a new Unbiased Mean Teacher (UMT) model for cross-domain object detection. We reveal that there often exists a considerable model bias for the simple Mean Teacher (MT) model in cross-domain scenarios, and eliminate the model bias with several simple yet highly effective strategies. In particular, for the teacher model, we propose a cross-domain distillation method for MT to maximally exploit the expertise of the teacher model. Moreover, for the student model, we alleviate its bias by augmenting training samples with pixel-level adaptation. Finally, for the teaching process, we employ an out-of-distribution estimation strategy to select samples that best fit the current model to further enhance the cross-domain distillation process. By tackling the model bias issue with these strategies, our UMT model achieves mAPs of 44.1%, 58.1%, 41.7%, and 43.1% on the benchmark datasets Clipart1k, Watercolor2k, Foggy Cityscapes, and Cityscapes, respectively, which outperforms the existing state-of-the-art results by notable margins. Our implementation is available at https://github.com/kinredon/umt. | arxiv:2003.00707 |
Non-factorizable virtual corrections to Higgs boson production in weak boson fusion at next-to-next-to-leading order in QCD were estimated in the eikonal approximation [1]. This approximation corresponds to the expansion of the relevant amplitudes around the forward limit. In this paper we compute the leading power correction to the eikonal limit and show that it is proportional to the first power of the Higgs boson transverse momentum or the Higgs boson mass over the partonic center-of-mass energy. Moreover, this correction can be significantly enhanced by the rapidity of the Higgs boson. For realistic weak boson fusion cuts, the next-to-eikonal correction reduces the estimate of non-factorizable contributions to the fiducial cross section by $\mathcal{O}(30)$ percent. | arxiv:2305.12937 |
Pinning particles at random in supercooled liquids is a promising route to making substantial progress on the glass transition problem. Here we develop a mean-field theory by studying the equilibrium and non-equilibrium dynamics of the spherical p-spin model in the presence of a fraction $c$ of pinned spins. Our study shows the existence of two dynamic critical lines: one corresponding to the usual mode-coupling transitions and the other to dynamic spinodal transitions. Quenches in the portion of the $c$-$T$ phase diagram delimited by those two lines lead to aging. By extending our results to finite-dimensional systems we predict non-interrupted aging only for quenches on the ideal glass transition line, and two very different types of equilibrium relaxation for quenches below and above it. | arxiv:1112.4068 |
Effect modification occurs when the effect of the treatment is not homogeneous across different strata of patient characteristics. When the effect of treatment may vary from individual to individual, precision medicine can be improved by identifying patient covariates to estimate the size and direction of the effect at the individual level. However, this task is statistically challenging and typically requires large amounts of data. Investigators may be interested in using individual patient data (IPD) from multiple studies to estimate these treatment effect models. Our data arise from a systematic review of observational studies contrasting different treatments for multidrug-resistant tuberculosis (MDR-TB), where multiple antimicrobial agents are taken concurrently to cure the infection. We propose a marginal structural model (MSM) for effect modification by different patient characteristics and co-medications in a meta-analysis of observational IPD. We develop, evaluate, and apply a targeted maximum likelihood estimator (TMLE) for the doubly robust estimation of the parameters of the proposed MSM in this context. In particular, we allow for differential availability of treatments across studies, measured confounding within and across studies, and random effects by study. | arxiv:2101.03997 |
We study three aspects of work statistics in the context of the fluctuation theorem for quantum spin chains of up to $1024$ sites by numerical methods based on matrix-product states (MPS). First, we use our numerical method to evaluate the moments/cumulants of the work done by a sudden quench process on the Ising or Haldane spin chains and study their behavior across the quantum phase transitions. Our results show that, up to the fourth cumulant, the work statistics can indicate quantum phase transitions characterized by local order parameters, but barely those that are purely topological. Second, we propose to use the fluctuation theorem, such as Jarzynski's equality, which relates the real-time correlator to the ratio of thermal partition functions, as a benchmark indicator for numerical real-time evolution methods. Third, we study the passivity of ground and thermal states of quantum spin chains under some cyclic impulse processes. We show that the passivity of thermal states and ground states under Hermitian actions is ensured by the second law and variational principles, respectively, and also verify it by numerical calculations. Besides, we also consider the passivity of ground states under non-Hermitian actions, for which the variational principle cannot be applied. Despite that, we find no violation of passivity in our numerical results for all the cases considered in the Ising and Haldane chains. Overall, we demonstrate that the work statistics for the sudden quench and impulse processes can be evaluated precisely by the numerical MPS method to characterize quantum phase transitions and examine the passivity of quantum states. We also propose to exploit the universality of the fluctuation theorem to benchmark numerical real-time evolutions in an algorithm- and model-independent way. | arxiv:2308.13366 |
In this paper, we propose a novel approach for measuring the degree of similarity between the categories of two pieces of Persian text, which were published as descriptions of two separate advertisements. We built an appropriate dataset for this work using a dataset consisting of advertisements posted on an e-commerce website. We generated a significant number of paired texts from this dataset and assigned each pair a score from 0 to 3, which indicates the degree of similarity between the domains of the pair. In this work, we represent words with word embedding vectors derived from word2vec. Then deep neural network models are used to represent texts. Eventually, we employ a concatenation of absolute difference and bit-wise multiplication and a fully-connected neural network to produce a probability distribution vector for the score of the pairs. Through a supervised learning approach, we trained our model on a GPU, and our best model achieved an F1 score of 0.9865. | arxiv:1909.09690 |
We present results for non-perturbative renormalization (NPR) factors for staggered fermion bilinears of arbitrary spin and taste. We use "covariant" bilinears which transform irreducibly under the lattice translation and rotation group, and thus do not mix. We form ~30 ratios which have no anomalous dimensions, and compare the NPR results to those from 1-loop perturbation theory. We also compare the absolute renormalization factors (which, in general, do have anomalous dimensions) to 1-loop perturbation theory. We use asqtad and HYP-smeared staggered valence fermions on the coarse MILC asqtad lattices. | arxiv:1110.5494 |
Recent experiments involving CdTe films grown on Si(111) substrates by hot wall epitaxy revealed features not previously observed [S. O. Ferreira \textit{et al.}, J. Appl. Phys. \textbf{93}, 1195 (2003)]. This system, which follows the Volmer-Weber growth mode with nucleation of isolated 3D islands for less than one monolayer of evaporated material, exhibited a peculiar behavior of the quantum dot (QD) size distributions. In this work, we propose a kinetic deposition model to reproduce these new features. The model, which includes thermally activated diffusion and evaporation of CdTe, qualitatively reproduces the experimental QD size distributions. Moreover, the model predicts a transition from the Stranski-Krastanow growth mode at lower temperatures to the Volmer-Weber growth mode at higher ones, characterized through the QD width distributions. | arxiv:cond-mat/0601050 |
The accretion of dark matter (DM) onto astrophysical black holes slowly increases their mass. The rate of this mass accretion depends on the DM model and the model parameters. If this mass accretion effect can be measured accurately enough, it is possible to rule out some DM models and, with sufficient technology and the help of other DM constraints, possibly confirm one model. We propose a DM probe based on accreting pulsar-black hole binaries, which provide a high-precision measurement of the binary orbital phase shifts induced by DM accretion onto black holes, and can help rule out DM models and study the nature of DM. | arxiv:2304.08824 |
Space charge effects can significantly degrade charge collection in organic photovoltaics (OPVs), especially in thick-film devices. The two main causes of space charge are doping and imbalanced transport. Although these are completely different phenomena, they lead to the same voltage dependence of the photocurrent, making them difficult to distinguish. In this work, a method is introduced for monitoring the build-up of space charge due to imbalanced transport in a real operating organic solar cell. The method is based on the reconstruction of quantum efficiency spectra and requires only optical input parameters that are straightforward to measure. This makes it suitable for the screening of new OPV materials. Furthermore, numerical and analytical means are derived to predict the impact of imbalanced transport on charge collection. It is shown that, when charge recombination is sufficiently reduced, balanced transport is not a necessary condition for efficient thick-film OPVs. | arxiv:1911.00866 |
We introduce the minimum labelling spanning bi-connected subgraph problem (MLSBP), replacing connectivity by bi-connectivity in the well-known minimum labelling spanning tree problem (MLSTP). A graph is bi-connected if, for every two vertices, there are at least two vertex-disjoint paths joining them. The problem consists in finding the spanning bi-connected subgraph, or block, with the minimum set of labels. We adapt the exact method of the MLSTP to solve the MLSBP, as well as the basic greedy constructive heuristic, the maximum vertex covering algorithm (MVCA). This procedure is a basic component in the application of metaheuristics to solve the problem. | arxiv:1505.01742 |
In an effort to study the applicability of adaptive mesh refinement (AMR) techniques to atmospheric models, an interpolation-based spectral element shallow water model on a cubed-sphere grid is compared to a block-structured finite volume method in latitude-longitude geometry. Both models utilize a non-conforming adaptation approach which doubles the resolution at fine-coarse mesh interfaces. The underlying AMR libraries are quad-tree based and ensure that neighboring regions can only differ by one refinement level. The models are compared via selected test cases from a standard test suite for the shallow water equations. These include the advection of a cosine bell, a steady-state geostrophic flow, a flow over an idealized mountain, and a Rossby-Haurwitz wave. Both static and dynamic adaptations are evaluated, which reveal the strengths and weaknesses of the AMR techniques. Overall, the AMR simulations show that both models successfully place static and dynamic adaptations in local regions without requiring a fine grid in the global domain. The adaptive grids reliably track features of interest without visible distortions or noise at mesh interfaces. Simple threshold adaptation criteria for the geopotential height and the relative vorticity are assessed. | arxiv:physics/0702133 |
An alternative process is proposed for diffractive Higgs boson production in peripheral $pp$ collisions, exploring it through the photon-proton interaction by double pomeron exchange. The event rate of diffractive Higgs production at central rapidity is estimated for Tevatron and LHC energies, and is of the order of 1 fb, in agreement with the predictions from other diffractive processes. The results are confronted with those obtained from a similar approach by the Durham group. | arxiv:0812.1181 |
The brain processes information about the environment via neural codes. The neural ideal was introduced recently as an algebraic object that can be used to better understand the combinatorial structure of neural codes. Every neural ideal has a particular generating set, called the canonical form, that directly encodes a minimal description of the receptive field structure intrinsic to the neural code. On the other hand, for a given monomial order, any polynomial ideal is also generated by its unique (reduced) Gr\"obner basis with respect to that monomial order. How are these two types of generating sets -- canonical forms and Gr\"obner bases -- related? Our main result states that if the canonical form of a neural ideal is a Gr\"obner basis, then it is the universal Gr\"obner basis (that is, the union of all reduced Gr\"obner bases). Furthermore, we prove that this situation -- when the canonical form is a Gr\"obner basis -- occurs precisely when the universal Gr\"obner basis contains only pseudo-monomials (certain generalizations of monomials). Our results motivate two questions: (1) When is the canonical form a Gr\"obner basis? (2) When the universal Gr\"obner basis of a neural ideal is {\em not} a canonical form, what can the non-pseudo-monomial elements in the basis tell us about the receptive fields of the code? We give partial answers to both questions. Along the way, we develop a representation of pseudo-monomials as hypercubes in a Boolean lattice. | arxiv:1612.05660 |
A Sturmian word is a map $w$ from the natural numbers into $\{0,1\}$ for which the set of $\{0,1\}$-vectors $F_n(w) := \{(w(i), w(i+1), \ldots, w(i+n-1))^T : i \ge 0\}$ has cardinality exactly $n+1$ for each positive integer $n$. Our main result is that the volume of the simplex whose $n+1$ vertices are the $n+1$ points in $F_n(w)$ does not depend on $w$. Our proof of this motivates studying algebraic properties of the permutation $\pi$ (depending on an irrational $x$ and a positive integer $n$) that orders the fractional parts $\{1x\}, \{2x\}, \ldots, \{nx\}$, i.e., $0 < \{\pi(1)x\} < \{\pi(2)x\} < \cdots < \{\pi(n)x\} < 1$. We give a formula for the sign of $\pi$, and prove that for every irrational $x$ there are infinitely many $n$ such that the order of $\pi$ (as an element of the symmetric group $S_n$) is less than $n$. | arxiv:math/0211200 |
We develop a theory of generalist predation showing how alternative prey species are affected by changes in both the mean abundance and the variability (coefficient of variation) of their predator's primary prey. The theory is motivated by the indirect effects of cyclic rodent populations on ground-breeding birds, and is developed through progressive analytic simplifications of an empirically-based model. It nonetheless applies to many other systems where primary prey have fast life histories and can become locally superabundant, which facilitates impact on alternative prey species. In contrast to classic apparent competition theory based on symmetric interactions, our results suggest that predator effects on alternative prey should generally decrease with mean primary prey abundance, and increase with primary prey variability (low to high CV), unless predators have strong aggregative responses, in which case these results can be reversed. Approximations of models including predator dynamics (general numerical response with possible delays) confirm these results but further suggest that negative temporal correlation between predator and primary prey is harmful to alternative prey. We find in general that predator numerical responses are crucial to predict the response of ecosystems to changes in key prey species exhibiting outbreaks, and we extend the apparent competition/mutualism theory to asymmetric interactions. | arxiv:1405.2428 |
In this note we study the power of so-called query-limited computers. We compare the strength of a classical computer that is allowed to ask two questions to an NP oracle with the strength of a quantum computer that is allowed only one such query. It is shown that any decision problem that requires two parallel (non-adaptive) SAT queries on a classical computer can also be solved exactly by a quantum computer using only one SAT-oracle call, where both computations have polynomial time complexity. Such a simulation is generally believed to be impossible for a one-query classical computer. The reduction also does not hold if we replace the SAT oracle by a general black box. This result therefore gives an example of how a quantum computer is probably more powerful than a classical computer. It also highlights the potential differences between quantum complexity results for general oracles when compared to results for more structured tasks like the SAT problem. | arxiv:quant-ph/9806090 |
An effective spin model for Mott insulators is determined by the symmetries involved among magnetic sites, electron fillings, and their interactions. Such a spin Hamiltonian offers insight into mechanisms of magnetic order and magnetic anisotropy beyond the Heisenberg model. For a spin moment $S$ bigger than 1/2, single-ion anisotropy is in principle allowed. However, for $d^3$ Mott insulators with large cubic crystal field splitting, the single-ion anisotropy is absent within the LS coupling, despite the $S = 3/2$ local moment. On the other hand, preferred magnetic moment directions in $d^3$ materials have been reported, which calls for a further theoretical investigation. Here we derive the single-ion anisotropy interaction using strong-coupling perturbation theory. The cubic crystal field splitting including $e_g$ orbitals, trigonal distortions, Hund's coupling, and spin-orbit coupling beyond the LS scheme are taken into account. For compressed distortion, the spin-orbit coupling at the magnetic sites can favor either the easy axis or the easy plane, while that of the anions leads to easy-axis anisotropy. We apply the theory to $\rm{CrX}_3$ with X = Cl and I, and show the dependence of the single-ion anisotropy on the strength of the spin-orbit couplings of both the magnetic and anion sites. The significance of the single-ion anisotropy in ideal two-dimensional magnets is also discussed. | arxiv:2203.08836 |
In this paper, we explore the behavior of orthogonal involutions in the context of totally positive field extensions. Let $K/F$ be a totally positive extension of formally real fields. By Becher's result, if a quadratic form $q$ over $F$ becomes isotropic over $K$, then $q$ is weakly isotropic over $F$. We present an example in which, despite $K/F$ being totally positive, a central simple algebra $(A, \sigma)$ over $F$ with an orthogonal involution becomes isotropic over $K$ while remaining strongly anisotropic over $F$. However, when $K/F$ is assumed to be a Galois totally positive $2$-extension of formally real fields, we show that an analogue of Becher's result for quadratic forms holds for orthogonal involutions. Furthermore, for a totally positive Galois field extension $K/F$, we verify Becher's conjecture for central division algebras of index $2^n$ and exponent $2$ containing a subfield of $F_{py}$ of degree $2^{n-2}$ over $F$. | arxiv:2503.03366 |
We study the self-organization of turbulence in a geophysically motivated two-dimensional fluid with local interactions. Using simulations and theory, we show that the out-of-equilibrium flux to small scales imposes a constraint on the large-scale emergent flow. Consequently, a rich phase diagram of large-scale configurations emerges, replacing the unique state found in flows with energy injection below the interaction scale. We explain what sets the boundaries between the different phases, and the occurrence of spontaneous symmetry breaking. Our work demonstrates that the selection mechanism of large-scale structures in quasi-geostrophic flows can be dramatically altered by forcing above the interaction scale. | arxiv:2410.15950 |
We focus on Legendrian submanifolds of the space of one-jets of functions, $J^1(\mathbb{R}^n, \mathbb{R})$. We are interested in processes, or operations, that build new Legendrian submanifolds from old ones. We introduce in particular two operations, namely the sum and the convolution, which in some sense lift to $J^1(\mathbb{R}^n, \mathbb{R})$ the sum and infimal-convolution operations on functions from convex analysis. We show that these operations fit well with the classical theory of generating functions. Finally, we refine this theory so that the min-max selector of generating functions plays its natural role. | arxiv:1611.06823 |
The study of SU(N) quantum spin models is relevant to a variety of physical systems, including ultracold atoms in optical lattices, and also leads to insights into novel quantum phases and phase transitions of SU(2) spin models. We use Gutzwiller projected fermionic variational wavefunctions to explore the phase diagram and correlation functions of SU(N) spin models in the self-conjugate representation, with Heisenberg bilinear and biquadratic interactions. In 1D, the variational phase diagram of the SU(4) spin chain is constructed by examining instabilities of the Gutzwiller projected free fermion ground state to various broken symmetries, and it agrees well with exact results. The spin and dimer correlations of the Gutzwiller projected free fermion state with N flavors of fermions are also in good agreement with exact and 1/N calculations for the critical points of SU(N) spin chains. In 2D, the variational phase diagram on the square lattice is obtained by studying instabilities of the Gutzwiller projected pi-flux state. The variational ground state of the pure Heisenberg model is found to exhibit long-range Neel order for N = 2, 4 and spin-Peierls order for N > 4. For N = 4 and 6, biquadratic interactions lead to a complex phase diagram which includes an extended valence bond crystal in both cases, as well as a stable pi-flux phase for N = 6. The spin correlations of the projected pi-flux state at N = 4 are in good agreement with 1/N calculations. We find that this state also shows strongly enhanced dimer correlations, in qualitative accord with the large-N results. We compare our results with a recent QMC study of the SU(4) Heisenberg model. | arxiv:cond-mat/0608691 |
We fix $z_0 \in \mathbb{C}$ and a field $\mathbb{F}$ with $\mathbb{C} \subset \mathbb{F} \subset \mathcal{M}_{z_0} :=$ the field of germs of meromorphic functions at $z_0$. We fix $f_1, \ldots, f_r \in \mathcal{M}_{z_0}$ and we consider the $\mathbb{F}$-algebras $S := \mathbb{F}[f_1, \ldots, f_r]$ and $\overline{S} := \mathbb{F}[f_1^{\pm 1}, \ldots, f_r^{\pm 1}]$. We present the general properties of the semigroup rings \begin{align*} & S^{hol} := \mathbb{F}[f^{\mathbf{a}} := f_1^{a_1} \cdots f_r^{a_r} : (a_1, \ldots, a_r) \in \mathbb{N}^r \text{ and } f^{\mathbf{a}} \text{ is holomorphic at } z_0], \\ & \overline{S}^{hol} := \mathbb{F}[f^{\mathbf{a}} := f_1^{a_1} \cdots f_r^{a_r} : (a_1, \ldots, a_r) \in \mathbb{Z}^r \text{ and } f^{\mathbf{a}} \text{ is holomorphic at } z_0], \end{align*} and we tackle in detail the case in which $\mathbb{F} = \mathcal{M}_{<1}$ is the field of meromorphic functions of order $<1$ and the $f_j$'s are meromorphic functions over $\mathbb{C}$ of finite order with a finite number of zeros and poles. | arxiv:1907.12099 |
A new two-parametric family of mass distributions for spherical stellar systems is considered. It generalizes the families by Kuzmin & Veltmann (1972) and by An & Evans (2006). Steady velocity dispersions are found for these models by solving an equation of hydrostatic equilibrium. Axisymmetric generalizations of the model are discussed. | arxiv:1003.0259 |
Photosensitivity refers to a neurophysiological condition in which the brain generates epileptic discharges known as photoparoxysmal responses (PPR) in response to light flashes. In severe cases, these PPR can lead to epileptic seizures. The standardized diagnostic procedure for this condition is called intermittent photic stimulation. During this procedure, the patient is exposed to a flashing light, aiming to trigger these epileptic reactions while preventing their full development. Meanwhile, brain activity is monitored using electroencephalography, which is visually analyzed by clinical staff to identify these responses. Hence, the automatic detection of PPR becomes a highly unbalanced problem that has been barely studied in the literature due to photosensitivity's low prevalence. This research tackles this problem and proposes using Inception-based deep learning (DL) neural networks that, together with transfer learning, are trained on epileptic seizure detection and tuned for the PPR automatic detection task. A data augmentation (DA) technique is also applied to balance the available data set, evaluating its effects on the DL models. The proposal outperformed state-of-the-art solutions in the literature, achieving higher ratios on standard performance metrics, with DA significantly improving sensitivity without affecting accuracy and specificity. This project is currently being developed with patients from Burgos University Hospital, Spain. | arxiv:2502.12021 |
We present results of time-domain Brillouin scattering (TDBS) to determine the local temperature of liquids in contact with an optical transducer. TDBS is based on an ultrafast pump-probe technique to determine the light scattering frequency shift caused by the propagation of coherent acoustic waves in a sample. Since the temperature influences the Brillouin scattering frequency shift, the TDBS signal probes the local temperature of the liquid. Results for the extracted Brillouin scattering frequencies recorded at different liquid temperatures and at different laser powers, i.e. different steady-state background temperatures, are shown to demonstrate the usefulness of TDBS as a temperature probe. This TDBS experimental scheme is a first step towards the investigation of ultrathin liquids measured by GHz ultrasonic probing. | arxiv:1809.06711 |
The critical behavior of the random field $O(N)$ model driven at a uniform velocity is investigated at zero temperature. From naive phenomenological arguments, we introduce a dimensional reduction property, which relates the large-scale behavior of the $d$-dimensional driven random field $O(N)$ model to that of the $(d-1)$-dimensional pure $O(N)$ model. This is an analogue of the dimensional reduction property in equilibrium cases, which states that the large-scale behavior of $d$-dimensional random field models is identical to that of $(d-2)$-dimensional pure models. However, the dimensional reduction property breaks down in low enough dimensions due to the presence of multiple meta-stable states. By employing the non-perturbative renormalization group approach, we calculate the critical exponents of the driven random field $O(N)$ model near three dimensions and determine the range of $N$ in which the dimensional reduction breaks down. | arxiv:1704.03644 |
We study the $B_{1g}$ and $A_{1g}$ Raman profiles of M$_2$CuO$_4$ (with M = La, Pr, Nd, Sm, Gd), Bi$_2$Sr$_2$Ca$_{0.5}$Y$_{0.5}$Cu$_2$O$_{8+y}$, YBa$_2$Cu$_3$O$_{6.2}$ and PrBa$_2$Cu$_{2.7}$Al$_{0.3}$O$_7$ insulating cuprates within the Loudon-Fleury theory, in the framework of an extended Hubbard model for moderate on-site Coulomb interaction $U$. We calculate the non-resonant contribution to these Raman profiles by using exact diagonalization techniques and analyze two types of contributing mechanisms to the line shapes: 4-spin cyclic exchange and spin-phonon interactions. Although these interactions contribute to different parts of the spectra, together they account for the enhanced linewidth and asymmetry of the $B_{1g}$ mode, as well as the non-negligible intensity of the $A_{1g}$ Raman line observed in these materials. | arxiv:cond-mat/9809258 |
In cocktail party listening scenarios, the human brain is able to separate competing speech signals. However, the signal processing implemented by the brain to perform cocktail party listening is not well understood. Here, we trained two separate convolutive autoencoder deep neural networks (DNN) to separate monaural and binaural mixtures of two concurrent speech streams. We then used these DNNs as convolutive deep transform (CDT) devices to perform probabilistic re-synthesis. The CDTs operated directly in the time domain. Our simulations demonstrate that very simple neural networks are capable of exploiting the monaural and binaural information available in a cocktail party listening scenario. | arxiv:1503.06046 |
Integrated optics provides a platform for the experimental implementation of highly complex and compact circuits, for practical applications as well as for advances in the fundamental science of quantum optics. The lithium niobate (LN) waveguide is an important candidate for the construction of integrated optical circuits. Based on a bound state in the continuum (BIC) in an LN waveguide, we propose an efficient way to produce polarization-entangled photon pairs. The implementation of this method is simple and does not require the poling process needed for periodically poled LN. The generation rate of the entangled photon pairs increases linearly with the length of the waveguide. For visible light, the generation efficiency can be improved by more than five orders of magnitude with waveguides having a length of only a few millimeters, compared with the corresponding case without BICs. The phenomena can appear in a very wide spectral range from the visible to THz regions. This study is of great significance for the development of active integrated quantum chips in various wavelength ranges. | arxiv:2103.05323 |
To facilitate the evolution of edge intelligence in ever-changing environments, we study on-device incremental learning constrained by limited computation resources in this paper. Current on-device training methods just focus on efficient training without considering catastrophic forgetting, preventing the model from getting stronger when continually exploring the world. To solve this problem, a direct solution is to incorporate existing incremental learning mechanisms into the on-device training framework. Unfortunately, such a manner cannot work well, as those mechanisms usually introduce large additional computational cost to the network optimization process, which would inevitably exceed the memory capacity of the edge devices. To address this issue, this paper makes an early effort to propose a simple but effective edge-friendly incremental learning framework. Based on an empirical study of the knowledge intensity of the kernel elements of the neural network, we find that the center kernel is the key to maximizing the knowledge intensity for learning new data, while freezing the other kernel elements achieves a good balance of the model's capacity for overcoming catastrophic forgetting. Upon this finding, we further design a center-sensitive kernel optimization framework to largely alleviate the cost of gradient computation and back-propagation. Besides, a dynamic channel element selection strategy is also proposed to facilitate a sparse orthogonal gradient projection for further reducing the optimization complexity, based on the knowledge explored from the new task data. Extensive experiments validate that our method is efficient and effective; e.g., our method achieves an average accuracy boost of 38.08% with even less memory and approximate computation compared to existing on-device training methods, indicating its significant potential for on-device incremental learning. | arxiv:2406.08830 |
In this paper we present a deep X-ray observation of the nearby M dwarf GJ 357 and use it to put constraints on the atmospheric evolution of its planet, GJ 357 b. We also analyse the systematic errors in the stellar parameters of GJ 357 in order to see how they affect the perceived planetary properties. We estimate the age of GJ 357 b by comparing the observed X-ray luminosity of its host star, derived from a recent {\em XMM-Newton} observation ($\log{L_{\rm X}}\,{\rm [erg/s]} = 25.73$), with $L_{\rm X}$-age relations for M dwarfs. We find that GJ 357 presents one of the lowest X-ray activity levels ever measured for an M dwarf, and we put a lower limit on its age of $5$\,Gyr. Using this age limit, we perform a backwards reconstruction of the original primordial atmospheric reservoir. Furthermore, by considering the systematic errors in the stellar parameters, we find a range of possible planetary masses, radii, and densities. From the backwards reconstruction of GJ 357 b's irradiation history we find that the upper limit of its initial primordial atmospheric mass is $\sim 38\,\rm M_{\oplus}$. An initial atmospheric reservoir significantly larger than this may have survived through the X-ray and ultraviolet irradiation history, hence being inconsistent with current observations that suggest a telluric composition. In spite of the unlikelihood of a currently existing primordial envelope, volcanism and outgassing may have contributed to a secondary atmosphere. Under this assumption, we present three different synthetic infrared spectra for GJ 357 b that one might expect, consisting of $100\%~\rm CO_2$, $100\%~\rm SO_2$, and $75\%~\rm N_2$, $24\%~\rm CO_2$ and $1\%~\rm H_2O$. | arxiv:2007.10262 |
After two decades of repository development, some conclusions may be drawn as to which type of repository and what kind of service best supports digital scholarly communication, and thus the production of new knowledge. Four types of publication repository may be distinguished, namely the subject-based repository, research repository, national repository system and institutional repository. Two important shifts in the role of repositories may be noted. With regard to content, a well-defined and high-quality corpus is essential. This implies that repository services are likely to be most successful when constructed with the user and reader uppermost in mind. With regard to service, high value to specific scholarly communities is essential. This implies that repositories are likely to be most useful to scholars when they offer dedicated services supporting the production of new knowledge. Along these lines, challenges and barriers to repository development may be identified in three key dimensions: a) identification and deposit of content; b) access and use of services; and c) preservation of content and sustainability of service. An indicative comparison of challenges and barriers in some major world regions such as Europe, North America and East Asia plus Australia is offered in conclusion. | arxiv:1005.0839 |
We report the discovery of blueshifted ($\Delta v > 200$ km/s) mid-infrared [NeIII] and/or [NeV] emission in 25 out of 82 ULIRGs (30% of our sample). The incidence of blueshifted [NeV] emission is even higher (59%) among the sources with a [NeV] detection -- the tell-tale signature of an active galactic nucleus (AGN). Sixteen ULIRGs in our sample, eleven of which are optically classified as AGN, have [NeIII] blueshifts above 200 km/s. A comparison of the line profiles of their 12.81 um [NeII], 15.56 um [NeIII] and 14.32 um [NeV] lines reveals that the ionization of the blueshifted gas increases with blueshift, implying decelerating outflows in a stratified medium, photo-ionized by the AGN. The strong correlation of the line width of the [NeIII] line with the radio luminosity indicates that interaction of expanding radio jets with the dense ISM surrounding the AGN may explain the observed neon line kinematics for the strongest radio sources in this sample. | arxiv:0907.4370 |
The Discrete Source Classifier (DSC) provides probabilistic classification of sources in Gaia Data Release 3 using a Bayesian framework and a global prior. The DSC Combmod classifier in GDR3 achieved, for the extragalactic classes (quasars and galaxies), a high completeness of 92%, but a low purity of 22% due to contamination from the far larger star class. However, these single metrics mask significant variation in performance with magnitude and sky position. Furthermore, a better combination of the individual classifiers is possible. Here we compute two-dimensional representations of the completeness and the purity as functions of Galactic latitude and source brightness, and also exclude the Magellanic Clouds, where stellar contamination significantly reduces the purity. Re-evaluated on a cleaner validation set and without introducing changes to the published GDR3 DSC probabilities themselves, we achieve for Combmod average 2D completenesses of 92% and 95% and average 2D purities of 55% and 89% for the quasar and galaxy classes, respectively. Since the relative proportions of extragalactic objects to stars in Gaia are expected to vary significantly with brightness and latitude, we introduce a new prior as a continuous function of brightness and latitude, and compute new class probabilities. This variable prior only improves the performance by a few percentage points, mostly at the faint end. Significant improvement, however, is obtained by a new additive combination of Specmod and Allosmod. This classifier, Combmod-$\alpha$, achieves average 2D completenesses of 82% and 93% and average 2D purities of 79% and 93% for the quasar and galaxy classes, respectively, when using the global prior. Thus, we achieve a significant improvement in purity for a small loss of completeness. The improvement is most significant for faint quasars, where the purity rises from 20% to 62%. | arxiv:2405.01340 |
Electrical distribution systems are extensively penetrated with distributed energy resources (DERs) to cater to energy demands, with the general perception that this enhances the system's resilience. However, integration of DERs may adversely affect grid operation and system resilience due to various factors, such as their intermittent availability, the dynamics of weather conditions, non-linearity, complexity, the number of malicious threats, and the improved reliability requirements of consumers. This paper proposes a methodology to evaluate the planning and operational resilience of power distribution systems under extreme events and determines the withstand capability of the electrical network. The proposed framework is developed by effectively employing complex network theory. Correlated networks for undesirable configurations are developed from the time series data of active power monitored at nodes of the electrical network. For these correlated networks, we compute network parameters such as the clustering coefficient, assortativity coefficient, average degree and power law exponent for anticipation, and the percolation threshold for determining the network's withstand capability under extreme conditions. The proposed methodology is also suitable for identifying the hosting capacity of solar panels in the system while maintaining resilience under different unfavourable conditions, and for identifying the most critical nodes of the system that could drive the system into non-resilience. This framework is demonstrated on the IEEE 123-node test feeder by generating active power time-series data for a variety of electrical conditions using the simulation software GridLAB-D. The percolation threshold proved to be an effective metric for determining the planning and operational resilience of the power distribution system. | arxiv:2208.11543 |
The renormalization group equations (RGEs) of non-universal soft supersymmetry breaking terms with CP violating phases are analyzed in this paper. We obtain the analytic solutions of the RGEs by directly solving the RGEs themselves. Compared with the method of spurion expansion, our approach proves to be simple and succinct, and easy to extend to the case of complex parameters. With the analytical forms of the solutions we obtained, the infrared quasi-fixed-point behavior of the soft terms is analyzed, and it turns out to support this notion in scenarios with CP violating phases. | arxiv:hep-ph/0008166 |
We report a detection of differences in ion and neutral velocities in prominences using high resolution spectral data obtained in September 2012 at the German Vacuum Tower Telescope (Observatorio del Teide, Tenerife). A time series of scans of a small portion of a solar prominence was obtained simultaneously with a high cadence using the lines of two elements with different ionization states, namely the Ca II 8542 A and the He I 10830 A lines. Displacements, widths and amplitudes of both lines were carefully compared to extract dynamical information about the plasma. Many dynamical features are detected, such as counter-streaming flows, jets and propagating waves. In all cases we find very strong correlation between the parameters extracted from the lines of both elements, confirming that both trace the same plasma. Nevertheless, we also find short-lived transients where this correlation is lost. These transients are associated with ion-neutral drift velocities of the order of several hundred m/s. The patches of non-zero drift velocity show coherence on time-distance diagrams. | arxiv:1604.01177 |
AnyMOD.jl is a Julia framework for creating large-scale energy system models with multiple periods of capacity expansion. It applies a novel graph-based approach that was developed to address the challenges of modeling high levels of intermittent generation and sectoral integration. Created models are formulated as linear optimization problems using JuMP.jl as a backend. To enable modelers to work more efficiently, the framework provides additional features that help to visualize results, streamline the read-in of input data, and rescale optimization problems to increase solver performance. | arxiv:2011.00895 |
Humidity and $C_n^2$ data collected from the Chesapeake Bay area during the 2003/2004 period have been analyzed. We demonstrate that there is an unequivocal correlation between the data during the same time periods, in the absence of solar insolation. This correlation manifests itself as an inverse relationship. We suggest that $C_n^2$ in the infrared region is also a function of humidity, in addition to temperature and pressure. | arxiv:physics/0605050 |
An effective low-energy model describing the magnetic properties of alkali-cluster-loaded sodalites is derived by {\em ab initio} downfolding. We start by constructing an extended Hubbard model for maximally localized Wannier functions. {\em Ab initio} screened Coulomb and exchange interactions are calculated by the constrained random phase approximation. We find that the system resides in the strong coupling regime, and thus the Heisenberg model is derived as a low-energy model of the extended Hubbard model. We obtain antiferromagnetic couplings $\sim O(10)$ K, consistent with the experimental temperature dependence of the spin susceptibility. The importance of considering the screening effect in the derivation of the extended Hubbard model is discussed. | arxiv:0907.4593 |
We use the Sáez-Ballester (SB) theory on an anisotropic Bianchi class A cosmological model, with barotropic fluid and cosmological constant, using the Hamiltonian or Hamilton-Jacobi approach. Contrary to claims in the specialized literature, it is shown that the Sáez-Ballester theory cannot provide a realistic solution to the dark matter problem of cosmology for the dust epoch without fine tuning, because the contribution of the scalar field in this theory is equivalent to a stiff fluid (as can be seen from the energy-momentum tensor for the scalar field), which evolves differently from the dust component. Having similar contributions from the scalar component and the dust component implies that their past values were fine tuned. So, we reinterpret this null result as an indication that dark matter plays a central role in the formation of structures and galaxy evolution, having measurable effects in the cosmic microwave background radiation, and that this formalism yields, for this epoch, only primordial results. We mention that this formalism was recently used in the so-called k-essence theory applied to the dark energy problem, rather than to the dark matter problem. Also, we include a quantization procedure of the theory, which can be simplified by reinterpreting the theory in the Einstein frame, where the scalar field can be interpreted as part of the matter content of the theory, and exact solutions to the Wheeler-DeWitt equation are found, employing the Bianchi class A cosmological models. | arxiv:1111.2318 |
In this paper we study detection and reconstruction of planted structures in Erdős-Rényi random graphs. Motivated by a problem of communication security, we focus on planted structures that consist of a tree graph. For planted line graphs, we establish the following phase diagram. In a low density region where the average degree $\lambda$ of the initial graph is below some critical value $\lambda_c = 1$, detection and reconstruction go from impossible to easy as the line length $k$ crosses some critical value $f(\lambda) \ln(n)$, where $n$ is the number of nodes in the graph. In the high density region $\lambda > \lambda_c$, detection goes from impossible to easy as $k$ goes from $o(\sqrt{n})$ to $\omega(\sqrt{n})$, and reconstruction remains impossible so long as $k = o(n)$. For $d$-ary trees of varying depth $h$ and $2 \le d \le O(1)$, we identify a low-density region $\lambda < \lambda_d$, such that the following holds. There is a threshold $h^* = g(d) \ln(\ln(n))$ with the following properties. Detection goes from feasible to impossible as $h$ crosses $h^*$. We also show that only partial reconstruction is feasible at best for $h \ge h^*$. We conjecture a similar picture to hold for $d$-ary trees as for lines in the high-density region $\lambda > \lambda_d$, but confirm only the following part of this picture: detection is easy for $d$-ary trees of size $\omega(\sqrt{n})$, while at best only partial reconstruction is feasible for $d$-ary trees of any size $o(n)$. These results are in contrast with the corresponding picture for detection and reconstruction of {\em low rank} planted structures, such as dense subgraphs and block communities: we observe a discrepancy between detection and reconstruction, the latter being impossible for a wide range of parameters where detection is easy. This property does not hold for previously studied low rank planted structures. | arxiv:1811.01800 |
Distributed evacuation of mobile robots is a recent development. We consider the evacuation problem for two robots which are initially located at the center of a unit disk. Both robots have to evacuate the disk through exits situated on the perimeter of the disk at unknown locations. The distance $d$ between the two exits along the perimeter is given. We consider two different communication models. First, in the wireless model, the robots can send messages to each other over a long distance. Second, in the face-to-face communication model, the robots can exchange information only when they touch each other. The objective of the evacuation problem is to design an algorithm which minimizes the evacuation time of both robots. For the wireless communication model, we propose a generic algorithm for two robots moving to two points on the perimeter with an initial separation of $\zeta \leq d$. We also investigate the evacuation problem for both unlabeled and labeled exits in the wireless communication model. For the face-to-face communication model, we propose two different algorithms, for $\zeta = 0$ and $\zeta = d$, for unlabeled exits. We also propose a generic algorithm for $\zeta \leq d$ for labeled exits. We provide lower bounds corresponding to different $d$ values in the face-to-face communication model. We evaluate the performance of our algorithms through simulation for both communication models. | arxiv:1708.03792 |
We calculate the effect of a heat current on transporting $^3$He dissolved in superfluid $^4$He at ultralow concentration, as will be utilized in a proposed experimental search for the electric dipole moment of the neutron (nEDM). In this experiment, a phonon wind will be generated to drive (partly depolarized) $^3$He down a long pipe. In the regime of $^3$He concentrations $\lesssim 10^{-9}$ and temperatures $\sim 0.5$ K, the phonons comprising the heat current are kept in a flowing local equilibrium by small angle phonon-phonon scattering, while they transfer momentum to the walls via the $^4$He first viscosity. On the other hand, the phonon wind drives the $^3$He out of local equilibrium via phonon-$^3$He scattering. For temperatures below $0.5$ K, both the phonon and $^3$He mean free paths can reach the centimeter scale, and we calculate the effects on the transport coefficients. We derive the relevant transport coefficients, the phonon thermal conductivity and the $^3$He diffusion constants, from the Boltzmann equation. We calculate the effect of scattering from the walls of the pipe and show that it may be characterized by the average distance from points inside the pipe to the walls. The temporal evolution of the spatial distribution of the $^3$He atoms is determined by the time dependent $^3$He diffusion equation, which describes the competition between advection by the phonon wind and $^3$He diffusion. As a consequence of the thermal diffusivity being small compared with the $^3$He diffusivity, the scale height of the final $^3$He distribution is much smaller than that of the temperature gradient. We present exact solutions of the time dependent temperature and $^3$He distributions in terms of a complete set of normal modes. | arxiv:1505.01468 |
member states, Canada, and Japan. Despite its status as the first international space program, the Space Station Freedom was controversial, with much of the debate centering on cost. Several redesigns to reduce cost were conducted in the early 1990s, stripping away much of its functions. Despite calls for Congress to terminate the program, it continued, in large part because by 1992 it had created 75,000 jobs across 39 states. By 1993, President Bill Clinton attempted to significantly reduce NASA's budget and directed that costs be significantly reduced, aerospace industry jobs not be lost, and the Russians be included. In 1993, the Clinton administration announced that the Space Station Freedom would become the International Space Station in an agreement with the Russian Federation. This allowed the Russians to maintain their space program through an infusion of American currency to maintain their status as one of the two premier space programs. While the United States built and launched the majority of the International Space Station, Russia, Canada, Japan, and the European Space Agency all contributed components. Despite NASA's insistence that costs would be kept at a budget of $17.4 billion, they kept rising, and NASA had to transfer funds from other programs to keep the International Space Station solvent. Ultimately, the total cost of the station was $150 billion, with the United States paying for two-thirds. Following the Space Shuttle Columbia disaster in 2003, NASA was forced to rely on Russian Soyuz launches for its astronauts, and the 2011 retirement of the Space Shuttle accelerated the station's completion. In the 1980s, right after the first flight of the Space Shuttle, NASA started a joint program with the Department of Defense to develop the Rockwell X-30 National Aerospace Plane. NASA realized that the Space Shuttle, while a massive technological accomplishment, would not be able to live up to all its promises. Designed to be a single-stage-to-orbit spaceplane, the X-30 had both civil and military applications. With the end of the Cold War, the X-30 was canceled in 1992 before reaching flight status. === Unleashing commercial space and return to the Moon === Following the Space Shuttle Columbia disaster in 2003, President Bush started the Constellation program to smoothly replace the Space Shuttle and expand space exploration beyond low Earth orbit. Constellation was intended to use a significant amount of former Space Shuttle equipment and return astronauts to the Moon. This program was canceled by the Obama administration. Former astronauts Neil Armstrong, Gene Cernan, and Jim Lovell sent a letter to President Barack Obama to warn him that if the United States did | https://en.wikipedia.org/wiki/NASA |
Achieving compact on-chip pulsed lasers with attractive performance metrics and compatibility with the silicon photonics platform is an important, yet elusive, goal in contemporary nanophotonics. Here, the fundamental question of whether 2D materials can be utilized as both gain and saturable absorption media to enable compact integrated passively Q-switched nanophotonic lasers is posed and addressed by examining a broad range of 2D material families. The study is conducted by developing a temporal coupled-mode theory framework involving semi-classical rate equations that is capable of rigorously handling gain and saturable absorption by 2D materials, allowing stability and bifurcation analysis covering broad parameter spaces. The range of pulse-train metrics (repetition rate, pulse width, peak power) that can be obtained via different 2D materials is thoroughly assessed. Our work illustrates that nanophotonic cavities enhanced with 2D materials can enable passive Q-switching with repetition rates ranging up to 50 GHz, short pulse durations down to a few picoseconds, and peak power exceeding several milliwatts. Such attractive metrics, along with the ultrathin nature of 2D materials and the ability to electrically tune their properties, demonstrate the potential of the proposed platform for compact and flexible integrated laser sources. | arxiv:2502.00431 |
Day-and-night radiative sky cooling has emerged as a potential alternative to conventional cooling technologies such as refrigeration-based air conditioning and evaporative wet cooling. Both radiative cooling and evaporative cooling can passively achieve sub-ambient cooling without consuming electricity. Although both cooling techniques are subject to impacts from various weather conditions, the extents of the impacts under the same conditions are not well understood. In this work, we experimentally and theoretically study the thermal performances of a passive radiative cooler and a passive evaporative cooler when exposed to a clear night sky. We show that evaporative cooling is better suited for high-temperature and low-humidity weather conditions, with the measured sub-ambient temperatures of the radiative and evaporative coolers being -13.5 °C and -15.0 °C, respectively, at a low relative humidity of 13% and a high ambient temperature of 26.0 °C. On the other hand, radiative cooling is relatively more resilient than evaporative cooling under high-humidity and/or low-temperature weather conditions, with the measured sub-ambient temperatures of the coolers being -11.5 °C and -10.5 °C, respectively, at a slightly higher relative humidity of 32.0% and a slightly lower ambient temperature of 17.0 °C. Depending on water availability and weather conditions, both evaporative cooling and radiative cooling can be adopted as mutually supplemental cooling technologies. | arxiv:2107.04151 |
many real - world user queries ( e. g. " how to make egg fried rice? " ) could benefit from systems capable of generating responses with both textual steps and accompanying images, similar to a cookbook. models designed to generate interleaved text and images face challenges in ensuring consistency within and across these modalities. to address these challenges, we present isg, a comprehensive evaluation framework for interleaved text - and - image generation. isg leverages a scene graph structure to capture relationships between text and image blocks, evaluating responses on four levels of granularity : holistic, structural, block - level, and image - specific. this multi - tiered evaluation allows for a nuanced assessment of consistency, coherence, and accuracy, and provides interpretable question - answer feedback. in conjunction with isg, we introduce a benchmark, isg - bench, encompassing 1, 150 samples across 8 categories and 21 subcategories. this benchmark dataset includes complex language - vision dependencies and golden answers to evaluate models effectively on vision - centric tasks such as style transfer, a challenging area for current models. using isg - bench, we demonstrate that recent unified vision - language models perform poorly on generating interleaved content. while compositional approaches that combine separate language and image models show a 111 % improvement over unified models at the holistic level, their performance remains suboptimal at both block and image levels. to facilitate future work, we develop isg - agent, a baseline agent employing a " plan - execute - refine " pipeline to invoke tools, achieving a 122 % performance improvement. | arxiv:2411.17188 |
a novel sparsity - based algorithm for audio inpainting is proposed. it is an adaptation of the spade algorithm by kitić et al., originally developed for audio declipping, to the task of audio inpainting. the new spain ( sparse audio inpainter ) comes in synthesis and analysis variants. experiments show that both a - spain and s - spain outperform other sparsity - based inpainting algorithms. moreover, a - spain performs on a par with the state - of - the - art method based on linear prediction in terms of the snr, and, for larger gaps, spain is even slightly better in terms of the pemo - q psychoacoustic criterion. | arxiv:1810.13137 |
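The abstract above describes a sparsity-based inpainting approach. As a rough illustration of the general idea only (this is not the SPAIN/SPADE algorithm), the toy sketch below restores a gap in a signal by alternating hard thresholding of DFT coefficients with re-imposing the reliable samples; the signal, the gap location, and the sparsity level `k` are made-up choices for the example.

```python
import numpy as np

def inpaint_sparse(signal, reliable_mask, k=8, n_iter=200):
    """Fill the unreliable samples by enforcing k-sparsity in the DFT domain."""
    x = signal.copy()
    x[~reliable_mask] = 0.0                       # initialise the gap with zeros
    for _ in range(n_iter):
        coeffs = np.fft.fft(x)
        small = np.argsort(np.abs(coeffs))[:-k]   # all but the k largest coefficients
        coeffs[small] = 0.0                       # hard thresholding -> sparse estimate
        x = np.real(np.fft.ifft(coeffs))
        x[reliable_mask] = signal[reliable_mask]  # keep reliable samples untouched
    return x

# Example: two sinusoids with a 100-sample gap.
n = 1024
t = np.arange(n)
clean = np.sin(2 * np.pi * 16 * t / n) + 0.5 * np.sin(2 * np.pi * 48 * t / n)
mask = np.ones(n, dtype=bool)
mask[400:500] = False                             # samples lost inside the gap
restored = inpaint_sparse(clean, mask)
err = clean[~mask] - restored[~mask]
snr = 10 * np.log10(np.sum(clean[~mask] ** 2) / np.sum(err ** 2))
print(f"SNR inside the gap: {snr:.1f} dB")
```

Real audio is only approximately sparse in short-time frequency transforms, so practical methods work frame by frame with overlapping windows; the sketch keeps a single global DFT purely for brevity.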
production of the higgs boson, $ h $ in association with a massive vector boson, $ v $, i. e., the $ vh $ process, plays an important role in the explorations of higgs physics at the large hadron collider, both for a precise study of higgs ' standard model couplings and for probing new physics. in this publication we present the two - loop corrections in massless quantum chromodynamics ( qcd ) to the amplitude of the higgs production associated with a $ z $ boson via the bottom quark - antiquark annihilation channel with a non - vanishing bottom - quark yukawa coupling, which is a necessary ingredient of the full next - to - next - to - leading - order qcd corrections to the $ vh $ process in the five - flavour scheme. the computation is performed by projecting the d - dimensional scattering amplitude directly onto an appropriate set of lorentz structures related to the linear polarisation states of the $ z $ boson. we provide analytic expressions of the complete set of renormalised polarised amplitudes in terms of polylogarithms of maximum weight four. to give an estimation of the size of contributions from amplitudes considered in this work, we compute numerically the resulting cross sections under the soft - virtual approximation. we also take the opportunity to make a dedicated discussion regarding an interesting subtlety appearing in the conventional form factor decomposition of amplitudes involving axial currents regularised in d dimensions. | arxiv:1910.06347 |
nowadays iot applications consist of a collection of loosely coupled modules, namely microservices, that can be managed and placed in a heterogeneous environment consisting of private and public resources. it follows that distributing the application logic introduces new challenges in guaranteeing performance and reducing costs. however, most existing solutions are focused on reducing pay - per - use costs without considering a microservice - based architecture. we propose a cost - effective workload allocation for microservice - based applications. we model the problem as an integer programming problem and we formulate an efficient and near - optimal heuristic solution given the np - hardness of the original problem. numerical results demonstrate the good performance of the proposed heuristic in terms of cost reduction and performance with respect to optimal and state - of - the - art solutions. moreover, an evaluation conducted in a kubernetes cluster running in an openstack ecosystem confirms the feasibility and the validity of the proposed solution. | arxiv:2110.12788 |
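To make the flavor of such cost-aware workload allocation concrete, here is a minimal greedy sketch, illustrative only and not the heuristic proposed in the paper above: each microservice is assigned to the cheapest node that still has enough free capacity, so already-paid private resources are preferred over pay-per-use public ones. Node names, demands, and prices are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    capacity: float       # available CPU units
    cost_per_unit: float  # 0 for already-paid private nodes, >0 for public cloud

def place(services, nodes):
    """Greedy placement: largest demand first, cheapest feasible node wins."""
    placement, total_cost = {}, 0.0
    for svc, demand in sorted(services.items(), key=lambda kv: -kv[1]):
        candidates = [n for n in nodes if n.capacity >= demand]
        if not candidates:
            raise RuntimeError(f"no node can host {svc}")
        best = min(candidates, key=lambda n: n.cost_per_unit)
        best.capacity -= demand
        placement[svc] = best.name
        total_cost += demand * best.cost_per_unit
    return placement, total_cost

# Hypothetical example: two private nodes plus one pay-per-use cloud node.
services = {"gateway": 2.0, "auth": 1.0, "analytics": 4.0, "db-proxy": 1.5}
nodes = [Node("private-1", 4.0, 0.0), Node("private-2", 2.0, 0.0), Node("cloud", 100.0, 0.05)]
print(place(services, nodes))
```

An exact integer-programming formulation would additionally encode latency and affinity constraints between microservices; the greedy pass above only conveys why a heuristic can stay near-optimal when private capacity dominates.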
in this paper, we propose two variants of the positivity - preserving schemes, namely the truncated euler - maruyama ( em ) method and the truncated milstein scheme, applied to stochastic differential equations ( sdes ) with positive solutions and super - linear coefficients. under some regularity and integrability assumptions we derive the optimal strong convergence rates of the two schemes. moreover, we demonstrate the flexibility of our approaches by applying the truncated methods to approximate sdes with super - linear coefficients ( 3 / 2 and aït - sahalia models ) directly and also with sub - linear coefficients ( cir model ) indirectly. numerical experiments are provided to verify the effectiveness of the theoretical results. | arxiv:2410.05614 |
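As a small numerical illustration of the kind of positivity-preserving truncation discussed above (a sketch, not necessarily the exact scheme analysed in the paper), the following code applies a truncated Euler-Maruyama step to the CIR model dX = kappa*(theta - X) dt + sigma*sqrt(X) dW and compares the sample mean at time T with the known exact mean; all model parameters are made up.

```python
import numpy as np

def cir_truncated_em(x0, kappa, theta, sigma, T=1.0, n_steps=1000, n_paths=10000, seed=0):
    """Euler-Maruyama for CIR with truncation so that sqrt() stays well defined."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        drift = kappa * (theta - x) * dt
        diffusion = sigma * np.sqrt(np.maximum(x, 0.0)) * dw   # truncate the sqrt argument at 0
        x = np.maximum(x + drift + diffusion, 0.0)             # keep every path nonnegative
    return x

# The exact mean of CIR at time T is theta + (x0 - theta) * exp(-kappa * T); compare.
paths = cir_truncated_em(x0=0.04, kappa=1.5, theta=0.06, sigma=0.3)
print(paths.mean(), 0.06 + (0.04 - 0.06) * np.exp(-1.5 * 1.0))
```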
a stellar population synthesis model, suitable for comparison with giant extragalactic hii regions ( gehrs ), is constructed incorporating the recent developments in modelling stellar evolution by maeder and co - workers and stellar atmospheres by kurucz. a number of quantities suitable for comparison with broad band data of gehrs in visible and near infrared parts of the spectrum are synthesized in addition to the hydrogen and helium ionizing photon production rates at solar metallicities, for three scenarios of star formation - - - ( i ) instantaneous burst ( ib ), ( ii ) continuous star formation ( csf ) and ( iii ) two bursts of star formation, with the older burst rich in red supergiants. for the ib case, the evolution of colors shows three distinct phases - - - an initial steady blue phase, followed by a red bump ( 5 - - 15 myr ) and another steady phase with colors intermediate to the earlier two phases. csf colors asymptotically reach peak values at $ \ sim 10 $ myr, never reaching the reddest ib colors. the ionizing photon production rate falls off by an order of magnitude in 6 myr for ib, whereas it remains almost constant for the csf model. two - burst models with burst separations $ \ sim 10 $ myr have properties of both ib and csf, simultaneously producing the red ib colors and a high ionizing photon rate, making such regions easily distinguishable using optical observations. flat imfs result in the bluest colors when the massive stars are on the main sequence and the reddest colors during the red supergiant phase of the evolving massive stars. errors on the computed quantities due to the statistical uncertainties inherent in the process of star formation become negligible for cluster masses in excess of $ 10 ^ 5 \, m _ { \ odot } $. | arxiv:astro-ph/9503061 |
we present a fully autonomous real - world rl framework for mobile manipulation that can learn policies without extensive instrumentation or human supervision. this is enabled by 1 ) task - relevant autonomy, which guides exploration towards object interactions and prevents stagnation near goal states, 2 ) efficient policy learning by leveraging basic task knowledge in behavior priors, and 3 ) formulating generic rewards that combine human - interpretable semantic information with low - level, fine - grained observations. we demonstrate that our approach allows spot robots to continually improve their performance on a set of four challenging mobile manipulation tasks, obtaining an average success rate of 80 % across tasks, a 3 - 4 times improvement over existing approaches. videos can be found at https : / / continual - mobile - manip. github. io / | arxiv:2409.20568 |
we solve numerically the ideal mhd equations with an external gravitational field in 2d in order to study the effects of impulsively generated linear and non - linear alfvén waves on isolated solar arcades and coronal funnels. we analyze the region containing the interface between the photosphere and the corona. the main interest is to study the possibility that alfvén waves trigger the energy flux transfer toward the quiet solar corona and heat it, including the case that two consecutive waves can occur. we find that in the case of arcades, short or large, the fluxes transferred by alfvén waves are sufficient to heat the quiet corona only during a small lapse of time and in a certain region. in the case of funnels the threshold is achieved only when the wave is faster than 10 km / s, which is extremely high. we conclude from our analysis that alfvén waves, even in the optimistic scenario of having two consecutive alfvén wave pulses, cannot transport enough energy to heat the quiet corona. | arxiv:1505.01401 |
a graphical design is a proper subset of vertices of a graph on which many eigenfunctions of the laplacian operator have mean value zero. in this paper, we show that extremal independent sets make extremal graphical designs, that is, a design on which the maximum possible number of eigenfunctions have mean value zero. we then provide examples of such graphs and sets, which arise naturally in extremal combinatorics. we also show that sets which realize the isoperimetric constant of a graph make extremal graphical designs, and provide examples for them as well. we investigate the behavior of graphical designs under the operation of weak graph product. in addition, we present a family of extremal graphical designs for the hypercube graph. | arxiv:1910.05966 |
we consider the bennett - brassard cryptographic scheme, which uses two conjugate quantum bases. an eavesdropper who attempts to obtain information on qubits sent in one of the bases causes a disturbance to qubits sent in the other basis. we derive an upper bound to the accessible information in one basis, for a given error rate in the conjugate basis. independently fixing the error rate in the conjugate bases, we show that both bounds can be attained simultaneously by an optimal eavesdropping probe, consisting of two qubits. the qubits ' interaction and their subsequent measurement are described explicitly. these results are combined to give an expression for the optimal information an eavesdropper can obtain for a given average disturbance when her interaction and measurements are performed signal by signal. finally, the relation between quantum cryptography and violations of bell ' s inequalities is discussed. | arxiv:quant-ph/9701039 |
it is almost universally believed that in quantum theory the two following statements hold : 1 ) all transformations are achieved by a unitary interaction followed by a von neumann measurement ; 2 ) all mixed states are marginals of pure entangled states. i name this doctrine the dogma of purification ontology. the source of the dogma is the original von neumann axiomatisation of the theory, which largely relies on the schroedinger equation as a postulate, which holds in a nonrelativistic context, and whose operator version holds only in free quantum field theory, but no longer in the interacting theory. in the present paper i prove that both ontologies of unitarity and state purity are unfalsifiable, even in principle, and therefore axiomatically spurious. i propose instead a minimal four - postulate axiomatisation : 1 ) associate a hilbert space ha to each system a ; 2 ) compose two systems by the tensor product rule hab = haxhb ; 3 ) associate a transformation from system a to b to a quantum operation, i. e. to a completely positive trace - non - increasing map between the trace - class operators of a and b ; 4 ) ( born rule ) evaluate all joint probabilities through that of a special type of quantum operation : the state preparation. i then conclude that quantum paradoxes - such as the schroedinger - cat ' s, and, most relevantly, the information paradox - are originated only by the dogma of purification ontology, and they are no longer paradoxes of the theory in the minimal formulation. for the same reason, most interpretations of the theory ( e. g. many - world, relational, darwinism, transactional, von neumann - wigner, time - symmetric,... ) interpret the same dogma, not the strict theory stripped of the spurious postulates. | arxiv:2011.04011 |
the arrival of the vera c. rubin observatory ' s legacy survey of space and time ( lsst ), euclid - wide and roman wide area sensitive surveys will herald a new era in strong lens science in which the number of strong lenses known is expected to rise from $ \ mathcal { o } ( 10 ^ 3 ) $ to $ \ mathcal { o } ( 10 ^ 5 ) $. however, current lens - finding methods still require time - consuming follow - up visual inspection by strong - lens experts to remove false positives which is only set to increase with these surveys. in this work we demonstrate a range of methods to produce calibrated probabilities to help determine the veracity of any given lens candidate. to do this we use the classifications from citizen science and multiple neural networks for galaxies selected from the hyper suprime - cam ( hsc ) survey. our methodology is not restricted to particular classifier types and could be applied to any strong lens classifier which produces quantitative scores. using these calibrated probabilities, we generate an ensemble classifier, combining citizen science and neural network lens finders. we find such an ensemble can provide improved classification over the individual classifiers. we find a false positive rate of $ 10 ^ { - 3 } $ can be achieved with a completeness of $ 46 \ % $, compared to $ 34 \ % $ for the best individual classifier. given the large number of galaxy - galaxy strong lenses anticipated in lsst, such improvement would still produce significant numbers of false positives, in which case using calibrated probabilities will be essential for population analysis of large populations of lenses. | arxiv:2311.07455 |
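As a sketch of how calibrated probabilities from multiple lens finders might be produced and combined (illustrative only; the abstract above does not spell out this exact recipe), the snippet below fits Platt scaling to raw classifier scores on a labelled validation set and then combines two calibrated probabilities in a naive-Bayes fashion. The scores, labels, and base rate are synthetic stand-ins.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_calibrator(scores, labels):
    """Platt scaling: a logistic regression mapping raw scores to probabilities."""
    return LogisticRegression().fit(np.asarray(scores).reshape(-1, 1), labels)

def calibrated_prob(calibrator, score):
    return calibrator.predict_proba([[score]])[0, 1]

def combine(p_a, p_b, prior):
    """Naive-Bayes combination of two calibrated posteriors sharing the same prior."""
    odds = (p_a / (1 - p_a)) * (p_b / (1 - p_b)) / (prior / (1 - prior))
    return odds / (1 + odds)

# Hypothetical validation scores for a CNN lens finder and a citizen-science classifier.
rng = np.random.default_rng(1)
labels = rng.integers(0, 2, 500)                       # 1 = lens, 0 = not a lens
cnn_scores = 0.6 * labels + rng.normal(0.3, 0.20, 500)
citizen_scores = 0.5 * labels + rng.normal(0.3, 0.25, 500)

cal_cnn = fit_calibrator(cnn_scores, labels)
cal_cit = fit_calibrator(citizen_scores, labels)
prior = labels.mean()   # must match the base rate embedded in the calibrated scores
p = combine(calibrated_prob(cal_cnn, 0.9), calibrated_prob(cal_cit, 0.8), prior)
print(f"combined lens probability: {p:.3f}")
```

Dividing out one copy of the prior odds in `combine` avoids double-counting the prior when the two posteriors are treated as conditionally independent; in a real survey the prior would be the expected lens fraction rather than a validation-set base rate.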
heavy - flavour production measurements in pp collisions are important tools to test theoretical models based on perturbative quantum chromodynamics ( pqcd ) and to investigate the heavy - quark hadronization mechanisms. in alice, heavy quarks are measured via the hadronic and electronic decay channels at central rapidity ( - 0. 9 $ < y < $ 0. 9 ) and via the muon decay channels at forward rapidity ( - 4 $ < y < $ - 2. 5 ). in this contribution, the production cross - section measurements via the leptonic decay of heavy - flavour hadrons are presented and compared to pqcd theoretical calculations. the latest measurements of $ \ rm d ^ { 0 } $, $ \ rm d ^ { + } $, $ \ rm d ^ { * + } $, $ d ^ { + } _ { s } $ mesons, whose hadronic decays into charged hadrons are fully reconstructed, together with the measurements of $ \ lambda ^ { + } _ { c } $, $ \ xi ^ { 0, + } _ { c } $, $ \ sigma ^ { 0, + + } _ { c } $ and $ \ omega ^ { 0 } _ { c } $ baryons, performed with the alice detector at midrapidity in pp collisions at $ \ sqrt { s } $ = 13 tev, are also presented. measurements of charm - baryon production are crucial to study the charm - quark hadronization mechanisms in a parton - rich environment like the one produced in pp collisions at lhc energies. | arxiv:2301.11141 |
collective guidance of out - of - equilibrium systems without using external fields is a challenge of paramount importance in active matter, ranging from bacterial colonies to swarms of self - propelled particles. designing strategies to guide active matter and exploiting enhanced diffusion associated to its motion will provide insights for application from sensing, drug delivery to water remediation. however, achieving directed motion without breaking detailed balance, for example by asymmetric topographical patterning, is challenging. here we engineer a two - dimensional periodic topographical design with detailed balance in its unit cell where we observe spontaneous particle edge guidance and corner accumulation of self - propelled particles. this emergent behaviour is guaranteed by a second - order non - hermitian skin effect, a topologically robust non - equilibrium phenomenon, that we use to dynamically break detailed balance. our stochastic circuit model predicts, without fitting parameters, how guidance and accumulation can be controlled and enhanced by design : a device guides particles more efficiently if the topological invariant characterizing it is non - zero. our work establishes a fruitful bridge between active and topological matter, and our design principles offer a blueprint to design devices that display spontaneous, robust and predictable guided motion and accumulation, guaranteed by out - of - equilibrium topology. | arxiv:2012.14496 |
surveys searching for transiting exoplanets have found many more candidates than they have been able to confirm as true planets. this situation is especially acute with the kepler survey, which has found over 2300 candidates but has confirmed only 77 planets to date. i present here a general procedure that can quickly be applied to any planet candidate to calculate its false positive probability. this procedure takes into account the period, depth, duration, and shape of the signal ; the colors of the target star ; arbitrary spectroscopic or imaging follow - up observations ; and informed assumptions about the populations and distributions of field stars and multiple - star properties. i also introduce the concept of the " specific occurrence rate, " which allows for the calculation of the fpp without relying on an assumed planet radius function. applying these methods to a sample of known kepler planets, i demonstrate that many signals can be validated with very limited follow - up observations : in most cases with only a spectrum and an ao image. additionally, i demonstrate that this procedure can reliably identify false positive signals. because of the computational efficiency of this analysis, it is feasible to apply it to all kepler planet candidates in the near future, and it will streamline the follow - up efforts for kepler and other current and future transit surveys. | arxiv:1206.1568 |
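A minimal worked example of the false positive probability bookkeeping described above: each astrophysical scenario contributes the product of its likelihood (how well it reproduces the observed signal) and its prior, and the FPP is the share contributed by the non-planet scenarios. All numbers below are invented for illustration.

```python
# Minimal FPP calculation: weight = likelihood * prior per scenario,
# FPP = non-planet weight / total weight.
likelihoods = {                 # how well each scenario reproduces the signal shape
    "planet": 0.8,
    "eclipsing_binary": 0.05,
    "background_eb": 0.10,
    "hierarchical_eb": 0.02,
}
priors = {                      # how common each scenario is a priori for this target
    "planet": 1e-3,             # the "specific occurrence rate" plays this role
    "eclipsing_binary": 1e-4,
    "background_eb": 5e-5,
    "hierarchical_eb": 2e-5,
}
weights = {k: likelihoods[k] * priors[k] for k in likelihoods}
fpp = sum(v for k, v in weights.items() if k != "planet") / sum(weights.values())
print(f"FPP = {fpp:.3f}")       # the candidate is "validated" if the FPP is small enough
```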
this paper is devoted to pulse solutions in fitzhugh - - nagumo systems that are coupled parabolic equations with rapidly periodically oscillating coefficients. in the limit of vanishing periods, there arises a two - scale fitzhugh - - nagumo system, which qualitatively and quantitatively captures the dynamics of the original system. we prove existence and stability of pulses in the limit system and show their proximity on any finite time interval to pulse - like solutions of the original system. | arxiv:1707.08176 |
ranking systems are the key components of modern information retrieval ( ir ) applications, such as search engines and recommender systems. besides the ranking relevance to users, the exposure fairness to item providers has also been considered an important factor in ranking optimization. many fair ranking algorithms have been proposed to jointly optimize both ranking relevance and fairness. however, we find that most existing fair ranking methods adopt greedy algorithms that only optimize rankings for the next immediate session or request. as shown in this paper, such a myopic paradigm could limit the upper bound of ranking optimization and lead to suboptimal performance in the long term. to this end, we propose fara, a novel future - aware ranking algorithm for ranking relevance and fairness optimization. instead of greedily optimizing rankings for the next immediate session, fara plans ahead by jointly optimizing multiple ranklists together and saving them for future sessions. specifically, fara first uses the taylor expansion to investigate how future ranklists will influence the overall fairness of the system. then, based on the analysis of the taylor expansion, fara adopts a two - phase optimization algorithm where we first solve an optimal future exposure planning problem and then construct the optimal ranklists according to the optimal future exposure planning. theoretically, we show that fara is optimal for ranking relevance and fairness joint optimization. empirically, our extensive experiments on three semi - synthesized datasets show that fara is efficient, effective, and can deliver significantly better ranking performance compared to state - of - the - art fair ranking methods. we make our implementation public at https : / / github. com / taosheng - ty / qp _ fairness /. | arxiv:2305.16637 |
usually be recovered with reverse engineering. the process can also help to cut down the time required to understand the source code, thus reducing the overall cost of the software development. reverse engineering can also help to detect and eliminate malicious code written into the software with better code detectors. reversing a source code can be used to find alternate uses of the source code, such as detecting the unauthorized replication of the source code where it was not intended to be used, or revealing how a competitor ' s product was built. that process is commonly used for " cracking " software and media to remove their copy protection, or to create a possibly - improved copy or even a knockoff, which is usually the goal of a competitor or a hacker. malware developers often use reverse engineering techniques to find vulnerabilities in an operating system to build a computer virus that can exploit the system vulnerabilities. reverse engineering is also being used in cryptanalysis to find vulnerabilities in substitution ciphers, symmetric - key algorithms, or public - key cryptography. there are other uses of reverse engineering : games. reverse engineering in the context of games and game engines is often used to understand underlying mechanics, data structures, and proprietary protocols, allowing developers to create mods and custom tools or to enhance compatibility. this practice is particularly useful when interfacing with existing systems to improve interoperability between different game components, engines, or platforms. platforms like reshax provide tools and resources that assist in analyzing game binaries and dissecting game engine behavior, thus contributing to a deeper understanding of game technology and enabling community - driven enhancements. interfacing. reverse engineering can be used when a system is required to interface with another system and it must be established how the two systems would negotiate. such requirements typically exist for interoperability. military or commercial espionage. learning about an enemy ' s or competitor ' s latest research by stealing or capturing a prototype and dismantling it may result in the development of a similar product or a better countermeasure against it. obsolescence. integrated circuits are often designed on proprietary systems and built on production lines, which become obsolete in only a few years. when systems using those parts can no longer be maintained since the parts are no longer made, the only way to incorporate the functionality into new technology is to reverse - engineer the existing chip and then to redesign it using newer tools by using the understanding gained as a guide. another obsolescence - originated problem that can be | https://en.wikipedia.org/wiki/Reverse_engineering |
membership inference attacks ( mias ) aim to predict whether a data sample belongs to the model ' s training set or not. although prior research has extensively explored mias in large language models ( llms ), they typically require access to complete output logits ( i. e., logits - based attacks ), which are usually not available in practice. in this paper, we study the vulnerability of pre - trained llms to mias in the label - only setting, where the adversary can only access generated tokens ( text ). we first reveal that existing label - only mias have minor effects in attacking pre - trained llms, although they are highly effective in inferring fine - tuning datasets used for personalized llms. we find that their failure stems from two main reasons, including better generalization and overly coarse perturbation. specifically, due to the extensive pre - training corpora and exposing each sample only a few times, llms exhibit minimal robustness differences between members and non - members. this makes token - level perturbations too coarse to capture such differences. to alleviate these problems, we propose petal : a label - only membership inference attack based on per - token semantic similarity. specifically, petal leverages token - level semantic similarity to approximate output probabilities and subsequently calculate the perplexity. it finally exposes membership based on the common assumption that members are ' better ' memorized and have smaller perplexity. we conduct extensive experiments on the wikimia benchmark and the more challenging mimir benchmark. empirically, our petal performs better than the extensions of existing label - only attacks against personalized llms and even on par with other advanced logit - based attacks across all metrics on five prevalent open - source llms. | arxiv:2502.18943 |
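To illustrate the core idea of substituting per-token semantic similarity for unavailable token probabilities (a toy sketch, not the authors' implementation, which would query a real LLM and a real embedding model), the snippet below maps cosine similarities between target tokens and generated tokens into (0, 1) and aggregates them into a perplexity-style score; the embedding table is a random stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "mat", "dog", "ran"]
emb = {w: rng.normal(size=16) for w in vocab}           # hypothetical token embeddings

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def pseudo_perplexity(target_tokens, generated_tokens, eps=1e-6):
    """Use per-token similarity as a stand-in for per-token probability."""
    sims = []
    for t, g in zip(target_tokens, generated_tokens):
        s = (cosine(emb[t], emb[g]) + 1.0) / 2.0        # map similarity into (0, 1)
        sims.append(min(max(s, eps), 1.0))              # treat it as a probability proxy
    return float(np.exp(-np.mean(np.log(sims))))        # perplexity-style aggregate

target = ["the", "cat", "sat"]                          # candidate training text
generated = ["the", "dog", "ran"]                       # what the model produced token by token
print(pseudo_perplexity(target, generated))             # lower -> "better memorized" -> member
```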
we construct eberlein almost periodic functions $ f _ j : j \ to h $ so that $ | | f _ 1 ( \ cdot ) | | $ is not ergodic and thus not eberlein almost periodic and $ | | f _ 2 ( \ cdot ) | | $ is eberlein almost periodic, but $ f _ 1 $ and $ f _ 2 $ are not pseudo almost periodic and the parseval equation for them fails, where $ j = \ mathbb { r } _ + $ or $ \ mathbb { r } $ and $ h $ is a hilbert space. this answers several questions posed by zhang and liu [ 18 ]. | arxiv:1104.1827 |
we investigated the dependence of the seed [ ta / pt, ta / au ] and capping [ pt / ta, au / ta ] layers on the spin pumping effect in a ferromagnetic 3 nm thick co thin film using ferromagnetic resonance spectroscopy. the data are fitted with the kittel equation to evaluate the damping constant and g - factor. a strong dependence of seed and capping layers on spin pumping has been discussed. the value of the damping constant α is found to be relatively large, i. e. 0. 0326 for the ta ( 3 ) / pt ( 3 ) / co ( 3 ) / pt ( 3 ) / ta ( 3 ) nm multi - layer structure, while it is 0. 0104 for ta ( 3 ) / co ( 3 ) / ta ( 3 ) nm. the increase in α is observed due to the pt layer that works as a good sink for spins due to high spin - orbit coupling. in addition, we measured the effective spin conductance = 2. 0 × 10 ^ 18 m ^ - 2 for the trilayer structure pt ( 3 ) / co ( 3 ) / pt ( 3 ) nm as a result of the enhancement in α relative to its bulk value. we observed that the evaluated g - factor decreases as the effective demagnetizing magnetic field increases in all the studied samples. the azimuthal dependence of the magnetic resonance field and linewidth showed relatively high anisotropy in the trilayer ta ( 3 ) / co ( 3 ) / ta ( 3 ) nm structure. | arxiv:1703.10630 |
recently, the impact of disorder on topological properties has attracted significant attention in photonics, especially the intriguing disorder - induced topological phase transitions in photonic topological anderson insulators ( ptais ). however, the reported ptais are based on time - reversal symmetry broken systems or quasi - three - dimensional time - reversal invariant system, both of which would limit the applications in integrated optics. here, we realize a time - reversal symmetric two - dimensional ptai on silicon platform within the near - ir wavelength range, taking the advantageous valley degree of freedom of photonic crystal. a low - threshold topological anderson phase transition is observed by applying disorder to the critical topologically trivial phase. conversely, we have also realized extremely robust topologically protected edge states based on the stable topological phase. both two phenomena are validated through theoretical dirac hamiltonian analysis, numerical simulations, and experimental measurements. our proposed structure holds promise to achieve near - zero topological phase transition thresholds, which breaks the conventional cognition that strong disorder is required to induce the phase transition. it significantly alleviates the difficulty of manipulating disorder and could be extended to other systems, such as condensed matter systems where strong disorder is hard to implement. this work is also beneficial to construct highly robust photonic integrated circuits serving for on - chip photonic and quantum optic information processing. moreover, this work also provides an outstanding platform to investigate on - chip integrated disordered systems. | arxiv:2501.11251 |
a major challenge when trying to detect fraud is that the fraudulent activities form a minority class which make up a very small proportion of the data set. in most data sets, fraud occurs in typically less than 0. 5 % of the cases. detecting fraud in such a highly imbalanced data set typically leads to predictions that favor the majority group, causing fraud to remain undetected. we discuss some popular oversampling techniques that solve the problem of imbalanced data by creating synthetic samples that mimic the minority class. a frequent problem when analyzing real data is the presence of anomalies or outliers. when such atypical observations are present in the data, most oversampling techniques are prone to create synthetic samples that distort the detection algorithm and spoil the resulting analysis. a useful tool for anomaly detection is robust statistics, which aims to find the outliers by first fitting the majority of the data and then flagging data observations that deviate from it. in this paper, we present a robust version of rose, called robrose, which combines several promising approaches to cope simultaneously with the problem of imbalanced data and the presence of outliers. the proposed method achieves to enhance the presence of the fraud cases while ignoring anomalies. the good performance of our new sampling technique is illustrated on simulated and real data sets and it is shown that robrose can provide better insight in the structure of the data. the source code of the robrose algorithm is made freely available. | arxiv:2003.11915 |
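The following sketch combines the two ingredients mentioned above, robust outlier flagging and synthetic minority oversampling, in a deliberately simplified form (median/MAD z-scores plus SMOTE-like interpolation); it is not the robROSE implementation, and the fraud data are synthetic.

```python
import numpy as np

def robust_outlier_mask(X, cutoff=3.5):
    """Flag rows whose modified (median/MAD) z-score exceeds the cutoff in any feature."""
    med = np.median(X, axis=0)
    mad = np.median(np.abs(X - med), axis=0) + 1e-12
    z = 0.6745 * (X - med) / mad
    return (np.abs(z) > cutoff).any(axis=1)          # True -> flagged as outlier

def oversample_minority(X_min, n_new, k=5, seed=0):
    """Generate synthetic minority samples by interpolating between clean points only."""
    rng = np.random.default_rng(seed)
    clean = X_min[~robust_outlier_mask(X_min)]
    synth = []
    for _ in range(n_new):
        i = rng.integers(len(clean))
        d = np.linalg.norm(clean - clean[i], axis=1)
        j = rng.choice(np.argsort(d)[1:k + 1])       # one of the k nearest clean neighbours
        lam = rng.random()
        synth.append(clean[i] + lam * (clean[j] - clean[i]))
    return np.array(synth)

# Hypothetical imbalanced fraud data: 20 fraud cases, 2 of them gross outliers.
rng = np.random.default_rng(1)
fraud = np.vstack([rng.normal(5, 1, (18, 2)), [[50, 50], [-40, 60]]])
print(oversample_minority(fraud, n_new=100).mean(axis=0))   # stays near (5, 5)
```

Without the outlier mask, interpolation toward the two anomalous points would scatter synthetic samples far from the bulk of the fraud cases, which is exactly the distortion the abstract warns about.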
we consider the superposition of the first two members of the gravitational hierarchy ( einstein plus first gauss - bonnet ( gb ) ) interacting with the superposition of the first two members of the $ so _ { ( \ pm ) } ( d ) $ yang - - mills hierarchy, in $ d $ dimensions. such systems can occur in the low energy effective action of string theory. particle - like solutions for the systems with only an einstein term, and with only a gb term, in dimensions $ d = 6, 8 $ are constructed respectively. our results reveal qualitatively new properties featuring double - valued solutions with critical behaviour. in this preliminary study, we have restricted ourselves to one - node solutions. | arxiv:hep-th/0202141 |
recent diagnostic datasets on compositional generalization, such as scan ( lake and baroni, 2018 ) and cogs ( kim and linzen, 2020 ), expose severe problems in models trained from scratch on these datasets. however, in contrast to this poor performance, state - of - the - art models trained on larger and more general datasets show better generalization ability. in this work, to reconcile this inconsistency, we conduct an empirical analysis by training transformer models on a variety of training sets with different data factors, including dataset scale, pattern complexity, example difficulty, etc. first, we show that increased dataset complexity can lead to better generalization behavior on multiple different generalization challenges. to further understand this improvement, we show two axes of the benefit from more complex datasets : they provide more diverse examples so compositional understanding becomes more effective, and they also prevent ungeneralizable memorization of the examples due to reduced example repetition frequency. finally, we explore how training examples of different difficulty levels influence generalization differently. on synthetic datasets, simple examples invoke stronger compositionality than hard examples do. on larger - scale real language datasets, while hard examples become more important potentially to ensure decent data coverage, a balanced mixture of simple and hard examples manages to induce the strongest generalizability. the code and data for this work are available at https : / / github. com / owenzx / data4comp | arxiv:2311.04420 |
we introduce and study a generalization of the notion of exact operator space that we call subexponential. using random matrices we show that the factorization results of grothendieck type that are known in the exact case all extend to the subexponential case, but we exhibit ( a continuum of distinct ) examples of non - exact subexponential operator spaces, as well as a $ c ^ * $ - algebra that is subexponential with constant 1 but not exact. we also show that $ oh $, $ r + c $ and $ \ max ( \ ell _ 2 ) $ ( or any other maximal operator space ) are not subexponential. | arxiv:1212.2053 |
we study the euler - lagrange equation of the dynamical boulatov model which is a simplicial model for 3d euclidean quantum gravity augmented by a laplace - beltrami operator. we provide all its solutions on the space of left and right invariant functions that render the interaction of the model an equilateral tetrahedron. surprisingly, for a non - linear equation of motion, the solution space forms a vector space. this space distinguishes three classes of solutions : saddle points, global and local minima of the action. our analysis shows that there exists one parameter region of coupling constants for which the action admits degenerate global minima. | arxiv:1806.09961 |
in light of the recent success of graph neural networks ( gnns ) and their ability to perform inference on complex data structures, many studies apply gnns to the task of text classification. in most previous methods, a heterogeneous graph, containing both word and document nodes, is constructed using the entire corpus and a gnn is used to classify document nodes. in this work, we explore a new discriminative graph of words graph neural network ( dgow - gnn ) approach encapsulating both a novel discriminative graph construction and model to classify text. in our graph construction, containing only word nodes and no document nodes, we split the training corpus into disconnected subgraphs according to their labels and weight edges by the pointwise mutual information of the represented words. our graph construction, for which we provide theoretical motivation, allows us to reformulate the task of text classification as the task of walk classification. we also propose a new model for the graph - based classification of text, which combines a gnn and a sequence model. we evaluate our approach on seven benchmark datasets and find that it is outperformed by several state - of - the - art baseline models. we analyse reasons for this performance difference and hypothesise under which conditions it is likely to change. | arxiv:2410.20469 |
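As a concrete sketch of the PMI edge weighting mentioned above (simplified; the paper additionally splits the corpus into one disconnected subgraph per class label), the code below counts sliding-window co-occurrences on a tiny made-up corpus and keeps only the positive-PMI word-word edges.

```python
import math
from collections import Counter
from itertools import combinations

def pmi_edges(docs, window=3):
    """Weight word-word edges by pointwise mutual information over sliding windows."""
    word_counts, pair_counts, n_windows = Counter(), Counter(), 0
    for doc in docs:
        tokens = doc.split()
        for start in range(max(1, len(tokens) - window + 1)):
            win = set(tokens[start:start + window])
            n_windows += 1
            word_counts.update(win)                           # once per window per word
            pair_counts.update(tuple(sorted(p)) for p in combinations(win, 2))
    edges = {}
    for (w1, w2), c in pair_counts.items():
        p_pair = c / n_windows
        p1, p2 = word_counts[w1] / n_windows, word_counts[w2] / n_windows
        pmi = math.log(p_pair / (p1 * p2))
        if pmi > 0:                                           # drop non-positive edges
            edges[(w1, w2)] = pmi
    return edges

docs = ["the match was a great game", "the team won the game", "stocks fell after the report"]
for edge, w in sorted(pmi_edges(docs).items(), key=lambda kv: -kv[1])[:5]:
    print(edge, round(w, 2))
```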
to one - loop order and $ o ( \ alpha _ { em } ) $, the electromagnetic mass splittings of $ \ pi $, $ a _ 1 $, $ k $, $ k _ 1 ( 1400 ) $, and $ k ^ * ( 892 ) $ are calculated in the framework of $ u ( 3 ) _ l \ times u ( 3 ) _ r $ chiral field theory. the logarithmic divergences emerging in the feynman integrations of the mesonic loops are factorized by using an intrinsic parameter $ g $ of this theory. no other additional parameters or counterterms are introduced to absorb the mesonic loop divergences. when $ f _ \ pi $, $ m _ \ rho $ and $ m _ a $ are taken as inputs, the parameter $ g $ will be determined and all the physical results are finite and fixed. dashen ' s theorem is satisfied in the chiral su ( 3 ) limit of this theory, and a rather large violation of the theorem is revealed at the order of $ m _ s $ or $ m _ k ^ 2 $. mass ratios of light quarks have been determined. a relation for electromagnetic corrections to masses of axial - vector mesons is obtained. it could be regarded as a generalization of dashen ' s theorem. comparing with data, it is found that the non - electromagnetic mass difference of $ k ^ * $ is in agreement with the estimation of schechter, subbaraman, weigel. | arxiv:hep-ph/9611297 |
we study properties of topological phases by calculating the ground state degeneracy ( gsd ) of the 2d levin - wen ( lw ) model. here it is explicitly shown that the gsd depends only on the spatial topology of the system. then we show that the ground state on a sphere is always non - degenerate. moreover, we study an example associated with a quantum group, and show that the gsd on a torus agrees with that of the doubled chern - simons theory, consistent with the conjectured equivalence between the lw model associated with a quantum group and the doubled chern - simons theory. | arxiv:1105.5771 |
quantum computing has emerged as a powerful tool for solving complex problems intractable for classical computers, particularly in popular fields such as cryptography, optimization, and neurocomputing. in this paper, we present a new quantum - based approach named the hierarchical quantum control gates ( hqcg ) method for efficient understanding of functional magnetic resonance imaging ( fmri ) data. this approach includes two novel modules : the local quantum control gate ( lqcg ) and the global quantum control gate ( gqcg ), which are designed to extract local and global features of fmri signals, respectively. our method operates end - to - end on a quantum machine, leveraging quantum mechanics to learn patterns within extremely high - dimensional fmri signals, such as 30, 000 samples which is a challenge for classical computers. empirical results demonstrate that our approach significantly outperforms classical methods. additionally, we found that the proposed quantum model is more stable and less prone to overfitting than the classical methods. | arxiv:2408.03596 |
we investigate the existence of a class of zfc - provably total recursive unary functions, given certain constraints, and apply some of those results to show that, for $ \ sigma _ 1 $ - sound set theory, zfc $ \ not \ vdash p < np $. | arxiv:cmp-lg/9804005 |
nonperturbative effects in event shape distributions can be characterized by shape functions derived in the eikonal approximation or, equivalently, from soft - collinear effective theory. the use of energy flow operators and the boost invariance of the wilson lines of soft gluons in the shape functions leads to a proof of universality for power corrections to the mean values of event shapes, without invoking the single gluon approximation. | arxiv:hep-ph/0603066 |
we study two effective models developed for description of superconductors with short - coherence length : ( i ) the extended hubbard model with on - site attraction and intersite repulsion, ( ii ) the model of hard - core charged bosons on a lattice. the analysis is concentrated on the problem of phase separations and competition between superconductivity ( ss ) and charge - density - wave ( cdw ) orderings. the phase diagrams of the systems are shown to consist of at least seven different states, including 3 types of phase separated ( ps ) states : cdw - ss ( ps1 ), cdw - normal ( ps2 ) and the state of electron droplets ( ps3 ). by taking into account the ps states and the effects of longer - range density - density interactions ( beyond nearest neighbours ) our work substantially generalizes and modifies the conclusions of previous works concerning the models considered. | arxiv:cond-mat/0702638 |
the reduction of the e8 gauge theory to ten dimensions leads to a loop group, which in relation to twisted k - theory has a dixmier - douady class identified with the neveu - schwarz h - field. we give an interpretation of the degree two part of the eta - form by comparing the adiabatic limit of the eta invariant with the one loop term in type iia. more generally, starting with a g - bundle, the comparison for manifolds with string structure identifies g with e8 and the representation as the adjoint, due to an interesting appearance of the dual coxeter number. this makes possible a description in terms of a generalized wzw model at the critical level. we also discuss the relation to the index gerbe, the possibility of obtaining such bundles from loop space, and the symmetry breaking to finite - dimensional bundles. we discuss the implications of this and we give several proposals. | arxiv:hep-th/0608190 |
deep learning ( dl ) has proven to be a highly effective approach for developing models in diverse contexts, including visual perception, speech recognition, and machine translation. however, the end - to - end process for applying dl is not trivial. it requires grappling with problem formulation and context understanding, data engineering, model development, deployment, continuous monitoring and maintenance, and so on. moreover, each of these steps typically relies heavily on humans, in terms of both knowledge and interactions, which impedes the further advancement and democratization of dl. consequently, in response to these issues, a new field has emerged over the last few years : automated deep learning ( autodl ). this endeavor seeks to minimize the need for human involvement and is best known for its achievements in neural architecture search ( nas ), a topic that has been the focus of several surveys. that stated, nas is not the be - all and end - all of autodl. accordingly, this review adopts an overarching perspective, examining research efforts into automation across the entirety of an archetypal dl workflow. in so doing, this work also proposes a comprehensive set of ten criteria by which to assess existing work in both individual publications and broader research areas. these criteria are : novelty, solution quality, efficiency, stability, interpretability, reproducibility, engineering quality, scalability, generalizability, and eco - friendliness. thus, ultimately, this review provides an evaluative overview of autodl in the early 2020s, identifying where future opportunities for progress may exist. | arxiv:2112.09245 |
we analyze the deviations of the mixing induced cp asymmetry in b - - > phi ks from sin ( 2beta ), as well as the deviations of the asymmetries in bs - - > k * k *, bs - - > phi k * and bs - - > phi phi from sin ( 2beta _ s ), that arise in sm due to penguin pollution. we use a theoretical input which is short - distance dominated in qcd - factorization and thus free of ir - divergencies. we also provide alternative ways to extract angles of the unitarity triangle from penguin - mediated decays, and give predictions for bs - - > k * k * observables. | arxiv:0707.2046 |
< 1 $. | arxiv:1805.10388 |
we studied nonsparsely diluted mean - field models that differ from sparsely diluted mean - field models, such as the viana - - bray model. when the existence probability of each edge follows a bernoulli distribution, we rigorously prove that the free energy of nonsparsely diluted mean - field models with appropriate parameterization coincides exactly with that of the corresponding mean - field models in ferromagnetic and spin - glass models composed of any discrete spin $ s $ in the thermodynamic limit. our results is a broad generalization of the result of a previous study [ bovier and gayrard, j. stat. phys. 72, 643 ( 1993 ) ], where the densely diluted mean - field ferromagnetic ising model ( diluted curie - - weiss model ) with appropriate parameterization was analyzed rigorously, and it was proven that its free energy was exactly equivalent to that of the corresponding mean - field model ( curie - - weiss model ). | arxiv:2406.13245 |
there are also many tools to support specific engineering tasks such as computer - aided manufacturing ( cam ) software to generate cnc machining instructions ; manufacturing process management software for production engineering ; eda for printed circuit board ( pcb ) and circuit schematics for electronic engineers ; mro applications for maintenance management ; and architecture, engineering and construction ( aec ) software for civil engineering. in recent years the use of computer software to aid the development of goods has collectively come to be known as product lifecycle management ( plm ). = = social context = = the engineering profession engages in a range of activities, from collaboration at the societal level to smaller individual projects. almost all engineering projects are obligated to a funding source : a company, a set of investors, or a government. the types of engineering that are less constrained by such a funding source are pro bono and open - design engineering. engineering has interconnections with society, culture and human behavior. most products and constructions used by modern society are influenced by engineering. engineering activities have an impact on the environment, society, economies, and public safety. engineering projects can be controversial. examples from different engineering disciplines include : the development of nuclear weapons, the three gorges dam, the design and use of sport utility vehicles and the extraction of oil. in response, some engineering companies have enacted serious corporate and social responsibility policies. the attainment of many of the millennium development goals requires the achievement of sufficient engineering capacity to develop infrastructure and sustainable technological development. overseas development and relief ngos make considerable use of engineers to apply solutions in disaster and development scenarios. some charitable organizations use engineering directly for development, for example engineers without borders, engineers against poverty, registered engineers for disaster relief, engineers for a sustainable world, engineering for change, and engineering ministries international. engineering companies in more developed economies face challenges with regard to the number of engineers being trained, compared with those retiring. this problem is prominent in the uk, where engineering has a poor image and low status. there are negative economic and political issues that this can cause, as well as ethical issues. it is agreed that the engineering profession faces an " image crisis ". the uk holds the most engineering companies compared to other european countries, together with the united states. = = = code of ethics = = = many engineering societies have established codes of practice and codes of ethics to guide members and inform the public at large. the national society of professional engineers code of ethics states : engineering is an important and learned profession. as members of this profession, engineers are expected to exhibit the | https://en.wikipedia.org/wiki/Engineering |
deep reinforcement learning ( drl ) is widely applied to safety - critical decision - making scenarios. however, drl is vulnerable to backdoor attacks, especially action - level backdoors, which pose significant threats through precise manipulation and flexible activation, risking outcomes like vehicle collisions or drone crashes. the key distinction of action - level backdoors lies in the utilization of the backdoor reward function to associate triggers with target actions. nevertheless, existing studies typically rely on backdoor reward functions with fixed values or conditional flipping, which lack universality across diverse drl tasks and backdoor designs, resulting in fluctuations or even failure in practice. this paper proposes the first universal action - level backdoor attack framework, called unidoor, which enables adaptive exploration of backdoor reward functions through performance monitoring, eliminating the reliance on expert knowledge and grid search. we highlight that action tampering serves as a crucial component of action - level backdoor attacks in continuous action scenarios, as it addresses attack failures caused by low - frequency target actions. extensive evaluations demonstrate that unidoor significantly enhances the attack performance of action - level backdoors, showcasing its universality across diverse attack scenarios, including single / multiple agents, single / multiple backdoors, discrete / continuous action spaces, and sparse / dense reward signals. furthermore, visualization results encompassing state distribution, neuron activation, and animations demonstrate the stealthiness of unidoor. the source code of unidoor can be found at https : / / github. com / maoubo / unidoor. | arxiv:2501.15529 |
the detection of a pev high - energy neutrino of astrophysical origin, observed by the icecube collaboration and correlated with a 3 $ \ sigma $ significance with fermi measurements to the gamma - ray blazar txs 0506 + 056, further stimulated the discussion on the production channels of high - energy particles in blazars. many models also consider a hadronic component that would not only contribute to the emission of electromagnetic radiation in blazars but also lead to the production of secondary high - energy neutrinos and gamma - rays. relativistic and compact plasma structures, so - called plasmoids, have been discussed in such flares to be moving along the jet axis. the frequently used assumption in such models that diffusive transport can describe particles in jet plasmoids is investigated in the present contribution. while the transport in the stationary scenario is diffusive for most of the parameter space, a flaring scenario is always accompanied by a non - diffusive phase in the beginning. in this paper, we present those conditions that determine the time scale to reach the diffusion phase as a function of the model parameters in the jet. we show that the type of the charged - particle transport, diffusive or ballistic, has a large influence on many observables, including the spectral energy distribution of blazars. | arxiv:2107.11386 |
the eu cost action newfocus is focused on investigating radical solutions with the potential to impact the design of future wireless networks. it aims to address some of the challenges in owc and establish it as an efficient technology that can satisfy the demanding requirements of backhaul and access network levels in 5g networks. this also includes the use of hybrid links that associate owc with radiofrequency or wired / fiber - based technologies. the focus of this white paper is on the use of optical wireless communication ( owc ) as enabling technology in a range of areas outlined in he ' s pillar ii including health, manufacturing, intelligent transportation systems ( its ), unmanned aerial vehicles and network and protocol. | arxiv:2210.02397 |
the 7 × 7 reconstruction of the si ( 111 ) surface represents arguably the most fascinating surface reconstruction so far observed in nature. yet, the atomistic mechanism underpinning its formation remains unclear after it was discovered sixty years ago. experimentally, it is observed a posteriori, so that analysis of its formation mechanism can only be carried out in analogy with archaeology. theoretically, density - functional - theory ( dft ) correctly predicts the si ( 111 ) - ( 7 × 7 ) ground state but is impractical for simulating its formation process, while empirical potentials failed to produce it as the ground state. developing an artificial neural - network potential of dft quality, we carried out accurate large - scale simulations to unravel the formation of the si ( 111 ) - ( 7 × 7 ) surface. we reveal a possible step - mediated atom - pop rate - limiting process that triggers massive non - conserved atomic rearrangements, most remarkably, a critical process of collective vacancy diffusion that mediates a sequence of selective dimer, corner - hole, stacking fault and dimer - line pattern formation, to fulfill the 7 × 7 reconstruction. our findings may not only solve the long - standing mystery of this famous surface reconstruction but also illustrate the power of machine learning in studying complex structures. | arxiv:2011.14505 |
the diagonal elements of the time correlation matrix are used to probe closed quantum systems that are measured at random times. this enables us to extract two distinct parts of the quantum evolution, a recurrent part and an exponentially decaying part. this separation is strongly affected when spectral degeneracies occur, for instance, in the presence of spontaneous symmetry breaking. moreover, the slowest decay rate is determined by the smallest energy level spacing, and this decay rate diverges at the spectral degeneracies. probing the quantum evolution with the diagonal elements of the time correlation matrix is discussed as a general concept and tested in the case of a bosonic josephson junction. it reveals for the latter characteristic properties at the transition to hilbert - space localization. | arxiv:2108.13143 |
we introduce the notion of combinatorial gauge symmetry - - a local transformation that includes single spin rotations plus permutations of spins ( or swaps of their quantum states ) - - that preserve the commutation and anti - commutation relations among the spins. we show that hamiltonians with simple two - body interactions contain this symmetry if the coupling matrix is a hadamard matrix, with the combinatorial gauge symmetry being associated to the automorphism of these matrices with respect to monomial transformations. armed with this symmetry, we address the physical problem of how to build quantum spin liquids with physically accessible interactions. in addition to its intrinsic physical significance, the problem is also tied to that of how to build topological qubits. | arxiv:1908.04791 |
measurements of magnetic hysteresis loops in cu - al - mn alloys of different mn content at low temperatures are presented. the loops are smooth and continuous above a certain temperature, but exhibit a magnetization discontinuity below that temperature. scaling analysis suggests that this system displays a disorder - induced phase transition line. the measurements allow us to determine the critical exponents $ \ beta = 0. 03 \ pm 0. 01 $ and $ \ beta \ delta = 0. 4 \ pm 0. 1 $, in agreement with those reported recently [ berger et al., phys. rev. lett. 85, 4176 ( 2000 ) ]. | arxiv:cond-mat/0209323 |
when error correction becomes possible it will be necessary to dedicate a large number of physical qubits to each logical qubit. error correction allows for deeper circuits to be run, but each additional physical qubit can potentially contribute an exponential increase in computational space, so there is a trade - off between using qubits for error correction or using them as noisy qubits. in this work we look at the effects of using noisy qubits in conjunction with noiseless qubits ( an idealized model for error - corrected qubits ), which we call the " clean and dirty " setup. we employ analytical models and numerical simulations to characterize this setup. numerically we show the appearance of noise - induced barren plateaus ( nibps ), i. e., an exponential concentration of observables caused by noise, in an ising model hamiltonian variational ansatz circuit. we observe this even if only a single qubit is noisy and given a deep enough circuit, suggesting that nibps cannot be fully overcome simply by error - correcting a subset of the qubits. on the positive side, we find that for every noiseless qubit in the circuit, there is an exponential suppression in concentration of gradient observables, showing the benefit of partial error correction. finally, our analytical models corroborate these findings by showing that observables concentrate with a scaling in the exponent related to the ratio of dirty - to - total qubits. | arxiv:2205.13454 |
quantization of the bosonic string around the classical, perturbative vacuum is not consistent for spacetime dimensions 2 < d < 26. recently we have showed that at large d there is another so - called mean field vacuum. here we extend this mean field calculation to finite d and show that the corresponding mean field vacuum is stable under quadratic fluctuations for 2 < d < 26. we point out the analogy with the two - dimensional o ( n ) - symmetric sigma - model, where the 1 / n - vacuum is very close to the real vacuum state even for finite n, in contrast to the perturbative vacuum. | arxiv:1703.05382 |
this work proposes a new adaptive - robust control ( arc ) architecture for a class of uncertain euler - lagrange ( el ) systems where the upper bound of the uncertainty satisfies a linear - in - parameters ( lip ) structure. conventional arc strategies either require structural knowledge of the system or presume that the overall uncertainties or their time derivatives are norm bounded by a constant. due to unmodelled dynamics and modelling imperfection, true structural knowledge of the system is not always available. further, for the class of systems under consideration, a prior assumption that the uncertainties ( or their time derivatives ) are upper bounded by a constant puts a restriction on the states beforehand. conventional arc laws also invite the overestimation - underestimation problem of the switching gain. to address this, an adaptive switching - gain based robust control ( asrc ) is proposed which alleviates the overestimation - underestimation problem of the switching gain. moreover, asrc avoids any presumption of a constant upper bound on the overall uncertainties and can negotiate uncertainties regardless of whether they are linear or nonlinear in parameters. experimental results of asrc using a wheeled mobile robot show improved control performance in comparison to adaptive sliding mode control. | arxiv:1708.01442 |
black phosphorus ( bp ), a relatively new plasmonic two - dimensional ( 2d ) material, offers unique photonic and electronic properties. in this work, we propose a new tunable and broadband ultrathin coherent perfect absorber ( cpa ) device operating in the terahertz ( thz ) frequency range. it is based on a bifacial metasurface made of periodic arrays of bp patches separated by a thin dielectric layer. a broad cpa bandwidth is realized due to the ultrathin thickness of the proposed device and the extraordinary properties of bp. in addition, a substantial modulation between cpa and complete transparency is achieved by adjusting the phase difference between the two counter - propagating incident waves. the cpa performance can be tuned by dynamically changing the electron doping level of bp. the cpa response under normal and oblique transverse magnetic ( tm ) and transverse electric ( te ) polarized incident waves is investigated. it is derived that cpa can be achieved under both incident polarizations and across a broad range of incident angles. the presented cpa device can be used in the design of tunable planar thz modulators, all - optical switches, detectors, and signal processors. | arxiv:1904.04165 |
we investigate a simple evolutionary game of sequences and demonstrate with this example the structure of fitness landscapes in discrete problems. we show the smoothing action of the genotype - phenotype mapping, which still makes it feasible for evolution to work. further, we propose the density of sequence states as a classifying measure of fitness landscapes. | arxiv:adap-org/9508002 |