text | source |
|---|---|
We investigate the possible types of coupling between ferroelectricity and magnetism for the zig-zag spin-chain multiferroic compound LiCu2O2. We construct a multi-order-parameter phenomenological model for the material based on a group-theoretical analysis. From our calculation we conclude that a coupling involving the inter-chain magnetic structure and ferroelectricity is necessary to understand the experimental results of Park et al. Our proposed model is able to account for the electric polarization flop in the presence of an externally applied magnetic field. Furthermore, based on our theoretical model we can make specific selection-rule predictions about electromagnon excitations present in the LiCu2O2 system. We also predict that the electromagnon peaks measured in an AC-conductivity measurement are field dependent. | arxiv:0803.0095 |
Systems as diverse as mechanical structures assembled from elastic components and photonic metamaterials enjoy a common geometrical feature: a sublattice symmetry. This property realizes a chiral symmetry first introduced to characterize a number of electronic insulators in the vicinity of their energy gaps. In this article, we introduce a generic framework to elucidate and design zero-energy topological boundary modes in all systems enjoying a chiral symmetry, whether crystalline or amorphous. We first show how to distinguish chiral insulators from one another by a real-space measure: their chiral polarization. In crystals, we use it to redefine the very concept of bulk-boundary correspondence, and resolve long-standing ambiguities in its application to chiral insulators. In amorphous metamaterials, we use it to lay out generic geometrical rules to locate topologically distinct phases, and explain how to engineer localized zero-mode wave guides even more robust than in periodic structures. | arxiv:2002.02850 |
We derive holographic superconductivity from a Hamiltonian that describes pairing of two-dimensional electrons near a ferromagnetic quantum-critical point. At low energies the theory maps onto a four-dimensional gravity description with Lifshitz spacetime and dynamic scaling exponent $z = 3/2$. The curved spacetime is due to power-law correlations of the critical normal state. The Lifshitz anisotropy is caused by phase-space constraints near the Fermi surface. The pairing instabilities obtained in Lifshitz space and from the Eliashberg formalism are found to be identical. We also formulate the holographic map for values of the dynamic scaling exponent $1 < z < \infty$. Our result provides an explicit realization of the holographic correspondence in two dimensions. | arxiv:2209.00474 |
We have quantitatively reanalyzed the inclusive charmed-baryon decays. New ingredients are the Voloshin preasymptotic effects in semileptonic decays and the Cabibbo-subleading contributions to both semileptonic and nonleptonic decays. It has been found that the Cabibbo-subleading Voloshin contribution essentially improves the theoretical semileptonic branching ratio of $\Lambda_c^+$, in agreement with experiment. The semileptonic branching ratios for $\Xi_c^+$ and $\Omega_c^0$ are found to be large, i.e., of the order of 20%. The lifetime hierarchy is in good qualitative and even quantitative agreement with experiment except for the $\Xi_c^+$ lifetime, which is somewhat smaller than the experimental value. Future measurements, especially measurements of the semileptonic branching ratios for $\Omega_c^0$, $\Xi_c^+$ and $\Xi_c^0$, should be decisive for the check of this approach. | arxiv:hep-ph/9704445 |
The classical version of Born-Infeld electrodynamics is recalled and its most important properties are discussed. Then we analyze possible Abelian and non-Abelian generalizations of this theory, and show how certain soliton-like configurations can be obtained. The relationship with the Standard Model of electroweak interactions is also mentioned. | arxiv:hep-th/0108026 |
In this note we give a simple, model-independent construction of Chern classes as natural transformations from differential complex K-theory to differential integral cohomology. We verify the expected behaviour of these Chern classes with respect to sums and suspension. | arxiv:0907.2504 |
Test-time adaptation (TTA) aims to address the distribution shift between the training and test data with only unlabeled data at test time. Existing TTA methods often focus on improving recognition performance specifically for test data associated with classes in the training set. However, during the open-world inference process, there are inevitably test data instances from unknown classes, commonly referred to as outliers. This paper addresses the problem of conducting both sample recognition and outlier rejection during inference while outliers exist. To address this problem, we propose a new approach called Stable Memory Replay (STAMP), which performs optimization over a stable memory bank instead of the risky mini-batch. In particular, the memory bank is dynamically updated by selecting low-entropy and label-consistent samples in a class-balanced manner. In addition, we develop a self-weighted entropy minimization strategy that assigns higher weight to low-entropy samples. Extensive results demonstrate that STAMP outperforms existing TTA methods in terms of both recognition and outlier detection performance. The code is released at https://github.com/yuyongcan/stamp. | arxiv:2407.15773 |
In this work, we have constructed a relation between the surface brightness ($\Sigma$) and diameter (D) of Galactic C- and S-type supernova remnants (SNRs). In order to calibrate the $\Sigma$-D dependence, we have carefully examined some intrinsic (e.g. explosion energy) and extrinsic (e.g. density of the ambient medium) properties of the remnants and, taking into account also the distance values given in the literature, we have adopted distances for some of the SNRs which have relatively more reliable distance values. These calibrator SNRs are all C- and S-type SNRs, i.e. F-type SNRs (and the S-type SNR Cas A, which has an exceptionally high surface brightness) are excluded. The $\Sigma$-D relation has two slopes with a turning point at D = 36.5 pc: $\Sigma$ (at 1 GHz) = 8.4$^{+19.5}_{-6.3} \times 10^{-12}$ D$^{-5.99^{+0.38}_{-0.33}}$ W m$^{-2}$ Hz$^{-1}$ ster$^{-1}$ (for $\Sigma \le 3.7 \times 10^{-21}$ W m$^{-2}$ Hz$^{-1}$ ster$^{-1}$ and D $\ge$ 36.5 pc) and $\Sigma$ (at 1 GHz) = 2.7$^{+2.1}_{-1.4} \times 10^{-17}$ D$^{-2.47^{+0.20}_{-0.16}}$ W m$^{-2}$ Hz$^{-1}$ ster$^{-1}$ (for $\Sigma > 3.7 \times 10^{-21}$ W m$^{-2}$ Hz$^{-1}$ ster$^{-1}$ and D $<$ 36.5 pc). We discussed the theoretical basis for the $\Sigma$-D dependence and particularly the reasons for the | arxiv:astro-ph/0304499 |
Pairwise learning corresponds to the supervised learning setting where the goal is to make predictions for pairs of objects. Prominent applications include predicting drug-target or protein-protein interactions, or customer-product preferences. In this work, we present a comprehensive review of pairwise kernels that have been proposed for incorporating prior knowledge about the relationship between the objects. Specifically, we consider the standard, symmetric and anti-symmetric Kronecker product kernels, metric-learning, Cartesian, ranking, as well as linear, polynomial and Gaussian kernels. Recently, an O(nm + nq) time generalized vec trick algorithm, where n, m, and q denote the number of pairs, drugs and targets, was introduced for training kernel methods with the Kronecker product kernel. This was a significant improvement over previous O(n^2) training methods, since in most real-world applications m, q << n. In this work we show how all the reviewed kernels can be expressed as sums of Kronecker products, allowing the use of the generalized vec trick for speeding up their computation. In the experiments, we demonstrate how the introduced approach allows scaling pairwise kernels to much larger data sets than previously feasible, and provide an extensive comparison of the kernels on a number of biological interaction prediction tasks. | arxiv:2009.01054 |
An extension of the single-freeze-out model with thermal and geometric parameters dependent on the spatial rapidity, $\alpha_\parallel$, is used to describe the rapidity and transverse-momentum spectra of pions, kaons, protons, and antiprotons measured at RHIC at $\sqrt{s_{NN}} = 200$ GeV by the BRAHMS collaboration. THERMINATOR is used to perform the necessary simulation, which includes all resonance decays. The result of the fit to the rapidity spectra in the range of the BRAHMS data is the expected growth of the baryon and strange chemical potentials with the magnitude of $\alpha_\parallel$, while the freeze-out temperature is kept fixed. The value of the baryon chemical potential at $\alpha_\parallel \sim 3$, which is the relevant region for particles detected at the BRAHMS forward rapidity $y \sim 3$, is about 200 MeV, i.e. it lies in the range of the values obtained for the highest SPS energy. The chosen geometry of the fireball has a decreasing transverse size as the magnitude of $\alpha_\parallel$ is increased, which also corresponds to decreasing transverse flow. This feature is verified by reproducing the transverse-momentum spectra of pions and kaons at various rapidities. The strange chemical potential obtained from the fit to the $K^+/K^-$ ratio is such that the local strangeness density in the fireball is compatible with zero. The resulting rapidity spectra of net protons are described qualitatively in the model. As a result of the study, knowledge of the "topography" of the fireball is achieved, making other calculations possible. As an example, we give predictions for the rapidity spectra of hyperons. | arxiv:nucl-th/0610083 |
Single top quark cross section evaluations for the complete sets of tree-level diagrams in the $e^+e^-$, $e^-e^-$, $\gamma e$ and $\gamma\gamma$ modes of the Next Linear Collider with unpolarized and polarized beams are performed within the Standard Model and beyond. From a comparison of all possibilities we conclude that the process $\gamma_+ e^-_L \to e^- t \bar{b}$ is extremely favoured due to its large cross section, absence of $t\bar{t}$ background, high degrees of beam polarization, and exceptional sensitivities to $V_{tb}$ and anomalous $Wtb$ couplings. Similar reasons favour the process $e^-e^- \to e^- \nu_e \bar{t} b$ for probing top quark properties despite a considerably lower cross section. Less favourable are processes like $e^+e^-, \gamma\gamma \to e^- \nu_e t \bar{b}$. Three processes were chosen to probe their sensitivity to anomalous $Wtb$ couplings, with the best bounds found for $\gamma_+ e^-_L \to e^- t \bar{b}$ and $e^+_R e^-_R \to e^- \nu_e t \bar{b}$. | arxiv:hep-ph/0104279 |
A general expression is given for the 14th Chern form in terms of simple polynomial concomitants of the curvature 2-form for n-dimensional differentiable manifolds having a general linear connection. | arxiv:gr-qc/9905003 |
The article investigates the properties of associative ideals in monoids. Such ideals have some applications in the logic of non-standard sequences and category theory. The relations of these ideals with the verbal structure of words over generators are examined. Finitely and infinitely generated monoids are treated separately. | arxiv:2403.13979 |
We study the BRST cohomology for two-dimensional supergravity coupled to $\hat{c} \leq 1$ superconformal matter in the conformal gauge. The super-Liouville and superconformal matters are represented by free scalar fields $\phi^L$ and $\phi^M$ and fermions $\psi^L$ and $\psi^M$, respectively, with suitable background charges, and these are coupled in such a way that the BRST charge is nilpotent. The physical states of the full theory are determined for the NS and R sectors. It is shown that there are extra states with ghost number $N_{FP} = 0, \pm 1$ for discrete momenta other than the degree of freedom corresponding to the "center of mass", and that these are closely related to the "null states" in the minimal models with $\hat{c} < 1$. | arxiv:hep-th/9110013 |
The ability to count the number of occurrences of events within a specified time interval is very useful in the specification of resource-bounded real-time computation. In this paper, we study an extension of Metric Temporal Logic ($\mathsf{MTL}$) with two different counting modalities called $\mathsf{C}$ and $\mathsf{UT}$ (until with threshold), which enhance the expressive power of $\mathsf{MTL}$ in orthogonal fashion. We confine ourselves only to the future fragment of $\mathsf{MTL}$ interpreted in a pointwise manner over finite timed words. We provide a comprehensive study of the expressive power of the logic $\mathsf{CTMTL}$ and its fragments using the technique of EF games extended with suitable counting moves. Finally, as our main result, we establish the decidability of $\mathsf{CTMTL}$ by giving an equisatisfiable reduction from $\mathsf{CTMTL}$ to $\mathsf{MTL}$. The reduction provides one more example of the use of temporal projections with oversampling introduced earlier for proving decidability. Our reduction also implies that $\mathsf{MITL}$ extended with the $\mathsf{C}$ and $\mathsf{UT}$ modalities is elementarily decidable. | arxiv:1512.09032 |
We investigate the impact of nonlinear evolution of the gravitational potentials in the LCDM model on the integrated Sachs-Wolfe (ISW) contribution to the CMB temperature power spectrum, and on the cross-power spectrum of the CMB and a set of biased tracers of the mass. We use an ensemble of N-body simulations to directly follow the potentials and compare results to perturbation theory (PT). The predictions from PT match the results to high precision for k < 0.2 h/Mpc. We compute the nonlinear corrections to the angular power spectrum and find them to be < 10% of linear theory for l < 100. These corrections are swamped by cosmic variance. On scales l > 100 the departures are more significant; however, the CMB signal is more than a factor 10^3 larger at this scale. Nonlinear ISW effects therefore play no role in shaping the CMB power spectrum for l < 1500. We analyze the CMB-density tracer cross-spectrum using simulations and renormalized bias PT, and find good agreement. The usual assumption is that nonlinear evolution enhances the growth of structure and counteracts linear ISW on small scales, leading to a change in sign of the CMB-LSS cross-spectrum at small scales. However, PT analysis suggests that this trend reverses at late times when the logarithmic growth rate f(a) = dlnD/dlna < 0.5 or Omega_m(a) < 0.3. Numerical results confirm these expectations and we find no sign change in ISW-LSS cross-power for low redshifts. Corrections due to nonlinearity and scale dependence of the bias are found to be < 10% for l < 100, therefore below the S/N of the current and future measurements. Finally, we estimate the CMB-halo cross-correlation coefficient and show that it can be made to match that for CMB-dark matter to within 5% for thin redshift shells, mitigating the need to model bias evolution. | arxiv:0905.2408 |
This paper studies the statistical models of the noise-robust normalized subband adaptive filter (NR-NSAF) algorithm in the mean and mean-square deviation senses, covering both transient-state and steady-state behavior, by resorting to the method of the vectorization operation and the Kronecker product. The analysis method does not require a Gaussian input signal. Moreover, the proposed analysis removes the paraunitary assumption imposed on the analysis filter banks in the existing analyses of subband adaptive algorithms. Simulation results in various conditions demonstrate the effectiveness of our theoretical analysis. For a special form of the algorithm, the proposed steady-state expression is also more accurate than the previous analysis. | arxiv:1711.11413 |
This paper presents a sensitive and comprehensive IRAC 3-8 $\mu$m photometric survey of white dwarfs for companions in the planetary mass regime with temperatures cooler than the known T dwarfs. The search focuses on descendants of intermediate-mass stars with $M \ga 3\,M_{\odot}$ whose inner, few-hundred-AU regions cannot be probed effectively for massive planets and brown dwarfs by any alternative existing method. Furthermore, examination for mid-infrared excess explores an extensive range of orbital semimajor axes, including the intermediate 5-50 AU range poorly covered and incompletely accessible by other techniques at main sequence or evolved stars. Three samples of white dwarfs are chosen which together represent relatively young as well as older populations of stars: 9 open cluster white dwarfs, 22 high-mass field white dwarfs, and 17 metal-rich field white dwarfs. In particular, these targets include: 7 Hyads and 4 field white dwarfs of similar age; 1 Pleiad and 19 field white dwarfs of similar age; van Maanen 2 and 16 similarly metal-rich white dwarfs with ages between 1 and 7 Gyr. No substellar companion candidates were identified at any star. By demanding a 15% minimum photometric excess at 4.5 $\mu$m to indicate a companion detection, upper limits in the planetary mass regime are established at 34 of the sample white dwarfs, 20 of which have limits below 10 $M_{\rm J}$ according to substellar cooling models. Specifically, limits below the minimum mass for deuterium burning are established at all Pleiades and Hyades white dwarfs, as well as similarly young field white dwarfs, half a dozen of which receive limits at or below 5 $M_{\rm J}$. Two IRAC epochs at vMa 2 rule out $T \ga 200$ K proper motion companions within 1200 AU. | arxiv:0804.0237 |
We give a necessary and sufficient condition for an MV polytope $P$ in a highest weight crystal to lie in an arbitrary fixed Demazure crystal (resp., opposite Demazure crystal), in terms of the lengths of edges along a path through the 1-skeleton of $P$ corresponding to a reduced word for the longest element of the Weyl group $W$. Also, we give an explicit description as a pseudo-Weyl polytope for extremal MV polytopes in a highest weight crystal. Finally, by combining the results above, we obtain a polytopal condition for an MV polytope $P$ to lie in an arbitrary fixed opposite Demazure crystal. | arxiv:0806.3112 |
The scintillator-strip electromagnetic calorimeter (ScECAL) is one of the calorimeter technologies which can achieve the fine granularity required for the particle flow algorithm. The second prototype of the ScECAL has been built and tested together with the analog hadron calorimeter (AHCAL) and tail catcher (TCMT) in September 2008 at the Fermilab meson test beam facility. Data were taken with 1 to 32 GeV electron, pion and muon beams to evaluate all the necessary performances of the ScECAL, AHCAL and TCMT system. This manuscript gives an overview of the beam test and very preliminary results focusing on the ScECAL part. | arxiv:0902.2257 |
We analyze the role played by gauge invariance in the existence of the Dirac monopole. To this end, we consider electrodynamics with a massive photon and ask if a magnetic charge can be introduced there. We show that the derivation of the Dirac quantization condition based on the angular momentum algebra cannot be generalized to the case of massive electrodynamics. Possible implications of this result are briefly discussed. | arxiv:hep-ph/9505445 |
We present analytically and numerically the spectrum of high harmonic emission generated by twisted electrons in the presence of linearly polarized light. Ensuing transitions from electronic continuum states with orbital angular momentum to bound states give rise to circularly polarized attosecond pulses. For central collisions with twisted wavepackets, continuum-bound transitions are subject to dipole selection rules. For non-central collisions a crossover from circularly to linearly polarized emission occurs for increasing impact parameter, due to the transverse topology of twisted wavepackets. | arxiv:1909.00728 |
Prosody is a rich information source in natural language, serving as a marker for phenomena such as contrast. In order to make this information available to downstream tasks, we need a way to detect prosodic events in speech. We propose a new model for pitch accent detection, inspired by the work of Stehwien et al. (2018), who presented a CNN-based model for this task. Our model makes greater use of context by using full utterances as input and adding an LSTM layer. We find that these innovations lead to an improvement from 87.5% to 88.7% accuracy on pitch accent detection on American English speech in the Boston University Radio News Corpus, a state-of-the-art result. We also find that a simple baseline that just predicts a pitch accent on every content word yields 82.2% accuracy, and we suggest that this is the appropriate baseline for this task. Finally, we conduct ablation tests that show pitch is the most important acoustic feature for this task and this corpus. | arxiv:2004.14846 |
The cross section for dijet production in pp collisions at sqrt(s) = 7 TeV is presented as a function of xi, a variable that approximates the fractional momentum loss of the scattered proton in single-diffractive events. The analysis is based on an integrated luminosity of 2.7 inverse nanobarns collected with the CMS detector at the LHC at low instantaneous luminosities, and uses events with jet transverse momentum of at least 20 GeV. The dijet cross section results are compared to the predictions of diffractive and nondiffractive models. The low-xi data show a significant contribution from diffractive dijet production, observed for the first time at the LHC. The associated rapidity gap survival probability is estimated. | arxiv:1209.1805 |
During crises, social media serves as a crucial coordination tool, but the vast influx of posts -- from "actionable" requests and offers to generic content like emotional support, behavioural guidance, or outdated information -- complicates effective classification. Although generative LLMs (large language models) can address this issue with few-shot classification, their high computational demands limit real-time crisis response. While fine-tuning encoder-only models (e.g., BERT) is a popular choice, these models still exhibit higher inference times in resource-constrained environments. Moreover, although distilled variants (e.g., DistilBERT) exist, they are not tailored for the crisis domain. To address these challenges, we make two key contributions. First, we present CrisisHelpOffer, a novel dataset of 101k tweets collaboratively labelled by generative LLMs and validated by humans, specifically designed to distinguish actionable content from noise. Second, we introduce the first crisis-specific mini models optimized for deployment in resource-constrained settings. Across 13 crisis classification tasks, our mini models surpass BERT (and also outperform or match the performance of RoBERTa, MPNet, and BERTweet), offering higher accuracy with significantly smaller sizes and faster speeds. The medium model is 47% smaller with 3.8% higher accuracy at 3.5x speed, the small model is 68% smaller with a 1.8% accuracy gain at 7.7x speed, and the tiny model, 83% smaller, matches BERT's accuracy at 18.6x speed. All models outperform existing distilled variants, setting new benchmarks. Finally, as a case study, we analyze social media posts from a global crisis to explore help-seeking and assistance-offering behaviours in selected developing and developed countries. | arxiv:2502.16839 |
We present measurements obtained with the Spitzer Space Telescope in five bands from 3.6-24 microns of the northern inner radio lobe of Centaurus A, the nearest powerful radio galaxy. We show that this emission is synchrotron in origin. Comparison with ultraviolet observations from GALEX shows that diffuse ultraviolet emission exists in a smaller region than the infrared but also coincides with the radio jet. We discuss the possibility that synchrotron emission is responsible for the ultraviolet emission and conclude that further data are required to confirm this. | arxiv:astro-ph/0601413 |
(Abridged) We investigate the importance of several numerical artifacts, such as lack of resolution, on spectral properties of the Lyman alpha forest as computed from cosmological hydrodynamic simulations in a standard cold dark matter universe. We assume an ionising background produced by quasars as computed by Haardt & Madau. We use a new simulation code based on P3M and SPH, which we compare in detail with a modified version of HYDRA (Couchman et al.) and published results of TreeSPH (Hernquist et al.). The agreement is very good between all three codes. We then use our new code to investigate several numerical effects, such as resolution, on spectral statistics deduced from Voigt profile fitting. Our highest resolution simulation has a mass resolution of 2.1x10^5 solar masses. The column density distribution is converged but the b-parameter distribution is only marginally converged. The simulation reproduces both the HI column density and b-parameter distribution when we assume a high baryon density, Omega_b h^2 > 0.028. In addition we need to impose a higher IGM temperature than predicted within our basic set of assumptions. The simulated HI optical depth is in good agreement with observations but the HeII optical depth is lower than observed. Fitting the latter requires a larger jump between the photon flux at the H and He edge than is present in the Haardt & Madau spectrum. | arxiv:astro-ph/9805119 |
schools of cognitivism: the cognitivist and the social cognitivist. The former focuses on understanding the thinking or cognitive processes of an individual, while the latter includes social processes as influences in learning besides cognition. These two schools, however, share the view that learning is more than a behavioral change but is rather a mental process used by the learner. === Constructivism === Educational psychologists distinguish between several types of constructivism: individual (or psychological) constructivism, such as Piaget's theory of cognitive development, and social constructivism. This form of constructivism has a primary focus on how learners construct their own meaning from new information, as they interact with reality and with other learners who bring different perspectives. Constructivist learning environments require students to use their prior knowledge and experiences to formulate new, related, and/or adaptive concepts in learning (Termos, 2012). Under this framework, the role of the teacher becomes that of a facilitator, providing guidance so that learners can construct their own knowledge. Constructivist educators must make sure that the prior learning experiences are appropriate and related to the concepts being taught. Jonassen (1997) suggests "well-structured" learning environments are useful for novice learners and that "ill-structured" environments are only useful for more advanced learners. Educators utilizing a constructivist perspective may emphasize an active learning environment that may incorporate learner-centered problem-based learning, project-based learning, and inquiry-based learning, ideally involving real-world scenarios, in which students are actively engaged in critical thinking activities.
An illustrative discussion and example can be found in the 1980s deployment of constructivist cognitive learning in computer literacy, which involved programming as an instrument of learning.:224 Logo, a programming language, embodied an attempt to integrate Piagetian ideas with computers and technology. Initially there were broad, hopeful claims, including "perhaps the most controversial claim" that it would "improve general problem-solving skills" across disciplines.:238 However, Logo programming skills did not consistently yield cognitive benefits.:238 It was "not as concrete" as advocates claimed, it privileged "one form of reasoning over all others", and it was difficult to apply the thinking activity to non-Logo-based activities. By the late 1980s, Logo and other similar programming languages had lost their novelty and dominance and were gradually de-emphasized amid criticisms. == Practice == The extent to which e-learning assists or replaces other learning and teaching approaches | https://en.wikipedia.org/wiki/Educational_technology |
It is becoming clear that luminous extragalactic X-ray and sub-mm sources are essentially distinct populations. Thus, if high redshift sub-mm sources represent massive spheroids in formation, there must be a time lag between the major epoch of star formation and the appearance of a visible quasar. Despite this distinction, I find tentative evidence for a puzzling angular cross-correlation between X-ray sources and bright sub-mm sources in two independent fields. If this signal is due to large-scale structure it would argue for a low redshift (z < 2) for many of the SCUBA sources. Alternatively, I suggest that the effect may be enhanced by gravitational lensing. The exceptionally steep slope of the bright sub-mm counts makes this population particularly prone to lensing bias. An apparent correlation may therefore be produced if X-ray sources trace the intervening large-scale structure. | arxiv:astro-ph/0203173 |
The mid-Pleistocene transition, the shift from 41 kyr to 100 kyr glacial-interglacial cycles that occurred roughly 1 Myr ago, is often considered as a change in internal climate dynamics. Here we revisit the model of Quaternary climate dynamics that was proposed by Saltzman and Maasch (1988). We show that it is quantitatively similar to a scalar equation for the ice dynamics only when combining the remaining components into a single delayed feedback term. The delay is the sum of the internal time scales of ocean transport and ice sheet dynamics, which is on the order of 10 kyr. We find that, in the absence of astronomical forcing, the delayed feedback leads to bistable behaviour, where stable large-amplitude oscillations of ice volume and an equilibrium coexist over a large range of values for the delay. We then apply astronomical forcing. We perform a systematic study to show how the system response depends on the forcing amplitude. We find that over a wide range of forcing amplitudes the forcing leads to a switch from small-scale oscillations of 41 kyr to large-amplitude oscillations of roughly 100 kyr without any change of other parameters. The transition in the forced model consistently occurs near the time of the mid-Pleistocene transition as observed in data records. This provides evidence that the MPT could have been primarily a forcing-induced switch between attractors of the internal dynamics. Small additional random disturbances make the forcing-induced transition near 800 kyr BP even more robust. We also find that the forced system forgets its initial history during the small-scale oscillations; in particular, nearby initial conditions converge prior to transitioning. In contrast to this, in the regime of large-amplitude oscillations, the oscillation phase is very sensitive to random perturbations, which has a strong effect on the timing of the deglaciation events. | arxiv:1712.07614 |
there is a broad interest in enhancing the strength of light - atom interactions to the point where injecting a single photon induces a nonlinear material response. here, we show theoretically that sub - doppler - cooled, two - level atoms that are spatially organized by weak optical fields give rise to a nonlinear material response that is greatly enhanced beyond that attainable in a homogeneous gas. specifically, in the regime where the intensity of the applied optical fields is much less than the off - resonant saturation intensity, we show that the third - order nonlinear susceptibility scales inversely with atomic temperature and, due to this scaling, can be two orders of magnitude larger than that of a homogeneous gas for typical experimental parameters. as a result, we predict that spatially bunched two - level atoms can exhibit single - photon nonlinearities. our model is valid for all atomic temperature regimes and simultaneously accounts for the back - action of the atoms on the optical fields. our results agree with previous theoretical and experimental results for light - atom interactions that have considered only a limited range of temperatures. for lattice beams tuned to the low - frequency side of the atomic transition, we find that the nonlinearity transitions from a self - focusing type to a self - defocusing type at a critical intensity. we also show that higher than third - order nonlinear optical susceptibilities are significant in the regime where the dipole potential energy is on the order of the atomic thermal energy. we therefore find that it is crucial to retain high - order nonlinearities to accurately predict interactions of laser fields with spatially organized ultracold atoms. the model presented here is a foundation for modeling low - light - level nonlinear optical processes for ultracold atoms in optical lattices. | arxiv:1405.2361 |
this is a very short paper that briefly discusses some of the tasks that nlg systems perform. it is of no research interest, but i have occasionally found it useful as a way of introducing nlg to potential project collaborators who know nothing about the field. | arxiv:cmp-lg/9605002 |
in the context of novel solid electrolytes for solid-state batteries, first-principles calculations are becoming increasingly popular due to their ability to reproduce and predict accurately the energy, structural, and dynamical properties of fast-ion conductors. in order to accelerate the discovery of new superionic conductors it is convenient to establish meaningful relations between ionic transport and simple materials descriptors. recently, several experimental studies on lithium fast-ion conductors have suggested a correlation between lattice softness and enhanced ionic conductivity due to a concomitant decrease in the activation energy for ion migration, $E_{a}$. in this article, we employ extensive \emph{ab initio} molecular dynamics simulations based on density functional theory to substantiate the links between ionic transport and lattice dynamics in a number of structurally and chemically distinct lithium superionic conductors. our first-principles results show no evidence for a direct and general correlation between $E_{a}$, or the hopping attempt frequency, and lattice softness. however, we find that, in agreement with recent observations, the pre-exponential factor of lithium diffusivity, $D_{0}$, follows the meyer-neldel rule $\propto \exp{\left(E_{a}/\langle\omega\rangle\right)}$, where $\langle\omega\rangle$ represents an average phonon frequency. hence, lattice softness can be identified with enhanced lithium diffusivity, but only within families of superionic materials presenting very similar migration activation energies, due to larger $D_{0}$. on the technical side, we show that neglect of temperature effects in first-principles estimation of $E_{a}$ may lead to huge inaccuracies of $\sim 10\%$. the limitations of zero-temperature harmonic approaches in the modeling of lithium-ion conductors are also illustrated. | arxiv:1811.07936 |
in order to perform a sensitivity analysis of lagrangian trajectory models, lagrangian trajectory simulations have been compared to six openmetbuoy-v2021 drifter trajectories in the agulhas current system (jan-mar 2023). three different lagrangian trajectory simulations have been assessed: (1) two offline lagrangian tracking tools, opendrift and parcels, (2) three eulerian ocean surface current products, hycom, mercator and globcurrent, and (3) the addition of wind and/or wave forcing parameterizations. the latter has also been evaluated by strong ocean current, high wind speed and stokes drift regimes. firstly, using the same time stepping scheme and linear interpolation methods, the different lagrangian simulators opendrift and parcels performed identically. secondly, the globcurrent product showed the highest mean skill of the three ocean current products, although it underestimated the speed for strong ocean currents due to its spatial resolution. the hycom and mercator model simulations showed, respectively, 40\% and 15\% lower skill than the globcurrent simulations. finally, the addition of the stokes drift and a wind drift factor (wdf) improved the lagrangian simulation performance in skill and speed, especially in high wind (>10 m/s) and/or stokes drift regimes (>0.15 m/s). the optimal wdf for the openmetbuoy-v2021 is found to be ~1.8\% and ~2.3\% for simulations including and excluding stokes drift forcing respectively. to further improve the incorporation of stokes drift and direct wind drag on the trajectory simulations, a more physically based solution is advised, as there are still numerous wind- and wave-related processes that remain unresolved, like wave-current interactions and vertical shear. to statistically strengthen the conclusions from this research, incorporating additional observed drifter trajectories would be highly favourable. | arxiv:2409.20096 |
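The forcing combination evaluated above amounts to a simple linear superposition at each time step. The sketch below assumes this additive form (current + wdf × wind + Stokes drift) with the reported optimal wdf of ~1.8%; the function names and the Euler stepping are illustrative, not code from the study.

```python
def drifter_velocity(u_current, u_wind, u_stokes, wdf=0.018):
    """Surface drifter velocity (m/s) as ocean current plus a wind-drift
    fraction of the 10-m wind plus the surface Stokes drift."""
    return tuple(c + wdf * w + s for c, w, s in zip(u_current, u_wind, u_stokes))

def euler_step(position, velocity, dt):
    """Advance an (x, y) position in metres by one forward-Euler time step."""
    return tuple(p + v * dt for p, v in zip(position, velocity))

# strong-current, high-wind example: 1 m/s current, 12 m/s wind, 0.2 m/s Stokes
v = drifter_velocity((1.0, 0.0), (12.0, 0.0), (0.2, 0.0))  # (1.416, 0.0)
```

With wdf = 0 and zero Stokes drift this reduces to pure current advection, which is the baseline the offline tracking tools were compared against.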
correlations in sensory neural networks have both extrinsic and intrinsic origins. extrinsic or stimulus correlations arise from shared inputs to the network, and thus depend strongly on the stimulus ensemble. intrinsic or noise correlations reflect biophysical mechanisms of interactions between neurons, which are expected to be robust to changes of the stimulus ensemble. despite the importance of this distinction for understanding how sensory networks encode information collectively, no method exists to reliably separate intrinsic interactions from extrinsic correlations in neural activity data, limiting our ability to build predictive models of the network response. in this paper we introduce a general strategy to infer population models of interacting neurons that collectively encode stimulus information. the key to disentangling intrinsic from extrinsic correlations is to infer the couplings between neurons separately from the encoding model, and to combine the two using corrections calculated in a mean-field approximation. we demonstrate the effectiveness of this approach on retinal recordings. the same coupling network is inferred from responses to radically different stimulus ensembles, showing that these couplings indeed reflect stimulus-independent interactions between neurons. the inferred model predicts accurately the collective response of retinal ganglion cell populations as a function of the stimulus. | arxiv:1801.01823 |
the scalar sector of the simplest version of the 3 - 3 - 1 electroweak models is constructed with three higgs triplets only. we show that a relation involving two of the constants of the higgs potential, two vacuum expectation values of the neutral scalars and the mass of the doubly charged higgs boson leads to important information concerning the signals of this scalar particle. | arxiv:hep-ph/0610276 |
as people's demand for personal privacy and data security becomes a priority, encrypted traffic has become mainstream in the cyber world. however, traffic encryption is also shielding malicious and illegal traffic, introduced by adversaries, from being detected. this is especially so in the post-covid-19 environment where malicious traffic encryption is growing rapidly. common security solutions that rely on plain payload content analysis, such as deep packet inspection, are rendered useless. thus, machine learning based approaches have become an important direction for encrypted malicious traffic detection. in this paper, we formulate a universal framework of machine learning based encrypted malicious traffic detection techniques and provide a systematic review. furthermore, current research adopts different datasets to train their models due to the lack of well-recognized datasets and feature sets. as a result, their model performance cannot be compared and analyzed reliably. therefore, in this paper, we analyse, process and combine datasets from 5 different sources to generate a comprehensive and fair dataset to aid future research in this field. on this basis, we also implement and compare 10 encrypted malicious traffic detection algorithms. we then discuss challenges and propose future directions of research. | arxiv:2203.09332 |
$\cdots = \log\left(\frac{1}{p_{X}(x)}\right)$. for example, winning in the example § choosing 6 from 49 above is a bernoulli-distributed random variable $X$ with a 1/13,983,816 chance of winning ("success"); we write $X \sim \mathrm{Bernoulli}(p) = \mathrm{B}(1, p)$ with $p = \tfrac{1}{13{,}983{,}816}$ and $q = \tfrac{13{,}983{,}815}{13{,}983{,}816}$. the information content of winning is $\operatorname{I}_{X}(\text{win}) = -\log_{2} p_{X}(\text{win}) = -\log_{2} \tfrac{1}{13{,}983{,}816} \approx 23.73725$ shannons or bits of information. (see units of information for further explanation of terminology.) the information content of losing is $\operatorname{I}_{X}(\text{lose}) = -\log_{2} p_{X}(\text{lose}) = -\log_{2} \tfrac{13{,}983{,}815}{13{,}983{,}816} \approx 1.0317 \times 10^{-7}$ shannons. | https://en.wikipedia.org/wiki/Lottery_mathematics |
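The surprisal values in the row above can be reproduced in a few lines. This is a generic illustration of the self-information formula $I_X(x) = -\log_2 p_X(x)$, not code from the source article.

```python
import math

def information_content(p: float) -> float:
    """Self-information (surprisal) in bits of an event with probability p."""
    return -math.log2(p)

N_TICKETS = 13_983_816  # C(49, 6): ways to choose 6 numbers from 49
p_win = 1 / N_TICKETS
p_lose = 1 - p_win

i_win = information_content(p_win)    # winning is very surprising: ~23.737 bits
i_lose = information_content(p_lose)  # losing carries almost no information
```

The asymmetry is the point of the example: the near-certain outcome (losing) conveys almost zero information, while the rare outcome conveys almost 24 bits.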
air pollution is one of the leading causes of death globally, and continues to have a detrimental effect on our health. in light of these impacts, an extensive range of statistical modelling approaches has been devised in order to better understand air pollution statistics. however, the time-varying statistics of different types of air pollutants are far from being fully understood. the observed probability density functions (pdfs) of concentrations depend very much on the spatial location and on the pollutant substance. in this paper, we analyse a large variety of data from 3544 different european monitoring sites and show that the pdfs of nitric oxide ($NO$), nitrogen dioxide ($NO_2$) and particulate matter ($PM_{10}$ and $PM_{2.5}$) concentrations generically exhibit heavy tails and are asymptotically well approximated by $q$-exponential distributions with a given width parameter $\lambda$. we observe that the power-law parameter $q$ and the width parameter $\lambda$ vary widely for the different spatial locations. for each substance, we find different patterns of parameter clouds in the $(q,\lambda)$ plane. these depend on the type of pollutant and on the environmental characteristics (urban/suburban/rural/traffic/industrial/background). this means the effective statistical physics description of air pollution exhibits a strong degree of spatial heterogeneity. | arxiv:2203.04296 |
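For readers unfamiliar with $q$-exponentials, the density family referred to above can be sketched as follows. The normalization shown (valid for $1 < q < 2$ on $x \ge 0$) is a standard textbook form of the Tsallis $q$-exponential, not taken from the paper.

```python
def q_exponential_pdf(x: float, q: float, lam: float) -> float:
    """Tsallis q-exponential density for 1 < q < 2 and x >= 0:
        p(x) = (2 - q) * lam * [1 + (q - 1) * lam * x] ** (-1 / (q - 1)).
    The tail decays as a power law ~ x^(-1/(q-1)), and the density
    reduces to the ordinary exponential lam * exp(-lam * x) as q -> 1.
    """
    return (2.0 - q) * lam * (1.0 + (q - 1.0) * lam * x) ** (-1.0 / (q - 1.0))
```

The tail exponent $-1/(q-1)$ is what makes a fitted $q$ a direct measure of tail heaviness at each monitoring site, while $\lambda$ sets the concentration scale.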
in the encoding of many real-world problems to propositional satisfiability, the cardinality constraint is a recurrent constraint that needs to be managed effectively. several efficient encodings have been proposed, while missing that such a constraint can be involved in a more general propositional formulation. to avoid combinatorial explosion, the tseitin principle, usually used to translate such general propositional formulas to conjunctive normal form (cnf), introduces fresh propositional variables to represent sub-formulas and/or complex constraints. thanks to the plaisted and greenbaum improvement, the polarity of the sub-formula $\phi$ is taken into account, leading to conditional constraints of the form $y \rightarrow \phi$, or $\phi \rightarrow y$, where $y$ is a fresh propositional variable. in the case where $\phi$ represents a cardinality constraint, such a translation leads to the conditional cardinality constraints that are the subject of the present paper. we first show that when all the clauses encoding the cardinality constraint are augmented with an additional new variable, most of the well-known encodings cease to maintain the generalized arc consistency property. then, we consider some of these encodings and show how they can be extended to recover such an important property. an experimental validation is conducted on a sat-based pattern mining application, where such conditional cardinality constraints are a cornerstone, showing the relevance of our proposed approach. | arxiv:1804.00211 |
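As a concrete illustration of the conditional constraints discussed above, here is a sketch (not one of the paper's encodings) of the naive pairwise at-most-one cardinality encoding and its conditional form $y \rightarrow \phi$, obtained by augmenting every clause of $\phi$ with the literal $\neg y$. Clauses are DIMACS-style lists of signed integer literals.

```python
from itertools import combinations

def at_most_one(variables):
    """Pairwise CNF encoding of sum(variables) <= 1: one binary clause per pair."""
    return [[-a, -b] for a, b in combinations(variables, 2)]

def conditional(y, clauses):
    """Encode y -> phi by adding the literal -y to every clause of phi."""
    return [[-y] + clause for clause in clauses]

# at-most-one over x1..x3, guarded by the fresh variable y = 4
phi = at_most_one([1, 2, 3])   # [[-1, -2], [-1, -3], [-2, -3]]
guarded = conditional(4, phi)  # [[-4, -1, -2], [-4, -1, -3], [-4, -2, -3]]
```

When $y$ is false, every augmented clause is satisfied by $\neg y$ and the cardinality constraint is switched off; this clause-level augmentation is exactly the operation the paper shows can break generalized arc consistency for more sophisticated encodings.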
we prove a number of results to the general effect that, under obviously necessary numerical and determinant constraints, "most" morphisms between fixed bundles on a complex elliptic curve produce (co)kernels which can either be specified beforehand or else meet various rigidity constraints. examples include: (a) for indecomposable $\mathcal{E}$ and $\mathcal{E'}$ with slopes and ranks increasing strictly in that order, the space of monomorphisms whose cokernel is semistable and maximally rigid (i.e. has minimal-dimensional automorphism group) is open dense; (b) for indecomposable $\mathcal{K}$, $\mathcal{E}$ and stable $\mathcal{F}$ with slopes increasing strictly in that order and ranks and determinants satisfying the obvious additivity constraints, the space of embeddings $\mathcal{K} \to \mathcal{E}$ whose cokernel is isomorphic to $\mathcal{F}$ is open dense; (c) the obvious mirror images of these results; (d) generalizations weakening indecomposability to semistability + maximal rigidity; (e) various examples illustrating the necessity of the assorted assumptions. | arxiv:2407.07344 |
we propose a scheme to produce spin entangled states for two interacting electrons. one electron is bound in a well in a semiconductor quantum wire and the second electron is transported along the wire, trapped in a surface acoustic wave (saw) potential minimum. we investigate the conditions for which the coulomb interaction between the two electrons induces entanglement. detailed numerical investigation reveals that the two electrons can be fully spin entangled depending on the confinement characteristics of the well and the saw potential amplitude. | arxiv:cond-mat/0610168 |
this short technical report illustrates the results of a test procedure we performed to validate the computer simulation of the hyq robot. | arxiv:1604.06818 |
oxygen ion migration in li2mno3 was systematically studied by first-principles calculations. the hole polaron is found to be effective in lowering the migration barrier of the oxygen ion. | arxiv:1908.05754 |
interpreting the recently discovered narrow exotic baryons as pentaquark states, we discuss, along an old argument of ours, the isospin mixing occurring within the two doublets of $q=-1$ and $q=0$ states lying inside the $s=-2$ ($\xi$-cascade) sector. we argue that, at least within the jaffe-wilczek assignment, presently available data already indicate that mixing should occur at an observable level in both charge sectors, with mixing angles that can be predicted in terms of ratios of observable mass splittings. | arxiv:hep-ph/0404262 |
any continuous function $f^*$ can be approximated arbitrarily well by a neural network with sufficiently many neurons $k$. we consider the case when $f^*$ itself is a neural network with one hidden layer and $k$ neurons. approximating $f^*$ with a neural network with $n<k$ neurons can thus be seen as fitting an under-parameterized "student" network with $n$ neurons to a "teacher" network with $k$ neurons. as the student has fewer neurons than the teacher, it is unclear whether each of the $n$ student neurons should copy one of the teacher neurons or rather average a group of teacher neurons. for shallow neural networks with erf activation function and for the standard gaussian input distribution, we prove that "copy-average" configurations are critical points if the teacher's incoming vectors are orthonormal and its outgoing weights are unitary. moreover, the optimum among such configurations is reached when $n-1$ student neurons each copy one teacher neuron and the $n$-th student neuron averages the remaining $k-n+1$ teacher neurons. for the student network with $n=1$ neuron, we provide additionally a closed-form solution of the non-trivial critical point(s) for commonly used activation functions through solving an equivalent constrained optimization problem. empirically, we find for the erf activation function that gradient flow converges either to the optimal copy-average critical point or to another point where each student neuron approximately copies a different teacher neuron. finally, we find similar results for the relu activation function, suggesting that the optimal solution of underparameterized networks has a universal structure. | arxiv:2311.01644 |
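The copy-average idea above can be made concrete for a tiny erf network. This is an illustrative construction only (the dimensions, the plain averaging of incoming vectors, and the summed outgoing weight are simplifying assumptions; the paper's optimal configuration involves specific scalings we do not reproduce here).

```python
import math

def shallow_net(x, weights_in, weights_out):
    """One-hidden-layer erf network: y = sum_i a_i * erf(w_i . x)."""
    return sum(a * math.erf(sum(wi * xi for wi, xi in zip(w, x)))
               for w, a in zip(weights_in, weights_out))

# teacher with k = 3 neurons: orthonormal incoming vectors, unit outgoing weights
teacher_w = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
teacher_a = [1.0, 1.0, 1.0]

# "copy-average" student with n = 2 neurons:
# neuron 0 copies teacher neuron 0; neuron 1 averages teacher neurons 1 and 2,
# with outgoing weight equal to the sum of the averaged teachers' weights
student_w = [teacher_w[0],
             tuple((u + v) / 2 for u, v in zip(teacher_w[1], teacher_w[2]))]
student_a = [teacher_a[0], teacher_a[1] + teacher_a[2]]
```

Along the copied direction the student reproduces the teacher exactly, while along the averaged directions it only approximates it, which is precisely the trade-off the under-parameterized student must make.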
given a hopf algebra in a symmetric monoidal category with duals, the category of modules inherits the structure of a monoidal category with duals. if the notion of algebra is replaced with that of monad on a monoidal category with duals then bruguieres and virelizier showed when the category of modules inherits this structure of being monoidal with duals, and this gave rise to what they called a hopf monad. in this paper it is shown that there are good diagrammatic descriptions of dinatural transformations which allows the three - dimensional, object - free nature of their constructions to become apparent. | arxiv:0807.0658 |
the cdf collaboration has recently published a precision measurement of the w - boson mass that differs from the standard model prediction by seven standard deviations. this result can be explained with additional electroweak multiplets that either break the custodial symmetry or contribute to oblique parameters at loop level. here, we study one of the best - motivated scenarios involving new multiplets : the type - ii seesaw model, which involves a scalar triplet that generates majorana neutrino masses and can furthermore resolve the w - boson mass discrepancy. this favors a doubly - charged scalar with mass between 100 and 200 gev as well as other scalars with a fixed mass splitting. the entire preferred parameter space is testable at the lhc. | arxiv:2204.10274 |
electrostatic spectrometers utilized in high-resolution beta-spectroscopy studies such as in the karlsruhe tritium neutrino (katrin) experiment have to operate with a background level of less than $10^{-2}$ counts per second. this limit can be exceeded by even a small number of rn-219 or rn-220 atoms being emanated into the volume and undergoing alpha-decay there. in this paper we present a detailed model of the underlying background-generating processes via electron emission by internal conversion, shake-off and relaxation processes in the atomic shells of the po-215 and po-216 daughters. the model yields electron energy spectra up to 400 kev and electron multiplicities of up to 20, which are compared to experimental data. | arxiv:1304.1375 |
we introduce crystalformer, a transformer-based autoregressive model specifically designed for space-group-controlled generation of crystalline materials. the incorporation of space group symmetry significantly simplifies the crystal space, which is crucial for data- and compute-efficient generative modeling of crystalline materials. leveraging the prominent discrete and sequential nature of the wyckoff positions, crystalformer learns to generate crystals by directly predicting the species and locations of symmetry-inequivalent atoms in the unit cell. we demonstrate the advantages of crystalformer in standard tasks such as symmetric structure initialization and element substitution compared to conventional methods implemented in popular crystal structure prediction software. moreover, we showcase the application of crystalformer to property-guided materials design in a plug-and-play manner. our analysis shows that crystalformer ingests sensible solid-state chemistry knowledge and heuristics by compressing the material dataset, thus enabling systematic exploration of crystalline materials. the simplicity, generality, and flexibility of crystalformer position it as a promising architecture to be the foundational model of the entire crystalline materials space, heralding a new era in materials modeling and discovery. | arxiv:2403.15734 |
in-materia reservoir computing (rc) leverages the intrinsic physical responses of functional materials to perform complex computational tasks. magnetic metamaterials are exciting candidates for rc due to their huge state space, nonlinear emergent dynamics, and non-volatile memory. however, to be suitable for a broad range of tasks, the material system is required to exhibit a broad range of properties, and isolating these behaviours experimentally can often prove difficult. by using an electrically accessible device consisting of an array of interconnected magnetic nanorings -- a system shown to exhibit complex emergent dynamics -- here we show how reconfiguring the reservoir architecture allows exploitation of different aspects of the system's dynamical behaviours. this is evidenced through state-of-the-art performance in diverse benchmark tasks with very different computational requirements, highlighting the additional computational configurability that can be obtained by altering the input/output architecture around the material system. | arxiv:2206.04446 |
we consider the cauchy problem for the integrable nonlocal nonlinear schr\"odinger (nnls) equation $iq_{t}(x,t) + q_{xx}(x,t) + 2q^{2}(x,t)\bar{q}(-x,t) = 0,\, x \in \mathbb{R},\, t>0,$ with step-like boundary values: $q(x,t) \to 0$ as $x \to -\infty$ and $q(x,t) \to a$ as $x \to \infty$ for all $t \geq 0$, where $a>0$ is a constant. the long-time asymptotics of the solution $q(x,t)$ of this problem along the rays $x/t = c \ne 0$ is presented in \cite{rs2}. in the present paper, we extend the asymptotics into a region that is asymptotically closer to the ray $x=0$ than these rays with any nonzero constant $c$. we specify a one-parameter family of wedges in the $(x,t)$-plane, with curved boundaries, characterized by qualitatively different asymptotic behavior of $q(x,t)$, and present the main asymptotic terms for each wedge. particularly, for wedges with $x<0$, we show that the solution decays as $t^{p}\sqrt{\ln t}$ with $p<0$ depending on the wedge. for wedges with $x>0$, we show that the asymptotics has an oscillating nature, with phase functions specific for each wedge and depending on a slow variable parametrizing the wedges. the main tool used in this work is an adaptation of the nonlinear steepest descent method to the case when the stationary phase point of the phase function in the jump of the associated riemann-hilbert problem merges with a point which is singular for the corresponding spectral functions. | arxiv:2004.05987 |
economy can produce just two goods (say "guns" and "butter"). the ppf is a table or graph (as at the right) that shows the different quantity combinations of the two goods producible with a given technology and total factor inputs, which limit feasible total output. each point on the curve shows potential total output for the economy, which is the maximum feasible output of one good, given a feasible output quantity of the other good. scarcity is represented in the figure by people being willing but unable in the aggregate to consume beyond the ppf (such as at x) and by the negative slope of the curve. if production of one good increases along the curve, production of the other good decreases, an inverse relationship. this is because increasing output of one good requires transferring inputs to it from production of the other good, decreasing the latter. the slope of the curve at a point on it gives the trade-off between the two goods. it measures what an additional unit of one good costs in units forgone of the other good, an example of a real opportunity cost. thus, if one more gun costs 100 units of butter, the opportunity cost of one gun is 100 butter. along the ppf, scarcity implies that choosing more of one good in the aggregate entails doing with less of the other good. still, in a market economy, movement along the curve may indicate that the choice of the increased output is anticipated to be worth the cost to the agents. by construction, each point on the curve shows productive efficiency in maximizing output for given total inputs. a point inside the curve (as at a) is feasible but represents production inefficiency (wasteful use of inputs), in that output of one or both goods could increase by moving in a northeast direction to a point on the curve. examples cited of such inefficiency include high unemployment during a business-cycle recession or economic organisation of a country that discourages full use of resources.
being on the curve might still not fully satisfy allocative efficiency (also called pareto efficiency) if it does not produce a mix of goods that consumers prefer over other points. much applied economics in public policy is concerned with determining how the efficiency of an economy can be improved. recognizing the reality of scarcity and then figuring out how to organise society for the most efficient use of resources has been described as the "essence of economics", where the subject "makes its unique contribution." === specialisation === special | https://en.wikipedia.org/wiki/Economics |
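The guns-and-butter arithmetic above is just the magnitude of the PPF's slope. A minimal sketch (illustrative only, not from the article):

```python
def opportunity_cost(delta_guns: float, delta_butter: float) -> float:
    """Butter forgone per extra gun when moving along the PPF (|slope|)."""
    return abs(delta_butter / delta_guns)

# moving along the curve: producing 1 more gun means giving up 100 butter,
# so the opportunity cost of one gun is 100 units of butter
cost = opportunity_cost(1.0, -100.0)  # 100.0
```

Because the trade-off is read off as a ratio of changes, the same function applies at any point on the curve; a bowed-out PPF simply means this ratio grows as more guns are produced.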
cavity - based noise detection schemes are combined with ultrafast pulse shaping as a means to diagnose the spectral correlations of both the amplitude and phase noise of an ultrafast frequency comb. the comb is divided into ten spectral regions, and the distribution of noise as well as the correlations between all pairs of spectral regions are measured against the quantum limit. these correlations are then represented in the form of classical noise matrices, which furnish a complete description of the underlying comb dynamics. their eigendecomposition reveals a set of theoretically predicted, decoupled noise modes that govern the dynamics of the comb. finally, the matrices contain the information necessary to deduce macroscopic noise properties of the comb. | arxiv:1410.4499 |
we propose an extension of the chronological calculus, developed by agrachev and gamkrelidze for the case of $C^\infty$-smooth dynamical systems on finite-dimensional $C^\infty$-smooth manifolds, to the case of $C^m$-smooth dynamical systems and infinite-dimensional $C^m$-manifolds. due to a relaxation in the underlying structure of the calculus, this extension provides a powerful computational tool without recourse to the theory of calculus in fr\'echet spaces required by the classical chronological calculus. in addition, this extension accounts for flows of vector fields which are merely measurable in time. to demonstrate the utility of this extension, we prove a variant of the chow-rashevskii theorem for infinite-dimensional manifolds. | arxiv:1405.3997 |
we present how the formalism of geometric phases in adiabatic quantum dynamics provides geometric realisations permitting one to "embody" everett's many-worlds interpretation of quantum mechanics, including the interferences between the worlds needed for the probability changes and the decoherence processes needed to solve the preferred basis problem. we show also that this geometric realisation is intimately related to quantum gravity (especially to matrix models), showing that the many-worlds interpretation can be consistent with quantum gravity. the concept of wormhole, borrowed from general relativity, is central in this geometric realisation. it appears not only as an image by analogy to help the interpretation, but also as a true physical model of a quantum wormhole in quantum gravity, the two being consistent with each other. | arxiv:2302.13651 |
modeling the evolution of a financial index as a stochastic process is a problem awaiting a full, satisfactory solution since it was first formulated by bachelier in 1900. here it is shown that the scaling with time of the return probability density function sampled from the historical series suggests a successful model. the resulting stochastic process is a heteroskedastic, non - markovian martingale, which can be used to simulate index evolution on the basis of an auto - regressive strategy. results are fully consistent with volatility clustering and with the multi - scaling properties of the return distribution. the idea of basing the process construction on scaling, and the construction itself, are closely inspired by the probabilistic renormalization group approach of statistical mechanics and by a recent formulation of the central limit theorem for sums of strongly correlated random variables. | arxiv:0804.0331 |
we study the de-equivariantization of a hopf algebra by an affine group scheme and we apply tannakian techniques in order to realize it as the tensor category of comodules over a coquasi-bialgebra. as an application we construct a family of coquasi-hopf algebras $A(H,G,\phi)$ attached to a coradically-graded pointed hopf algebra $H$ and some extra data. | arxiv:1206.0410 |
am cvn-type systems are ultracompact, helium-accreting binary systems which are evolutionarily linked to the progenitors of thermonuclear supernovae and are expected to be strong galactic sources of gravitational waves detectable to upcoming space-based interferometers. am cvn binaries with orbital periods $\lesssim$ 20--23 min exist in a constant high state with a permanently ionised accretion disc. we present the discovery of tic 378898110, a bright ($G = 14.3$ mag), nearby ($309.3 \pm 1.8$ pc), high-state am cvn binary discovered in tess two-minute-cadence photometry. at optical wavelengths this is the third-brightest am cvn binary known. the photometry of the system shows a 23.07172(6) min periodicity, which is likely to be the `superhump' period and implies an orbital period in the range 22--23 min. there is no detectable spectroscopic variability. the system underwent an unusual, year-long brightening event during which the dominant photometric period changed to a shorter period (constrained to $20.5 \pm 2.0$ min), which we suggest may be evidence for the onset of disc-edge eclipses. the estimated mass transfer rate, $\log(\dot{M}/\mathrm{M_\odot}\,\mathrm{yr}^{-1}) = -6.8 \pm 1.0$, is unusually high and may suggest a high-mass or thermally inflated donor. the binary is detected as an x-ray source, with a flux of $9.2^{+4.2}_{-1.8} \times 10^{-13}$ erg cm$^{-2}$ s$^{-1}$ in the 0.3--10 kev range. tic 378898110 is the shortest-period binary system discovered with tess, and its large predicted gravitational-wave amplitude makes it a compelling verification binary for future space-based gravitational wave detectors. | arxiv:2311.01255 |
We discuss the design and performance of a threshold Cerenkov counter for identification of charged hadrons. The radiator is pressurized gas, which is contained in thin-walled cylindrical modules. A mirror system of novel design transports Cerenkov photons to photomultiplier tubes. This system is compact, contains relatively little material, and has a large fraction of active volume. A prototype of a module designed for the proposed CLEO III detector has been studied using cosmic rays. Results from these studies show good agreement with a detailed Monte Carlo simulation of the module and indicate that it should achieve separation of pions and kaons at the 2.5-3.0 sigma level in the momentum range 0.8-2.8 GeV/c. We predict performance for specific physics analyses using a GEANT-based simulation package. | arxiv:hep-ex/9607013 |
This paper builds and extends on the authors' previous work related to the algorithmic tool cylindrical algebraic decomposition (CAD) and one of its core applications, real quantifier elimination (QE). These topics are at the heart of symbolic computation and were first implemented in computer algebra systems decades ago, but have recently received renewed interest as part of the ongoing development of SMT solvers for non-linear real arithmetic. First, we consider the use of iterated univariate resultants in traditional CAD, and how this leads to inefficiencies, especially in the case of an input with multiple equational constraints. We reproduce the workshop paper [Davenport and England, 2023], adding important clarifications to our suggestions first made there to make use of multivariate resultants in the projection phase of CAD. We then consider an alternative approach to this problem first documented in [McCallum and Brown, 2009] which redefines the actual object under construction, albeit only in the case of two equational constraints. We correct an unhelpful typo and provide a proof missing from that paper. We finish by revisiting the topic of how to deal with SMT or real QE problems expressed using rational functions (as opposed to the usual polynomial ones), noting that these are often found in industrial applications. We revisit a proposal made in [Uncu, Davenport and England, 2023] for doing this in the case of satisfiability, explaining why such an approach does not trivially extend to more complicated quantification structure and giving a suitable alternative. | arxiv:2312.16210 |
We propose a novel method for training deep neural networks that are capable of interpolation, that is, driving the empirical loss to zero. At each iteration, our method constructs a stochastic approximation of the learning objective. The approximation, known as a bundle, is a pointwise maximum of linear functions. Our bundle contains a constant function that lower bounds the empirical loss. This enables us to compute an automatic adaptive learning rate, thereby providing an accurate solution. In addition, our bundle includes linear approximations computed at the current iterate and other linear estimates of the DNN parameters. The use of these additional approximations makes our method significantly more robust to its hyperparameters. Based on its desirable empirical properties, we term our method Bundle Optimisation for Robust and Accurate Training (BORAT). In order to operationalise BORAT, we design a novel algorithm for optimising the bundle approximation efficiently at each iteration. We establish the theoretical convergence of BORAT in both convex and non-convex settings. Using standard publicly available data sets, we provide a thorough comparison of BORAT to other single-hyperparameter optimisation algorithms. Our experiments demonstrate BORAT matches the state-of-the-art generalisation performance for these methods and is the most robust. | arxiv:2201.12678 |
Standard candles are one of the most important tools to study the universe. In this paper, the constraints of standard candles on the cosmological parameters are estimated for different cases. The dependence of the constraints on the intrinsic scatter of the luminosity relation and the redshift distribution of the standard candles is specifically investigated. The results, especially for the constraints on the components of the universe, clearly show that constraints from standard candles at different redshifts have different degeneracy orientations; thus standard candles with a wide redshift distribution can break the degeneracy by themselves and improve the constraints significantly. As a result, even with the current level of tightness of known luminosity relations, gamma-ray bursts (GRBs) can give constraints on the components of the universe of comparable tightness to type Ia supernovae (SNe Ia), as long as the redshifts of the GRBs are diverse enough. However, for a substantial constraint on the dark energy equation of state, tighter luminosity relations for GRBs are needed, since the constraints on the dark energy from standard candles at high redshifts are very weak and are thus less helpful in the degeneracy breaking. | arxiv:1509.07477 |
Standard clothing asset generation involves creating forward-facing flat-lay garment images displayed on a clear background by extracting clothing information from diverse real-world contexts, which presents significant challenges due to highly standardized sampling distributions and precise structural requirements in the generated images. Existing models have limited spatial perception and often exhibit structural hallucinations in this high-specification generative task. To address this issue, we propose a novel retrieval-augmented generation (RAG) framework, termed RAGDiffusion, to enhance structure determinacy and mitigate hallucinations by assimilating external knowledge from LLMs and databases. RAGDiffusion consists of two core processes: (1) retrieval-based structure aggregation, which employs contrastive learning and a Structure Locally Linear Embedding (SLLE) to derive global structure and spatial landmarks, providing both soft and hard guidance to counteract structural ambiguities; and (2) omni-level faithful garment generation, which introduces a three-level alignment that ensures fidelity in structural, pattern, and decoding components within the diffusion process. Extensive experiments on challenging real-world datasets demonstrate that RAGDiffusion synthesizes structurally and detail-faithful clothing assets with significant performance improvements, representing a pioneering effort in high-specification faithful generation with RAG to confront intrinsic hallucinations and enhance fidelity. | arxiv:2411.19528 |
We find there are at least two different steady states for transport across noncollinear magnetic multilayers. In the conventional one there is a discontinuity in the spin current across the interfaces, which has been identified as the source of current-induced magnetic reversal; in the one advocated herein the spin torque arises from the spin accumulation transverse to the magnetization of a magnetic layer. These two states have quite different attributes which should be discerned by current experiments. | arxiv:cond-mat/0405613 |
We reveal the hydrogen isotope effect on three chemical reactions, i.e., the reflection, the absorption and the penetration ratios, by classical molecular dynamics simulation with a modified Brenner's reactive empirical bond order (REBO) potential. We find that the reflection by the pi-electrons does not depend on the mass of the incident isotope, but the peak of the reflection by the nucleus moves to the higher side of incident energy. In addition to the reflection, we also find that the absorption ratio on the positive z side of the graphene becomes larger as the mass of the incident isotope becomes larger. On the other hand, the absorption ratio on the negative z side of the graphene becomes smaller. Last, it is found that the penetration ratio does not depend on the mass of the incident isotope because the graphene potential is not affected by the mass. | arxiv:0705.3130 |
Atomic self-ordering to a crystalline phase in optical resonators is a consequence of the intriguing non-linear dynamics of strongly coupled atom motion and photons. Generally the resulting phase diagrams and atomic states can be largely understood on a mean-field level. However, close to the phase transition point, quantum fluctuations and atom-field entanglement play a key role and initiate the symmetry breaking. Here we propose a modified ring cavity geometry, in which the asymmetry imposed by a tilted pump beam reveals clear signatures of quantum dynamics even in a larger regime around the phase transition point. Quantum fluctuations become visible both in the dynamic and steady-state properties. Most strikingly, we can identify a regime where a mean-field approximation predicts a runaway instability, while in the full quantum model the quantum fluctuations of the light field modes stabilize uniform atomic motion. The proposed geometry thus allows to unveil the "quantumness" of atomic self-ordering via experimentally directly accessible quantities. | arxiv:1907.02772 |
Let $\rho$ be a Borelian probability measure on $\mathrm{SL}_d(\mathbb{R})$. Consider the random walk $(X_n)$ on $\mathbb{R}^d \setminus \{0\}$ defined by $\rho$: for any $x \in \mathbb{R}^d \setminus \{0\}$, we set $X_0 = x$ and $X_{n+1} = g_{n+1} X_n$ where $(g_n)$ is an iid sequence of $\mathrm{SL}_d(\mathbb{R})$-valued random variables of law $\rho$. Guivarc'h and Raugi proved that under an assumption on the subgroup generated by the support of $\rho$ (strong irreducibility and proximality), this walk is transient. In particular, this proves that if $f$ is a compactly supported continuous function on $\mathbb{R}^d$, then the function $Gf(x) := \mathbb{E}_x \sum_{n=0}^{+\infty} f(X_n)$ is well defined for any $x \in \mathbb{R}^d \setminus \{0\}$. Guivarc'h and Le Page proved the renewal theorem in this situation: they study the possible limits of $Gf$ at $0$, and in this article we study the rate of convergence in their renewal theorem. To do so, we consider the family of operators $(P(it))_{t \in \mathbb{R}}$ defined for any continuous function $f$ on the sphere $\mathbb{S}^{d-1}$ and any $x \in \mathbb{S}^{d-1}$ by \[ P(it) f(x) = \int_{\mathrm{SL}_d(\mathbb{R})} e^{-it \ln \frac{\|gx\|}{\|x\|}} f\left(\frac{gx}{\|gx\|}\right) \mathrm{d}\rho(g) \] | arxiv:1603.07214 |
We generalize the topological model recently proposed and investigate the cosmological perturbations of the model. The model has an exact de Sitter background solution associated with Becchi-Rouet-Stora (BRS) quartet terms which are regarded as a Lagrangian density of the topological field theory. The de Sitter solution can be selected without spontaneously breaking the BRS symmetry, and be interpreted as a gauge fixing of de Sitter spacetime. The BRS symmetry is preserved for the perturbations around the de Sitter background before we solve the constraints of general relativity. We derive the action to second order in the perturbations and confirm that even after solving the constraints, we have the BRS symmetry at least for the second-order action. We construct the cosmological perturbation theory involving the BRS sector, and obtain the two-point correlation functions for the curvature perturbation and the isocurvature perturbations which compose the BRS sector. Our result gives a new description of de Sitter spacetime and of quantum field theory in de Sitter spacetime. | arxiv:1702.02806 |
Knowledge of the intensity and phase profiles of spectral components in a coherent optical field is critical for a wide range of high-precision optical applications. One of these is interferometric gravitational wave detectors, which rely on such fields for precise control of the experiment. Here we demonstrate a new device, an \textit{optical lock-in camera}, and highlight how it can be used within a gravitational wave interferometer to directly image fields at a higher spatial and temporal resolution than previously possible. This improvement is achieved using a Pockels cell as a fast optical switch, which transforms each pixel on an sCMOS array into an optical lock-in amplifier. We demonstrate that the optical lock-in camera can image fields with 2~Mpx resolution at 10~Hz with a sensitivity of $-62$~dBc when averaged over 2~s. | arxiv:1907.05224 |
, and their union is the entire real line. Alternatively, consider the real numbers with the counting measure, which assigns to each finite set of reals the number of points in the set. This measure space is not σ-finite, because every set with finite measure contains only finitely many points, and it would take uncountably many such sets to cover the entire real line. The σ-finite measure spaces have some very convenient properties; σ-finiteness can be compared in this respect to the Lindelöf property of topological spaces. They can also be thought of as a vague generalization of the idea that a measure space may have 'uncountable measure'. === Strictly localizable measures === === Semifinite measures === Let $X$ be a set, let $\mathcal{A}$ be a sigma-algebra on $X$, and let $\mu$ be a measure on $\mathcal{A}$. We say $\mu$ is semifinite to mean that for all $A \in \mu^{\text{pre}}\{+\infty\}$, $\mathcal{P}(A) \cap \mu^{\text{pre}}(\mathbb{R}_{>0}) \neq \emptyset$; that is, every set of infinite measure has a subset of finite positive measure. Semifinite measures generalize sigma-finite measures, in such a way that some big theorems of measure theory that hold for sigma-finite but not arbitrary measures can be extended with little modification to hold for semifinite measures. ==== Basic examples ==== Every sigma-finite measure is semifinite. Assume $\mathcal{A} = \mathcal{P}(X)$, let $f : X \to [0, +\infty]$, and assume $\mu(A) = \sum_{a \in A} f(a)$ | https://en.wikipedia.org/wiki/Measure_(mathematics) |
In order to find counterparts of the detected objects in the AKARI Deep Field South (ADFS) in all available wavelengths, we searched public databases (NED, SIMBAD and others). Checking 500 sources brighter than 0.0482 Jy in the AKARI WIDE-S band, we found 114 sources with possible counterparts, among which 78 were known galaxies. We present these sources as well as our first attempt to construct spectral energy distributions (SEDs) for the most secure and most interesting sources among them, taking into account all the known data together with the AKARI measurements in four bands. | arxiv:0903.3987 |
In this paper, we consider the problem of distinguishing the noisy codewords of a known binary linear block code from a random bit sequence. We propose to use the generalized likelihood ratio test (GLRT) to solve this problem. We also give a formula to find the approximate number of codewords required, and compare our results with an existing method. | arxiv:1210.8267 |
We derive the best possible bounds that can be placed on Yukawa- and chameleon-like modifications to the Newtonian gravitational potential with a cavity optomechanical quantum sensor. By modelling the effects of an oscillating source sphere on the optomechanical system from first principles, we derive the fundamental sensitivity with which these modifications can be detected in the absence of environmental noise. In particular, we take into account the large size of the optomechanical probe compared with the range of the fifth forces that we wish to probe, and quantify the resulting screening effect when both the source and probe are spherical. Our results show that optomechanical systems in high vacuum could, in principle, further constrain the parameters of chameleon-like modifications to Newtonian gravity. | arxiv:2108.00742 |
Following Fröhlich and Spencer, we study one-dimensional Ising spin systems with ferromagnetic, long-range interactions which decay as $|x-y|^{-2+\alpha}$, $0 \leq \alpha \leq 1/2$. We introduce a geometric description of the spin configurations in terms of triangles which play the role of contours and for which we establish Peierls bounds. This in particular yields a direct proof of the well-known result by Dyson about phase transitions at low temperatures. | arxiv:math-ph/0211062 |
This doctoral thesis has two objectives. The first objective is to introduce a notion of equivalence for singular foliations that preserves their transverse geometry and is compatible with the notions of Morita equivalence of the holonomy groupoids and the transverse equivalence for regular foliations that appeared in the 1980s. The second one is to describe the structures behind quotients of singular foliations and to connect these results with their associated holonomy groupoids. It also aims to give an introduction to the notion of singular foliations as given by Androulidakis and Skandalis in arXiv:math/0612370, as well as to their relation with Lie groupoids and Lie algebroids. | arxiv:2107.10502 |
It has been known for a long time that hyperons produced in hadronic collisions are polarized perpendicular to the production plane of the reaction. This effect cannot be described by using twist-2 collinear parton correlators only. Here we compute the contribution of twist-3 fragmentation functions to the production of transversely polarized hyperons in unpolarized proton-proton collisions in the framework of collinear factorization. By taking into account the relations among the relevant twist-3 fragmentation functions which follow from the QCD equation of motion and the Lorentz invariance property of the correlators, we present the leading-order cross section for this term. | arxiv:1703.09399 |
Compton-thick active galactic nuclei (CT-AGNs), characterized by significant absorption with column densities of $\mathrm{N_H} \geqslant 1.5 \times 10^{24}\ \mathrm{cm}^{-2}$, emit feeble X-ray radiation and are even undetectable by X-ray instruments, making them difficult to identify. X-ray radiation from AGNs is the predominant source of the cosmic X-ray background (CXB). Based on AGN synthesis models for the CXB, the fraction of CT-AGNs should constitute a substantial portion of the AGN population, approximately 30% or more. The fraction of CT-AGNs discovered in the Cosmological Evolution Survey (COSMOS) is significantly lower than this value. This means that many CT-AGNs may be hidden among AGNs that exhibit low photon counts or that have not been detected by X-ray instruments. This work focuses on identifying CT-AGNs hidden among AGNs with low photon counts. Firstly, we selected 440 AGNs with abundant multiwavelength data as our sample. Secondly, we analyzed the multiwavelength data, extracting the crucial physical parameters required for CT-AGN diagnosis. Finally, we used multiwavelength approaches to identify CT-AGNs. We have successfully identified 18 CT-AGNs in our sample. Among the CT-AGNs, four AGNs show discrepant results across different diagnostic methods. We discuss the potential reasons behind these diagnostic discrepancies. We explore the impact of estimating [O III] $\lambda 5007$ luminosities based on [O II] $\lambda 3727$ luminosities for CT-AGN diagnosis. We have also found that the properties of the host galaxies of CT-AGNs and non-CT-AGNs do not show significant discrepancies. | arxiv:2502.03745 |
We propose a new approach to the self-consistency equation which arises in the problem of the motion of a hole in a quantum antiferromagnet, appropriate to the case of small exchange energy $J$. The functional equation for the Green function is transformed into a differential equation; its solutions are analyzed and compared to the existing numerical calculations. This method allows one to study the limit of $J \to 0$. Application to other strongly correlated electron systems is discussed. | arxiv:cond-mat/0604197 |
A new computational method for solving the nucleon-deuteron breakup scattering problem has been applied to study elastic neutron- and proton-deuteron scattering on the basis of the configuration-space Faddeev-Noyes-Noble-Merkuriev equations. This method is based on the spline decomposition in the angular variable and on a generalization of the Numerov method for the hyperradius. The Merkuriev-Gignoux-Laverne approach has been generalized for arbitrary nucleon-nucleon potentials and an arbitrary number of partial waves. The nucleon-deuteron observables at the incident nucleon energy of 3 MeV have been calculated using the charge-independent AV14 nucleon-nucleon potential, including the Coulomb force for the proton-deuteron scattering. Results have been compared with those of other authors and with experimental proton-deuteron scattering data. | arxiv:1006.1888 |
Visual anomaly detection is a highly challenging task, often categorized as a one-class classification and segmentation problem. Recent studies have demonstrated that the student-teacher (S-T) framework effectively addresses this challenge. However, most S-T frameworks rely solely on pre-trained teacher networks to guide student networks in learning multi-scale similar features, overlooking the potential of the student networks to enhance learning through multi-scale feature fusion. In this study, we propose a novel model named PFADSeg, which integrates a pre-trained teacher network, a denoising student network with multi-scale feature fusion, and a guided anomaly segmentation network into a unified framework. By adopting a unique teacher-encoder and student-decoder denoising mode, the model improves the student network's ability to learn from teacher network features. Furthermore, an adaptive feature fusion mechanism is introduced to train a self-supervised segmentation network that synthesizes anomaly masks autonomously, significantly increasing detection performance. Evaluated on the MVTec AD dataset, PFADSeg achieves state-of-the-art results with an image-level AUC of 98.9%, a pixel-level mean precision of 76.4%, and an instance-level mean precision of 78.7%. | arxiv:2501.12104 |
We present a high-resolution VLA study of the total power and polarized radio continuum emission at 8.46 and 4.86 GHz of the irregular galaxy NGC 4449, known for its weak rotation and non-systematic gas motions. We found strong galaxy-scale regular magnetic fields, which is surprising because of the lack of ordered rotation required for dynamo action. The strength of the regular field reaches 8 $\mu$G and that of the total field 14 $\mu$G, comparable to the total magnetic field strength in radio-bright spirals. The magnetic vectors in NGC 4449 form radial ``fans'' in the central region and fragments of a spiral pattern in the galaxy's outskirts. These structures are associated with large regions of systematic Faraday rotation, implying genuine galaxy-scale magnetic fields rather than random ones compressed and stretched by gas flows. The observed pattern of polarization B-vectors is similar to dynamo-type fields in normal spirals. Nonstandard, fast dynamo concepts are required to explain the observed field strengths, though it is unknown what kind of magnetic field geometry can be produced in slowly and chaotically rotating objects. The so far neglected role of magnetic fields for the dynamics and star formation in dwarf irregulars also needs to be revised. | arxiv:astro-ph/0001205 |
A partial monoid $P$ is a set with a partial multiplication $\times$ (and total identity $1_P$) which satisfies some associativity axiom. The partial monoid $P$ may be embedded in the free monoid $P^*$ and the product $\star$ is simulated by a string rewriting system on $P^*$ that consists in evaluating the concatenation of two letters as a product in $P$, when it is defined, and a letter $1_P$ as the empty word $\epsilon$. In this paper we study the profound relations between confluence for such a system and associativity of the multiplication. Moreover, we develop a reduction strategy to ensure confluence, which allows us to define a multiplication on normal forms, associative up to a given congruence of $P^*$. Finally we show that this operation is associative if, and only if, the rewriting system under consideration is confluent. | arxiv:1002.2166 |
Recently, to deliver services directly to the network edge, fog computing, an emerging and developing technology, acts as a layer between the cloud and the IoT worlds. Cloud or fog computing nodes can be selected by IoT applications to meet their resource needs. Due to the scarce resources of the fog devices that are available, as well as the need to meet user demands for low latency and quick response times, resource allocation in the fog-cloud environment becomes a difficult problem. In this problem, load balancing between several fog devices is the most important element in achieving resource efficiency and preventing overload on fog devices. In this paper, a new adaptive resource allocation technique for load balancing in a fog-cloud environment is proposed. The proposed technique ranks each fog device using the hybrid multi-criteria decision-making approaches fuzzy analytic hierarchy process (FAHP) and fuzzy technique for order performance by similarity to ideal solution (FTOPSIS), then selects the most effective fog device based on the resulting ranking set. The simulation results show that the proposed technique outperforms existing techniques in terms of load balancing, response time, resource utilization, and energy consumption. The proposed technique decreases the number of fog nodes by 11% and load balancing variance by 69%, and increases resource utilization to 90%, which is comparatively higher than the comparable methods. | arxiv:2402.01326 |
Visual-language models (VLMs) have shown remarkable performance across various tasks, particularly in recognizing geographic information from images. However, significant challenges remain, including biases and privacy concerns. To systematically address these issues in the context of geographic information recognition, we introduce a benchmark dataset consisting of 1,200 images paired with detailed geographic metadata. Evaluating four VLMs, we find that while these models demonstrate the ability to recognize geographic information from images, achieving up to $53.8\%$ accuracy in city prediction, they exhibit significant regional biases. Specifically, performance is substantially higher for economically developed and densely populated regions compared to less developed ($-12.5\%$) and sparsely populated ($-17.0\%$) areas. Moreover, the models exhibit regional biases, frequently overpredicting certain locations; for instance, they consistently predict Sydney for images taken in Australia. The strong performance of VLMs also raises privacy concerns, particularly for users who share images online without the intent of being identified. Our code and dataset are publicly available at https://github.com/uscnlp-lime/fairlocator. | arxiv:2502.11163 |
We revisit the work of K. Goeke, M. Harvey, F. Grümmer, and J. N. Urbano (Phys. Rev. {\bf D37}, 754 (1988)), who considered a chiral model for the nucleon based on the linear sigma model with scalar-isoscalar and scalar-isovector mesons coupled to quarks, solved using the coherent-pair approximation. In this way the quantum pion field can be treated in a non-perturbative fashion. In this work we review this model and the coherent-pair approximation, correcting several errors in the earlier work. We minimize the expectation value of the chiral Hamiltonian in the ansatz coherent-pair ground state configuration and solve the resulting equations for nucleon quantum numbers. We calculate the canonical set of nucleon observables and compare with the hedgehog model and experiment. Using the corrected equations yields slightly different values for nucleon observables but does not correct the large virial deviation in the $\pi$-nucleon coupling. Our results therefore do not significantly alter the conclusions of Goeke et al. | arxiv:hep-ph/9809473 |
Granular aluminum is a promising material for high-kinetic-inductance devices such as qubit circuits. It has the advantage, over atomically disordered materials such as NbN$_x$, of maintaining a high kinetic inductance concomitantly with a high quality factor. We show that high-quality nano-scale granular aluminum films having a sharp superconducting transition, with normal-state resistivity values of the order of $1 \times 10^5~\mu\Omega\,$cm and kinetic inductance values of the order of 10 nH/sq, can be obtained, surpassing state-of-the-art values. We argue that this is a result of the different nature of the metal-to-insulator transition, being electronic-correlations driven (Mott type) in the former and disorder driven (Anderson type) in the latter. | arxiv:2008.02860 |
" bad " data has a direct impact on 88 % of companies, with the average company losing 12 % of its revenue due to it. duplicates - multiple but different representations of the same real - world entities - are among the main reasons for poor data quality, so finding and configuring the right deduplication solution is essential. existing data matching benchmarks focus on the quality of matching results and neglect other important factors, such as business requirements. additionally, they often do not support the exploration of data matching results. to address this gap between the mere counting of record pairs vs. a comprehensive means to evaluate data matching solutions, we present the frost platform. it combines existing benchmarks, established quality metrics, cost and effort metrics, and exploration techniques, making it the first platform to allow systematic exploration to understand matching results. frost is implemented and published in the open - source application snowman, which includes the visual exploration of matching results. | arxiv:2107.10590 |
Synchrotron-based photoemission electron microscopy with energy filter combines real-space imaging with microprobe diffraction ($\mu$-ARPES), giving access to the local electronic structure of laterally inhomogeneous materials. We present here an overview of the capabilities of this technique, illustrating selected applications of angle-resolved photoemission electron microscopy and related microprobe methods. In addition, we report the demonstration of a dark-field XPEEM (DF-XPEEM) imaging method for real-space mapping of the electronic structure away from $\Gamma$ at a lateral resolution of a few tens of nm. The application of DF-XPEEM to the (1$\times$12)-O/W(110) model oxide structure shows the high sensitivity of this technique to the local electronic structure, allowing one to image domains with inequivalent adsorption-site symmetry. Perspectives of angle-resolved PEEM are discussed. | arxiv:1212.6330 |
Assume that the Aubry set of the time-periodic positive definite Lagrangian $L$ consists of one hyperbolic 1-periodic orbit. We provide an upper bound estimate of the rate of convergence of the family of new Lax-Oleinik type operators associated with $L$ introduced by the authors in \cite{w-y}. In addition, we construct an example where the Aubry set of a time-independent positive definite Lagrangian system consists of one hyperbolic periodic orbit and the rate of convergence of the Lax-Oleinik semigroup cannot be better than $O(\frac{1}{t})$. | arxiv:1109.3327 |
in this paper, a new five-point targeted essentially non-oscillatory (teno) scheme with adaptive dissipation is proposed. with the standard teno weighting strategy, the cut-off parameter $c_t$ determines the nonlinear numerical dissipation of the resultant teno scheme. moreover, in the dissipation-adaptive teno5-a scheme, the choice of the cut-off parameter $c_t$ depends strongly on an effective scale sensor. however, the scale sensor in teno5-a can only roughly detect discontinuity locations rather than evaluating the local flow wavenumber as desired. in this work, a new five-point scale sensor, which can estimate the local flow wavenumber accurately, is proposed to further improve the performance of teno5-a. in combination with a hyperbolic tangent function, the new scale sensor is deployed in the teno5-a framework to adapt the cut-off parameter $c_t$, i.e. the local nonlinear dissipation, according to the local flow wavenumber. overall, sufficient numerical dissipation is generated to capture discontinuities, whereas a minimum amount of dissipation is delivered to better resolve smooth flows. a set of benchmark cases is simulated to demonstrate the performance of the new teno5-a scheme. | arxiv:2303.10020 |
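a minimal sketch of the wavenumber-based cut-off adaptation described above, assuming a hypothetical smooth blend; the function name, the constants, the reference wavenumber `k_ref`, and the tanh form are illustrative assumptions, not the actual formulation of the teno5-a scheme:

```python
import math

def adaptive_cutoff(local_wavenumber, ct_min=1e-7, ct_max=1e-5, k_ref=1.0):
    """Blend the cut-off parameter c_t between a low- and a high-dissipation
    value using a hyperbolic tangent of the estimated local flow wavenumber.
    All names and constants here are illustrative assumptions."""
    # s -> 0 for well-resolved smooth flow (small wavenumber),
    # s -> 1 near under-resolved features / discontinuities (large wavenumber).
    s = 0.5 * (1.0 + math.tanh(4.0 * (local_wavenumber / k_ref - 1.0)))
    return ct_min + s * (ct_max - ct_min)
```

smooth regions thus receive a small $c_t$ (little nonlinear dissipation), while high-wavenumber regions receive a large $c_t$ (robust shock capturing), mirroring the adaptation strategy the abstract describes.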
non-hermitian phenomena, such as exceptional points, non-hermitian skin effects, and topologically nontrivial phases, have attracted continued attention. in this work, we reveal how interactions and nonreciprocal hopping can collectively influence the behavior of two interacting bosons on quasiperiodic lattices. focusing on the bose-hubbard model with aubry-andr\'e-harper quasiperiodic modulations and hopping asymmetry, we discover that interactions can broaden the localization transition point of the noninteracting system into an intermediate mobility-edge phase, in which localized doublons formed by bosonic pairs can coexist with delocalized states. under open boundary conditions, the bosonic doublons can further show non-hermitian skin effects, realizing doublon condensation at the edges, and the direction of their skin localization can be flexibly tuned by the hopping parameters. a framework is developed to characterize the spectral, localization, and topological transitions accompanying these phenomena. our work advances the understanding of localization and topological phases in non-hermitian systems, particularly in relation to multiparticle interactions. | arxiv:2412.11623 |
today's peer review process for scientific articles is unnecessarily opaque and offers few incentives to referees. likewise, the publishing process is unnecessarily inefficient, and its results are only rarely made freely available to the public. here we outline a comparatively simple extension of arxiv.org, an online preprint archive widely used in the mathematical and physical sciences, that addresses both of these problems. under the proposal, editors invite referees to write public and signed reviews to be attached to the posted preprints, and then elevate selected articles to "published" status. | arxiv:1011.6590 |
the probabilistic interpretation of the standard regge-gribov model with triple-pomeron interactions is discussed. we argue that the introduction of probabilities within this model is not unique and depends on what is meant by the relevant substructures. the traditional interpretation in terms of partons (quarks and gluons) is shown to be external to the model, imported from qcd, and actually referring to the single-pomeron exchange without interactions, so this interpretation effectively ignores the model as such. alternative probabilities based on the pomerons as basic quantities within the model are discussed. two different approaches are considered, based either on the pomerons in fock's expansion of the wave function or on pomeron propagators in feynman diagrams. these pomeron probabilities and the entropy turn out to be very different from the standard ones in the purely probabilistic treatment. the entropy, in particular, either rises with the rapidity and saturates at a certain fixed value, or first rises, reaches some maximum, and goes down to zero afterwards. possible observable manifestations of these probabilities and entropy are to be seen in the distributions of the cross-section in powers of the coupling constants to the participants. | arxiv:2409.01620 |
this paper introduces generalized attention flow (gaf), a novel feature attribution method for transformer-based models, to address the limitations of current approaches. by extending attention flow and replacing attention weights with the generalized information tensor, gaf integrates attention weights, their gradients, the maximum flow problem, and the barrier method to enhance the performance of feature attributions. the proposed method exhibits key theoretical properties and mitigates the shortcomings of prior techniques that rely solely on simple aggregation of attention weights. our comprehensive benchmarking on sequence classification tasks demonstrates that a specific variant of gaf consistently outperforms state-of-the-art feature attribution methods in most evaluation settings, providing a more reliable interpretation of transformer model outputs. | arxiv:2502.15765 |
non-linear dimensionality reduction can be performed by \textit{manifold learning} approaches, such as stochastic neighbour embedding (sne), locally linear embedding (lle) and isometric feature mapping (isomap). these methods aim to produce two- or three-dimensional latent embeddings, primarily to visualise the data in intelligible representations. this manuscript proposes extensions of student's t-distributed sne (t-sne), lle and isomap for the dimensionality reduction and visualisation of multi-view data. multi-view data refers to multiple types of data generated from the same samples. the proposed multi-view approaches provide more comprehensible projections of the samples than those obtained by visualising each data-view separately. commonly, visualisation is used to identify underlying patterns within the samples. by incorporating the low-dimensional embeddings obtained from the multi-view manifold approaches into the k-means clustering algorithm, it is shown that clusters of the samples are accurately identified. through the analysis of real and synthetic data, the proposed multi-sne approach is found to have the best performance. we further illustrate the applicability of the multi-sne approach for the analysis of multi-omics single-cell data, where the aim is to visualise and identify cell heterogeneity and cell types in biological tissues relevant to health and disease. | arxiv:2101.06763 |
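the embed-then-cluster pipeline in the row above can be illustrated in miniature; the toy per-view coordinates and the tiny k-means below are assumptions for illustration, not the multi-sne method itself:

```python
def kmeans(points, k=2, iters=20):
    # Tiny deterministic k-means: seed the centroids with the first k points.
    centroids = [list(p) for p in points[:k]]
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared euclidean distance.
        for i, p in enumerate(points):
            labels[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2
                                              for a, b in zip(p, centroids[c])))
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [p for i, p in enumerate(points) if labels[i] == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels

# Two toy "views" of the same six samples (illustrative 1-d embeddings).
view_a = [0.1, 0.2, 0.15, 5.0, 5.1, 4.9]
view_b = [1.0, 1.1, 0.9, -3.0, -3.2, -2.8]
# Multi-view idea in miniature: one joint low-dimensional point per sample.
joint = list(zip(view_a, view_b))
labels = kmeans(joint)  # samples 0-2 and 3-5 end up in different clusters
```

feeding the joint embedding, rather than either view alone, to k-means is the step the abstract credits with accurate cluster recovery.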
the toronto red-sequence cluster survey (trcs) is a new galaxy cluster survey designed to provide a large sample of optically selected 0.1 < z < 1.4 clusters. the planned survey data comprise 100 square degrees of two-color (r and z') imaging, with a 5-sigma depth ~2 mag past m* at z = 1. the primary scientific drivers of the survey are a derivation of omega_m and sigma_8 (from n(m, z) for clusters) and a study of cluster galaxy evolution with a complete sample. this paper gives a brief outline of the trcs survey parameters and sketches the methods by which we intend to pursue the main scientific goals, including an explicit calculation of the expected survey completeness limits. some preliminary results from the first set of data (~6 deg^2) are also given. these preliminary results provide new examples of rich z ~ 1 clusters, strong cluster lensing, and a possible filament at z ~ 1. | arxiv:astro-ph/0002340 |
lasers have unique advantages, such as abundant spectrum resources and low propagation divergence, in wireless charging and wireless communications compared with radio frequency. resonant beams, a kind of intra-cavity laser beam, have been proposed as a carrier for wireless charging and communication, as they have unique features including high power, intrinsic safety, and self-aligned mobility. however, this system has problems such as intra-cavity echo interference and power fluctuation. to study the time-domain behavior of the resonant beam system, we create a simulation algorithm by discretizing the laser rate equations, which model the dynamics of the excited-atom density in the gain medium and the photon density in the cavity. the simulation results are in good agreement with theoretical calculations. we also propose a delay-divide demodulation method to address the echo interference issue, and use the simulation algorithm to verify its feasibility. the results show that the resonant beam charging and communication system with the proposed demodulator is feasible and performs well. the analysis in this work also helps researchers to understand the behavior of the resonant beam system in depth. | arxiv:2203.01076 |
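the discretized-rate-equation idea can be sketched with a simplified two-equation single-mode model and a forward-euler step; the rate constants, pump level, and spontaneous-emission term below are illustrative assumptions, not parameters from the paper:

```python
def simulate(steps=100000, dt=1e-7):
    """Forward-Euler integration of simplified single-mode laser rate equations:
        dN/dt   = R_p - N/tau - B*N*phi              (excited-atom density)
        dphi/dt = B*N*phi - phi/tau_c + beta*N/tau   (photon density)
    All constants are illustrative; this pump level keeps the cavity below
    threshold, so the dynamics relax smoothly to a steady state."""
    R_p = 1e24    # pump rate (atoms per unit volume per second)
    tau = 1e-3    # upper-level lifetime
    tau_c = 1e-6  # cavity photon lifetime
    B = 1e-16     # stimulated-emission coefficient
    beta = 1e-4   # spontaneous-emission fraction feeding the lasing mode
    N, phi = 0.0, 0.0
    for _ in range(steps):
        dN = R_p - N / tau - B * N * phi
        dphi = B * N * phi - phi / tau_c + beta * N / tau
        N += dt * dN
        phi += dt * dphi
    return N, phi
```

below threshold, N settles near the analytic steady state R_p * tau = 1e21; the same time-stepping pattern extends to the echo-interference study once cavity round-trip delays are added to the photon equation.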
we improve knabe's spectral gap bound for frustration-free translation-invariant local hamiltonians in 1d. the bound is based on a relationship between global and local gaps. the global gap is the spectral gap of a size-$m$ chain with periodic boundary conditions, while the local gap is that of a subchain of size $n < m$ with open boundary conditions. knabe proved that if the local gap is larger than the threshold value $1/(n-1)$ for some $n > 2$, then the global gap is lower bounded by a positive constant in the thermodynamic limit $m \rightarrow \infty$. here we improve the threshold to $\frac{6}{n(n+1)}$, which is better (smaller) for all $n > 3$ and which is asymptotically optimal. as a corollary we establish a surprising fact about 1d translation-invariant frustration-free systems that are gapless in the thermodynamic limit: for any such system the spectral gap of a size-$n$ chain with open boundary conditions is upper bounded as $O(n^{-2})$. this contrasts with gapless frustrated systems, where the gap can be $\Theta(n^{-1})$. it also limits the extent to which the area law is violated in these frustration-free systems, since it implies that the half-chain entanglement entropy is $O(1/\sqrt{\epsilon})$ as a function of the spectral gap $\epsilon$. we extend our results to frustration-free systems on a 2d square lattice. | arxiv:1512.00088 |
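the claimed improvement is easy to check numerically; the sketch below restates the two thresholds from the abstract, with the algebraic identity $6(n-1) < n(n+1) \Leftrightarrow (n-2)(n-3) > 0$ added for clarity:

```python
def knabe_threshold(n):
    # Original local-gap threshold from Knabe's bound, valid for n > 2.
    return 1.0 / (n - 1)

def improved_threshold(n):
    # Improved threshold from arxiv:1512.00088.
    return 6.0 / (n * (n + 1))

# 6/(n(n+1)) < 1/(n-1)  <=>  6(n-1) < n(n+1)  <=>  (n-2)(n-3) > 0,
# so the improved threshold is strictly smaller for every n > 3,
# and the two thresholds coincide at n = 3.
```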
we introduce vasa, a framework for generating lifelike talking faces with appealing visual affective skills (vas) given a single static image and a speech audio clip. our premiere model, vasa-1, is capable of not only generating lip movements that are exquisitely synchronized with the audio, but also producing a large spectrum of facial nuances and natural head motions that contribute to the perception of authenticity and liveliness. the core innovations include a holistic facial dynamics and head movement generation model that works in a face latent space, and the development of such an expressive and disentangled face latent space using videos. through extensive experiments, including evaluation on a set of new metrics, we show that our method comprehensively outperforms previous methods along various dimensions. our method not only delivers high video quality with realistic facial and head dynamics but also supports the online generation of 512x512 videos at up to 40 fps with negligible starting latency. it paves the way for real-time engagement with lifelike avatars that emulate human conversational behaviors. | arxiv:2404.10667 |