In this paper, we develop a basic theory of Orlicz affine and geominimal surface areas for convex and $s$-concave functions. We prove some basic properties for these newly introduced functional affine invariants and establish related functional affine isoperimetric inequalities as well as functional Santaló type inequalities.
arxiv:1506.02974
We present an empirical study on embedding the lyrics of a song into a fixed-dimensional feature for the purpose of music tagging. Five methods of computing token-level and four methods of computing document-level representations are trained on an industrial-scale dataset of tens of millions of songs. We compare simple averaging of pretrained embeddings to modern recurrent and attention-based neural architectures. Evaluating on a wide range of tagging tasks such as genre classification, explicit content identification and era detection, we find that averaging word embeddings outperforms more complex architectures on many downstream metrics.
arxiv:2112.11436
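The winning baseline from the abstract above can be sketched in a few lines: average the pretrained embeddings of a song's tokens to obtain a fixed-dimensional document vector. The tiny embedding table and 4-dimensional vectors below are purely illustrative assumptions, not the paper's industrial-scale setup.

```python
import numpy as np

# Hypothetical 4-dimensional word embeddings; a real system would load
# pretrained vectors trained on a large corpus.
EMBEDDINGS = {
    "love":  np.array([0.9, 0.1, 0.0, 0.2]),
    "night": np.array([0.2, 0.8, 0.1, 0.0]),
    "dance": np.array([0.1, 0.7, 0.6, 0.1]),
}

def embed_lyrics(tokens):
    """Document-level representation: mean of the known token embeddings."""
    vecs = [EMBEDDINGS[t] for t in tokens if t in EMBEDDINGS]
    if not vecs:  # no known tokens: fall back to the zero vector
        return np.zeros(4)
    return np.mean(vecs, axis=0)

song = embed_lyrics(["love", "night", "unknown"])
```

Despite its simplicity, a mean of token vectors like this is the kind of representation the study found competitive with recurrent and attention-based encoders on many tagging tasks.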
Tidal downsizing is the modern version of the Kuiper (1951) scenario of planet formation. Detailed simulations of self-gravitating discs, gas fragments, dust grain dynamics, and planet evolutionary calculations are summarised here and used to build a predictive planet formation model and population synthesis. A new interpretation of exoplanetary and debris disc data, the Solar System's origins, and the links between planets and brown dwarfs is offered. This interpretation is contrasted with current observations and the predictions of the core accretion theory. Observations that can distinguish the two scenarios are pointed out. In particular, tidal downsizing predicts that the presence of debris discs, sub-Neptune mass planets, planets more massive than $\sim 5$ Jupiter masses, and brown dwarfs should not correlate strongly with the metallicity of the host. For gas giants of $\sim$ Saturn to a few Jupiter masses, a strong host star metallicity correlation is predicted only inwards of a few AU from the host. The composition of massive cores is predicted to be dominated by rock rather than ices. Debris discs made by tidal downsizing are distinct from those made by core accretion at birth: they have an innermost edge always larger than about 1 AU, have smaller total masses, and are usually in a dynamically excited state. It is argued that planet formation in surprisingly young or very dynamic systems such as HL Tau and Kepler-444 may be a signature of tidal downsizing. Open questions and potential weaknesses of the hypothesis are pointed out.
arxiv:1609.07503
Proton radiotherapy promises accurate dose delivery to a tumor and minimal dose deposition to all other tissues. However, in practice the planned dose distribution may not conform to the actual one due to noisy data and different types of errors. One such error comes in the form of a potentially inaccurate conversion of Hounsfield units (HU) to stopping powers (SP) of protons. We propose a method of improving this calibration curve (CC) based on a planning CT and proton range measurements acquired during treatment. The range data were simulated using a virtual CC and a planning CT, and were given two types of noise: range shift due to patient setup errors, and range noise due to measurement imprecision, including a misalignment of the range measuring device. The method consists of two parts. The first part involves a Taylor expansion of the water equivalent path length (WEPL) map in terms of the range shift caused by the difference between the planning and the virtual CC. The range shift is then solved for explicitly, leading to a polynomial function of the difference between the two CCs. The second part consists in minimizing a score function relating the range due to the virtual CC and the range due to the optimized CC. Tested on ten different CCs, our results show that, with range data collected over a few fractions (less than 10), the optimized CC leads to an overall reduction of the range difference. More precisely, on average, the uncertainty of the CC was reduced from 2.67% to 1.62%, while the WEPL bias was reduced from 2.14% to 0.74%. The advantages of our method over others are (1) its speed and (2) the fact that the range data it requires are acquired during the treatment itself, so it does not burden the patient with additional dose.
arxiv:1809.01858
This paper reports the machine translation (MT) systems submitted by the IIITT team for the English→Marathi and English→Irish language pairs of the LoResMT 2021 shared task. The task focuses on getting exceptional translations for rather low-resourced languages like Irish and Marathi. We fine-tune IndicTrans, a pretrained multilingual NMT model, for English→Marathi, using an external parallel corpus as input for additional training. We use a pretrained Helsinki-NLP Opus MT English→Irish model for the latter language pair. Our approaches yield relatively promising results on the BLEU metrics. Under the team name IIITT, our systems ranked 1, 1, and 2 in English→Marathi, Irish→English, and English→Irish, respectively.
arxiv:2108.08556
A famous problem in discrete geometry is to find all monohedral plane tilers, which is still open to the best of our knowledge. This paper concerns one of its variants: determining all convex polyhedra whose every cross-section tiles the plane. We call such polyhedra universal tilers. We show that a convex polyhedron is a universal tiler only if it is a tetrahedron or a pentahedron.
arxiv:1109.0813
We present the results of mesoscopic dissipative particle dynamics (DPD) simulations of coupled electrohydrodynamic phenomena on the micro- and nanoscale. The effects of electroosmotic flow and slippage combined with polyelectrolyte electrophoresis are investigated in detail, taking full account of hydrodynamic and electrostatic interactions. Our numerical results are in excellent agreement with analytical calculations.
arxiv:1007.3585
Gravitational-wave detections are now starting to probe the mass distribution of stellar-mass black holes (BHs). Robust predictions from stellar models are needed to interpret these. Theory predicts the existence of a gap in the BH mass distribution because of pair-instability supernovae. The maximum BH mass below the gap is the result of pulsational mass loss. We evolve massive helium stars through their late hydrodynamical phases of evolution using the open-source MESA stellar evolution code. We find that the location of the lower edge of the mass gap at 45 $M_\odot$ is remarkably robust against variations in the metallicity ($\approx 3 M_\odot$), the treatment of internal mixing ($\approx 1 M_\odot$), and stellar wind mass loss ($\approx 4 M_\odot$), making it one of the most robust predictions for the final stages of massive star evolution. The reason is that the onset of the instability is dictated by the near-final core mass, which in turn sets the resulting BH mass. However, varying the $^{12}\mathrm{C}(\alpha,\gamma)^{16}\mathrm{O}$ reaction rate within its $1\sigma$ uncertainties shifts the location of the gap between $40 M_\odot$ and $56 M_\odot$. We provide updated analytic fits for population synthesis simulations. Our results imply that the detection of merging BHs can provide constraints on nuclear astrophysics. Furthermore, the robustness against metallicity suggests that there is a universal maximum for the location of the lower edge of the gap, which is insensitive to the formation environment and redshift for first-generation BHs. This is promising for the possibility of using the location of the gap as a "standard siren" across the Universe.
arxiv:1910.12874
We experimentally demonstrate a simple and robust optical-fiber-based method to simultaneously achieve efficient excitation of, and fluorescence collection from, micro-crystalline diamond containing nitrogen-vacancy (NV) defects. We fabricate a suitable micro-concave (MC) mirror that focuses scattered excitation laser light onto the diamond located at the focal point of the mirror. At the same time, the mirror couples the fluorescence exiting the diamond crystal away from the optical fiber back into the fiber within its light acceptance cone; this part of the fluorescence would otherwise never reach the detector. Our proof-of-principle demonstration achieves a 25-fold improvement in fluorescence collection compared to the case of not using any mirrors. The increase in light collection favors high signal-to-noise ratio (SNR) optically detected magnetic resonance (ODMR) signals and hence offers a practical advantage in fiber-based NV quantum sensors. Additionally, we compacted the NV sensor system by replacing some bulky optical elements in the optical path with a 1x2 fiber optical coupler. This reduces the complexity of the system and provides the portability and robustness needed for applications like magnetic endoscopy and remote magnetic sensing.
arxiv:1804.04631
Babylonian mathematics (also known as Assyro-Babylonian mathematics) is the mathematics developed or practiced by the people of Mesopotamia, as attested by sources mainly surviving from the Old Babylonian period (1830–1531 BC) to the Seleucid from the last three or four centuries BC. With respect to content, there is scarcely any difference between the two groups of texts. Babylonian mathematics remained constant, in character and content, for over a millennium. In contrast to the scarcity of sources in Egyptian mathematics, knowledge of Babylonian mathematics is derived from hundreds of clay tablets unearthed since the 1850s. Written in cuneiform, tablets were inscribed while the clay was moist, and baked hard in an oven or by the heat of the sun. The majority of recovered clay tablets date from 1800 to 1600 BC, and cover topics that include fractions, algebra, quadratic and cubic equations and the Pythagorean theorem. The Babylonian tablet YBC 7289 gives an approximation of √2 accurate to three significant sexagesimal digits (about six significant decimal digits).

== Origins of Babylonian mathematics ==

Babylonian mathematics is a range of numeric and more advanced mathematical practices in the ancient Near East, written in cuneiform script. Study has historically focused on the First Babylonian dynasty (Old Babylonian period) in the early second millennium BC due to the wealth of data available. There has been debate over the earliest appearance of Babylonian mathematics, with historians suggesting a range of dates between the 5th and 3rd millennia BC. Babylonian mathematics was primarily written on clay tablets in cuneiform script in the Akkadian or Sumerian languages. "Babylonian mathematics" is perhaps an unhelpful term since the earliest suggested origins date to the use of accounting devices, such as bullae and tokens, in the 5th millennium BC.
== Babylonian numerals ==

The Babylonian system of mathematics was a sexagesimal (base 60) numeral system. From this we derive the modern-day usage of 60 seconds in a minute, 60 minutes in an hour, and 360 degrees in a circle. The Babylonians were able to make great advances in mathematics for two reasons. Firstly, the number 60 is a superior highly composite number, having factors of 1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60 (including those that are themselves composite), facilitating calculations with fractions. Additionally, unlike the Egyptians and Romans, the Babylonians had
https://en.wikipedia.org/wiki/Babylonian_mathematics
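The sexagesimal arithmetic described above can be illustrated with a short conversion routine. The fractional digits 1;24,51,10 are the approximation of √2 inscribed on the YBC 7289 tablet mentioned in the article.

```python
def sexagesimal_to_float(whole, fractional_digits):
    """Evaluate a base-60 number: whole + sum of d_i / 60**i."""
    value = float(whole)
    for i, d in enumerate(fractional_digits, start=1):
        value += d / 60 ** i
    return value

# YBC 7289's approximation of the square root of 2: 1;24,51,10.
approx = sexagesimal_to_float(1, [24, 51, 10])
```

Evaluating the three sexagesimal fraction digits reproduces √2 to about six decimal digits, as the article states.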
In this study we explore the cosmological behavior of a non-minimally coupled scalar field linked to torsion gravity. We employ the Sorkin-Schutz formalism with a general power-law teleparallel torsion coupling. The autonomous dynamical system is formulated, and the phase space diagrams are analysed at each critical point. The critical points representing the different eras of the Universe's evolution, from radiation through dark matter (DM) to dark energy (DE), are investigated. The scaling attractors with a viable range of model parameters are obtained using exponential scalar field couplings. This modified version of the formalism describes some novel scaling solutions.
arxiv:2504.04406
The Segment Anything Model (SAM) gained significant success in natural image segmentation, and many methods have tried to fine-tune it for medical image segmentation. An efficient way to do so is by using adapters, specialized modules that learn just a few parameters to tailor SAM specifically to medical images. However, unlike natural images, many tissues and lesions in medical images have blurry boundaries and may be ambiguous. Previous efforts to adapt SAM ignore this challenge and can only predict a single distinct segmentation, which may mislead clinicians or cause misdiagnosis, especially when encountering rare variants or situations with low model confidence. In this work, we propose a novel module called the uncertainty-aware adapter, which efficiently fine-tunes SAM for uncertainty-aware medical image segmentation. Utilizing a conditional variational autoencoder, we encode stochastic samples to effectively represent the inherent uncertainty in medical imaging. We design a new module on top of a standard adapter that uses a condition-based strategy to interact with the samples, helping SAM integrate uncertainty. We evaluated our method on two multi-annotated datasets with different modalities: LIDC-IDRI (lung abnormality segmentation) and REFUGE2 (optic-cup segmentation). The experimental results show that the proposed model outperforms all previous methods and achieves a new state of the art (SOTA) on both benchmarks. We also demonstrate that our method can generate diverse segmentation hypotheses that are more realistic as well as heterogeneous.
arxiv:2403.10931
Full-graph training of graph neural networks (GNNs) has emerged as a promising training method for its effectiveness, but it requires extensive memory and computation resources. To accelerate this training process, researchers have proposed employing multi-GPU processing. However, the scalability of existing frameworks is limited as they necessitate maintaining the training data for every layer in GPU memory. To efficiently train on large graphs, we present HongTu, a scalable full-graph GNN training system running on GPU-accelerated platforms. HongTu stores vertex data in CPU memory and offloads training to GPUs. HongTu employs a memory-efficient full-graph training framework that reduces runtime memory consumption by using partition-based training and recomputation-caching-hybrid intermediate data management. To address the increased host-GPU communication caused by duplicated neighbor access among partitions, HongTu employs a deduplicated communication framework that converts redundant host-GPU communication into efficient inter/intra-GPU data access. Further, HongTu uses a cost-model-guided graph reorganization method to minimize communication overhead. Experimental results on a 4xA100 GPU server show that HongTu effectively supports billion-scale full-graph GNN training while reducing host-GPU data communication by 25%-71%. Compared to the full-graph GNN system DistGNN running on 16 CPU nodes, HongTu achieves speedups ranging from 7.8x to 20.2x. For small graphs where the training data fits into the GPUs, HongTu achieves performance comparable to existing GPU-based GNN systems.
arxiv:2311.14898
In this paper we use our recently generalized black hole entropy formula to propose a quantum version of the Friedmann equations. In particular, starting from the differential version of the first law of thermodynamics, we are able to find Planckian (non-commutative) corrections to the flat Friedmann equations. The equations so modified are formally similar to the ones present in Gauss-Bonnet gravity, but in the ordinary 3+1 dimensions. As a consequence of these corrections, by considering negative fluctuations in the internal energy that are allowed by quantum field theory, our equations imply a maximum value both for the energy density $\rho$ and for the Hubble flow $H$, i.e. the Big Bang is ruled out. Conversely, by considering positive quantum fluctuations, we find no maximum for $\rho$ and $H$. Nevertheless, by starting with an early-time energy density $\rho \sim 1/t^2$, we obtain a scale factor $a(t) \sim e^{\sqrt{t}}$, implying a finite Planckian universe at $t = 0$, i.e. the point-like Big Bang singularity is replaced by a universe of Planckian size at $t = 0$. Finally, we find possible higher-order Planckian terms for our equations, together with the related corrections to our generalized Bekenstein-Hawking entropy.
arxiv:1511.06511
Let K be a number field, n_K its degree, and d_K the absolute value of its discriminant. We prove that, if d_K is sufficiently large, then the Dedekind zeta function associated to K has no zeros in the region Re(s) > 1 - 1/(12.55 log d_K + 9.69 n_K log |Im s| + 3.03 n_K + 58.63) with |Im s| > 1. Moreover, it has at most one zero in the region Re(s) > 1 - 1/(12.74 log d_K) with |Im s| < 1; this zero, if it exists, is simple and real. This argument also improves a result of Stark by a factor of 2: there is at most one zero in the region Re(s) > 1 - 1/(2 log d_K) with |Im s| < 1/(2 log d_K).
arxiv:1106.1868
In this work, we propose a novel method to incorporate corpus-level discourse information into language modelling, which we call a larger-context language model. We introduce a late fusion approach to a recurrent language model based on long short-term memory units (LSTM), which helps the LSTM unit keep intra-sentence dependencies and inter-sentence dependencies separate from each other. Through evaluation on three corpora (IMDB, BBC, and Penn TreeBank), we demonstrate that the proposed model improves perplexity significantly. In the experiments, we evaluate the proposed approach while varying the number of context sentences and observe that the proposed late fusion is superior to the usual way of incorporating additional inputs into the LSTM. By analyzing the trained larger-context language model, we discover that content words, including nouns, adjectives and verbs, benefit most from an increasing number of context sentences. This analysis suggests that the larger-context language model improves over the unconditional language model by capturing the theme of a document better and more easily.
arxiv:1511.03729
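The late fusion idea above, keeping the document-level context in a separate pathway that is gated into the hidden state only near the output layer, can be sketched roughly as follows. The gating form, the dimensions, and the random weights are illustrative assumptions, not the paper's exact equations.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def late_fusion(h_t, context, W_g, W_c):
    """Late fusion sketch: a gate computed from the hidden state and the
    inter-sentence context decides how much transformed context to add to
    the intra-sentence hidden state before the output layer."""
    gate = sigmoid(W_g @ np.concatenate([h_t, context]))
    return h_t + gate * (W_c @ context)

rng = np.random.default_rng(0)
h = rng.standard_normal(4)          # LSTM hidden state (intra-sentence)
c = rng.standard_normal(4)          # context vector (inter-sentence)
fused = late_fusion(h, c, rng.standard_normal((4, 8)), rng.standard_normal((4, 4)))
```

The point of the design is that the context never enters the LSTM recurrence itself, so the two kinds of dependency remain in separate pathways.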
We study numerically the thermal emission of $e^+e^-$ pairs from a bare strange star heated by energy input onto its surface; heating starts at some moment and is steady afterwards. The thermal luminosity in $e^+e^-$ pairs increases to some constant value. The rise time and the steady thermal luminosity are evaluated. Both normal and colour superconducting states of strange quark matter are considered. The results are used to test the magnetar model of soft gamma-ray repeaters, where the bursting activity is explained by fast decay of superstrong magnetic fields and heating of the strange star surface. It is shown that the rise times observed in typical bursts may be explained in this model only if strange quark matter is a superconductor with an energy gap of more than 1 MeV.
arxiv:astro-ph/0107020
NFTs (non-fungible tokens) have drastically increased in size, accounting for over $16.9B of total market capitalization. Despite the rapid growth of NFTs, this market has not been examined thoroughly from a financial perspective. In this paper, we conduct methodical analyses to identify NFT market movers who play a significant role in potentially manipulating and oscillating NFT values. We collect over 3.8M NFT transaction records from the Ethereum blockchain from January 2021 to February 2022 to extract trading information in line with the NFT lifecycle: (i) mint, (ii) transfer/sale, and (iii) burn. Based on the size of held NFT values, we classify NFT traders into three groups (whales, dolphins, and minnows). In total, we analyze 430K traders from 91 different NFT collection sources. We find that the top 0.1% of NFT traders (i.e., whales) drive the NFT market with consistent, high returns. We then identify and characterize the NFT whales' unique investment strategies (e.g., mint/sale patterns, wash trading) to empirically understand the whales in the NFT market for the first time.
arxiv:2303.09393
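The whale/dolphin/minnow grouping described above can be sketched as a simple rank-based partition by held value. The 0.1% whale fraction follows the abstract; the dolphin fraction and the toy holdings are illustrative assumptions, not the paper's exact cut-offs.

```python
def classify_traders(holdings, whale_frac=0.001, dolphin_frac=0.1):
    """Partition traders by held NFT value: the top whale_frac are whales,
    the next dolphin_frac are dolphins, and the rest are minnows."""
    ranked = sorted(holdings, key=holdings.get, reverse=True)
    n = len(ranked)
    n_whales = max(1, int(n * whale_frac))
    n_dolphins = max(1, int(n * dolphin_frac))
    groups = {}
    for i, trader in enumerate(ranked):
        if i < n_whales:
            groups[trader] = "whale"
        elif i < n_whales + n_dolphins:
            groups[trader] = "dolphin"
        else:
            groups[trader] = "minnow"
    return groups

# Toy data: 100 traders whose held value equals their index.
holdings = {f"t{i}": float(i) for i in range(100)}
groups = classify_traders(holdings)
```

With 100 toy traders, exactly one trader (the largest holder) lands in the whale bucket, mirroring how a top-0.1% cut isolates a handful of market movers at real scale.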
A two-state model for opinion forming, which has proven heuristic power, is reviewed with a novel emphasis on the existence or absence of a threshold for the dynamics. Monitored by repeated small-group discussions, floater agents update their opinion according to a local majority rule. A threshold makes the initial supports flow towards either one of two opposite attractors, each with one single opinion. While odd group sizes yield a threshold at fifty percent, even sizes, which allow the inclusion of doubt at an opinion tie, produce a threshold shift toward either 0 or 1, giving rise to minority opinion spreading. Considering heterogeneous agents like contrarians and inflexibles makes the dynamics thresholdless beyond some critical values; a unique attractor at fifty-fifty then drives the dynamics. In addition, inflexibles can generate asymmetry, and if one-sided they erase the threshold, ensuring that the associated opinion eventually gains the support of the whole population. This may shed a new and counter-intuitive light on some social aspects of the global warming phenomenon.
arxiv:0803.2453
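The local majority-rule update with odd group sizes can be sketched as a toy simulation: agents are shuffled into groups of three, each group adopts its internal majority, and any initial support above fifty percent flows to the all-ones attractor. Group size, population size and the number of update cycles are illustrative choices, not the review's parameters.

```python
import random

def majority_update(support, group_size=3, n_agents=9999, rng=None):
    """One cycle of local majority-rule discussions: agents are shuffled
    into odd-sized groups and each group adopts its internal majority.
    Returns the new fraction of agents holding opinion 1."""
    rng = rng or random.Random(0)
    n_ones = int(support * n_agents)
    opinions = [1] * n_ones + [0] * (n_agents - n_ones)
    rng.shuffle(opinions)
    new = []
    for i in range(0, n_agents, group_size):
        group = opinions[i:i + group_size]
        new.extend([1 if sum(group) > len(group) / 2 else 0] * len(group))
    return sum(new) / len(new)

# Starting above the fifty-percent threshold, support flows to the
# single-opinion attractor over repeated discussion cycles.
p = 0.55
for _ in range(10):
    p = majority_update(p, rng=random.Random(42))
```

Starting the same loop below 0.5 instead drives the support toward the opposite attractor at 0, which is the threshold behaviour the abstract describes for odd group sizes.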
Three-dimensional (3D) Dirac semimetals are new quantum materials and can be viewed as 3D analogues of graphene. Many fascinating electronic properties have been proposed and realized in 3D Dirac semimetals, demonstrating their potential for next-generation quantum devices. Bismuth-antimony Bi1-xSbx can be tuned from a topological insulator to a band insulator through a quantum critical point at x ~ 4%, where 3D Dirac fermions appear. Here, we report a magnetotransport study of Bi1-xSbx at such a quantum critical point. An unusual magnetic-field-induced semimetal-semiconductor phase transition was observed in Bi0.96Sb0.04 single crystals. In a magnetic field of 8 T, Bi0.96Sb0.04 single crystals show giant magnetoresistances of up to 6000% at low temperature (5 K) and 300% at room temperature (300 K). The observed magnetoresistances remain linear down to approximately zero field when the temperature is below 200 K. Our experimental results are interesting not only for the fundamental physics of 3D Dirac semimetals, but also for their potential applications in magnetoelectronic devices.
arxiv:1509.02244
The concept of the joint bivariate signature, introduced by Navarro et al. (2013), is a useful tool for quantifying the reliability of two systems with shared components. As with the univariate system signature, introduced by Samaniego (2007), its applications are limited to systems with only one type of component, which restricts its practical use. Coolen and Coolen-Maturi (2012) introduced the survival signature, which generalizes Samaniego's signature and can be used for systems with multiple types of components. This paper introduces a joint survival signature for multiple systems with multiple types of components and with some components shared between systems. A particularly important feature is that the functioning of these systems can be considered at different times, enabling computation of relevant conditional probabilities with regard to one system's functioning conditional on the status of another system with which it shares components. Several opportunities for practical application and related challenges for further development of the presented concept are briefly discussed, setting out an important direction for future research.
arxiv:2007.02189
We review (and extend) the analysis of general theories of all interactions (gravity included) where the mass scales are due to dimensional transmutation. Quantum consistency requires the presence of terms in the action with four derivatives of the metric. It is shown, nevertheless, how unitarity is achieved and how the classical Ostrogradsky instabilities can be avoided. The four-derivative terms also allow us to have a UV-complete framework and a naturally small ratio between the Higgs mass and the Planck scale. Moreover, black holes of Einstein gravity with horizons smaller than a certain (microscopic) scale are replaced by horizonless ultracompact objects that are free from any singularity and have interesting phenomenological applications. We also discuss the predictions that can be compared with observations of the microwave background radiation anisotropies and find that this scenario is viable and can be tested with future data. Finally, we review and explain how strong phase transitions can emerge in models of this type with approximate scale symmetry, and how to test them with GW detectors.
arxiv:2012.11608
Clearly articulating the assumptions of the execution environment is crucial for the successful application of code-level formal verification. The process of specifying a model for the environment can be both laborious and error-prone, often requiring domain experts. In contrast, when engineers write unit tests, they frequently employ mocks (tMocks) to define the expected behavior of the environment in which the function under test operates. These tMocks describe how the environment behaves, e.g., the return types of an external API call (stateless behaviour) or the correct sequence of function calls (stateful behaviour). Mocking frameworks have proven to be highly effective tools for crafting unit tests. In our work, we draw inspiration from tMocks and introduce their counterpart in the realm of formal verification, which we term vMocks. vMocks offer an intuitive framework for specifying a plausible environment when conducting code-level formal verification. We implement a vMock library for the verification of C programs called Seamock. We investigate the practicality of vMocks by, first, comparing specification styles in the communication layer of the Android Trusty Trusted Execution Environment (TEE) open-source project and, second, in the verification of mbedTLS, a widely used open-source C library that provides secure communication protocols and cryptography primitives for embedded systems. Based on our experience, we conclude that vMocks complement other forms of environment models. We believe that vMocks ease the adoption of code-level formal verification among developers already familiar with tMocks.
arxiv:2409.12269
This paper deals with continuity preservation when minimizing generalized total variation with an $L^2$ fidelity term or a Dirichlet boundary condition. We extend several recent results in the two cases, mainly by showing comparison principles for the prescribed mean curvature problem satisfied by the level sets of such minimizers.
arxiv:1605.09655
Using the contraction of the SU(3) algebra to the algebra of the rigid rotator in the large boson number limit of the interacting boson approximation (IBA) model, a line is found inside the symmetry triangle of the IBA along which the SU(3) symmetry is preserved. The line extends from the SU(3) vertex to near the critical line of the first-order shape/phase transition separating the spherical and prolate deformed phases, and lies within the Alhassid-Whelan arc of regularity, the unique valley of regularity connecting the SU(3) and U(5) vertices amidst chaotic regions. In addition to providing an explanation for the existence of the arc of regularity, the present line represents the first example of an analytically determined approximate symmetry in the interior of the symmetry triangle of the IBA. The method is applicable to algebraic models possessing subalgebras amenable to contraction. This condition is equivalent to requiring algebras in which the equilibrium ground state (and its rotational band) becomes energetically isolated from intrinsic excitations, as typified by deformed solutions to the IBA for large numbers of valence nucleons.
arxiv:1104.2104
Fast radio bursts (FRBs) are energetic millisecond phenomena in the radio band. Polarimetric studies of repeating FRBs indicate that many of these sources occupy extreme and complex magneto-ionized environments. Recently, frequency-dependent depolarization has been discovered in several repeating FRBs. However, study of the temporal evolution of polarization properties is limited by the burst rate and the observational cadence of telescopes. In this Letter, the temporal evolution of depolarization in the repeating FRB 20201124A is explored. Using the simultaneous variation of rotation measure and dispersion measure, we also measure the strength of the magnetic field parallel to the line of sight, which ranges from a few $\mu$G to $10^3\,\mu$G. In addition, we find that the evolution of the depolarization and of the magnetic field traces the evolution of the rotation measure. Our results support the picture that the variations of depolarization, rotation measure and magnetic field are determined by the same complex magneto-ionized screen surrounding the FRB source. The derived properties of the screen are consistent with the wind and the decretion disk of a massive star.
arxiv:2309.06653
Many statistical inference problems correspond to recovering the values of a set of hidden variables from sparse observations on them. For instance, in a planted constraint satisfaction problem such as planted 3-SAT, the clauses are sparse observations from which the hidden assignment is to be recovered. In the problem of community detection in a stochastic block model, the community labels are hidden variables that are to be recovered from the edges of the graph. Inspired by ideas from statistical physics, the presence of a stable fixed point for belief propagation has been widely conjectured to characterize the computational tractability of these problems. For community detection in stochastic block models, many of these predictions have been rigorously confirmed. In this work, we consider a general model of statistical inference problems that includes both community detection in stochastic block models and all planted constraint satisfaction problems as special cases. We carry out the cavity method calculations from statistical physics to compute the regime of parameters where detection and recovery should be algorithmically tractable. In precisely the predicted tractable regime, we give: (i) a general polynomial-time algorithm for the problem of detection: distinguishing an input with a planted signal from one without; (ii) a general polynomial-time algorithm for the problem of recovery: outputting a vector that correlates with the hidden assignment significantly better than a random guess would.
arxiv:2101.10882
In this paper, we investigate the heavy quark potential and the jet quenching parameter in a system with Lifshitz and hyperscaling violation exponents, using the AdS/CFT correspondence. It is shown that both quantities depend on the nonrelativistic parameters. We show how the heavy quark potential changes with the hyperscaling violation parameter $\theta$, the dynamical parameter $z$, the temperature $T$ and the charge $Q$: increasing $z$ and $\theta$ leads to an increase and a decrease of the potential, respectively, while the potential decreases with increasing $Q$ and increases with increasing $T$. We also investigate how the jet quenching parameter changes with $\theta$ and $z$. In addition, we introduce an electromagnetic field and study its effect on the jet quenching parameter, finding that this parameter decreases as $z$ and $\theta$ increase and that the electric and magnetic fields affect it differently.
arxiv:2210.13911
we correct our proof of a theorem stating that satisfiability of frequency linear - time temporal logic is undecidable [ tase 2012 ].
arxiv:2010.00296
as large language models ( llms ) become widely adopted, understanding how they learn from, and memorize, training data becomes crucial. memorization in llms is widely assumed to occur only as a result of sequences being repeated in the training data. instead, we show that llms memorize by assembling information from similar sequences, a phenomenon we call mosaic memory. we show major llms to exhibit mosaic memory, with fuzzy duplicates contributing to memorization as much as 0. 8 times an exact duplicate and even heavily modified sequences contributing substantially to memorization. although models display reasoning capabilities, we somewhat surprisingly show memorization to be predominantly syntactic rather than semantic. we finally show fuzzy duplicates to be ubiquitous in real - world data, untouched by deduplication techniques. taken together, our results challenge widely held beliefs and show memorization to be a more complex, mosaic process, with real - world implications for privacy, confidentiality, model utility and evaluation.
arxiv:2405.15523
by comparing 3 constituents of orion a ( gas, protostars, and pre - main - sequence stars ), both morphologically and kinematically, we derive the following. the gas surface density near the integral - shaped filament ( isf ) is well represented by a power law, sigma ( b ) = 72 msun / pc ^ 2 ( b / pc ) ^ { - 5 / 8 } for our entire range, 0. 05 < b / pc < 8. 5, of distance from the filament ridge. essentially all protostars lie on the isf or other filament ridges, while almost all pre - main - sequence stars do not. combined with the fact that protostars move < 1 km / s relative to the filaments while stars move several times faster, this implies that protostellar accretion is terminated by a slingshot ejection from the filaments. the isf is the 3rd in a series of star bursts that are progressively moving south, with separations of a few myr in time and 3 pc in space. this, combined with the filament ' s observed undulations ( spatial and velocity ), suggests that repeated propagation of transverse waves through the filament is progressively digesting the material that formerly connected orion a and b into stars in discrete episodes. we construct an axially symmetric gas density profile rho ( r ) = 16 msun / pc ^ 3 ( r / pc ) ^ { - 13 / 8 }. the model implies that the observed magnetic fields are supercritical on scales of the observed undulations, suggesting that the filament ' s transverse waves are magnetically induced. because the magnetic fields are subcritical on the larger scales of the filament, the system as a whole is relatively stable and long lived. protostellar ejection occurs because the gas accelerates away from the protostars, not the other way around. the model also implies that the isf is kinematically young, which is consistent with other lines of evidence. the southern filament has a broken power law, which matches the isf profile for 2. 5 < b / pc < 8. 5, but is shallower closer in. it is also kinematically older than the isf.
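the two power - law profiles quoted in this abstract can be written down directly; the function names below are ours, not the paper's:

```python
# surface density near the isf, in msun / pc^2, quoted as valid
# for 0.05 < b/pc < 8.5 (b = distance from the filament ridge)
def sigma(b_pc):
    return 72.0 * b_pc ** (-5.0 / 8.0)

# axially symmetric volume density model, in msun / pc^3
def rho(r_pc):
    return 16.0 * r_pc ** (-13.0 / 8.0)

# at b = r = 1 pc both power laws reduce to their normalisations
print(sigma(1.0), rho(1.0))  # 72.0 16.0
```

both profiles fall off with distance, so the density at the inner edge of the quoted range is far higher than at the outer edge.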
arxiv:1512.04944
we propose a new computationally efficient sampling scheme for bayesian inference involving high dimensional probability distributions. our method maps the original parameter space into a low - dimensional latent space, explores the latent space to generate samples, and maps these samples back to the original space for inference. while our method can be used in conjunction with any dimension reduction technique to obtain the latent space, and any standard sampling algorithm to explore the low - dimensional space, here we specifically use a combination of auto - encoders ( for dimensionality reduction ) and hamiltonian monte carlo ( hmc, for sampling ). to this end, we first run an hmc to generate some initial samples from the original parameter space, and then use these samples to train an auto - encoder. next, starting with an initial state, we use the encoding part of the autoencoder to map the initial state to a point in the low - dimensional latent space. using another hmc, this point is then treated as an initial state in the latent space to generate a new state, which is then mapped to the original space using the decoding part of the auto - encoder. the resulting point can be treated as a metropolis - hastings ( mh ) proposal, which is either accepted or rejected. while the induced dynamics in the parameter space is no longer hamiltonian, it remains time reversible, and the markov chain could still converge to the canonical distribution using a volume correction term. dropping the volume correction step results in convergence to an approximate but reasonably accurate distribution. the empirical results based on several high - dimensional problems show that our method could substantially reduce the computational cost of bayesian inference.
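a toy sketch of the encode - move - decode proposal described above, with a linear ( svd - based ) encoder standing in for the trained auto - encoder and a gaussian random - walk step standing in for the latent hmc; the target, dimensions, and step size are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# target: a correlated 10-d gaussian, standing in for an expensive posterior
dim = 10
cov = 0.9 ** np.abs(np.subtract.outer(np.arange(dim), np.arange(dim)))
prec = np.linalg.inv(cov)
log_p = lambda x: -0.5 * x @ prec @ x

# "pilot run" samples (here drawn directly for brevity)
pilot = rng.multivariate_normal(np.zeros(dim), cov, size=500)

# linear encoder/decoder from an svd -- a stand-in for the auto-encoder
_, _, vt = np.linalg.svd(pilot - pilot.mean(0), full_matrices=False)
k = 3                                   # latent dimension
encode = lambda x: vt[:k] @ x
decode = lambda z: vt[:k].T @ z

# metropolis-hastings with proposals generated in the latent space
x, accepted, n = np.zeros(dim), 0, 2000
for _ in range(n):
    z = encode(x) + 0.5 * rng.standard_normal(k)   # move in latent space
    # decode, keeping the component of x outside the latent subspace
    x_prop = decode(z) + x - decode(encode(x))
    if np.log(rng.uniform()) < log_p(x_prop) - log_p(x):
        x, accepted = x_prop, accepted + 1
```

because the encoder here is linear, the proposal is symmetric and no volume correction is needed; for a nonlinear auto - encoder the correction term discussed in the abstract would enter the acceptance ratio.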
arxiv:1910.05692
in the field of natural language processing, the rapid development of large language models ( llms ) has attracted more and more attention. llms have shown a high level of creativity in various tasks, but the methods for assessing such creativity are inadequate. the assessment of llm creativity needs to consider differences from humans, requiring multi - dimensional measurement while balancing accuracy and efficiency. this paper aims to establish an efficient framework for assessing the level of creativity in llms. by adapting the modified torrance tests of creative thinking, the research evaluates the creative performance of various llms across 7 tasks, emphasizing 4 criteria including fluency, flexibility, originality, and elaboration. in this context, we develop a comprehensive dataset of 700 questions for testing and an llm - based evaluation method. in addition, this study presents a novel analysis of llms ' responses to diverse prompts and role - play situations. we found that the creativity of llms primarily falls short in originality, while excelling in elaboration. besides, the use of prompts and the role - play settings of the model significantly influence creativity. additionally, the experimental results also indicate that collaboration among multiple llms can enhance originality. notably, our findings reveal a consensus between human evaluations and llms regarding the personality traits that influence creativity. the findings underscore the significant impact of llm design on creativity and bridge artificial intelligence and human creativity, offering insights into llms ' creativity and potential applications.
arxiv:2401.12491
deep learning models have demonstrated superior performance in several application problems, such as image classification and speech processing. however, creating a deep learning model using health record data requires addressing certain privacy challenges that bring unique concerns to researchers working in this domain. one effective way to handle such private data issues is to generate realistic synthetic data that can provide practically acceptable data quality and, correspondingly, model performance. to tackle this challenge, we develop a differentially private framework for synthetic data generation using r \ ' enyi differential privacy. our approach builds on convolutional autoencoders and convolutional generative adversarial networks to preserve some of the critical characteristics of the generated synthetic data. in addition, our model can also capture the temporal information and feature correlations that might be present in the original data. we demonstrate that our model outperforms existing state - of - the - art models under the same privacy budget using several publicly available benchmark medical datasets in both supervised and unsupervised settings.
arxiv:2012.11774
the amplituhedra arise as images of the totally nonnegative grassmannians by projections that are induced by linear maps. they were introduced in physics by arkani - hamed \ & trnka ( journal of high energy physics, 2014 ) as model spaces that should provide a better understanding of the scattering amplitudes of quantum field theories. the topology of the amplituhedra has been known only in a few special cases, where they turned out to be homeomorphic to balls. the amplituhedra are special cases of grassmann polytopes introduced by lam ( current developments in mathematics 2014, int. \ press ). in this paper we show that some further amplituhedra are homeomorphic to balls, and that some more grassmann polytopes and amplituhedra are contractible.
arxiv:1806.00827
environmental radioactivity is a dominant background for rare decay search experiments, and it is difficult to completely remove such an impurity from detector vessels. we propose a scintillation balloon as the active vessel of a liquid scintillator in order to identify this undesirable radioactivity. according to our feasibility studies, the scintillation balloon enables the bismuth - - polonium sequential decay to be tagged with a 99. 7 \ % efficiency, assuming a kamland ( kamioka liquid scintillator antineutrino detector ) - type liquid scintillator detector. this tagging of sequential decay using the alpha - ray from the polonium improves the sensitivity to neutrinoless double - beta decay by rejecting the beta - ray background from the bismuth.
arxiv:1903.10736
we describe regularized methods for image reconstruction and focus on the question of hyperparameter and instrument parameter estimation, i. e. unsupervised and myopic problems. we developed a bayesian framework that is based on the posterior density for all unknown quantities, given the observations. this density is explored by a markov chain monte - carlo sampling technique based on a gibbs loop and including a metropolis - hastings step. the numerical evaluation relies on the spire instrument of the herschel observatory. using simulated and real observations, we show that the hyperparameters and instrument parameters are correctly estimated, which opens up many perspectives for imaging in astrophysics.
arxiv:1211.3603
winds and outflows in starburst galaxies and agn provide important information on the physics of the " central engine ", the presence and evolution of ( nuclear ) starbursts, and the metal enrichment of the nuclear environment and the intergalactic medium. here, we concentrate on two examples, x - ray observations of the ( u ) lirg ngc6240 and the bal quasar apm08279 + 5255.
arxiv:astro-ph/0310881
the detection of political fake statements is crucial for maintaining information integrity and preventing the spread of misinformation in society. historically, state - of - the - art machine learning models employed various methods for detecting deceptive statements. these methods include the use of metadata ( w. wang et al., 2018 ), n - grams analysis ( singh et al., 2021 ), and linguistic ( wu et al., 2022 ) and stylometric ( islam et al., 2020 ) features. recent advancements in large language models, such as gpt - 3 ( brown et al., 2020 ) have achieved state - of - the - art performance on a wide range of tasks. in this study, we conducted experiments with gpt - 3 on the liar dataset ( w. wang et al., 2018 ) and achieved higher accuracy than state - of - the - art models without using any additional meta or linguistic features. additionally, we experimented with zero - shot learning using a carefully designed prompt and achieved near state - of - the - art performance. an advantage of this approach is that the model provided evidence for its decision, which adds transparency to the model ' s decision - making and offers a chance for users to verify the validity of the evidence provided.
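the zero - shot setup described above hinges on a carefully designed prompt. a minimal, hypothetical template in that spirit ( this is not the paper's exact wording, and the api call to the model is omitted ):

```python
# hypothetical prompt template for zero-shot fake-statement detection;
# the wording below is illustrative, not the paper's actual prompt.
def build_prompt(statement, speaker=None):
    header = (
        "decide whether the following political statement is true or false.\n"
        "answer with a label and one sentence of evidence for the decision.\n"
    )
    context = f"speaker: {speaker}\n" if speaker else ""
    return f'{header}{context}statement: "{statement}"\nlabel:'

prompt = build_prompt("the unemployment rate doubled last year",
                      speaker="a senator")
print(prompt)
```

asking the model for evidence alongside the label is what gives the approach the transparency property mentioned in the abstract: the returned sentence can be checked by the user.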
arxiv:2306.08190
the conformal range, which is a horizontal projection of the davis - wielandt shell, can be considered as the hyperbolic version of the numerical range. here we explain ( the analogue of ) the elliptical range theorem of $ 2 \ times2 $ complex matrices for the conformal range. in that course, comparison to the davis - wielandt shell and the numerical range is made.
arxiv:2211.13145
in this study we compared the temporal and periodic variations of the maximum cme speed index ( mcmesi ) and the number of different class ( c, m, and x ) solar x - ray flares for the last two solar cycles ( cycles 23 and 24 ). to obtain the correlation between the mcmesi and solar flare numbers, cross correlation analysis was applied to monthly data sets. also, to investigate the periodic behavior of all data sets, the multi taper method ( mtm ) and the morlet wavelet analysis method were performed with daily data from 2009 to 2018. to evaluate our wavelet analysis, cross wavelet transform ( xwt ) and wavelet transform coherence ( wtc ) methods were performed. causal relationships between datasets were further examined by the convergence cross mapping ( ccm ) method. our analysis found the following : 1 ) the c class x - ray flare numbers increased about 16 % during solar cycle 24 compared to cycle 23, while all other data sets decreased ; the mcmesi decreased about 16 % and the number of m and x class flares decreased about 32 %. 2 ) all the x - ray solar flare classes show remarkable positive correlation with the mcmesi. while the correlation between the mcmesi and c class flares comes from the general solar cycle trend, it mainly results from the fluctuations in the data in the case of the x class flares. 3 ) in general, all class flare numbers and the mcmesi show similar periodic behavior. 4 ) the 546 day periodicity detected in the mcmesi may not be of solar origin, or at least the solar flares are not the source of this periodicity. 5 ) c and m class solar flares have a stronger causative effect on the mcmesi compared to x class solar flares. however, the only bidirectional causal relationship is obtained between the mcmesi and c class flare numbers.
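the monthly cross correlation step can be sketched as follows; the two series here are synthetic stand - ins, not the real mcmesi or flare counts:

```python
import numpy as np

def cross_correlation(a, b, max_lag):
    """normalised cross correlation of two equal-length monthly series."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    n = len(a)
    return {
        lag: float(np.mean(a[max(lag, 0):n + min(lag, 0)]
                           * b[max(-lag, 0):n - max(lag, 0)]))
        for lag in range(-max_lag, max_lag + 1)
    }

# synthetic stand-in: a 240-month series and a copy delayed by 3 months
rng = np.random.default_rng(1)
series = rng.standard_normal(240)
delayed = np.roll(series, 3)

cc = cross_correlation(series, delayed, max_lag=6)
best_lag = max(cc, key=cc.get)   # negative lag: the first series leads
```

the recovered peak sits at lag -3 here, i.e. the first series leads the second by three months; for the real data the sign and position of the peak would carry the physical lead/lag information.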
arxiv:2008.11506
the objective of this work is to determine the nonlinear flux - force relations for systems out of onsager ' s region that respect the existing thermodynamic theorems for systems far from equilibrium. to this aim, a thermodynamic theory for irreversible processes [ referred to as the thermodynamical field theory ( tft ) ] has been developed. the tft rests upon the concept of equivalence between thermodynamic systems : " the equivalent character of two alternative descriptions of a thermodynamic system is ensured if, and only if, the entropy production and the glansdorff - prigogine dissipative quantity remain unaltered under the thermodynamic forces transformation ". the tct leads naturally to the " thermodynamic covariance principle " ( tcp ) stating that " the nonlinear closure equations, i. e., the flux - force relations, must be covariant under tct ". in this work, we provide the explicit expression of the nonlinear pdes, subjected to the appropriate boundary conditions, which have to be satisfied by the transport coefficients when the skew - symmetric piece is absent. the solution of these equations allows us to determine the flux - force closure relations for systems out of the onsager region. since the proposed pdes are obtained without neglecting any term present in the balance equations ( i. e., the mass, momentum, and energy balance equations ), we propose them as a good candidate for describing transport in thermodynamic systems also in the turbulent regime. a preliminary test is carried out by analysing a concrete example where onsager ' s relations manifestly disagree with experience : losses in magnetically confined tokamak - plasmas in fully collisional and in turbulent regimes. we show the good agreement between the theoretical predictions and the experimental data. the aim is to apply our approach to the " divertor tokamak test facility " ( dtt ), to be built in italy, and to iter.
arxiv:2205.15315
graph neural networks ( gnns ) have recently empowered various novel computer vision ( cv ) tasks. in gnn - based cv tasks, a combination of cnn layers and gnn layers or only gnn layers are employed. this paper introduces gcv - turbo, a domain - specific accelerator on fpga for end - to - end acceleration of gnn - based cv tasks. gcv - turbo consists of two key components : ( 1 ) a \ emph { novel } hardware architecture optimized for the computation kernels in both cnns and gnns using the same set of computation resources. ( 2 ) a pytorch - compatible compiler that takes a user - defined model as input, performs end - to - end optimization for the computation graph of a given gnn - based cv task, and produces optimized code for hardware execution. the hardware architecture and the compiler work synergistically to support a variety of gnn - based cv tasks. we implement gcv - turbo on a state - of - the - art fpga and evaluate its performance across six representative gnn - based cv tasks with diverse input data modalities ( e. g., image, human skeleton, point cloud ). compared with state - of - the - art cpu ( gpu ) implementations, gcv - turbo achieves an average latency reduction of $ 68. 4 \ times $ ( $ 4. 1 \ times $ ) on these six gnn - based cv tasks. moreover, gcv - turbo supports the execution of the standalone cnns or gnns, achieving performance comparable to that of state - of - the - art cnn ( gnn ) accelerators for widely used cnn - only ( gnn - only ) models.
arxiv:2404.07188
spoken language understanding ( slu ) is indispensable for half of all living languages that lack a formal writing system, since these languages cannot pair automatic speech recognition ( asr ) with language models to benefit from language technology. even if low - resource languages possess a writing system, asr for these languages remains unreliable due to limited bimodal speech and text training data. better slu can strengthen the robustness of massively multilingual asr by leveraging language semantics to disambiguate utterances via context or exploiting semantic similarities across languages. however, the evaluation of multilingual slu remains limited to shallow tasks such as intent classification or language identification. to address this, we present fleurs - slu, a multilingual slu benchmark that encompasses ( i ) 692 hours of speech for topical utterance classification in 102 languages and ( ii ) multiple - choice question answering through listening comprehension spanning 944 hours of speech across 92 languages. we extensively evaluate both end - to - end speech classification models and cascaded systems that combine speech - to - text transcription with subsequent classification by large language models on fleurs - slu. our results show that cascaded systems exhibit greater robustness in multilingual slu tasks, though speech encoders can achieve competitive performance in topical speech classification when appropriately pre - trained. we further find a strong correlation between robust multilingual asr, effective speech - to - text translation, and strong multilingual slu, highlighting the mutual benefits between acoustic and semantic speech representations.
arxiv:2501.06117
it is well - known that wide - area networks face today several performance and reliability problems. in this work, we propose to solve these problems by connecting two or more local - area networks together via a redundant array of internet links ( or rail ) and by proactively replicating each packet over these links. in that sense, rail is for networks what raid ( redundant array of inexpensive disks ) was for disks. in this paper, we describe the rail approach, present our prototype ( called the railedge ), and evaluate its performance. first, we demonstrate that using multiple internet links significantly improves the end - to - end performance in terms of network - level as well as application - level metrics for voice - over - ip and tcp. second, we show that a delay padding mechanism is needed to complement rail when there is significant delay disparity between the paths. third, we show that two paths provide most of the benefit, if carefully managed. finally, we discuss a rail - network architecture, where railedges make use of path redundancy, route control and application - specific mechanisms, to improve wan performance.
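the core replicate - and - deduplicate idea behind rail can be sketched in a few lines; the sequence numbers and in - memory "links" below are our illustrative stand - ins for the real protocol machinery:

```python
class RailReceiver:
    """deliver the first copy of each packet, drop later duplicates."""
    def __init__(self):
        self.seen = set()
        self.delivered = []

    def on_packet(self, seq, payload):
        if seq in self.seen:          # a slower link's copy: discard it
            return
        self.seen.add(seq)
        self.delivered.append((seq, payload))

def replicate(packets, n_links):
    """yield one (link, seq, payload) copy of every packet per link."""
    for seq, payload in enumerate(packets):
        for link in range(n_links):
            yield link, seq, payload

rx = RailReceiver()
# feed all replicated copies to the receiver; whichever copy of a given
# sequence number arrives first wins, the rest are suppressed
for link, seq, payload in replicate(["a", "b", "c"], n_links=2):
    rx.on_packet(seq, payload)

print(rx.delivered)  # [(0, 'a'), (1, 'b'), (2, 'c')]
```

end - to - end latency under this scheme is the minimum over the links' latencies for each packet, which is why two well - chosen paths already capture most of the benefit reported above.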
arxiv:cs/0701133
we give a categorical formulation of the $ p $ - adic local langlands correspondence for $ \ mathrm { gl } _ 2 ( \ mathbb { q } _ p ) $, as an embedding of the derived category of locally admissible representations into the category of ind - coherent sheaves on the moduli stack of two - dimensional representations of $ \ mathrm { gal } ( \ overline { \ mathbb { q } } _ p / \ mathbb { q } _ p ) $. moreover, we relate our version of the $ p $ - adic local langlands correspondence for $ \ mathrm { gl } _ 2 ( \ mathbb { q } _ p ) $ to the cohomology of modular curves through a local - global compatibility formula.
arxiv:2403.19565
recent nuclear magnetic resonance measurements on isotope engineered double walled carbon nanotubes ( dwcnts ) surprisingly suggest a uniformly metallic character of all nanotubes, which can only be explained by the interaction between the layers. here we study the inter - shell interaction in dwcnts by density functional theory and inter - molecular h \ " uckel model. we find charge transfer between the layers using both methods. we show that not only does the charge transfer appear already at the fundamental level of the inter - molecular h \ " uckel model, but also that the spatial distribution of the change in the electron density is well described already at this level of theory. we find that the charge transfer between the walls is on the order of 0. 001 e / atom and that the inner tube is always negatively charged. we also observe orbital mixing between the states of the layers. we find that these two effects combined can in some cases lead to a semiconductor - - to - - metal transition of the double walled tube, but not necessarily in all cases.
arxiv:cond-mat/0603407
online advertising is progressively moving towards a programmatic model in which ads are matched to actual interests of individuals collected as they browse the web. leaving the huge debate around privacy aside, a very important question in this area, for which little is known, is : how much do advertisers pay to reach an individual? in this study, we develop a first of its kind methodology for computing exactly that - - the price paid for a web user by the ad ecosystem - - and we do that in real time. our approach is based on tapping into the real time bidding ( rtb ) protocol to collect cleartext and encrypted prices for winning bids paid by advertisers in order to place targeted ads. our main technical contribution is a method for tallying winning bids even when they are encrypted. we achieve this by training a model using as ground truth prices obtained by running our own " probe " ad - campaigns. we design our methodology through a browser extension and a back - end server that provides it with fresh models for encrypted bids. we validate our methodology using a one year long trace of 1600 mobile users and demonstrate that it can estimate a user ' s advertising worth with more than 82 % accuracy.
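the "train on probe - campaign ground truth, then estimate encrypted prices" step might look like the following least - squares sketch; the features, weights, and prices are synthetic stand - ins, and the paper's actual model is richer:

```python
import numpy as np

rng = np.random.default_rng(7)

# synthetic probe-campaign ground truth: per-bid features (our invented
# stand-ins, e.g. time-of-day, site and user-interest scores) plus the
# cleartext price actually charged
features = rng.uniform(0.0, 1.0, size=(500, 3))
true_weights = np.array([0.8, 1.5, 2.0])
prices = features @ true_weights + 0.3 + 0.05 * rng.standard_normal(500)

# fit a linear model with an intercept by ordinary least squares
design = np.hstack([features, np.ones((500, 1))])
weights, *_ = np.linalg.lstsq(design, prices, rcond=None)

# estimate the price behind a new, encrypted winning bid from its features
new_bid = np.array([0.5, 0.5, 0.5, 1.0])
estimate = float(new_bid @ weights)  # close to 0.5*(0.8+1.5+2.0) + 0.3
```

the key point is that the model is supervised entirely by prices the experimenters can observe ( their own campaigns ), then applied to bids whose prices are hidden.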
arxiv:1701.07058
between the microscopic domain ruled by quantum gravity, and the macroscopic scales described by general relativity, there might be an intermediate, " mesoscopic " regime, where spacetime can still be approximately treated as a differentiable pseudo - riemannian manifold, with small corrections of quantum gravitational origin. we argue that, unless one accepts to give up the relativity principle, either such a regime does not exist at all ( hence, the quantum - to - classical transition is sharp ), or the only mesoscopic, tiny corrections conceivable are on the behaviour of physical fields, rather than on the geometric structures.
arxiv:1405.5085
we investigate the numerical computation of maass cusp forms for the modular group corresponding to large eigenvalues. we present fourier coefficients of two cusp forms whose eigenvalues exceed r = 40000. these eigenvalues are the largest that have so far been found in the case of the modular group. they are larger than the 130 millionth eigenvalue.
arxiv:math-ph/0305047
we consider the classical motion of a probe d - brane moving in the background geometry of a ring of ns5 branes, assuming that the latter are non - dynamical. we analyse the solutions to the dirac - born - infeld ( dbi ) action governing the approximate dynamics of the system. in the near horizon ( throat ) approximation we find several exact solutions for the probe brane motion. these are compared to numerical solutions obtained in more general cases. one solution of particular interest is when the probe undergoes oscillatory motion through the centre of the ring ( and perpendicular to it ). by taking the ring radius sufficiently large, this solution should remain stable to any stringy corrections coming from open - strings stretching between the probe and the ns5 - branes along the ring.
arxiv:hep-th/0411130
reverse engineering ( also known as backwards engineering or back engineering ) is a process or method through which one attempts to understand through deductive reasoning how a previously made device, process, system, or piece of software accomplishes a task with very little ( if any ) insight into exactly how it does so. depending on the system under consideration and the technologies employed, the knowledge gained during reverse engineering can help with repurposing obsolete objects, doing security analysis, or learning how something works. although the process is specific to the object on which it is being performed, all reverse engineering processes consist of three basic steps : information extraction, modeling, and review. information extraction is the practice of gathering all relevant information for performing the operation. modeling is the practice of combining the gathered information into an abstract model, which can be used as a guide for designing the new object or system. review is the testing of the model to ensure the validity of the chosen abstract. reverse engineering is applicable in the fields of computer engineering, mechanical engineering, design, electrical and electronic engineering, civil engineering, nuclear engineering, aerospace engineering, software engineering, chemical engineering, systems biology and more. = = overview = = there are many reasons for performing reverse engineering in various fields. reverse engineering has its origins in the analysis of hardware for commercial or military advantage. : 13 however, the reverse engineering process may not always be concerned with creating a copy or changing the artifact in some way. it may be used as part of an analysis to deduce design features from products with little or no additional knowledge about the procedures involved in their original production. : 15 in some cases, the goal of the reverse engineering process can simply be a redocumentation of legacy systems. 
: 15 even when the reverse - engineered product is that of a competitor, the goal may not be to copy it but to perform competitor analysis. reverse engineering may also be used to create interoperable products and despite some narrowly - tailored united states and european union legislation, the legality of using specific reverse engineering techniques for that purpose has been hotly contested in courts worldwide for more than two decades. software reverse engineering can help to improve the understanding of the underlying source code for the maintenance and improvement of the software, relevant information can be extracted to make a decision for software development and graphical representations of the code can provide alternate views regarding the source code, which can help to detect and fix a software bug or vulnerability. frequently, as some software develops, its design information and improvements are often lost over time, but that lost information can usually be recovered with reverse engineering.
https://en.wikipedia.org/wiki/Reverse_engineering
galaxy evolution is driven by many complex interrelated processes as galaxies accrete gas, form new stars, grow their stellar masses and central black holes, and subsequently quench. the processes that drive these transformations are poorly understood, but it is clear that the local environment on multiple scales plays a significant role. today ' s massive clusters are dominated by spheroidal galaxies with low levels of star formation while those in the field are mostly still actively forming their stars. in order to understand the physical processes that drive both the mass build up in galaxies and the quenching of star formation, we need to investigate galaxies and their surrounding gas within and around the precursors of today ' s massive galaxy clusters - - protoclusters at z > 2. the transition period before protoclusters began to quench and become the massive clusters we observe today is a crucial time to investigate their properties and the mechanisms driving their evolution. however, until now, progress characterizing the galaxies within protoclusters has been slow, due to the difficulty of obtaining highly complete spectroscopic observations of faint galaxies at z > 2 over large areas of the sky. the next decade will see a transformational shift in our understanding of protoclusters as deep spectroscopy over wide fields of view will be possible in conjunction with high resolution deep imaging in the optical and near - infrared.
arxiv:1903.05026
irrespective of the dark matter ( dm ) candidate, several potentially observable signatures derive from the velocity distribution of dm in halos, in particular in the milky way ( mw ) halo. examples include direct searches for weakly - interacting massive particles ( wimps ), $ p $ - wave suppressed or sommerfeld - enhanced annihilation signals, microlensing events of primordial black holes ( pbhs ), { \ em etc }. most current predictions are based on the maxwellian approximation which is not only theoretically inconsistent in bounded systems, but also not supported by cosmological simulations. a more consistent method sometimes used in calculations for direct wimp searches relies on the so - called eddington inversion method, which relates the dm phase - space distribution function ( df ) to its mass density profile and the total gravitational potential of the system. originally built upon the isotropy assumption, this method can be extended to anisotropic systems. we investigate these inversion methods in the context of galactic dm searches, motivated by the fact that the mw is a strongly constrained system, and should be even more so with the ongoing gaia survey. we still draw conclusions that apply to the general case. in particular, we illustrate how neglecting the radial boundary of the dm halo leads to theoretical inconsistencies. we also show that several realistic configurations of the dm halo and the mw baryonic content entail ill - defined dfs, significantly restricting the configuration space over which these inversion methods can apply. we propose consistent solutions to these issues. finally, we compute several observables inferred from constrained galactic mass models relevant to dm searches ( wimps or pbhs ), { \ em e. g. } moments and inverse moments of the dm speed and relative speed distributions.
arxiv:1805.02403
i show that an experimental technique used in nuclear physics may be successfully applied to quantum teleportation ( qt ) of spin states of massive matter. a new non - local physical effect, the ` quantum - teleportation - effect ', is discovered for the nuclear polarization measurement. enhancement of the neutron polarization is expected in the proposed experiment for qt that discriminates { \ it only } one of the bell states.
arxiv:quant-ph/0312153
we consider the closed string propagating in the weakly curved background which consists of constant metric and kalb - ramond field with infinitesimally small coordinate dependent part. we propose the procedure for constructing the t - dual theory, performing t - duality transformations along coordinates on which the kalb - ramond field depends. the obtained theory is defined in the non - geometric double space, described by the lagrange multiplier $ y _ \ mu $ and its $ t $ - dual $ \ tilde { y } _ \ mu $. we apply the proposed t - duality procedure to the t - dual theory and obtain the initial one. we discuss the standard relations between t - dual theories that the equations of motion and momenta modes of one theory are the bianchi identities and the winding modes of the other.
arxiv:1205.1991
when studying the expressive power of neural networks, a main challenge is to understand how the size and depth of the network affect its ability to approximate real functions. however, not all functions are interesting from a practical viewpoint : functions of interest usually have a polynomially - bounded lipschitz constant, and can be computed efficiently. we call functions that satisfy these conditions " benign ", and explore the benefits of size and depth for approximation of benign functions with relu networks. as we show, this problem is more challenging than the corresponding problem for non - benign functions. we give barriers to showing depth - lower - bounds : proving existence of a benign function that cannot be approximated by polynomial - size networks of depth $ 4 $ would settle longstanding open problems in computational complexity. it implies that beyond depth $ 4 $ there is a barrier to showing depth - separation for benign functions, even between networks of constant depth and networks of nonconstant depth. we also study size - separation, namely, whether there are benign functions that can be approximated with networks of size $ o ( s ( d ) ) $, but not with networks of size $ o ( s ' ( d ) ) $. we show a complexity - theoretic barrier to proving such results beyond size $ o ( d \ log ^ 2 ( d ) ) $, but also show an explicit benign function that can be approximated with networks of size $ o ( d ) $ and not with networks of size $ o ( d / \ log d ) $. for approximation in $ l _ \ infty $ we achieve such separation already between size $ O ( d ) $ and size $ o ( d ) $. moreover, we show superpolynomial size lower bounds and barriers to such lower bounds, depending on the assumptions on the function. our size - separation results rely on an analysis of size lower bounds for boolean functions, which is of independent interest : we show linear size lower bounds for computing explicit boolean functions with neural networks and threshold circuits.
arxiv:2102.00314
let $ g $ be $ sl _ n, sp ( 2n ) $ or so ( 2n ). we consider the moduli space $ m $ of semistable principal $ g $ - bundles over a curve $ x $. our main result is that if $ u $ is a zariski open subset of $ m $ then there is no universal bundle on $ u \ times x $.
arxiv:0908.0313
e. segal proved that any autoequivalence of an enhanced triangulated category can be realised as a spherical twist. however, when exhibiting an autoequivalence as a spherical twist one has various choices for the source category of the spherical functor. we describe a construction that realises the composition of two spherical twists as the twist around a single spherical functor whose source category semiorthogonally decomposes into the source categories for the spherical functors we started with. we give a description of the cotwist for this spherical functor and prove, in the special case when our starting twists are around spherical objects, that the cotwist is the serre functor ( up to a shift ). we finish with an explicit treatment for the case of p - objects.
arxiv:2006.06016
let $ g $ be a finite pseudoreflection group and $ \ omega \ subseteq \ mathbb c ^ d $ be a bounded domain which is a $ g $ - space. we establish identities involving toeplitz operators on the weighted bergman spaces of $ \ omega $ and $ \ omega / g $ using invariant theory and representation theory of $ g. $ this, in turn, provides techniques to study algebraic properties of toeplitz operators on the weighted bergman space on $ \ omega / g. $ we specialize on the generalized zero - product problem and characterization of commuting pairs of toeplitz operators. as a consequence, more intricate results on toeplitz operators on the weighted bergman spaces on some specific quotient domains ( namely symmetrized polydisc, monomial polyhedron, rudin ' s domain ) have been obtained.
arxiv:2202.03184
vision transformers have achieved great success in computer vision, delivering exceptional performance across various tasks. however, their inherent reliance on sequential input enforces the manual partitioning of images into patch sequences, which disrupts the image ' s inherent structural and semantic continuity. to handle this, we propose a novel pattern transformer ( patternformer ) to adaptively convert images to pattern sequences for transformer input. specifically, we employ a convolutional neural network to extract various patterns from the input image, with each channel representing a unique pattern that is fed into the succeeding transformer as a visual token. by enabling the network to optimize these patterns, each pattern concentrates on its local region of interest, thereby preserving its intrinsic structural and semantic information. only employing the vanilla resnet and transformer, we have accomplished state - of - the - art performance on cifar - 10 and cifar - 100, and have achieved competitive results on imagenet.
arxiv:2308.10729
recent observations have detected extended tev gamma - ray emission surrounding young and middle - aged pulsars. the morphology of these " tev halos " requires cosmic - ray diffusion to be locally suppressed by a factor of ~ 100 - 1000 compared to the typical interstellar medium. no model currently explains this suppression. we show that cosmic - ray self - confinement can significantly inhibit diffusion near pulsars. the steep cosmic - ray gradient generates alfven waves that resonantly scatter the same cosmic - ray population, suppressing diffusion within ~ 20 pc of pulsars younger than ~ 100 kyr. in this model, tev halos evolve through two phases, a growth phase where alfven waves are resonantly generated and cosmic - ray diffusion becomes increasingly suppressed, and a subsequent relaxation phase where the diffusion coefficient returns to the standard interstellar value. intriguingly, cosmic - rays are not strongly confined early in the tev halo evolution, allowing a significant fraction of injected e + e - to escape. if these e + e - also escape from the surrounding supernova remnant, they would provide a natural explanation for the positron excess observed by pamela and ams - 02. recently created tev cosmic - rays are confined in the tev halo, matching observations by hawc and h. e. s. s. while our default model relaxes too rapidly to explain the confinement of tev cosmic rays around mature pulsars, such as geminga, models utilizing a kraichnan turbulence spectrum experience much slower relaxation. thus, observations of tev halos around mature pulsars may provide a probe into our understanding of interstellar turbulence.
arxiv:1807.09263
the state of a 2 - d random resistor network, resulting from the simultaneous evolutions of two competing biased percolations, is studied in a wide range of bias values. monte carlo simulations show that when the external current $ i $ is below the threshold value for electrical breakdown, the network reaches a steady state with a nonlinear current - voltage characteristic. the properties of this nonlinear regime are investigated as a function of different model parameters. a scaling relation is found between $ < r > / < r > _ 0 $ and $ i / i _ 0 $, where $ < r > $ is the average resistance, $ < r > _ 0 $ the linear regime resistance and $ i _ 0 $ the threshold value for the onset of nonlinearity. the scaling exponent is found to be independent of the model parameters. a similar scaling behavior is also found for the relative variance of resistance fluctuations. these results compare well with resistance measurements in composite materials performed in the joule regime up to breakdown.
arxiv:cond-mat/0110646
we discuss the failure dynamics of the fiber bundle model, especially in the equal - load - sharing scheme. we also highlight the " critical " aspects of their dynamics in comparison with those in standard thermodynamic systems undergoing phase transitions.
arxiv:1810.02145
multi - hop relay channels use multiple relay stages, each with multiple relay nodes, to facilitate communication between a source and destination. previously, distributed space - time coding was used to maximize diversity gain. assuming a low - rate feedback link from the destination to each relay stage and the source, this paper proposes end - to - end antenna selection strategies as an alternative to distributed space - time coding. one - way ( where only the source has data for destination ) and two - way ( where the destination also has data for the source ) multi - hop relay channels are considered with both the full - duplex and half - duplex relay nodes. end - to - end antenna selection strategies are designed and proven to achieve maximum diversity gain by using a single antenna path ( using single antenna of the source, each relay stage and the destination ) with the maximum signal - to - noise ratio at the destination. for the half - duplex case, two single antenna paths with the two best signal - to - noise ratios in alternate time slots are used to overcome the rate loss with half - duplex nodes, with a small diversity gain penalty. finally, to answer the question of whether to code ( distributed space - time code ) or not ( the proposed end - to - end antenna selection strategy ) in a multi - hop relay channel, the end - to - end antenna selection strategy and distributed space - time coding are compared with respect to several important performance metrics.
arxiv:0805.3164
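the selection rule described in the abstract above, picking the single - antenna path whose worst per - hop signal - to - noise ratio is largest, can be sketched by brute - force enumeration. this is an illustrative reading under a decode - and - forward approximation ( end - to - end snr limited by the weakest hop ), not the authors ' implementation ; all names are hypothetical.

```python
import itertools

import numpy as np

def best_single_antenna_path(hop_gains):
    """brute-force search for the single-antenna path with the largest
    end-to-end snr, approximating a path's snr by its weakest hop.

    hop_gains[k][i, j] is the link snr from antenna i of stage k to
    antenna j of stage k + 1 (stage 0 = source, last = destination).
    """
    sizes = [hop_gains[0].shape[0]] + [g.shape[1] for g in hop_gains]
    best_snr, best_path = -np.inf, None
    for path in itertools.product(*(range(s) for s in sizes)):
        # end-to-end snr of the path is limited by its worst hop
        snr = min(hop_gains[k][path[k], path[k + 1]]
                  for k in range(len(hop_gains)))
        if snr > best_snr:
            best_snr, best_path = snr, path
    return best_path, best_snr

# two source antennas, two relay antennas, one destination antenna
hops = [np.array([[1.0, 5.0], [2.0, 0.5]]), np.array([[3.0], [4.0]])]
path, snr = best_single_antenna_path(hops)
# path (0, 1, 0) attains the best worst-hop snr of 4.0
```

the exhaustive search is exponential in the number of stages ; the point of the feedback - based strategies in the paper is precisely to realize this selection distributedly.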
let $ \ sigma $ be a closed surface, $ g $ a compact lie group, with lie algebra $ g $, and $ \ xi \ colon p \ to \ sigma $ a principal $ g $ - bundle. in earlier work we have shown that the moduli space $ n ( \ xi ) $ of central yang - mills connections, for appropriate additional data, is stratified by smooth symplectic manifolds and that the holonomy yields a diffeomorphism from $ n ( \ xi ) $ onto a certain representation space $ \ roman { rep } _ { \ xi } ( \ gamma, g ) $, with reference to suitable smooth structures $ c ^ { \ infty } ( n ( \ xi ) ) $ and $ c ^ { \ infty } ( \ roman { rep } _ { \ xi } ( \ gamma, g ) ) $ where $ \ gamma $ denotes the universal central extension of the fundamental group of $ \ sigma $. given an invariant symmetric bilinear form on $ g ^ * $, we construct here poisson structures on $ c ^ { \ infty } ( n ( \ xi ) ) $ and $ c ^ { \ infty } ( \ roman { rep } _ { \ xi } ( \ gamma, g ) ) $ in such a way that the mentioned diffeomorphism identifies them. when the form on $ g ^ * $ is non - degenerate the poisson structures are compatible with the stratifications where $ \ roman { rep } _ { \ xi } ( \ gamma, g ) $ is endowed with the corresponding stratification and, furthermore, yield structures of a { \ it stratified symplectic space \ / }, preserved by the induced action of the mapping class group of $ \ sigma $.
arxiv:dg-ga/9411009
ultralight bosons, which are predicted in a variety of beyond - standard - model scenarios as dark - matter candidates, can trigger the superradiant instability around spinning black holes. this instability gives rise to oscillating boson condensates which then dissipate through the emission of nearly monochromatic gravitational waves. such systems are promising sources for current and future gravitational - wave detectors. in this work, we consider minimally - coupled, massive vector bosons, which can produce a significantly stronger gravitational - wave signal compared to the scalar case. we adopt recently obtained numerical results for the gravitational - wave flux, and astrophysical models of black hole populations that include both isolated black holes and binary merger remnants, to compute and study in detail the stochastic gravitational - wave background emitted by these sources. using a bayesian framework, we search for such a background signal emitted using data from the first and second observing runs of advanced ligo. we find no evidence for such a signal. therefore, the results allow us to constrain minimally coupled vector fields with masses in the range $ 0. 8 \ times10 ^ { - 13 } \ mathrm { ev } \ leq m _ b \ leq 6. 0 \ times10 ^ { - 13 } \ mathrm { ev } $ at 95 % credibility, assuming optimistically that the dimensionless spin distribution for the isolated black hole population is uniform in the range $ [ 0, 1 ] $. with more pessimistic assumptions, a narrower range around $ m _ b \ approx 10 ^ { - 13 } \ mathrm { ev } $ can still be excluded as long as the upper end of the uniform distribution for dimensionless black hole spin is $ \ gtrsim 0. 2 $.
arxiv:2011.06995
we prove a conjecture of roe by constructing unified warped cones that violate the coarse baum - connes conjecture. interestingly, the reason for this is probably not what roe expected, as the obstruction arises in odd rather than even degree.
arxiv:2504.21811
the self - consistent gw { \ gamma } method satisfies the ward - takahashi identity ( i. e., the gauge invariance or the local charge continuity ) for arbitrary energy ( $ \ omega $ ) and momentum ( $ \ bf q $ ) transfers. its self - consistent first - principles treatment of the vertex $ \ gamma = \ gamma _ v $ or $ \ gamma _ w $ is possible to first order in the bare ( $ v $ ) or dynamically - screened ( $ w $ ) coulomb interaction. it is developed within a linearized scheme and combined with the bethe - salpeter equation ( bse ) to accurately calculate photoabsorption spectra ( pas ) and photoemission ( or inverse photoemission ) spectra ( pes ) simultaneously. the method greatly improves the pas of na, na $ _ 3 $, b $ _ 2 $, and c $ _ 2 $ h $ _ 2 $ calculated using the standard one - shot $ g _ 0w _ 0 $ + bse method that results in significantly redshifted pas by 0. 8 - 3. 1 ev, although the pes are well reproduced already in $ g _ 0w _ 0 $.
arxiv:1609.05298
we prove a montel theorem for hilbert space valued functions, and a non - commutative version of this theorem, by composing with unitaries to achieve convergence.
arxiv:1706.05376
we prove the existence of multiple positive radial solutions to the sign - indefinite elliptic boundary blow - up problem \ [ \ left \ { \ begin { array } { ll } \ delta u + \ bigl ( a ^ + ( \ vert x \ vert ) - \ mu a ^ - ( \ vert x \ vert ) \ bigr ) g ( u ) = 0, & \ ; \ vert x \ vert < 1, \ \ u ( x ) \ to \ infty, & \ ; \ vert x \ vert \ to 1, \ end { array } \ right. \ ] where $ g $ is a function superlinear at zero and at infinity, $ a ^ + $ and $ a ^ - $ are the positive / negative part, respectively, of a sign - changing function $ a $ and $ \ mu > 0 $ is a large parameter. in particular, we show how the number of solutions is affected by the nodal behavior of the weight function $ a $. the proof is based on a careful shooting - type argument for the equivalent singular ode problem. as a further application of this technique, the existence of multiple positive radial homoclinic solutions to $ $ \ delta u + \ bigl ( a ^ + ( \ vert x \ vert ) - \ mu a ^ - ( \ vert x \ vert ) \ bigr ) g ( u ) = 0, \ qquad x \ in \ mathbb { r } ^ n, $ $ is also considered.
arxiv:1607.05585
traffic - responsive signal control is a cost - effective and easy - to - implement network management strategy with high potential in improving performance in congested networks with dynamic characteristics. max pressure ( mp ) distributed controller gained significant popularity due to its theoretically proven ability of queue stabilization and throughput maximization under specific assumptions. however, its effectiveness under saturated conditions is questionable, while network - wide application is limited due to high instrumentation cost. perimeter control ( pc ) based on the concept of the macroscopic fundamental diagram ( mfd ) is a state - of - the - art aggregated strategy that regulates exchange flows between regions, in order to maintain maximum regional travel production and prevent over - saturation. yet, the homogeneity assumption is hardly realistic in congested states, thus compromising pc efficiency. in this paper, the effectiveness of network - wide, parallel application of pc and mp embedded in a two - layer control framework is assessed with mesoscopic simulation. aiming at reducing implementation cost of mp without significant performance loss, we propose a method to identify critical nodes for partial mp deployment. a modified version of store - and - forward paradigm incorporating finite queue and spill - back consideration is used to test different configurations of the proposed framework, for a real large - scale network, in moderately and highly congested scenarios. results show that : ( i ) combined control of mp and pc outperforms separate mp and pc applications in both demand scenarios ; ( ii ) mp control in reduced critical node sets leads to similar or even better performance compared to full - network implementation, thus allowing for significant cost reduction ; ( iii ) the proposed control schemes improve system performance even under demand fluctuations of up to 20 % of mean.
arxiv:2210.10453
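the core of the max pressure ( mp ) controller discussed above is simple : at each intersection, serve the phase whose permitted movements have the largest saturation - weighted sum of upstream - minus - downstream queue differences. a minimal sketch of this standard mp rule ( not the paper ' s two - layer pc + mp framework ; link and phase names are hypothetical ) :

```python
def max_pressure_phase(queues, phases, sat_flow=1.0):
    """pick the signal phase with the largest pressure, i.e. the largest
    saturation-weighted sum of upstream-minus-downstream queue lengths
    over the movements the phase serves.

    queues: dict mapping link id -> queue length (vehicles)
    phases: dict mapping phase id -> list of (upstream, downstream) movements
    """
    def pressure(movements):
        # links leaving the network are treated as having zero queue
        return sum(sat_flow * (queues[u] - queues.get(v, 0.0))
                   for u, v in movements)
    return max(phases, key=lambda p: pressure(phases[p]))

queues = {"n": 10, "s": 2, "e": 1, "w": 1}
phases = {"ns": [("n", "out"), ("s", "out")],
          "ew": [("e", "out"), ("w", "out")]}
# the heavily queued north-south phase wins: pressure 12 vs 2
assert max_pressure_phase(queues, phases) == "ns"
```

the throughput - optimality guarantees of mp hold under assumptions ( e. g. infinite queue capacity ) that the paper ' s modified store - and - forward model deliberately relaxes.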
we performed high - throughput density functional theory calculations of optical matrix elements between band edges across a diverse set of non - magnetic two - dimensional monolayers with direct band gaps. materials were ranked as potential optical emitters, leading to the identification of transition - metal nitrogen halides ( zrncl, tinbr, tincl ) and bismuth chalcohalides ( bitecl ) with optical coupling comparable to or exceeding mos $ _ 2 $. despite strong in - plane dipole transitions, most two - dimensional materials underperform bulk semiconductors due to the absence of out - of - plane components. to elucidate interband transitions, we introduced the orbital overlap tensor and established a correlation between anomalous born effective charges and optical coupling, linking charge redistribution to transition strength. we also identified chalcogen - mediated $ d $ - $ d $ transition as a key mechanism enabling optical responses in transition - metal dichalcogenides. we derived an analytical radiative recombination model incorporating multi - valley effects and found that excitonic corrections are essential for accurate lifetime predictions. some direct - gap materials exhibit dark excitons as their lowest - energy states, classifying them as quasi - direct band gap semiconductors, which is critical for tuning excitonic recombination dynamics.
arxiv:2409.18287
the aim of this paper is to present a general algebraic formulation for the decoherence - free subspaces ( dfss ). for this purpose, we initially generalize some results of pauli and artin about semisimple algebras. then we derive orthogonality theorems for algebras analogous to finite groups. in order to build the dfss we consider the tensor product of clifford algebras and left minimal ideals. furthermore, we show that standard applications of group theory in quantum chemistry can be obtained in our formalism. advantages and some perspectives are also discussed.
arxiv:1405.0611
we present a convex - concave reformulation of the reversible markov chain estimation problem and outline an efficient numerical scheme for the solution of the resulting problem based on a primal - dual interior point method for monotone variational inequalities. extensions to situations in which information about the stationary vector is available can also be solved via the convex - concave reformulation. the method can be generalized and applied to the discrete transition matrix reweighting analysis method to perform inference from independent chains with specified couplings between the stationary probabilities. the proposed approach offers a significant speed - up compared to a fixed - point iteration for a number of relevant applications.
arxiv:1603.01640
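for context on the baseline mentioned above : a widely used fixed - point scheme for the reversible transition - matrix mle iterates x _ ij < - ( c _ ij + c _ ji ) / ( c _ i / x _ i + c _ j / x _ j ), with c _ i and x _ i the row sums of the counts and of the current symmetric iterate. a minimal sketch ( a common iteration from the markov - state - model literature, not necessarily the exact baseline the authors benchmark ) :

```python
import numpy as np

def reversible_mle(C, n_iter=500):
    """self-consistent fixed-point iteration for the maximum-likelihood
    reversible transition matrix given a count matrix C.

    iterates x_ij <- (c_ij + c_ji) / (c_i / x_i + c_j / x_j); the
    symmetric fixed point yields t_ij = x_ij / x_i, which satisfies
    detailed balance with stationary vector proportional to x_i.
    """
    C = np.asarray(C, dtype=float)
    c = C.sum(axis=1)
    X = C + C.T  # symmetric starting point
    for _ in range(n_iter):
        x = X.sum(axis=1)
        X = (C + C.T) / (c[:, None] / x[:, None] + c[None, :] / x[None, :])
    x = X.sum(axis=1)
    return X / x[:, None]

T = reversible_mle(np.array([[5.0, 2.0], [3.0, 10.0]]))
# rows of T are stochastic and pi_i t_ij = pi_j t_ji holds
```

because x stays symmetric at every iterate, detailed balance holds throughout ; the slow part is the convergence of the likelihood, which is what the proposed primal - dual interior point method accelerates.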
we report evidence of a fully established galaxy cluster at z = 2. 07, consisting of a ~ 20sigma overdensity of red, compact spheroidal galaxies spatially coinciding with extended x - ray emission detected with xmm - newton. we use vlt vimos and fors2 spectra and deep subaru, vlt and spitzer imaging to estimate the redshift of the structure from a prominent z = 2. 07 spectroscopic redshift spike of emission - line galaxies, concordant with the accurate 12 - band photometric redshifts of the red galaxies. using nicmos and keck ao observations, we find that the red galaxies have elliptical morphologies and compact cores. while they do not form a tight red sequence, their colours are consistent with that of a > 1. 3 gyr population observed at z ~ 2. 1. from an x - ray luminosity of. 2 * 10 ^ 43 erg s ^ - 1 and the stellar mass content of the red galaxy population, we estimate a halo mass of 5. 3 - 8 * 10 ^ 13 msun, comparable to the nearby virgo cluster. these properties imply that this structure could be the most distant, mature cluster known to date and that x - ray luminous, elliptical - dominated clusters are already forming at substantially earlier epochs than previously known.
arxiv:1011.1837
we demonstrate an enhancement of the plane wave expansion method treating two - dimensional photonic crystals by applying fourier factorization with generally elliptic polarization bases. by studying three examples of periodically arranged cylindrical elements, we compare our approach to the classical ho method in which the permittivity function is simply expanded without changing coordinates, and to the normal vector method using a normal - tangential polarization transform. the compared calculations clearly show that our approach yields the best convergence properties owing to the complete continuity of our distribution of polarization bases. the presented methodology enables us to study more general systems such as periodic elements with an arbitrary cross - section or devices such as photonic crystal waveguides.
arxiv:1005.4219
in this note we prove the payne - type conjecture about the behaviour of the nodal set of least energy sign - changing solutions for the equation $ - \ delta _ p u = f ( u ) $ in bounded steiner symmetric domains $ \ omega \ subset \ mathbb { r } ^ n $ under the zero dirichlet boundary conditions. the nonlinearity $ f $ is assumed to be either superlinear or resonant. in the latter case, least energy sign - changing solutions are second eigenfunctions of the zero dirichlet $ p $ - laplacian in $ \ omega $. we show that the nodal set of any least energy sign - changing solution intersects the boundary of $ \ omega $. the proof is based on a moving polarization argument.
arxiv:1707.02816
an optical source that produces single photon pulses on demand has potential applications in linear optics quantum computation, provided that stringent requirements on indistinguishability and collection efficiency of the generated photons are met. we show that these are conflicting requirements for anharmonic emitters that are incoherently pumped via reservoirs. as a model for a coherently pumped single photon source, we consider cavity - assisted spin - flip raman transitions in a single charged quantum dot embedded in a microcavity. we demonstrate that using such a source, arbitrarily high collection efficiency and indistinguishability of the generated photons can be obtained simultaneously with increased cavity coupling. we analyze the role of errors that arise from distinguishability of the single photon pulses in linear optics quantum gates by relating the gate fidelity to the strength of the two - photon interference dip in photon cross - correlation measurements. we find that performing controlled phase operations with error < 1 % requires nano - cavities with purcell factors f _ p > = 40 in the absence of dephasing, without necessitating the strong coupling limit.
arxiv:quant-ph/0308117
analysis of word embedding properties to inform their use in downstream nlp tasks has largely been studied by assessing nearest neighbors. however, geometric properties of the continuous feature space contribute directly to the use of embedding features in downstream models, and are largely unexplored. we consider four properties of word embedding geometry, namely : position relative to the origin, distribution of features in the vector space, global pairwise distances, and local pairwise distances. we define a sequence of transformations to generate new embeddings that expose subsets of these properties to downstream models and evaluate change in task performance to understand the contribution of each property to nlp models. we transform publicly available pretrained embeddings from three popular toolkits ( word2vec, glove, and fasttext ) and evaluate on a variety of intrinsic tasks, which model linguistic information in the vector space, and extrinsic tasks, which use vectors as input to machine learning models. we find that intrinsic evaluations are highly sensitive to absolute position, while extrinsic tasks rely primarily on local similarity. our findings suggest that future embedding models and post - processing techniques should focus primarily on similarity to nearby points in vector space.
arxiv:1904.04866
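two of the geometric properties discussed above are easy to probe with explicit transformations : centering changes position relative to the origin while leaving every pairwise distance intact, and an orthogonal map preserves norms, inner products and distances. a minimal sketch ( illustrative, not the paper ' s exact transformation suite ) :

```python
import numpy as np

def center(E):
    """translate embeddings so their mean sits at the origin: changes
    absolute position but leaves every pairwise distance intact."""
    return E - E.mean(axis=0, keepdims=True)

def random_rotation(E, seed=0):
    """apply a random orthogonal map (qr of a gaussian matrix):
    preserves norms, inner products, and pairwise distances."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((E.shape[1], E.shape[1])))
    return E @ Q

E = np.random.default_rng(1).standard_normal((5, 4))
D = np.linalg.norm(E[:, None] - E[None, :], axis=-1)
Ec = center(E)
# distances survive the translation, while position w.r.t. origin changes
assert np.allclose(np.linalg.norm(Ec[:, None] - Ec[None, :], axis=-1), D)
```

comparing downstream performance before and after such distance - preserving transforms is one way to test the paper ' s claim that extrinsic tasks rely primarily on local similarity rather than absolute position.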
hyperspectral images ( hsis ) provide exceptional spatial and spectral resolution of a scene, crucial for various remote sensing applications. however, the high dimensionality, presence of noise and outliers, and the need for precise labels of hsis present significant challenges to hsis analysis, motivating the development of performant hsi clustering algorithms. this paper introduces a novel unsupervised hsi clustering algorithm, superpixel - based and spatially - regularized diffusion learning ( s2dl ), which addresses these challenges by incorporating rich spatial information encoded in hsis into diffusion geometry - based clustering. s2dl employs the entropy rate superpixel ( ers ) segmentation technique to partition an image into superpixels, then constructs a spatially - regularized diffusion graph using the most representative high - density pixels. this approach reduces computational burden while preserving accuracy. cluster modes, serving as exemplars for underlying cluster structure, are identified as the highest - density pixels farthest in diffusion distance from other highest - density pixels. these modes guide the labeling of the remaining representative pixels from ers superpixels. finally, majority voting is applied to the labels assigned within each superpixel to propagate labels to the rest of the image. this spatial - spectral approach simultaneously simplifies graph construction, reduces computational cost, and improves clustering performance. s2dl ' s performance is illustrated with extensive experiments on three publicly available, real - world hsis : indian pines, salinas, and salinas a. additionally, we apply s2dl to landscape - scale, unsupervised mangrove species mapping in the mai po nature reserve, hong kong, using a gaofen - 5 hsi. the success of s2dl in these diverse numerical experiments indicates its efficacy on a wide range of important unsupervised remote sensing analysis tasks.
arxiv:2312.15447
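the final step of the pipeline described above, propagating labels from the representative pixels to the rest of each superpixel by majority vote, can be sketched as follows. this is an illustrative reading of that one step, not the s2dl reference implementation :

```python
import numpy as np

def majority_vote_fill(superpixels, labels):
    """propagate cluster labels from representative pixels to the rest
    of each superpixel by majority vote.

    superpixels: int array, superpixel id per pixel
    labels: int array, cluster id for labeled pixels and -1 elsewhere
    """
    out = labels.copy()
    for s in np.unique(superpixels):
        mask = superpixels == s
        known = labels[mask]
        known = known[known >= 0]
        if known.size:  # superpixels with no labeled pixel stay untouched
            vals, counts = np.unique(known, return_counts=True)
            out[mask & (labels < 0)] = vals[np.argmax(counts)]
    return out

sp = np.array([0, 0, 0, 1, 1])   # two superpixels
lab = np.array([2, -1, 2, -1, 3])  # -1 marks unlabeled pixels
assert majority_vote_fill(sp, lab).tolist() == [2, 2, 2, 3, 3]
```

restricting the expensive diffusion - graph clustering to representative pixels and finishing with this cheap vote is what keeps the method ' s cost low on large scenes.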
let $ \ { u ( t \,, x ) \ } _ { ( t, x ) \ in \ mathbb { r } _ + \ times \ mathbb { r } } $ be the density of one - dimensional super - brownian motion starting from lebesgue measure. using the laplace functional of super - brownian motion, we prove that as $ n \ to \ infty $, the normalized spatial integral $ n ^ { - 1 / 2 } \ int _ 0 ^ { xn } [ u ( t \,, z ) - 1 ] \ rm { d } z $ converges jointly in $ ( t, x ) $ to brownian sheet in distribution.
arxiv:2111.08423
we prove a smooth transfer statement analogous to jacquet - rallis ' s fundamental lemma, use it to compute a local spherical character appearing in the ichino - ikeda conjecture, and prove a statement on the existence of local newforms for unitary groups as a corollary.
arxiv:2311.17700
we numerically and experimentally investigate the influence of single defects consisting of a missing antidot on the spin configurations in rectangular permalloy antidot lattices. the introduction of such lattice defects leads to the nucleation of complex domain structures after the decay of a saturating magnetic field. micromagnetic simulations yield four typical domain configurations around the defect having distinct energy densities. the existence of the four spin configurations is confirmed by magnetic force microscopy on antidot lattices containing individual defects.
arxiv:1204.1183
over the last few years a number of software and hardware improvements have been implemented to the 32 - m cassegrain radio telescope located near toru \ ' n. the 19 - bit angle encoders have been upgraded to 29 - bit in azimuth and elevation axes. the control system has been substantially improved, in order to account for a number of previously - neglected, astrometric effects that are relevant for milli - degree pointing. in the summer 2015, as a result of maintenance works, the orientation of the secondary mirror has been slightly altered, which resulted in worsening of the pointing precision, much below the nominal telescope capabilities. in preparation for observations at the highest available frequency of 30 - ghz, we use one centimeter receiver array ( ocra ), to take the most accurate pointing data ever collected with the telescope, and we analyze it in order to improve the pointing precision. we introduce a new generalized pointing model that, for the first time, accounts for the rail irregularities, and we show that the telescope can have root mean square pointing accuracy at the level $ { < } 8 " $ and $ { < } 12 " $ in azimuth and elevation respectively. finally, we discuss the implemented pointing improvements in the light of effects that may influence their long - term stability.
arxiv:1707.08793
this paper addresses the oscillatory synchronization problem for multiple uncertain mechanical systems with a virtual leader, and the interaction topology among them is assumed to contain a directed spanning tree. we propose an adaptive control scheme to achieve the goal of oscillatory synchronization. using the similarity decomposition approach, we show that the position and velocity synchronization errors between each mechanical system ( or follower ) and the virtual leader converge to zero. the performance of the proposed adaptive scheme is shown by numerical simulation results.
arxiv:1402.7305
the purpose of this paper is to review some combinatorial ideas behind the mirror symmetry for calabi - yau hypersurfaces and complete intersections in gorenstein toric fano varieties. we suggest as a basic combinatorial object the notion of a gorenstein polytope of index r. a natural combinatorial duality for d - dimensional gorenstein polytopes of index r extends the well - known polar duality for reflexive polytopes ( case r = 1 ). we consider the borisov duality between two nef - partitions as a duality between two gorenstein polytopes p and p ^ * of index r together with selected special ( r - 1 ) - dimensional simplices s in p and s ' in p ^ *. different choices of these simplices suggest an interesting relation to homological mirror symmetry.
arxiv:math/0703456
the logarithmic sigma model describes the interactions between quarks via sigma and pion exchanges. the effective mesonic potential is extended to finite temperature and is numerically calculated using the n - midpoint rule. meson properties such as the phase transition, the sigma and pion masses, and the critical point temperature are examined as functions of temperature. the obtained results are compared with other approaches. we conclude that the calculated effective potential successfully predicts the meson properties.
arxiv:1212.2276
we revisit various results, which have been obtained by the babar and belle collaborations over the last twelve years, concerning symmetry properties of the hamiltonian which governs the time evolution and the decay of neutral b mesons. we find that those measurements, which established cp violation in b meson decay 12 years ago, had as well established t ( time - reversal ) symmetry violation. they also confirmed cpt symmetry in the decay ( t _ cpt = 0 ) and symmetry with respect to time - reversal ( epsilon = 0 ) and to cpt ( delta = 0 ) in the b0 - b0bar oscillation.
arxiv:1312.3770
in a recent paper, fakkousy et al. show that the 3d h\'{e}non-heiles system with hamiltonian $ h = \frac{1}{2}(p_1^2 + p_2^2 + p_3^2) + \frac{1}{2}(a q_1^2 + c q_2^2 + b q_3^2) + (\alpha q_1^2 + \gamma q_2^2) q_3 + \frac{\beta}{3} q_3^3 $ is integrable in the sense of liouville when $\alpha = \gamma$, $\frac{\alpha}{\beta} = 1$, $a = b = c$; or $\alpha = \gamma$, $\frac{\alpha}{\beta} = \frac{1}{6}$, $a = c$, $b$ arbitrary; or $\alpha = \gamma$, $\frac{\alpha}{\beta} = \frac{1}{16}$, $a = c$, $\frac{a}{b} = \frac{1}{16}$ ( and of course, when $\alpha = \gamma = 0$, in which case the hamiltonian is separable ). it is known that the second case remains integrable for $a$, $c$, $b$ arbitrary. using morales-ramis theory, we prove that there are no other cases of integrability for this system.
arxiv:2106.14067
we adopt a scenario in which the galactic thick disk was formed by minor merging between the first generation of the galactic thin disk ( fgtd ) and a dwarf galaxy about 9 gyr ago and thereby investigate chemical and dynamical properties of the galactic thick disk. in this scenario, the dynamical properties of the thick disk have long been influenced both by the mass growth of the second generation of the galactic thin disk ( i. e., the present thin disk ) and by its non-axisymmetric structures. on the other hand, the early star formation history and chemical evolution of the thin disk were influenced by the remaining gas of the thick disk. based on n-body simulations and chemical evolution models, we investigate the radial metallicity gradient, structural and kinematical properties, and detailed chemical abundance patterns of the thick disk. our numerical simulations show that the ancient minor merger event can significantly flatten the original radial metallicity gradient of the fgtd, in particular in the outer part, and can also be responsible for migration of inner metal-rich stars into the outer part ( r > 10 kpc ). the simulations show that the central region of the thick disk can develop a bar due to dynamical effects of a separate bar in the thin disk. the simulated orbital eccentricity distributions in the thick disk for models with higher mass-ratios ( ~ 0.2 ) and lower orbital eccentricities ( ~ 0.5 ) of minor mergers are in good agreement with the corresponding observations. the simulated v _ { phi } - | z | relation of the thick disk in models with low orbital inclination angles of mergers is also in good agreement with the latest observational results. our galactic chemical evolution models can explain both the observed metallicity distribution functions ( mdfs ) and correlations between [ mg / fe ] and [ fe / h ] for the two disks in a self-consistent manner.
arxiv:1105.5864
bug localization techniques for just-in-time ( jit ) compilers are based on analyzing the execution behaviors of the target jit compiler on a set of test programs generated for this purpose; characteristics of these test inputs can significantly impact the accuracy of bug localization. however, current approaches for automatic test program generation do not work well for bug localization in jit compilers. this paper proposes a novel technique for automatic test program generation for jit compiler bug localization that is based on two key insights: ( 1 ) the generated test programs should contain both passing inputs ( which do not trigger the bug ) and failing inputs ( which trigger the bug ); and ( 2 ) the passing inputs should be as similar as possible to the initial seed input, while the failing programs should be as different as possible from it. we use a structural analysis of the seed program to determine which parts of the code should be mutated for each of the passing and failing cases. experiments using a prototype implementation indicate that test inputs generated using our approach yield significantly better bug localization results than existing approaches.
arxiv:2307.08885
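the two insights above suggest an asymmetric mutation strategy: light mutations of bug-unrelated code produce passing inputs close to the seed, while heavy mutations of bug-related code produce failing inputs far from it. a minimal sketch of that idea, where the function names, token representation, and mutation operators are all illustrative assumptions rather than the paper's actual api:

```python
import random

def generate_variants(seed_tokens, bug_related_idx, n_passing=5, n_failing=5):
    """Sketch: produce passing variants near the seed by lightly mutating
    bug-unrelated tokens, and failing variants far from the seed by
    heavily mutating bug-related tokens. Purely illustrative."""
    bug_related = set(bug_related_idx)
    unrelated_idx = [i for i in range(len(seed_tokens)) if i not in bug_related]
    passing, failing = [], []
    for _ in range(n_passing):
        variant = list(seed_tokens)
        # light mutation: perturb a single bug-unrelated token
        i = random.choice(unrelated_idx)
        variant[i] = variant[i] + "_p"
        passing.append(variant)
    for _ in range(n_failing):
        variant = list(seed_tokens)
        # heavy mutation: perturb every bug-related token
        for i in bug_related_idx:
            variant[i] = variant[i] + "_f"
        failing.append(variant)
    return passing, failing
```

in the paper the "bug-related" parts are identified by a structural analysis of the seed program; here they are simply passed in as indices.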
the goldstone mode in the ordered phase of itinerant helimagnets, such as mnsi or fege, is determined and shown to have a strongly anisotropic dispersion relation. the softness of this mode is, in a well - defined sense, in between that of ferromagnetic and antiferromagnetic magnons, respectively. it is shown that this soft mode leads to nonanalytic corrections to fermi - liquid behavior, with a t - contribution to the specific heat coefficient, and a t ^ { 5 / 2 } - contribution to the resistivity. the quasi - particle inelastic lifetime shows anisotropic behavior in momentum space.
arxiv:cond-mat/0506770
this talk gives an overview, aimed at non-experts, of the recent progress on the studies of technicolor models on the lattice. phenomenologically successful technicolor models require a walking coupling; thus, an emphasis is put on the determination of the beta-function of various models. as a case study we consider su ( 2 ) gauge field theory with two adjoint representation fermions, the so-called minimal walking technicolor theory.
arxiv:1101.5875
the different scenarios of spontaneous breaking of d-parity have been studied in both the non-supersymmetric and supersymmetric versions of the left-right symmetric models ( lrsm ). we explore the possibility of a tev scale $ su ( 2 ) _ r $ breaking scale $ m _ r $, and hence tev scale right handed neutrinos, from both the minimization of the scalar potential and the coupling constant unification point of view. we show that although minimization of the scalar potential allows the possibility of a tev scale $ m _ r $ and tiny neutrino masses in lrsm with spontaneous d-parity breaking, gauge coupling unification at a high scale $ \sim 10 ^ { 16 } $ gev does not favour a tev scale symmetry breaking except in the supersymmetric left-right ( susylr ) model with higgs doublet and bidoublet. the phenomenology of neutrino mass is also discussed.
arxiv:1006.2245
we present 353 ghz full - sky maps of the polarization fraction $ p $, angle $ \ psi $, and dispersion of angles $ s $ of galactic dust thermal emission produced from the 2018 release of planck data. we confirm that the mean and maximum of $ p $ decrease with increasing $ n _ h $. the uncertainty on the maximum polarization fraction, $ p _ \ mathrm { max } = 22. 0 $ % at 80 arcmin resolution, is dominated by the uncertainty on the zero level in total intensity. the observed inverse behaviour between $ p $ and $ s $ is interpreted with models of the polarized sky that include effects from only the topology of the turbulent galactic magnetic field. thus, the statistical properties of $ p $, $ \ psi $, and $ s $ mostly reflect the structure of the magnetic field. nevertheless, we search for potential signatures of varying grain alignment and dust properties. first, we analyse the product map $ s \ times p $, looking for residual trends. while $ p $ decreases by a factor of 3 - - 4 between $ n _ h = 10 ^ { 20 } $ cm $ ^ { - 2 } $ and $ n _ h = 2 \ times 10 ^ { 22 } $ cm $ ^ { - 2 } $, $ s \ times p $ decreases by only about 25 %, a systematic trend observed in both the diffuse ism and molecular clouds. second, we find no systematic trend of $ s \ times p $ with the dust temperature, even though in the diffuse ism lines of sight with high $ p $ and low $ s $ tend to have colder dust. we also compare planck data with starlight polarization in the visible at high latitudes. the agreement in polarization angles is remarkable. two polarization emission - to - extinction ratios that characterize dust optical properties depend only weakly on $ n _ h $ and converge towards the values previously determined for translucent lines of sight. we determine an upper limit for the polarization fraction in extinction of 13 %, compatible with the $ p _ \ mathrm { max } $ observed in emission. these results provide strong constraints for models of galactic dust in diffuse gas.
arxiv:1807.06212
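the polarization fraction $ p $ and angle $ \psi $ discussed above follow from the stokes parameters via the standard definitions $ p = \sqrt{q^2 + u^2} / i $ and $ \psi = \frac{1}{2} \arctan ( u / q ) $. a minimal sketch for a single line of sight; note that the sign and rotation convention for $ \psi $ ( iau vs. healpix / planck ) is an assumption here and the two conventions differ by the sign of $ u $:

```python
import math

def polarization_quantities(I, Q, U):
    """Polarization fraction p and angle psi (radians) from Stokes I, Q, U.
    Convention caveat: IAU and HEALPix/Planck differ by the sign of U."""
    p = math.sqrt(Q**2 + U**2) / I
    # two-argument arctangent keeps the correct quadrant; psi is in (-pi/2, pi/2]
    psi = 0.5 * math.atan2(U, Q)
    return p, psi

p, psi = polarization_quantities(10.0, 1.0, 0.0)
```

the dispersion of angles $ s $ used in the paper is then built from differences of $ \psi $ between a sky pixel and an annulus around it, which requires full maps rather than a single line of sight.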
the las vergnas' strong map conjecture states that any strong map of oriented matroids $ f : \mathcal{m}_1 \rightarrow \mathcal{m}_2 $ can be factored into extensions and contractions. the conjecture is known to be false due to a construction by richter-gebert, who found a non-factorizable strong map $ f : \mathcal{m}_1 \rightarrow \mathcal{m}_2 $; however, in his example $ \mathcal{m}_1 $ is not realizable. the problem of whether there exists a non-factorizable strong map between realizable oriented matroids remained open. in this paper we provide a counterexample to the strong map conjecture for realizable oriented matroids: a strong map $ f : \mathcal{m}_1 \rightarrow \mathcal{m}_2 $ where $ \mathcal{m}_1 $ is an alternating oriented matroid of rank $ 4 $ and $ f $ has corank $ 2 $. we prove it is not factorizable by showing that there is no uniform oriented matroid $ \mathcal{m}^{\prime} $ of rank $ 3 $ such that $ \mathcal{m}_1 \rightarrow \mathcal{m}^{\prime} \rightarrow \mathcal{m}_2 $.
arxiv:1803.06825
the inner crust of neutron stars consists of nuclei of various shapes immersed in a neutron gas and stabilized by the coulomb interaction in the form of a crystal lattice. the scattering of neutrons on nuclear inhomogeneities leads to a quantum correction to the total energy of the system. this correction resembles the casimir energy and turns out to have a large influence on the structure of the crust.
arxiv:nucl-th/0112002
a substantial number of super-earths have been discovered, and atmospheres of transiting super-earths have also been observed by transmission spectroscopy. several lines of observational evidence indicate that most super-earths do not possess massive h $ _ 2 $ / he atmospheres. however, the accretion and retention of less massive atmospheres on super-earths challenge planet formation theory. we consider the following three mechanisms: ( i ) envelope heating by pebble accretion, ( ii ) mass loss during giant impacts, and ( iii ) atmospheric loss by stellar x-ray and euv photoevaporation. we investigate whether these mechanisms influence the amount of the atmospheres that form around super-earths. we develop a code combining an n-body simulation of pebble-driven planetary formation and an atmospheric evolution simulation. we demonstrate that the observed orbital properties of super-earths are well reproduced by the results of our simulations. however, ( i ) heating by pebble accretion ceases prior to disk dispersal, ( ii ) the frequency of giant impact events is too low to sculpt massive atmospheres, and ( iii ) many super-earths having h $ _ 2 $ / he atmospheres of $ \gtrsim 10 $ wt % survive against stellar irradiation for 1 gyr. therefore, it is likely that other mechanisms such as suppression of gas accretion are required to explain the less massive atmospheres ( $ \lesssim 10 $ wt % ) of super-earths.
arxiv:2003.05934
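the x-ray / euv photoevaporation of mechanism ( iii ) is commonly modeled with the energy-limited approximation, in which a fixed fraction of the incident xuv flux lifts gas out of the planet's potential well. a minimal sketch of that approximation; the efficiency value and the example planet parameters are typical assumed choices, not numbers from the paper:

```python
import math

G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

def energy_limited_mass_loss(F_xuv, R_p, M_p, efficiency=0.1):
    """Energy-limited XUV mass-loss rate in kg/s:
        Mdot = efficiency * pi * F_xuv * R_p**3 / (G * M_p)
    F_xuv in W/m^2, R_p in m, M_p in kg. The efficiency (heating fraction)
    is an assumed typical value of order 0.1."""
    return efficiency * math.pi * F_xuv * R_p**3 / (G * M_p)

# illustrative: a 5 Earth-mass, 2 Earth-radius planet under 100 erg cm^-2 s^-1 of XUV
F_xuv = 100 * 1e-3  # erg cm^-2 s^-1 -> W m^-2
mdot = energy_limited_mass_loss(F_xuv, 2 * 6.371e6, 5 * 5.972e24)
```

integrating such a rate over the evolving stellar xuv luminosity for ~1 gyr is what determines whether a given h $ _ 2 $ / he envelope survives.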
in this paper we study an analogue of the classical riemann-hilbert problem for the classes of difference and $ q $-difference systems. birkhoff's existence theorem is generalized in this paper.
arxiv:1702.08323