There exists a connection between the creation of toroidal moments (TM) and the breaking of the one-cell relativistic crystalline symmetry (RCS) associated with any given crystal in which non-trivial magnetoelectric (ME) coupling effects exist. Indeed, in this kind of crystal, any interaction between a charge carrier and an elementary magnetic cell can break the RCS of that cell by varying, in the simplest case, the continuous parameters defining the initial RCS. This breaking can be associated with a change from the initial Galilean proper frame of any given carrier to an "effective" one, in which the RCS of the interacting cell is preserved. We may speak of a kind of "inverse" kineto-magnetoelectric effect. The magnetic groups compatible with such a process have been computed. Moreover, one can notice that the TMs break the P and T symmetries but not the combined PT symmetry, as in anyon theories. This breaking creates so-called Nambu-Goldstone bosons generating "effective" magnetic monopoles. These consequences allow us to claim, first, that anyons are charge carriers associated with "effective" magnetic monopoles, both carrying TMs, and second, that ME coupling deserves serious consideration in the theory of superconductors.
arxiv:cond-mat/0208004
The population of known extrasolar planets includes giant and terrestrial planets that closely orbit their host star. Such planets experience significant tidal distortions that can force the planet into synchronous rotation. The combined effects of tidal deformation and centripetal acceleration induce significant asphericity in the shape of these planets, compared to the mild oblateness of Earth, with maximum gravitational acceleration at the poles. Here we show that this latitudinal variation in gravitational acceleration is relevant for modeling the climate of oblate planets, including Jovian planets within the Solar System, closely orbiting hot Jupiters, and planets within the habitable zone of white dwarfs. We compare first- and third-order approximations for gravitational acceleration on an oblate spheroid and calculate the geostrophic wind that would result from this asphericity on a range of Solar System planets and exoplanets. Third-order variations in gravitational acceleration are negligible for Earth but become significant for Jupiter, Saturn, and Jovian exoplanets. This latitudinal variation in gravitational acceleration can be measured remotely, and the formalism presented here can be implemented for use in general circulation climate modeling studies of exoplanet atmospheres.
arxiv:1608.02536
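The latitudinal dependence described above is easy to see in a back-of-the-envelope model. The sketch below keeps only point-mass gravity and the radial part of the centrifugal acceleration on an oblate spheroid, ignoring the quadrupole terms that the paper's first- and third-order expansions capture; the Jupiter-like numbers are illustrative.

```python
import numpy as np

# Minimal sketch: latitudinal variation of effective gravity on a rigidly
# rotating oblate spheroid, keeping only the point-mass (monopole) gravity
# and the radial part of the centrifugal acceleration. The paper's first-
# and third-order expansions also include the quadrupole terms omitted here.

G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

def effective_gravity(lat, M, a, b, omega):
    """lat: latitude [rad]; M: mass [kg]; a, b: equatorial/polar radii [m];
    omega: spin rate [rad/s]. Returns approximate g [m/s^2]."""
    # radius of an oblate spheroid at planetocentric latitude `lat`
    r = a * b / np.sqrt((b * np.cos(lat))**2 + (a * np.sin(lat))**2)
    g_newton = G * M / r**2                        # monopole gravity
    g_centrifugal = omega**2 * r * np.cos(lat)**2  # radial centrifugal part
    return g_newton - g_centrifugal

# Jupiter-like illustrative numbers
lats = np.radians([0, 30, 60, 90])
g = effective_gravity(lats, M=1.898e27, a=7.1492e7, b=6.6854e7,
                      omega=2 * np.pi / (9.925 * 3600))
for phi, gi in zip([0, 30, 60, 90], g):
    print(f"lat {phi:2d} deg: g ~ {gi:5.2f} m/s^2")
```

Even this crude model reproduces the qualitative point of the abstract: effective gravity is several m/s^2 larger at Jupiter's poles than at its equator.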
Winner-take-all competitions in forecasting and machine learning suffer from distorted incentives. Witkowski et al. (2018) identified this problem and proposed ELF, a truthful mechanism to select a winner. We show that, from a pool of $n$ forecasters, ELF requires $\Theta(n \log n)$ events or test data points to select a near-optimal forecaster with high probability. We then show that standard online learning algorithms select an $\epsilon$-optimal forecaster using only $O(\log(n)/\epsilon^2)$ events, by way of a strong approximate-truthfulness guarantee. This bound matches the best possible even in the nonstrategic setting. We then apply these mechanisms to obtain the first no-regret guarantee for non-myopic strategic experts.
arxiv:2102.08358
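The "standard online learning algorithms" invoked above include Hedge (multiplicative weights). A minimal sketch, assuming 0/1 losses on simulated events: it illustrates how weight concentrates on a near-optimal forecaster after roughly $\log(n)/\epsilon^2$ events, and is not the ELF mechanism itself; the forecaster accuracies are invented.

```python
import numpy as np

# Minimal Hedge / multiplicative-weights sketch over n forecasters.
# After T ~ O(log(n)/eps^2) rounds the average regret is O(eps),
# so the weight distribution identifies a near-optimal forecaster.

rng = np.random.default_rng(0)
n, T = 50, 2000
true_skill = rng.uniform(0.5, 0.9, size=n)   # illustrative accuracies

eta = np.sqrt(8 * np.log(n) / T)             # standard learning rate
log_w = np.zeros(n)                          # log-weights for stability

for t in range(T):
    correct = rng.random(n) < true_skill     # who got event t right
    loss = 1.0 - correct                     # 0/1 loss per forecaster
    log_w -= eta * loss                      # multiplicative update

weights = np.exp(log_w - log_w.max())
weights /= weights.sum()
print("best true forecaster:", true_skill.argmax())
print("forecaster selected by Hedge:", weights.argmax())
```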
We define functorial isomorphisms of parallel transport along étale paths for a class of principal $G$-bundles on a $p$-adic curve. Here $G$ is a connected reductive algebraic group of finite presentation, and the principal bundles considered are those with potentially strongly semistable reduction of degree zero. The constructed isomorphisms yield continuous functors from the étale fundamental groupoid of the given curve to the category of topological spaces with a simply transitive continuous right $G(\mathbb{C}_p)$-action. This generalizes a construction in the case of vector bundles on a $p$-adic curve by Deninger and Werner. It may be viewed as a partial $p$-adic analogue of the classical theory by Ramanathan of principal bundles on compact Riemann surfaces, which generalizes the classical Narasimhan-Seshadri theory of vector bundles on compact Riemann surfaces.
arxiv:0706.0925
We introduce and study a new scheme to construct relativistic observables by post-processing light cone data. This construction is based on a novel approach, LC-metric, which takes general light cone or snapshot output generated by arbitrary N-body simulations or emulations and solves the linearized Einstein equations to determine the spacetime metric on the light cone. We find that this scheme is able to determine the metric to high precision, and subsequently generate accurate mock cosmological observations sensitive to effects such as post-Born lensing and nonlinear ISW contributions. By comparing to conventional methods of quantifying these general relativistic effects, we show that this scheme is able to accurately construct the lensing convergence signal. We also find that the accuracy of this method in quantifying the ISW effects in the highly nonlinear regime outperforms conventional methods by an order of magnitude. This scheme opens a new path for exploring and modeling higher-order and nonlinear general relativistic contributions to cosmological observables, including mock observations of gravitational lensing and the moving lens and Rees-Sciama effects.
arxiv:2110.00893
The strategy of using CUDA-compatible GPUs as a parallel computation solution to improve the performance of programs has been increasingly widely adopted in the two years since the CUDA platform was released. Its benefits extend from the graphics domain to many other computationally intensive domains. Tiling, as the most general and important technique, is widely used for optimization in CUDA programs. New GPU models with better compute capabilities have, however, been released, and new versions of the CUDA SDK have been released as well. These updated compute capabilities must be considered when optimizing with the tiling technique. In this paper, we implement image interpolation algorithms as a test case to discuss how different tiling strategies affect a program's performance. We especially focus on how different GPU models affect the effectiveness of tiling, by executing the same program on testing platforms equipped with two different GPU models. The results demonstrate that a tiling strategy optimized for one GPU model is not always a good solution when executed on other GPU models, especially when external conditions change.
arxiv:1001.1718
Recent work suggests that the mass-loss rate of the primary star (η_A) in the massive colliding wind binary Eta Carinae dropped by a factor of 2-3 between 1999 and 2010. We present results from large-domain (r = 1545 au) and small-domain (r = 155 au) 3D smoothed particle hydrodynamics (SPH) simulations of η Car's colliding winds for three η_A mass-loss rates (2.4, 4.8, and 8.5 × 10^-4 M_sun/yr), investigating the effects on the dynamics of the binary wind-wind collision (WWC). These simulations include orbital motion, optically thin radiative cooling, and radiative forces. We find that η_A's mass-loss rate greatly affects the time-dependent hydrodynamics at all spatial scales investigated. The simulations also show that the post-shock wind of the companion star (η_B) switches from the adiabatic to the radiative-cooling regime during periastron passage. The SPH simulations, together with 1D radiative transfer models of η_A's spectra, reveal that a factor of 2 or more drop in η_A's mass-loss rate should lead to substantial changes in numerous multiwavelength observables. Recent observations are not fully consistent with the model predictions, indicating that any drop in η_A's mass-loss rate was likely by a factor < 2 and occurred after 2004. We speculate that most of the recently observed changes in η Car are due to a small increase in the WWC opening angle that produces significant effects because our line of sight to the system lies close to the dense walls of the WWC zone. A modest decrease in η_A's mass-loss rate may be responsible, but changes in the wind/stellar parameters of η_B cannot yet be fully ruled out. We suggest observations during η Car's next periastron in 2014 to further test for decreases in η_A's mass-loss rate. If η_A's mass-loss rate is declining and continues to do so, the 2014 X-ray minimum should be even shorter than that of 2009.
arxiv:1310.0487
We assess the effectiveness of the Jeans-Anisotropic-MGE (JAM) technique with a state-of-the-art cosmological hydrodynamic simulation, the Illustris project. We perform JAM modelling on 1413 simulated galaxies with stellar mass $M_* > 10^{10} M_\odot$ and construct an axisymmetric dynamical model for each galaxy. Combined with a Markov chain Monte Carlo (MCMC) simulation, we recover the projected root-mean-square velocity ($V_\mathrm{rms}$) field of the stellar component, and investigate constraints on the stellar mass-to-light ratio $M_*/L$ and the fraction of dark matter $f_\mathrm{DM}$ within 2.5 effective radii ($R_e$). We find that the enclosed total mass within 2.5 $R_e$ is well constrained, to within 10%. However, there is a degeneracy between the dark matter and stellar components, with correspondingly larger individual errors. The 1σ scatter in the recovered $M_*/L$ is 30-40% of the true value. The accuracy of the recovery of $M_*/L$ depends on the triaxial shape of a galaxy: there is no significant bias for oblate galaxies, while for prolate galaxies the JAM-recovered stellar mass is on average 18% higher than the input values. We also find that higher image resolutions alleviate the dark matter-stellar mass degeneracy and yield systematically better parameter recovery.
arxiv:1511.00789
We propose a novel dynamical method for beating decoherence and dissipation in open quantum systems. We demonstrate the possibility of filtering out the effects of unwanted (not necessarily known) system-environment interactions and show that the noise-suppression procedure can be combined with the capability of retaining control over the effective dynamical evolution of the open quantum system. Implications for quantum information processing are discussed.
arxiv:quant-ph/9809071
The standard definition of the electromagnetic radius of a charged particle (in particular the proton) is ambiguous once electromagnetic corrections are considered. We argue that a natural definition can be given within an effective field theory framework in terms of a matching coefficient. The definition of the neutron radius is also discussed. We elaborate on the effective field theory relevant for hydrogen and muonic hydrogen, especially the latter. We compute the hadronic corrections to the Lamb shift (for the polarizability effects, only with logarithmic accuracy) within heavy baryon effective theory. We find that they diverge in the inverse of the pion mass in the chiral limit.
arxiv:hep-ph/0412142
Point clouds are an important type of geometric data structure. Due to their irregular format, most researchers transform such data into regular 3D voxel grids or collections of images. This, however, renders the data unnecessarily voluminous and causes issues. In this paper, we design a novel type of neural network that directly consumes point clouds and well respects the permutation invariance of points in the input. Our network, named PointNet, provides a unified architecture for applications ranging from object classification and part segmentation to scene semantic parsing. Though simple, PointNet is highly efficient and effective. Empirically, it shows strong performance on par with or even better than the state of the art. Theoretically, we provide an analysis towards understanding what the network has learnt and why the network is robust with respect to input perturbation and corruption.
arxiv:1612.00593
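The key architectural idea, a shared per-point MLP followed by a symmetric aggregation, can be shown in a few lines. A minimal NumPy sketch with random, untrained weights; the full PointNet additionally uses input/feature transform networks and task-specific heads.

```python
import numpy as np

# Core PointNet idea in miniature: apply the *same* MLP to every point,
# then aggregate with a symmetric function (max pooling). Any permutation
# of the input points then yields the identical global feature vector.

rng = np.random.default_rng(1)

def shared_mlp(points, W1, b1, W2, b2):
    h = np.maximum(points @ W1 + b1, 0.0)  # per-point hidden layer (ReLU)
    return np.maximum(h @ W2 + b2, 0.0)    # per-point feature

n_points, d_in, d_hid, d_feat = 128, 3, 64, 256
W1, b1 = rng.normal(size=(d_in, d_hid)), np.zeros(d_hid)
W2, b2 = rng.normal(size=(d_hid, d_feat)), np.zeros(d_feat)

cloud = rng.normal(size=(n_points, d_in))             # toy point cloud
feat = shared_mlp(cloud, W1, b1, W2, b2).max(axis=0)  # symmetric pooling

perm = rng.permutation(n_points)
feat_perm = shared_mlp(cloud[perm], W1, b1, W2, b2).max(axis=0)
print("permutation invariant:", np.allclose(feat, feat_perm))  # True
```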
The multi-dimensional nature of user experience warrants rigorous assessment of the interactive experience in systems. User experience assessments are based on product evaluations and subsequent analysis of the collected data using quantitative and qualitative techniques. The quality of user experience assessments is dependent on the effectiveness of the techniques deployed. This paper presents the results of a quantitative analysis of desirability aspects of the user experience in a comparative product evaluation study. The data collection was conducted using the 118-item Microsoft Product Reaction Cards (PRC) tool, followed by data analysis based on the Surface Measure of Overall Performance (SMOP) approach. The results of this study suggest that the incorporation of SMOP as an approach for PRC data analysis derives conclusive evidence of desirability in user experience. The significance of the paper is that it presents a novel analysis method incorporating Product Reaction Cards and the Surface Measure of Overall Performance approach for effective quantitative analysis, which can be used in academic research and industrial practice.
arxiv:1606.03544
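For readers unfamiliar with SMOP: it is commonly computed as the area of the polygon traced by the k mean ratings on a radar chart. The sketch below assumes that formulation (the paper's exact variant may differ); the rating vectors are invented for illustration.

```python
import numpy as np

# Sketch of a Surface Measure of Overall Performance (SMOP) style score:
# place the k mean ratings on equally spaced radar-chart axes and compute
# the area of the resulting polygon. Larger area = more desirable profile.

def smop(ratings):
    v = np.asarray(ratings, dtype=float)
    k = len(v)
    # polygon with vertices at radius v_i, angle 2*pi*i/k: sum of triangles
    return 0.5 * np.sin(2 * np.pi / k) * np.sum(v * np.roll(v, -1))

product_a = [4.2, 3.8, 4.5, 3.9, 4.1]  # mean scores on 5 PRC dimensions
product_b = [3.1, 4.4, 3.3, 4.0, 3.5]
print(f"SMOP A = {smop(product_a):.2f}, SMOP B = {smop(product_b):.2f}")
```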
Electromagnetic (EM) radiation off strongly interacting matter created in high-energy heavy-ion collisions (HICs) encodes information on the high-temperature phases of nuclear matter. Microscopic calculations of thermal EM emission rates are usually rather involved and not readily accessible for broad applications in models of the fireball evolution, which are required to compare to experimental data. An accurate and universal parametrization of the microscopic calculations is thus key to honing the theory behind the EM spectra. Here we provide such a parametrization for photon emission rates from hadronic matter, including the contributions from in-medium rho mesons (which incorporate effects from anti-/baryons), as well as bremsstrahlung from pi-pi scattering. Individual parametrizations for each contribution are numerically determined through nested fitting functions for photon energies from 0.2 to 5 GeV in chemically equilibrated matter at temperatures of 100-180 MeV and baryon chemical potentials of 0-400 MeV. Special care is taken to extend the parametrizations to chemical off-equilibrium as encountered in HICs after chemical freeze-out. This provides a functional description of thermal photon rates within a 20% variation of the microscopically calculated values.
arxiv:1411.7012
It is anticipated that the gravitational radiation detected by future gravitational wave (GW) detectors from binary neutron star (NS) mergers can probe the high-density equation of state (EOS). We perform the first simulations of binary NS mergers which adopt various parametrizations of the quark-hadron crossover (QHC) EOS. These are constructed from combinations of a hadronic EOS ($n_b < 2 n_0$) and a quark-matter EOS ($n_b \gtrsim 5 n_0$), where $n_b$ and $n_0$ are the baryon number density and the nuclear saturation density, respectively. At the crossover densities ($2 n_0 < n_b < 5 n_0$) the QHC EOSs continuously soften, while remaining stiffer than hadronic and first-order phase transition EOSs, achieving the stiffness of strongly correlated quark matter. This enhanced stiffness leads to significantly longer lifetimes of the postmerger NS than for a purely hadronic EOS. We find a dual nature of these EOSs, such that their maximum chirp GW frequencies $f_\mathrm{max}$ fall into the category of a soft EOS, while the dominant peak frequencies ($f_\mathrm{peak}$) of the postmerger stage fall in between those of soft and stiff hadronic EOSs. An observation of this kind of dual nature in the characteristic GW frequencies will provide crucial evidence for the existence of strongly interacting quark matter at the crossover densities for QCD.
arxiv:2203.05461
The ability to detect anomalies in time series is considered highly valuable in numerous application domains. The sequential nature of time series objects is responsible for additional feature complexity, ultimately requiring specialized approaches to solve the task. Essential characteristics of time series, situated outside the time domain, are often difficult to capture with state-of-the-art anomaly detection methods when no transformations have been applied to the time series. Inspired by the success of deep learning methods in computer vision, several studies have proposed transforming time series into image-like representations, used as inputs for deep learning models, and have reported very promising results in classification tasks. In this paper, we first review the signal-to-image encoding approaches found in the literature. Second, we propose modifications to some of their original formulations to make them more robust to the variability in large datasets. Third, we compare them on the basis of a common unsupervised task to demonstrate how the choice of the encoding can impact the results when used in the same deep learning architecture. We thus provide a comparison of six encoding algorithms, with and without the proposed modifications. The selected encoding methods are the Gramian angular field, Markov transition field, recurrence plot, grey scale encoding, spectrogram, and scalogram. We also compare the results achieved with the raw signal used as input for another deep learning model. We demonstrate that some encodings have a competitive advantage and may be worth considering within a deep learning framework. The comparison is performed on a dataset collected and released by Airbus SAS, containing highly complex vibration measurements from real helicopter flight tests. The different encodings provide competitive results for anomaly detection.
arxiv:2005.07031
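As an illustration of the encodings listed above, here is a minimal sketch of one of them, the Gramian angular summation field; the other encodings follow the same rescale-then-transform pattern. The test signal is synthetic.

```python
import numpy as np

# Gramian Angular Summation Field (GASF): rescale the series to [-1, 1],
# interpret each value as an angle phi = arccos(x), and build the matrix
# G[i, j] = cos(phi_i + phi_j). The 1-D signal becomes an image-like 2-D
# representation suitable for convolutional models.

def gasf(series):
    x = np.asarray(series, dtype=float)
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1  # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1.0, 1.0))
    return np.cos(phi[:, None] + phi[None, :])

t = np.linspace(0, 4 * np.pi, 64)
noise = 0.1 * np.random.default_rng(0).normal(size=t.size)
image = gasf(np.sin(t) + noise)
print(image.shape)  # (64, 64): ready to feed into a 2-D CNN
```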
In this article, we present a novel scheme for segmenting the image boundary (with the background) in optoacoustic small-animal in vivo imaging systems. The method utilizes a multiscale edge detection algorithm to generate a binary edge map. A scale-dependent morphological operation is employed to clean spurious edges. Thereafter, an ellipse is fitted to the edge map through constrained parametric transformations and iterative goodness-of-fit calculations. The method delimits the tissue edges through the curve-fitting model, which has shown high levels of accuracy. This method thus enables segmentation of optoacoustic images with minimal human intervention, by eliminating the need for scale selection in multiscale processing and seed point determination for contour mapping.
arxiv:1506.03124
In this paper, some general properties of Shannon information measures are investigated over sets of probability distributions with restricted marginals. Certain optimization problems associated with these functionals are shown to be NP-hard, and their special cases are found to be essentially information-theoretic restatements of well-known computational problems, such as subset sum and 3-partition. The notion of a minimum entropy coupling is introduced and its relevance is demonstrated in information-theoretic, computational, and statistical contexts. Finally, a family of pseudometrics (on the space of discrete probability distributions) defined by these couplings is studied, in particular their relation to the total variation distance, and a new characterization of the conditional entropy is given.
arxiv:1303.3235
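Since exact minimum entropy coupling is computationally hard, as the abstract notes, a common way to build intuition is a greedy heuristic that repeatedly matches the largest remaining masses. The sketch below is that heuristic, not the paper's construction; it produces a valid coupling whose entropy is often close to, but not guaranteed to be, the minimum.

```python
import numpy as np

# Greedy heuristic for a low-entropy coupling of marginals p and q:
# repeatedly pick the largest remaining masses and move min(p_i, q_j)
# onto the joint cell (i, j).

def greedy_coupling(p, q):
    p, q = np.array(p, float), np.array(q, float)
    M = np.zeros((len(p), len(q)))
    while p.max() > 1e-12 and q.max() > 1e-12:
        i, j = p.argmax(), q.argmax()
        m = min(p[i], q[j])
        M[i, j] += m
        p[i] -= m
        q[j] -= m
    return M

def entropy_bits(M):
    m = M[M > 0]
    return -(m * np.log2(m)).sum()

M = greedy_coupling([0.5, 0.3, 0.2], [0.6, 0.4])
print(M)
print("marginals ok:", np.allclose(M.sum(1), [0.5, 0.3, 0.2]),
      np.allclose(M.sum(0), [0.6, 0.4]))
print("H(coupling) =", entropy_bits(M), "bits")
```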
The formation of correlations due to collisions in an interacting nucleonic system is investigated shortly after a disturbance. Results from one-time kinetic equations are compared with the Kadanoff-Baym two-time equation, with collisions included in the second-order Born approximation. Reasonable agreement is found for a proposed approximation of the memory effects by a finite duration of collisions. The formation of correlations and the build-up time are calculated analytically in the high-temperature and low-temperature limits. This translates into a time-dependent increase of the effective temperature on time scales which interfere with standard fireball scenarios of heavy-ion collisions. The consequences of the formation of correlations for two-particle interferometry are investigated, and it is found that lifetimes extracted in the standard way should be corrected downwards.
arxiv:nucl-th/9807046
This paper is concerned with improving the empirical convergence speed of block-coordinate descent algorithms for approximate nonnegative tensor factorization (NTF). We propose an extrapolation strategy in between block updates, referred to as heuristic extrapolation with restarts (HER). HER significantly accelerates the empirical convergence speed of most existing block-coordinate algorithms for dense NTF, in particular for challenging computational scenarios, while requiring a negligible additional computational budget.
arxiv:2001.04321
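The extrapolation-with-restart pattern can be sketched generically. The toy below applies it to a two-block nonnegative matrix (rather than tensor) factorization with projected gradient block updates; the fixed β and the halving-on-restart schedule are simplifications of the paper's HER strategy, and all sizes are illustrative.

```python
import numpy as np

# Sketch of heuristic extrapolation between block updates: after updating
# a block, form an extrapolated point y = x_new + beta * (x_new - x_old)
# and feed *that* into the next block update; if the objective worsens,
# restart from the last plain iterate and shrink beta.
# Toy problem: X ~ W @ H with W, H >= 0, alternating projected gradient.

rng = np.random.default_rng(0)
m, n, r = 60, 50, 5
X = rng.random((m, r)) @ rng.random((r, n))  # exactly factorable toy data
W, H = rng.random((m, r)), rng.random((r, n))
Wy, Hy = W.copy(), H.copy()                  # extrapolated sequences
beta, obj_prev = 0.5, np.inf

def pgrad_W(W, H):  # one projected gradient step on W (step = 1/Lipschitz)
    G = (W @ H - X) @ H.T
    return np.maximum(W - G / np.linalg.norm(H @ H.T, 2), 0.0)

def pgrad_H(W, H):  # one projected gradient step on H
    G = W.T @ (W @ H - X)
    return np.maximum(H - G / np.linalg.norm(W.T @ W, 2), 0.0)

for it in range(200):
    W_new = pgrad_W(Wy, Hy)
    Wy = W_new + beta * (W_new - W)          # extrapolate between blocks
    H_new = pgrad_H(Wy, Hy)
    Hy = H_new + beta * (H_new - H)
    obj = np.linalg.norm(X - W_new @ H_new)
    if obj > obj_prev:                       # restart heuristic
        Wy, Hy, beta = W_new, H_new, beta * 0.5
    W, H, obj_prev = W_new, H_new, obj

print(f"final relative error: {obj_prev / np.linalg.norm(X):.2e}")
```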
We report the optical polarization of a gamma-ray burst (GRB) afterglow, obtained 203 seconds after the initial burst of gamma rays from GRB 060418, using a ring polarimeter on the robotic Liverpool Telescope. Our robust (2σ) upper limit on the percentage of polarization, less than 8%, coincides with the fireball deceleration time at the onset of the afterglow. The combination of the rate of decay of the optical brightness and the low polarization at this critical time constrains standard models of GRB ejecta, ruling out the presence of a large-scale ordered magnetic field in the emitting region.
arxiv:astro-ph/0703654
We investigate the stability of theories in which Lorentz invariance is spontaneously broken by fixed-norm vector "aether" fields. Models with generic kinetic terms are plagued either by ghosts or by tachyons, and are therefore physically unacceptable. There are precisely three kinetic terms that are not manifestly unstable: a sigma model $(\partial_\mu A_\nu)^2$, the Maxwell Lagrangian $F_{\mu\nu} F^{\mu\nu}$, and a scalar Lagrangian $(\partial_\mu A^\mu)^2$. The timelike sigma-model case is well-defined and stable when the vector norm is fixed by a constraint; however, when it is determined by minimizing a potential there is necessarily a tachyonic ghost, and therefore an instability. In the Maxwell and scalar cases, the Hamiltonian is unbounded below, but at the level of perturbation theory there are fewer degrees of freedom and the models are stable. However, in these two theories there are obstacles to smooth evolution for certain choices of initial data.
arxiv:0812.1049
Evolutionary complexity is here measured by the number of trials/evaluations needed to evolve a logical gate in a non-linear medium. The behavioural complexity of the gates evolved is characterised in terms of cellular automata behaviour. We speculate that hierarchies of behavioural and evolutionary complexity are isomorphic up to some degree, subject to substrate specificity of evolution and the spectrum of evolution parameters.
arxiv:0802.3875
Network data are often sampled with auxiliary information or collected through the observation of a complex system over time, leading to multiple network snapshots indexed by a continuous variable. Many methods in statistical network analysis are traditionally designed for a single network, and can be applied to an aggregated network in this setting, but that approach can miss important functional structure. Here we develop an approach to estimating the expected network explicitly as a function of a continuous index, be it time or another indexing variable. We parameterize the network expectation through low-dimensional latent processes, whose components we represent with a fixed, finite-dimensional functional basis. We derive a gradient descent estimation algorithm, establish theoretical guarantees for recovery of the low-dimensional structure, compare our method to competitors, and apply it to a data set of international political interactions over time, showing our proposed method to adapt well to data, outperform competitors, and provide interpretable and meaningful results.
arxiv:2210.07491
A significant number of the parameters of a gamma-ray burst (GRB) and its host galaxy are calculated from the afterglow. There are various methods for obtaining the extinction values needed for the Galactic foreground correction: galaxy counts, HI 21 cm surveys, spectroscopic measurements and colors of nearby Galactic stars, or extinction maps calculated from infrared surveys towards the GRB. We demonstrate that AKARI Far-Infrared Surveyor sky surface brightness maps are useful for uncovering the fine structure of the Galactic foreground of GRBs. Galactic cirrus structures towards a number of GRBs are calculated with a 2-arcminute resolution, and the results are compared to those of other methods.
arxiv:1706.01296
We consider an experimentally obtainable SUP operator, defined using a generalized superposition of products of field annihilation ($a$) and creation ($a^\dagger$) operators of the type $A = s a a^\dagger + t a^\dagger a$ with $s^2 + t^2 = 1$. We apply this SUP operator to coherent and thermal quantum states; the states thus produced are referred to as the SUP-operated coherent state (SOCS) and SUP-operated thermal state (SOTS), respectively. In the present work, we report a comparative study of the higher-order nonclassical properties of SOCS and SOTS. The comparison is performed using a set of nonclassicality witnesses (e.g., higher-order antibunching, higher-order sub-Poissonian photon statistics, higher-order squeezing, the Agarwal-Tara parameter, and Klyshko's condition). The existence of higher-order nonclassicalities in SOCS and SOTS has been investigated for the first time. In view of possible experimental verification of the proposed scheme, we present exact calculations to reveal the effect of the non-unit quantum efficiency of a quantum detector on the higher-order nonclassicalities.
arxiv:2204.06712
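A minimal QuTiP sketch of the construction: build $A = s a a^\dagger + t a^\dagger a$, apply it to coherent and thermal inputs, and probe the lowest-order witness (the Mandel Q parameter) rather than the paper's full set of higher-order criteria. The values of s, the coherent amplitude, and the thermal occupation are illustrative.

```python
import numpy as np
from qutip import destroy, coherent, thermal_dm, expect

# Build the SUP operator A = s*a*a_dag + t*a_dag*a (s^2 + t^2 = 1), apply
# it to a coherent state (-> SOCS) and a thermal state (-> SOTS), and
# check sub-Poissonian statistics via the Mandel Q parameter (Q < 0).

N = 40                 # truncated Fock-space dimension
a = destroy(N)
s = 0.3
t = np.sqrt(1 - s**2)
A = s * a * a.dag() + t * a.dag() * a

def mandel_q(state):
    n = a.dag() * a
    mean_n = expect(n, state)
    var_n = expect(n * n, state) - mean_n**2
    return var_n / mean_n - 1.0

socs = (A * coherent(N, 2.0)).unit()   # SUP-operated coherent state
rho_th = thermal_dm(N, 1.0)
sots = A * rho_th * A.dag()
sots = sots / sots.tr()                # SUP-operated thermal state

print(f"Q(SOCS) = {mandel_q(socs):+.3f}")
print(f"Q(SOTS) = {mandel_q(sots):+.3f}")
```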
In the first paper we presented 27 hydrodynamical cosmological simulations of galaxies with total masses between $5 \times 10^8$ and $10^{10}\,\mathrm{M}_\odot$. In this second paper we use a subset of these cosmological simulations as initial conditions (ICs) for more than forty hydrodynamical simulations of satellite and host galaxy interaction. Our cosmological ICs seem to suggest that galaxies on these mass scales have very little rotational support and are velocity dispersion ($\sigma$) dominated. Accretion and environmental effects increase the scatter in the galaxy scaling relations (e.g. size-velocity dispersion), in very good agreement with observations. Star formation is substantially quenched after accretion. Mass removal due to tidal forces has several effects: it creates very flat stellar velocity dispersion profiles, and it reduces the dark matter content at all scales (even in the centre), which in turn lowers the stellar velocity on scales around 0.5 kpc even when the galaxy does not lose stellar mass. Satellites that start with a cored dark matter profile are more prone either to be destroyed or to end up as a very dark-matter-poor galaxy. Finally, we find that tidal effects always increase the "cuspyness" of the dark matter profile, even for haloes that infall with a core.
arxiv:1707.01102
Surgical instrument segmentation is extremely important for computer-assisted surgery. Unlike common object segmentation, it is more challenging due to the large illumination and scale variation caused by the special surgical scenes. In this paper, we propose a novel bilinear attention network with an adaptive receptive field to solve these two challenges. For the illumination variation, the bilinear attention module can capture second-order statistics to encode global contexts and semantic dependencies between local pixels. With them, semantic features in challenging areas can be inferred from their neighbors, and the distinction of various semantics can be boosted. For the scale variation, our adaptive receptive field module aggregates multi-scale features and automatically fuses them with different weights. Specifically, it encodes the semantic relationship between channels to emphasize feature maps with appropriate scales, changing the receptive field of subsequent convolutions. The proposed network achieves the best performance, 97.47% mean IoU, on Cata7, and takes first place on EndoVis 2017, overtaking the second-ranking method by 10.10% IoU.
arxiv:2001.07093
We compute the index for the conifold gauge theory from type IIB supergravity (superstring) on $AdS_5 \times T^{1,1}$. We discuss its implication from the gauge theory viewpoint.
arxiv:hep-th/0602284
In this paper, we study uncharged, non-conformal, and anisotropic systems with strong interactions using the gauge-gravity duality, by considering an Einstein-quadratic-axion-dilaton action in five dimensions. In fact, we would like to gain insight into the influence of higher-derivative gravity on the QCD system. At finite temperature, we obtain an anisotropic black brane solution to a 5D Einstein-Gauss-Bonnet-axion-dilaton system. The system is investigated and the effect of the parameters of the theory is considered. The blackening function supports a thermodynamic phase transition between small/large and AdS/large black branes for suitable parameters. We also study transport and diffusion properties, and observe in particular that the butterfly velocity, which characterizes both diffusion and the growth of chaos transverse to the anisotropic direction, saturates a constant value in the IR that can exceed the bound given by the conformal value. We also determine the imaginary part of the heavy-quark potential in a strongly coupled plasma dual to Gauss-Bonnet gravity.
arxiv:2308.05159
Document-level natural language inference (DocNLI) is a new challenging task in natural language processing, aiming at judging the entailment relationship between a pair of hypothesis and premise documents. Current datasets and baselines largely follow sentence-level settings, but fail to address the issues raised by longer documents. In this paper, we establish a general solution, named the Retrieval, Reading and Fusion (R2F) framework, and a new setting, by analyzing the main challenges of DocNLI: interpretability, long-range dependency, and cross-sentence inference. The basic idea of the framework is to simplify the document-level task into a set of sentence-level tasks, and to improve both performance and interpretability with the power of evidence. For each hypothesis sentence, the framework retrieves evidence sentences from the premise and reads them to estimate its credibility. Then the sentence-level results are fused to judge the relationship between the documents. For the setting, we contribute complementary evidence and entailment label annotation on hypothesis sentences for the interpretability study. Our experimental results show that the R2F framework obtains state-of-the-art performance and is robust across diverse evidence retrieval methods. Moreover, it gives more interpretable prediction results. Our model and code are released at https://github.com/phoenixsecularbird/r2f.
arxiv:2210.12328
In this article we analyze totally periodic pseudo-Anosov flows in graph three-manifolds. This means that in each Seifert fibered piece of the torus decomposition, the free homotopy class of regular fibers has a finite power which is also a finite power of the free homotopy class of a closed orbit of the flow. We show that each such flow is topologically equivalent to one of the model pseudo-Anosov flows which we constructed in a previous article. A model pseudo-Anosov flow is obtained by gluing standard neighborhoods of Birkhoff annuli and perhaps doing Dehn surgery on certain orbits. We also show that two model flows on the same graph manifold are isotopically equivalent (i.e., there is an isotopy of the manifold mapping the oriented orbits of the first flow to the oriented orbits of the second flow) if and only if they have the same topological and dynamical data in the collection of standard neighborhoods of the Birkhoff annuli.
arxiv:1211.7327
Visible – are continuous lines used to depict edges directly visible from a particular angle.
Hidden – are short-dashed lines that may be used to represent edges that are not directly visible.
Center – are alternately long- and short-dashed lines that may be used to represent the axes of circular features.
Cutting plane – are thin, medium-dashed lines, or thick alternately long- and double short-dashed lines, that may be used to define sections for section views.
Section – are thin lines in a pattern (the pattern determined by the material being "cut" or "sectioned") used to indicate surfaces in section views resulting from "cutting". Section lines are commonly referred to as "cross-hatching".
Phantom – (not shown) are alternately long- and double short-dashed thin lines used to represent a feature or component that is not part of the specified part or assembly, e.g. billet ends that may be used for testing, or the machined product that is the focus of a tooling drawing.

Lines can also be classified by a letter classification in which each line is given a letter.
Type A lines show the outline of the features of an object. They are the thickest lines on a drawing and are done with a pencil softer than HB.
Type B lines are dimension lines and are used for dimensioning, projecting, extending, or leaders. A harder pencil should be used, such as a 2H pencil.
Type C lines are used for breaks when the whole object is not shown. These are freehand drawn and only for short breaks. 2H pencil.
Type D lines are similar to Type C, except these are zigzagged and only for longer breaks. 2H pencil.
Type E lines indicate hidden outlines of internal features of an object. These are dotted lines. 2H pencil.
Type F lines are Type E lines, except these are used for drawings in electrotechnology. 2H pencil.
Type G lines are used for centre lines. These are dotted lines, but with a long line of 10–20 mm, then a 1 mm gap, then a small line of 2 mm. 2H pencil.
Type H lines are the same as Type G, except that every second long line is thicker. These indicate the cutting plane of an object. 2H pencil.
Type K lines indicate the alternate positions of an object and the line taken by that object. These are drawn with a long line of 10–20 mm, then a small gap, then a small line of 2 mm, then a gap, then another small line. 2H pencil.
https://en.wikipedia.org/wiki/Engineering_drawing
Energetic electrons are a common feature of interplanetary shocks and planetary bow shocks, and they are invoked as a key component of models of nonthermal radio emission, such as solar radio bursts. A simulation study is carried out of electron acceleration at high Mach number, quasi-perpendicular shocks, typical of the shocks in the solar wind. Two-dimensional self-consistent hybrid shock simulations provide the electric and magnetic fields in which test-particle electrons are followed. A range of different shock types, shock normal angles, and injection energies is studied. When the Mach number is low, or the simulation configuration suppresses fluctuations along the magnetic field direction, the results agree with theory assuming magnetic-moment-conserving reflection (or fast Fermi acceleration), with electron energy gains of a factor of only 2-3. For high Mach number, with a realistic simulation configuration, the shock front has a dynamic, rippled character. The corresponding electron energization is radically different: energy spectra display (1) considerably higher maximum energies than fast Fermi acceleration; (2) a plateau, or shallow-sloped region, at intermediate energies 2-5 times the injection energy; (3) a power-law fall-off with increasing energy, for both upstream and downstream particles, with a slope decreasing as the shock normal angle approaches perpendicular; and (4) sustained flux levels over a broader range of shock normal angle than for adiabatic reflection. All these features are in good qualitative agreement with observations, and show that dynamic structure in the shock surface at ion scales produces effective scattering and can be responsible for making high Mach number shocks effective sites for electron acceleration.
arxiv:astro-ph/0610714
A strong $\ell$-ification of a matrix polynomial $P(\lambda) = \sum A_i \lambda^i$ of degree $d$ is a matrix polynomial $\mathcal{L}(\lambda)$ of degree $\ell$ having the same finite and infinite elementary divisors, and the same numbers of left and right minimal indices, as $P(\lambda)$. Strong $\ell$-ifications can be used to transform the polynomial eigenvalue problem associated with $P(\lambda)$ into an equivalent polynomial eigenvalue problem associated with a larger matrix polynomial $\mathcal{L}(\lambda)$ of lower degree. Typically $\ell = 1$ and, in this case, $\mathcal{L}(\lambda)$ is called a strong linearization. However, there exist some situations, e.g., the preservation of algebraic structures, in which it is more convenient to replace strong linearizations by other low-degree matrix polynomials. In this work, we investigate the eigenvalue conditioning of $\ell$-ifications from a family of matrix polynomials recently identified and studied by Dopico, Pérez and Van Dooren, the so-called block Kronecker companion forms. We compare the conditioning of these $\ell$-ifications with that of the matrix polynomial $P(\lambda)$, and show that they are about as well conditioned as the original polynomial, provided we scale $P(\lambda)$ so that $\max\{\|A_i\|_2\} = 1$ and the quantity $\min\{\|A_0\|_2, \|A_d\|_2\}$ is not too small. Moreover, under the scaling assumption $\max\{\|A_i\|_2\} = 1$, we show that any block Kronecker companion form, regardless of its degree or block structure, is about as well conditioned as the well-known Frobenius companion forms. Our theory is illustrated by numerical examples.
arxiv:1808.01078
We consider the problem of constrained multi-objective (MO) blackbox optimization using expensive function evaluations, where the goal is to approximate the true Pareto set of solutions satisfying a set of constraints while minimizing the number of function evaluations. We propose a novel framework named Uncertainty-aware Search framework for Multi-Objective Optimization with Constraints (USeMOC) to efficiently select the sequence of inputs for evaluation to solve this problem. The selection method of USeMOC consists of solving a cheap constrained MO optimization problem via surrogate models of the true functions to identify the most promising candidates, and picking the best candidate based on a measure of uncertainty. We applied this framework to optimize the design of a multi-output switched-capacitor voltage regulator via expensive simulations. Our experimental results show that USeMOC is able to achieve more than a 90% reduction in the number of simulations needed to uncover optimized circuits.
arxiv:2008.07029
We study the impact of electrode band structure on transport through single-molecule junctions by measuring the conductance of pyridine-based molecules using Ag and Au electrodes. Our experiments are carried out using the scanning tunneling microscope based break-junction technique and are supported by density functional theory based calculations. We find from both experiments and calculations that the coupling of the dominant transport orbital to the metal is stronger for Au-based junctions when compared with Ag-based junctions. We attribute this difference to relativistic effects, which result in an enhanced density of d-states at the Fermi energy for Au compared with Ag. We further show that the alignment of the conducting orbital relative to the Fermi level does not follow the work function difference between the two metals and is different for conjugated and saturated systems. We thus demonstrate that the details of the molecular level alignment and electronic coupling in metal-organic interfaces do not follow simple rules, but are rather the consequence of subtle local interactions.
arxiv:1504.00242
A group whose presentation is explicitly derived in a certain way from a word labelled oriented graph (in short, WLOG) is called a WLOG group. In this work, we study the homological version of the Bogomolov multiplier (denoted by $\widetilde{B_0}$) for this family of groups. We show how to compute the generators of $\widetilde{B_0}(G)$ for a WLOG group $G$ from the underlying WLOG. We exhibit finitely presented Bestvina-Brady groups and Artin groups as WLOG groups. As applications, we compute both multipliers, the homological version of the Bogomolov multiplier and the Schur multiplier, of these groups utilizing their respective WLOG group presentations. Our computation gives a new proof of the structure of the Schur multiplier of a finitely presented Bestvina-Brady group.
arxiv:2504.12409
The electromagnetic mean squared radii, $\langle r^2 \rangle_E$ and $\langle r^2 \rangle_M$, of the Lambda(1405) are calculated in the chiral unitary model. We describe the excited baryons as resonances dynamically generated in octet meson-octet baryon scattering. We evaluate the values of $\langle r^2 \rangle_E$ and $\langle r^2 \rangle_M$ for the Lambda(1405) at the resonance pole and obtain complex values. We also consider the Lambda(1405) obtained by neglecting decay channels. For the latter case, we obtain a negative electric mean squared radius with a larger absolute value than those of typical ground-state baryons. This implies that the Lambda(1405) has a structure in which the K⁻ is widely spread around the proton.
arxiv:0803.4068
In the first part of our study, we demonstrated how a simple physical benchmark model can be used to assess the assumptions of conceptual models, based on a lumped probability distributed model (PDM) formulated by Lamb (1999). In this second part, we extend the scope of our study to distributed models, which aim to represent the spatial variability of the model's elements (e.g. input precipitation, soil moisture levels, flow components, etc.). For demonstration purposes, we assess the assumptions of the Grid and Grid-to-Grid models, commonly used for real-time flood forecasting in the UK. While the distributed character of these models is conceptually closer to the physical model, we demonstrate that their exact implementation leads to many qualitative and quantitative differences in model behaviour. For example, we show that the main assumption, namely that the speed of surface and subsurface flow is constant, causes the Grid-to-Grid model to significantly misrepresent scenarios with no rainfall, leading to too-fast river flow decay, and scenarios with upstream rainfall, failing to capture characteristic flash flood formation. We argue that this analytical approach of finding fundamental differences between models may help us develop more theoretically justified rainfall-runoff models, e.g. models that can better handle the two aforementioned scenarios and other scenarios in which the spatial dependence is crucial to properly represent the catchment dynamics.
arxiv:2312.01372
Most massive galaxies are now thought to go through an active galactic nucleus (AGN) phase one or more times. Yet, the cause of triggering and the variations in the intrinsic and observed properties of the AGN population are still poorly understood. Young, compact radio sources associated with accreting supermassive black holes (SMBHs) represent an important phase in the life cycles of jetted AGN for understanding AGN triggering and duty cycles. The superb sensitivity and resolution of the ngVLA, coupled with its broad frequency coverage, will provide exciting new insights into our understanding of the life cycles of radio AGN and their impact on galaxy evolution. The high spatial resolution of the ngVLA will enable resolved mapping of young radio AGN on sub-kiloparsec scales over a wide range of redshifts. With broad continuum coverage from 1 to 116 GHz, the ngVLA will excel at estimating ages of sources as old as $30-40$ Myr at $z \sim 1$. In combination with lower-frequency ($\nu < 1$ GHz) instruments such as ngLOBO and the Square Kilometre Array, the ngVLA will robustly characterize the spectral energy distributions of young radio AGN.
arxiv:1810.07527
We give an example showing that tight closure does not commute with localization.
arxiv:0710.2913
The 24 micron array on board the Spitzer Space Telescope is one of three arrays in the Multiband Imaging Photometer for Spitzer (MIPS) instrument. It provides 5.3 × 5.3 arcmin images at a scale of ~2.5 arcsec per pixel, corresponding to sampling of the point spread function which is slightly better than critical (~0.4 λ/D). A scan mirror allows dithering of images on the array without the overhead of moving and stabilizing the spacecraft. It also enables efficient mapping of large areas of sky without significant compromise in sensitivity. We present an overview of the pipeline flow and reduction steps involved in the processing of image data acquired with the 24 micron array. Residual instrumental signatures not yet removed in automated processing, and strategies for hands-on mitigation thereof, are also given.
arxiv:astro-ph/0411316
Using data taken with the CLEO III detector, 1.09 fb⁻¹ at the Υ(1S) and 1.28 fb⁻¹ at the Υ(2S), branching fractions have been measured for the first time for exclusive decays of each resonance into one hundred different final states consisting of 4 to 10 light hadrons: pions, kaons, and protons. Significant strength is found in 73 decay modes of the Υ(1S) and 17 decay modes of the Υ(2S), with branching fractions ranging from 0.3×10⁻⁵ to 110×10⁻⁵. Upper limits at 90% confidence level are presented for the other decay modes.
arxiv:1205.5070
Two-photon absorption (TPA) in molecules, of significance for high-resolution imaging applications, is typically characterised by low cross sections. To enhance the TPA signal, one effective approach exploits plasmonic enhancement. For this method to be efficient, it must meet several criteria, including broadband operational capability and a high fluorescence rate to ensure effective signal detection. In this context, we introduce a novel plus-shaped silver nanostructure designed to exploit the coupling of bright and dark plasmonic modes. This configuration considerably improves both the absorption and fluorescence of molecules across the near-infrared and visible spectra. By fine-tuning the geometrical parameters of the nanostructure, we align the plasmonic resonances with the optical properties of specific TPA-active dyes, i.e., ATTO 700, Rhodamine 6G, and ATTO 610. The expected TPA signal enhancement is evaluated using classical estimations based on the assumption of independent enhancement of absorption and fluorescence. These results are then compared with outcomes obtained in a quantum-mechanical approach to evaluating the stationary photon emission rate. Our findings reveal the important role of molecular saturation in determining the regimes where either absorption or fluorescence enhancement leads to an improved TPA signal intensity, considerably below the classical predictions. The proposed nanostructure design not only addresses these findings, but might also serve for their experimental verification, allowing for active polarization tuning of the plasmonic response targeting absorption, fluorescence, or both. The insight into the quantum-mechanical mechanisms of plasmonic signal enhancement provided by our work is a step towards more effective control of light-matter interactions at the nanoscale.
arxiv:2408.14859
Models of neutrino mixing involving one or more sterile neutrinos have seen their importance resurrected in the light of recent cosmological data. In this case, reactor antineutrino experiments offer an ideal place to look for signatures of sterile neutrinos due to their impact on neutrino flavor transitions. In this work, we show that the high-precision data of the Daya Bay experiment constrain the 3+1 neutrino scenario, imposing upper bounds on the relevant active-sterile mixing angle $\sin^2 2\theta_{14} \lesssim 0.06$ at the 3$\sigma$ confidence level for the mass-squared difference $\Delta m^2_{41}$ in the range $(10^{-3}, 10^{-1})\,\mathrm{eV}^2$. The latter bound can be improved by six years of running of the JUNO experiment, $\sin^2 2\theta_{14} \lesssim 0.016$, although in the smaller mass range $\Delta m^2_{41} \in (10^{-4}, 10^{-3})\,\mathrm{eV}^2$. We have also investigated the impact of sterile neutrinos on precision measurements of the standard neutrino oscillation parameters $\theta_{13}$ and $\Delta m^2_{31}$ (at Daya Bay and JUNO), $\theta_{12}$ and $\Delta m^2_{21}$ (at JUNO), and most importantly, the neutrino mass hierarchy (at JUNO). We find that, except for the obvious situation where $\Delta m^2_{41} \sim \Delta m^2_{31}$, sterile states do not substantially affect these measurements.
arxiv:1405.6540
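The oscillation physics behind these bounds can be sketched with the short-baseline survival probability. Below is a simplified 3+1 expression keeping only the $\theta_{13}$- and $\theta_{14}$-driven terms (the solar term and matter effects are omitted); the parameter values are illustrative, with the sterile mixing set at the quoted 3σ bound.

```python
import numpy as np

# Reactor antineutrino survival probability in a simplified 3+1 picture.
# Units: L in metres, E in MeV, mass-squared differences in eV^2, so that
# the oscillation phase is 1.267 * dm2 * L / E.

def p_ee(L, E, s2_2th13=0.085, dm2_31=2.5e-3, s2_2th14=0.0, dm2_41=0.1):
    d31 = 1.267 * dm2_31 * L / E
    d41 = 1.267 * dm2_41 * L / E
    return 1.0 - s2_2th13 * np.sin(d31)**2 - s2_2th14 * np.sin(d41)**2

L = 1600.0                        # far-hall-like baseline [m]
E = np.linspace(2.0, 8.0, 4)      # reactor antineutrino energies [MeV]
print("3-nu:  ", p_ee(L, E))
print("3+1-nu:", p_ee(L, E, s2_2th14=0.06))
```

The sterile term adds a fast, small-amplitude wiggle on top of the standard $\theta_{13}$ deficit, which is what the Daya Bay spectral fit constrains.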
We develop a theory of the EPR-like effects due to neutrino oscillations in $\pi \to \mu\nu$ decays. Its experimental implications are space-time correlations of the neutrino and muon when they are both detected while the pion decay point is not fixed. However, the more radical possibility of muon oscillations in experiments where only muons are detected (as suggested in hep-ph/9509261) is ruled out. We start by discussing decays of monochromatic pions and point out a few "paradoxes". Then we consider pion wave packets, resolve the "paradoxes", and show that the formulas for $\mu\nu$ correlations can be transformed into the usual expressions describing neutrino oscillations as soon as the pion decay point is fixed.
arxiv:hep-ph/9703241
We express the condition for a phase-space Gaussian to be the Wigner distribution of a mixed quantum state in terms of the symplectic capacity of the associated Wigner ellipsoid. Our results are motivated by Hardy's formulation of the uncertainty principle for a function and its Fourier transform. As a consequence, we are able to state a more general form of Hardy's theorem.
arxiv:quant-ph/0703063
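For reference, the condition alluded to above can be written as follows (in the formulation of de Gosson; normalization conventions for the capacity bound vary across the literature):

```latex
% Condition for a centered phase-space Gaussian
%   W(z) \propto \exp\!\big(-\tfrac{1}{2}\, z^{T}\Sigma^{-1} z\big),
%   z \in \mathbb{R}^{2n},
% to be the Wigner distribution of a (mixed) quantum state:
\[
  \Sigma + \frac{i\hbar}{2}\, J \succeq 0,
  \qquad
  J = \begin{pmatrix} 0 & I_n \\ -I_n & 0 \end{pmatrix},
\]
% or, equivalently, in terms of a symplectic capacity c of the
% associated Wigner ellipsoid:
\[
  c\big(\mathcal{W}_{\Sigma}\big) \ge \pi\hbar,
  \qquad
  \mathcal{W}_{\Sigma}
    = \big\{ z \in \mathbb{R}^{2n} : \tfrac{1}{2}\, z^{T}\Sigma^{-1} z \le 1 \big\}.
\]
```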
Commodity single-board computers (SBCs) are becoming increasingly powerful and can execute standard operating systems and mainstream workloads. In the context of cloud-based smart city applications, SBCs can be utilized as edge computing devices, reducing network communication. In this paper, we investigate the design and implementation of an SBC-based edge cluster (SBC-EC) framework for a smart parking application. Since SBCs are resource-constrained devices, we devise a container-based framework for a lighter footprint. Kubernetes was used as the orchestration tool to orchestrate the various containers in the framework. To validate our approach, we implemented a proof of concept of the SBC-based edge cluster for a smart parking application, as a possible IoT use case. Our implementation shows that the use of SBC devices at the edge of a cloud-based smart parking application is a cost-effective, low-energy, green computing solution. The proposed framework can be extended to similar cloud-based applications in the context of a smart city.
arxiv:1902.06661
During the recent coronavirus disease 2019 (COVID-19) outbreak, the microblogging service Twitter was widely used to share opinions and reactions to events. Italy was one of the first European countries to be severely affected by the outbreak and to establish lockdown and stay-at-home orders, potentially leading to country reputation damage. We resort to sentiment analysis to investigate changes in opinions about Italy reported on Twitter before and after the COVID-19 outbreak. Using different lexicon-based methods, we find a breakpoint corresponding to the date of the first established case of COVID-19 in Italy that causes a relevant change in the sentiment scores used as a proxy for the country's reputation. Next, we demonstrate that sentiment scores about Italy are strongly associated with the levels of the FTSE MIB index, the main index of the Italian stock exchange, as they serve as early detection signals of changes in its values. Finally, we make a content-based classification of tweets into positive and negative and use two machine learning classifiers to validate the assigned polarity of tweets posted before and after the outbreak.
arxiv:2103.13871
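The breakpoint detection step can be illustrated with the simplest least-squares single change-point estimate on a daily sentiment series; the synthetic data below stand in for the paper's lexicon-based scores, and the method shown is generic rather than the paper's exact test.

```python
import numpy as np

# Minimal change-point sketch for a daily sentiment-score series: choose
# the split index that maximizes the size-weighted squared difference in
# segment means (least-squares estimate for a single mean shift).

rng = np.random.default_rng(0)
series = np.concatenate([rng.normal(0.2, 0.1, 60),     # pre-outbreak level
                         rng.normal(-0.1, 0.1, 40)])   # post-outbreak level

def single_breakpoint(x, min_seg=5):
    n = len(x)
    best_t, best_gain = None, -np.inf
    for t in range(min_seg, n - min_seg):   # keep segments non-trivial
        left, right = x[:t], x[t:]
        gain = t * (n - t) / n * (left.mean() - right.mean())**2
        if gain > best_gain:
            best_t, best_gain = t, gain
    return best_t

print("estimated breakpoint at day", single_breakpoint(series))  # ~60
```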
The Wireless Allowing Data And Power Transmission (WADAPT) proposal was formed to study the feasibility of wireless technologies in HEP experiments. A strong motivation for using wireless data transmission in the ATLAS detector is the absence of wires and connectors, reducing the passive material. However, the tracking layers are almost hermetic, acting as a Faraday cage that does not allow propagation between the layers. For radial readout of the detector data through the layers, we have developed an active repeater board, which is passed through a 2-3 mm wide slit between the modules on the tracking layers. The repeater is also advantageous for building topological radial networks for neuromorphic tracking. The active repeater board consists of an RX antenna, an amplifier, and a TX antenna, and is tested on a mockup in such a way that the RX antenna sits on the inner side of a module and the TX antenna on the outer side of the same module, as the 10-mil-thick conformal board is passed through the small slit. Transmission through the tracking layers using the repeater has been demonstrated with two horn antennas, a signal generator, and a spectrum analyzer. For a 20 cm distance between the horn antenna and the repeater board, a receive level of -19.5 dBm was achieved. In comparison, with the same setup but with the amplifier turned off, the receive level was ~-46 dBm. The results mark a significant milestone towards the implementation of 60 GHz links for detector data readout.
arxiv:2503.18735
we define a " renormalized energy " as an explicit functional on arbitrary point configurations of constant average density in the plane and on the real line. the definition is inspired by ideas of [ ss1, ss3 ]. roughly speaking, it is obtained by subtracting two leading terms from the coulomb potential on a growing number of charges. the functional is expected to be a good measure of disorder of a configuration of points. we give certain formulas for its expectation for general stationary random point processes. for the random matrix $ \ beta $ - sine processes on the real line ( beta = 1, 2, 4 ), and ginibre point process and zeros of gaussian analytic functions process in the plane, we compute the expectation explicitly. moreover, we prove that for these processes the variance of the renormalized energy vanishes, which shows concentration near the expected value. we also prove that the beta = 2 sine process minimizes the renormalized energy in the class of determinantal point processes with translation invariant correlation kernels.
arxiv:1201.2853
Neural-network decoders can achieve a lower logical error rate compared to conventional decoders, like minimum-weight perfect matching, when decoding the surface code. Furthermore, these decoders require no prior information about the physical error rates, making them highly adaptable. In this study, we investigate the performance of such a decoder using both simulated and experimental data obtained from a transmon-qubit processor, focusing on small-distance surface codes. We first show that the neural network typically outperforms the matching decoder due to better handling of errors leading to multiple correlated syndrome defects, such as $Y$ errors. When applied to the experimental data of [Google Quantum AI, Nature 614, 676 (2023)], the neural network decoder achieves logical error rates approximately $25\%$ lower than minimum-weight perfect matching, approaching the performance of a maximum-likelihood decoder. To demonstrate the flexibility of this decoder, we incorporate the soft information available in the analog readout of transmon qubits and evaluate the performance of this decoder in simulation using a symmetric Gaussian-noise model. Considering the soft information leads to an approximately $10\%$ lower logical error rate, depending on the probability of a measurement error. The good logical performance, flexibility, and computational efficiency make neural network decoders well suited for near-term demonstrations of quantum memories.
arxiv:2307.03280
we introduce a method for unsupervised parsing that relies on bootstrapping classifiers to identify if a node dominates a specific span in a sentence. there are two types of classifiers, an inside classifier that acts on a span, and an outside classifier that acts on everything outside of a given span. through self - training and co - training with the two classifiers, we show that the interplay between them helps improve the accuracy of both and, as a result, enables effective parsing. a seed bootstrapping technique prepares the data to train these classifiers. our analyses further validate that such an approach, in conjunction with weak supervision using prior branching knowledge of a known language ( left / right - branching ) and minimal heuristics, injects strong inductive bias into the parser, achieving 63. 1 f $ _ 1 $ on the english ( ptb ) test set. in addition, we show the effectiveness of our architecture by evaluating on treebanks for chinese ( ctb ) and japanese ( ktb ) and achieve new state - of - the - art results. our code and pre - trained models are available at https : / / github. com / nickil21 / weakly - supervised - parsing.
arxiv:2110.02283
we present first - principles calculations of the impact ionization rate ( iir ) in the $ gw $ approximation ( $ gw $ a ) for semiconductors. the iir is calculated from the quasiparticle ( qp ) width in the $ gw $ a, since it can be identified as the decay rate of a qp into lower energy qp plus an independent electron - hole pair. the quasiparticle self - consistent $ gw $ method was used to generate the noninteracting hamiltonian the $ gw $ a requires as input. small empirical corrections were added so as to reproduce experimental band gaps. our results are in reasonable agreement with previous work, though we observe some discrepancy. in particular we find high iir at low energy in the narrow gap semiconductor inas.
arxiv:0812.2923
we report on the experimental generation of an entangled state with a spectrally pure heralded single - photon state and a weak coherent state. by choosing group - velocity matching in the nonlinear crystal, our system for producing entangled photons was 60 times brighter than that in the earlier experiment [ phys. rev. lett. 90, 240401 ( 2003 ) ], with no need for bandpass filters. this entanglement system is useful for quantum information protocols that require indistinguishable photons from independent sources.
arxiv:1303.2780
screening effects are important to understand various aspects of ion - solid interactions and, in particular, play a crucial role in the stopping of ions in solids. in this paper the phase shifts and scattering amplitudes for quantum - mechanical elastic scattering up to the second - order born ( b2 ) approximation are revisited for an arbitrary spherically - symmetric electron - ion interaction potential. the b2 phase shifts and scattering amplitudes are then used to derive the friedel sum rule ( fsr ) involving the second - order born corrections. this results in a simple equation for the b2 perturbative screening parameter of an impurity ion immersed in a fully degenerate electron gas which, as expected, turns out to depend on the ion atomic number $ z _ { 1 } $, unlike the first - order born ( b1 ) screening parameter reported earlier by some authors. furthermore, our analytical results for the yukawa, hydrogenic, hulth \ ' { e } n, and mensing potentials are compared, for both positive and negative ions and a wide range of one - electron radii, to the exact screening parameters calculated self - consistently by imposing the fsr requirement. it is shown that the b2 screening parameters are in excellent agreement with the exact values at large and moderate densities of the degenerate electron gas, while at lower densities they progressively deviate from the exact numerical solutions but are nevertheless more accurate than the prediction of the b1 approximation. in addition, a simple pad \ ' { e } approximant to the born series has been developed that improves the performance of the perturbative fsr for any negative ion as well as for $ z _ { 1 } = + 1 $.
arxiv:1305.2106
some programming languages, known as functional programming languages, are designed such that they do not set up a block of statements for explicit repetition, as with the for loop. instead, those programming languages exclusively use recursion. rather than call out a block of code to be repeated a pre - defined number of times, the executing code block instead " divides " the work to be done into a number of separate pieces, after which the code block executes itself on each individual piece. each piece of work will be divided repeatedly until the " amount " of work is as small as it can possibly be, at which point the algorithm will do that work very quickly. the algorithm then " reverses " and reassembles the pieces into a complete whole. the classic example of recursion is in list - sorting algorithms, such as merge sort. the merge sort recursive algorithm will first repeatedly divide the list into consecutive pairs ; each pair is then ordered, then each consecutive pair of pairs, and so forth until the elements of the list are in the desired order. the code below is an example of a recursive algorithm in the scheme programming language that will output the same result as the pseudocode under the previous heading. = = education = = in some schools of pedagogy, iterations are used to describe the process of teaching or guiding students to repeat experiments, assessments, or projects, until more accurate results are found, or the student has mastered the technical skill. this idea is found in the old adage, " practice makes perfect. " in particular, " iterative " is defined as the " process of learning and development that involves cyclical inquiry, enabling multiple opportunities for people to revisit ideas and critically reflect on their implication. " unlike computing and math, educational iterations are not predetermined ; instead, the task is repeated until success according to some external criteria ( often a test ) is achieved. = = see also = = recursion fractal brute - force search iterated function infinite compositions of analytic functions = = references = =
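the scheme snippet referenced in the excerpt above did not survive extraction ; as a stand - in, here is an equivalent recursive merge sort in python, sketching the same divide - and - reassemble idea :

```python
def merge_sort(xs):
    """recursively split the list, sort each half, then merge the halves."""
    if len(xs) <= 1:                 # a list of 0 or 1 elements is already sorted
        return list(xs)
    mid = len(xs) // 2
    left = merge_sort(xs[:mid])      # "divide" the work ...
    right = merge_sort(xs[mid:])
    merged = []                      # ... then "reverse" and reassemble in order
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 1, 4, 2, 3]))   # -> [1, 2, 3, 4, 5]
```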
https://en.wikipedia.org/wiki/Iteration
a detailed analysis of the galaxy distribution in the southern sky redshift survey ( ssrs ) by means of the multifractal or scaling formalism is presented. it is shown that galaxies cluster in different ways according to their morphological type as well as their size. ellipticals are more clustered than spirals, even at scales up to 15 h $ ^ { - 1 } $ mpc, whereas no clear segregation between early and late spirals is found. it is also shown that smaller galaxies distribute more homogeneously than larger galaxies.
arxiv:astro-ph/9407041
dilute granular flows are routinely described by collisional kinetic theory, but dense flows require a fundamentally different approach, due to long - lasting, many - body contacts. in the case of silo drainage, many continuum models have been developed for the mean flow, but no realistic statistical theory is available. here, we propose that particles undergo cooperative displacements in response to diffusing " spots " of free volume. the typical spot size is several particle diameters, so cages of nearest neighbors tend to remain intact over large distances. the spot hypothesis relates diffusion and cage - breaking to volume fluctuations and spatial velocity correlations, in agreement with new experimental data. it also predicts density waves caused by weak spot interactions. spots enable fast, multiscale simulations of dense flows, in which a small, internal relaxation enforces packing constraints during spot - induced motion. in the continuum limit of the model, tracer diffusion is described by a new stochastic differential equation, where the drift velocity and diffusion tensor are coupled non - locally to the spot density. the same mathematical formalism may also find applications to glassy relaxation, as a compelling alternative to void ( or hole ) random walks.
arxiv:cond-mat/0307379
domain adaptation aims to leverage the supervision signal of the source domain to obtain an accurate model for the target domain, where labels are not available. to leverage and adapt the label information from the source domain, most existing methods employ a feature extracting function and match the marginal distributions of the source and target domains in a shared feature space. in this paper, from the perspective of information theory, we show that representation matching is actually an insufficient constraint on the feature space for obtaining a model with good generalization performance in the target domain. we then propose variational bottleneck domain adaptation ( vbda ), a new domain adaptation method which improves feature transferability by explicitly enforcing the feature extractor to ignore the task - irrelevant factors and focus on the information that is essential to the task of interest for both source and target domains. extensive experimental results demonstrate that vbda significantly outperforms state - of - the - art methods across three domain adaptation benchmark datasets.
arxiv:1911.09310
we consider problems related to initial meshing and adaptive mesh refinement for the electromagnetic simulation of various structures. the quality of the initial mesh and the performance of the adaptive refinement are of great importance for the finite element solution of the maxwell equations, since they directly affect the accuracy and the computational time. in this paper, we describe the complete meshing workflow, which allows the simulation of arbitrary structures. test simulations confirm that the presented approach reaches the quality of industrial simulation software.
arxiv:2311.06693
a two - variable extension of the bannai - ito polynomials is presented. they are obtained via $ q \ to - 1 $ limits of the bivariate $ q $ - racah and askey - wilson orthogonal polynomials introduced by gasper and rahman. their orthogonality relation is obtained. these new polynomials are also shown to be multispectral. two dunkl shift operators are seen to be diagonalized by the bivariate bannai - ito polynomials and 3 - and 9 - term recurrence relations are provided.
arxiv:1809.09705
the constant growth in present - day real - world databases poses computational challenges for a single computer. cloud - based platforms, on the other hand, are capable of handling large volumes of information manipulation tasks, thereby necessitating their use for large real - world data set computations. this work focuses on creating a novel generalized flow within the cloud - based computing platform : microsoft azure machine learning studio ( mamls ) that accepts multi - class and binary classification data sets alike and processes them to maximize the overall classification accuracy. first, each data set is split into training and testing data sets. then, linear and nonlinear classification model parameters are estimated using the training data set. data dimensionality reduction is then performed to maximize classification accuracy. for multi - class data sets, data - centric information is used to further improve overall classification accuracy by reducing the multi - class classification to a series of hierarchical binary classification tasks. finally, the performance of the optimized classification model thus achieved is evaluated and scored on the testing data set. the classification characteristics of the proposed flow are comparatively evaluated on 3 public data sets and a local data set with respect to existing state - of - the - art methods. on the 3 public data sets, the proposed flow achieves 78 - 97. 5 % classification accuracy. also, the local data set, created using information regarding the presence of diabetic retinopathy lesions in fundus images, results in 85. 3 - 95. 7 % average classification accuracy, which is higher than that of the existing methods. thus, the proposed generalized flow can be useful for a wide range of application - oriented " big data sets ".
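the flow itself is built in microsoft azure ml studio and cannot be reproduced here ; the scikit - learn sketch below only mirrors its generic steps ( train / test split, trying linear and nonlinear classifiers, dimensionality reduction, scoring on held - out data ). the dataset and all parameters are illustrative, and the paper ' s hierarchical binary reduction is not implemented :

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)                       # stand-in data set
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# try a linear and a nonlinear classifier, each after dimensionality reduction,
# and score each pipeline on the held-out test split
for clf in (LogisticRegression(max_iter=1000), SVC(kernel="rbf")):
    model = make_pipeline(StandardScaler(), PCA(n_components=2), clf)
    model.fit(X_tr, y_tr)
    print(type(clf).__name__, "accuracy:", round(model.score(X_te, y_te), 3))
```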
arxiv:1603.08070
we introduce a conditional generative model for learning to disentangle the hidden factors of variation within a set of labeled observations, and separate them into complementary codes. one code summarizes the specified factors of variation associated with the labels. the other summarizes the remaining unspecified variability. during training, the only available source of supervision comes from our ability to distinguish among different observations belonging to the same class. examples of such observations include images of a set of labeled objects captured at different viewpoints, or recordings of a set of speakers dictating multiple phrases. in both instances, the intra - class diversity is the source of the unspecified factors of variation : each object is observed at multiple viewpoints, and each speaker dictates multiple phrases. learning to disentangle the specified factors from the unspecified ones becomes easier when strong supervision is possible. suppose that during training, we have access to pairs of images, where each pair shows two different objects captured from the same viewpoint. this source of alignment allows us to solve our task using existing methods. however, labels for the unspecified factors are usually unavailable in realistic scenarios where data acquisition is not strictly controlled. we address the problem of disentanglement in this more general setting by combining deep convolutional autoencoders with a form of adversarial training. both factors of variation are implicitly captured in the organization of the learned embedding space, and can be used for solving single - image analogies. experimental results on synthetic and real datasets show that the proposed method is capable of generalizing to unseen classes and intra - class variabilities.
arxiv:1611.03383
simple calculations indicate that the partition function for a black hole is defined only if the temperature is fixed on a finite boundary. consequences of this result are discussed. ( contribution to the proceedings of the lanczos centenary conference. )
arxiv:gr-qc/9404006
when an agent, person, vehicle or robot is moving through an unknown environment without gnss signals, online mapping of nonlinear terrains can be used to improve position estimates when the agent returns to a previously mapped area. mapping algorithms using online gaussian process ( gp ) regression are commonly integrated in algorithms for simultaneous localisation and mapping ( slam ). however, gp mapping algorithms have increasing computational demands as the mapped area expands relative to spatial field variations. this is due to the need to estimate an increasing number of map parameters as the area of the map grows. contrary to this, we propose a recursive gp mapping estimation algorithm which uses local basis functions in an information filter to achieve spatial scalability. our proposed approximation employs a global grid of finite support basis functions but restricts computations to a localized subset around each prediction point. as our proposed algorithm is recursive, it can naturally be incorporated into existing algorithms that use gaussian process maps for slam. incorporating our proposed algorithm into an extended kalman filter ( ekf ) for magnetic field slam reduces the overall computational complexity of the algorithm. we show experimentally that our algorithm is faster than existing methods when the mapped area is large and the map is based on many measurements, both for recursive mapping tasks and for magnetic field slam.
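as a minimal sketch of the core idea ( recursive regression on a global grid of basis functions in information form, with each measurement updating only the nearby subset ), consider the 1 - d python toy below ; the grid size, length scale, and noise level are assumed, and the actual algorithm, including its ekf - slam integration, is considerably more elaborate :

```python
import numpy as np

centers = np.linspace(0.0, 10.0, 51)       # global grid of basis-function centers
ell, sigma_y, sigma_w = 0.5, 0.1, 1.0      # length scale, noise std, prior std (assumed)
Lam = np.eye(centers.size) / sigma_w**2    # information (inverse-covariance) matrix
eta = np.zeros(centers.size)               # information vector

def phi(x):
    """gaussian basis functions evaluated at scalar position x."""
    return np.exp(-0.5 * ((x - centers) / ell) ** 2)

def update(x, y, radius=3 * ell):
    """recursive information-filter update touching only nearby basis functions."""
    idx = np.flatnonzero(np.abs(centers - x) < radius)
    p = phi(x)[idx]
    Lam[np.ix_(idx, idx)] += np.outer(p, p) / sigma_y**2
    eta[idx] += p * y / sigma_y**2

def predict(x):
    """posterior mean of the field at x (full solve here, for clarity only)."""
    return phi(x) @ np.linalg.solve(Lam, eta)

for x, y in [(1.0, 0.8), (1.2, 0.9), (5.0, -0.3)]:
    update(x, y)
print(round(float(predict(1.1)), 3))
```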
arxiv:2210.09168
modern scientific applications are getting more diverse, and the vector lengths in those applications vary widely. contemporary vector processors ( vps ) are designed either for short vector lengths, e. g., fujitsu a64fx with 512 - bit arm sve vector support, or long vectors, e. g., nec aurora tsubasa with a 16 kbit maximum vector length ( mvl ). unfortunately, both approaches have drawbacks. on the one hand, short vector length vp designs struggle to provide high efficiency for applications featuring long vectors with high data level parallelism ( dlp ). on the other hand, long vector vp designs waste resources and underutilize the vector register file ( vrf ) when executing low dlp applications with short vector lengths. therefore, those long vector vp implementations are limited to a specialized subset of applications, where relatively high dlp must be present to achieve excellent performance with high efficiency. to overcome these limitations, we propose an adaptable vector architecture ( ava ) that leads to having the best of both worlds. ava is designed for short vectors ( mvl = 16 elements ) and is thus area and energy - efficient. however, ava has the functionality to reconfigure the mvl, thereby allowing it to exploit the benefits of a longer - vector ( up to 128 elements ) microarchitecture when abundant dlp is present. we model ava on the gem5 simulator and evaluate the performance with six applications taken from the rivec benchmark suite. to obtain area and power consumption metrics, we model ava on mcpat for 22nm technology. our results show that reconfiguring our small vrf ( 8 kb ) together with our novel issue queue scheme yields a 2x speedup over the default configuration for short vectors. additionally, ava shows competitive performance when compared to a long vector vp, while saving 50 % of area.
arxiv:2111.05301
we prove that for every $ t \ in \ mathbb { n } $ there is a constant $ \ gamma _ t $ such that every graph with twin - width at most $ t $ and clique number $ \ omega $ has chromatic number bounded by $ 2 ^ { \ gamma _ t \ log ^ { 4t + 3 } \ omega } $. in other words, we prove that graph classes of bounded twin - width are quasi - polynomially $ \ chi $ - bounded. this provides a significant step towards resolving the question of bonnet et al. [ icalp 2021 ] about whether they are polynomially $ \ chi $ - bounded.
arxiv:2202.07608
we present a study of the properties of the network of political discussions on one of the most popular polish internet forums. this provides the opportunity to study computer - mediated human interactions in a strongly bipolar environment. the comments of the participants are found to be mostly disagreements, with a large share of invective and provocative remarks. binary exchanges ( quarrels ) play a significant role in the network growth and topology. statistical analysis shows that the growth of the discussions depends on the degree of controversy of the subject and the intensity of personal conflict between the participants. this is in contrast to most previously studied social networks, for example networks of scientific citations, where the nature of the links is much more positive and based on similarity and collaboration rather than opposition and abuse. the work also discusses the implications of the findings for more general studies of consensus formation, where our observations of increased conflict contradict the usual assumptions that interactions between people lead to averaging of opinions and agreement.
arxiv:0905.3751
in this article we shall study the analytic theory and the representation theoretic interpretations of hankel transforms and fundamental bessel kernels of an arbitrary rank over an archimedean field.
arxiv:1411.6710
the cuprate superconductors are characterized by numerous ordering tendencies, with the nematic order being the most distinct form of order. here the intertwinement of the electronic nematicity with superconductivity in cuprate superconductors is studied based on the kinetic - energy - driven superconductivity. it is shown that the optimized tc takes a dome - like shape, with weak and strong strength regions on each side of the optimal strength of the electronic nematicity, where the optimized tc reaches its maximum. this dome - like dependence of tc on the nematic - order strength indicates that the electronic nematicity enhances superconductivity. moreover, this nematic order induces an anisotropy of the electron fermi surface ( efs ) : although the original efs with four - fold rotation symmetry is broken up into one with a residual two - fold rotation symmetry, this two - fold symmetric efs is still truncated to form the fermi arcs, with most of the spectral weight located at the tips of the fermi arcs. concomitantly, these tips of the fermi arcs, connected by the wave vectors $ { \ bf q } _ { i } $, construct an octet scattering model ; however, the partial wave vectors and their respective symmetry - corresponding partners occur with unequal amplitudes, leading to ordered states that break both rotation and translation symmetries. as a natural consequence, the electronic structure is inequivalent between the $ k _ { x } $ and $ k _ { y } $ directions. these anisotropic features of the electronic structure are also confirmed by the autocorrelation of the single - particle excitation spectra, where the breaking of the rotation symmetry is verified by the inequivalence, on average, of the electronic structure at the two bragg scattering sites. furthermore, the strong energy dependence of the order parameter of the electronic nematicity is also discussed.
arxiv:2105.14494
squeezed - state interferometry plays an important role in quantum - enhanced optical phase estimation, as it allows the estimation precision to be improved up to the heisenberg limit by using ideal photon - number - resolving detectors at the output ports. here we show that for each individual $ n $ - photon component of the phase - matched coherent $ \ otimes $ squeezed vacuum input state, the classical fisher information always saturates the quantum fisher information. moreover, the total fisher information is the sum of the contributions from each individual $ n $ - photon components, where the largest $ n $ is limited by the finite number resolution of available photon counters. based on this observation, we provide an approximate analytical formula that quantifies the amount of lost information due to the finite photon number resolution, e. g., given the mean photon number $ \ bar { n } $ in the input state, over $ 96 $ percent of the heisenberg limit can be achieved with the number resolution larger than $ 5 \ bar { n } $.
arxiv:1611.05997
we report on the experimental doping of a $ ^ { 87 } $ rubidium ( rb ) bose - einstein condensate ( bec ) with individual neutral $ ^ { 133 } $ cesium ( cs ) atoms. we discuss the experimental tools and procedures to facilitate cs - rb interaction. first, we use degenerate raman side - band cooling of the impurities to enhance the immersion efficiency for the impurity in the quantum gas. we identify the immersed fraction of cs impurities from the thermalization of cs atoms upon impinging on a bec, where elastic collisions lead to a localization of cs atoms in the rb cloud. second, further enhancement of the immersion probability is obtained by localizing the cs atoms in a species - selective optical lattice and subsequent transport into the rb cloud. here, impurity - bec interaction is monitored by position and time resolved three - body loss of cs impurities immersed into the bec. this combination of experimental methods allows for the controlled doping of a bec with neutral impurity atoms, paving the way to impurity aided probing and coherent impurity - quantum bath interaction.
arxiv:1805.01313
we show that a synthetic pseudospin - momentum coupling can be used to design quasi - one - dimensional disorder - resistant coupled resonator optical waveguides ( crow ). in this structure, the propagating bloch waves exhibit a pseudospin - momentum locking at specific momenta where backscattering is suppressed. we quantify this resistance to disorder using two methods. first, we calculate the anderson localization length $ \ xi $, obtaining an order of magnitude enhancement compared to a conventional crow for typical device parameters. second, we study propagation in the time domain, finding that the loss of wavepacket purity in the presence of disorder rapidly saturates, indicating the preservation of phase information before the onset of anderson localization. our approach of directly optimizing the bulk bloch waves is a promising alternative to disorder - robust transport based on higher dimensional topological edge states.
arxiv:1902.06697
in this paper, multiple reconfigurable intelligent surface ( ris ) aided secure precise wireless transmission ( spwt ) schemes are proposed in the three - dimensional ( 3d ) wireless communication scenario. the case of unavailable direct path channels from the transmitter to the receivers is considered, where the direct paths are obstructed by obstacles. then, multiple riss are utilized to achieve spwt through the reflection paths among the transmitter, the riss, and the receivers, in order to enhance the communication performance and energy efficiency simultaneously. first, a maximum - signal - to - interference - and - noise ratio ( msinr ) scheme is proposed in a single - user scenario. then, the multi - user scenario is considered, where the illegitimate users are regarded as eavesdroppers. a maximum - secrecy - rate ( msr ) scheme and a maximum - signal - to - leakage - and - noise ratio ( mslnr ) scheme are proposed. the former achieves a better secrecy rate ( sr ) performance but incurs a higher complexity. the latter has a lower complexity than the msr scheme at the cost of some sr performance. simulation results show that both the single - user and multi - user schemes can achieve spwt, which transmits the confidential message precisely to the locations of the desired users. moreover, the mslnr scheme has a lower complexity than the msr scheme, while its sr performance is close to that of the msr scheme.
arxiv:2011.11255
two - dimensional nuclear magnetic resonance ( nmr ) is essential in molecular structure determination. the nitrogen - vacancy ( nv ) center in diamond has been proposed and developed as an outstanding quantum sensor to realize nmr at the nanoscale. in this work, we develop a scheme for two - dimensional nanoscale nmr spectroscopy based on quantum control of an nv center. we carry out a proof - of - principle experiment on a target of two coupled $ ^ { 13 } $ c nuclear spins in diamond. a cosy - like sequence is used to acquire the data in the time domain, which are then converted to the frequency domain with the fast fourier transform ( fft ). with the two - dimensional nmr spectrum, the structure and location of the set of nuclear spins are resolved. this work marks a fundamental step towards resolving the structure of a single molecule.
arxiv:1902.05676
we consider stationary measures of the one - dimensional discrete - time quantum walks ( qws ) with two chiralities, which are defined by a 2 times 2 unitary matrix u. in our previous paper [ 15 ], we proved that any uniform measure becomes the stationary measure of the qw by solving the corresponding eigenvalue problem. this paper reports that non - uniform measures are also stationary measures of the qw except when u is diagonal. for diagonal matrices, we show that any stationary measure is uniform. moreover, we prove that any uniform measure becomes a stationary measure for more general qws not by solving the eigenvalue problem but by a simple argument.
arxiv:1410.7651
let $ f _ { 0, \ infty } = \ { f _ n \ } _ { n = 0 } ^ { \ infty } $ be a sequence of continuous self - maps on a compact metric space $ x $. firstly, we obtain the relations between the topological sequence entropy of a nonautonomous dynamical system $ ( x, f _ { 0, \ infty } ) $ and that of its finite - to - one extension. we then prove that the topological sequence entropy of $ ( x, f _ { 0, \ infty } ) $ is no less than its corresponding measure sequence entropy if $ x $ has finite covering dimension. secondly, we study the supremum topological sequence entropy of $ ( x, f _ { 0, \ infty } ) $, and confirm that it equals that of its $ n $ - th composition system if $ f _ { 0, \ infty } $ is equi - continuous ; and we prove the supremum topological sequence entropy of $ ( x, f _ { i, \ infty } ) $ is no larger than that of $ ( x, f _ { j, \ infty } ) $ if $ i \ leq j $, and they are equal if $ f _ { 0, \ infty } $ is equi - continuous and surjective. thirdly, we investigate the topological sequence entropy relations between $ ( x, f _ { 0, \ infty } ) $ and $ ( \ mathcal { m } ( x ), \ hat { f } _ { 0, \ infty } ) $ induced on the space $ \ mathcal { m } ( x ) $ of all borel probability measures, and obtain that given any sequence, the topological sequence entropy of $ ( x, f _ { 0, \ infty } ) $ is zero if and only if that of $ ( \ mathcal { m } ( x ), \ hat { f } _ { 0, \ infty } ) $ is zero ; the topological sequence entropy of $ ( x, f _ { 0, \ infty } ) $ is positive if and only if that of $ ( \ mathcal { m } ( x ), \ hat { f } _ { 0, \ infty } ) $ is infinite. by applying this result, we obtain some big differences between the entropies of nonautonomous dynamical systems and those of their induced systems on the space of borel probability measures.
arxiv:2309.05225
the rest - frame far - ultraviolet ( fuv ) morphologies of 8 nearby interacting and starburst galaxies ( arp 269, m 82, mrk 8, ngc 520, ngc 1068, ngc 3079, ngc 3310, ngc 7673 ) are compared with 54 galaxies at z ~ 1. 5 and 46 galaxies at z ~ 4 observed in the goods - acs field. the nearby sample is artificially redshifted to z ~ 1. 5 and 4. we compare the simulated galaxy morphologies to real z ~ 1. 5 and 4 uv - bright galaxy morphologies. we calculate the gini coefficient ( g ), the second - order moment of the brightest 20 % of the galaxy ' s flux ( m _ 20 ), and the sersic index ( n ). we explore the use of nonparametric methods with 2d profile fitting and find the combination of m _ 20 with n to be an efficient method to classify galaxies as having merger, exponential disk, or bulge - like morphologies. when classified according to g and m _ 20, 20 / 30 % of real / simulated galaxies at z ~ 1. 5 and 37 / 12 % at z ~ 4 have bulge - like morphologies. the rest have merger - like or intermediate distributions. alternatively, when classified according to the sersic index, 70 % of the z ~ 1. 5 and z ~ 4 real galaxies are exponential disks or bulge - like with n > 0. 8, and ~ 30 % of the real galaxies are classified as mergers. according to their n values, ~ 35 % of the artificially redshifted galaxies are bulge - like or exponential at z ~ 1. 5 and 4. therefore, ~ 20 - 30 % of lyman - break galaxies ( lbgs ) have structures similar to local starburst mergers, and may be driven by similar processes. we assume merger - like or clumpy star - forming galaxies in the goods field have morphological structure with values n < 0. 8 and m _ 20 > - 1. 7. we conclude that mrk 8, ngc 3079, and ngc 7673 have structures similar to those of merger - like and clumpy star - forming galaxies observed at z ~ 1. 5 and 4.
arxiv:0904.4433
introducing an epitaxial growth technique called corner overgrowth, we fabricate a quantum confinement structure consisting of a high - mobility gaas / algaas heterojunction overgrown on top of an ex - situ cleaved substrate corner. the resulting corner - junction quantum - well heterostructure effectively bends a two - dimensional electron system ( 2des ) at an atomically sharp $ 90 ^ { \ rm o } $ angle. the high - mobility 2des demonstrates the fractional quantum hall effect on both facets. lossless edge - channel conduction over the corner confirms a continuum of 2d electrons across the junction, consistent with schroedinger - poisson calculations of the electron distribution. this growth technique differs distinctly from cleaved - edge overgrowth and enables a complementary class of new embedded quantum heterostructures.
arxiv:cond-mat/0308576
in this work we discuss whether the non - commuting graph of a finite group can determine its nilpotency. more precisely, abdollahi, akbari and maimani conjectured that if $ g $ and $ h $ are finite groups with isomorphic non - commuting graphs and $ g $ is nilpotent, then $ h $ must be nilpotent as well ( conjecture 2 ). we pose a new conjecture ( conjecture 3 ) that, together with the assumption $ | z ( g ) | \ geq | z ( h ) | $, implies conjecture 2 and we prove it for groups in which all centralizers of non - central elements are abelian.
arxiv:2302.01770
one of the fundamental questions about the high temperature cuprate superconductors is the size of the fermi surface ( fs ) underlying the superconducting state. by analyzing the single particle spectral function for the fermi hubbard model as a function of repulsion $ u $ and chemical potential $ \ mu $, we find that the fermi surface in the normal state reconstructs from a large fermi surface matching the luttinger volume as expected in a fermi liquid, to a fermi surface that encloses fewer electrons that we dub the " luttinger breaking " ( lb ) phase, as the mott insulator is approached. this transition into a non - fermi liquid phase that violates the luttinger count, is a continuous phase transition at a critical density in the absence of any other broken symmetry. we obtain the fermi surface contour from the spectral weight $ a _ { \ vec { k } } ( \ omega = 0 ) $ and from an analysis of the poles and zeros of the retarded green ' s function $ g _ { \ vec { k } } ^ { ret } ( e = 0 ) $, calculated using determinantal quantum monte carlo and analytic continuation methods. we discuss our numerical results in connection with experiments on hall measurements, scanning tunneling spectroscopy and angle resolved photoemission spectroscopy.
arxiv:2001.07197
we show on a 4x4 example that many dynamics may eliminate all strategies used in correlated equilibria, and this for an open set of games. this holds for the best - response dynamics, the brown - von neumann - nash dynamics and any monotonic or weakly sign - preserving dynamics satisfying some standard regularity conditions. for the replicator dynamics and the best - response dynamics, elimination of all strategies used in correlated equilibrium is shown to be robust to the addition of mixed strategies as new pure strategies.
arxiv:0902.1964
system z + [ goldszmidt and pearl, 1991, goldszmidt, 1992 ] is a formalism for reasoning with normality defaults of the form " typically if phi then psi ( with strength delta ) ", where delta is a positive integer. the system has a critical shortcoming in that it does not sanction inheritance across exceptional subclasses. in this paper we propose an extension to system z + that rectifies this shortcoming by extracting additional conditions between worlds from the defaults database. we show that the additional constraints do not change the notion of the consistency of a database. we also make comparisons with competing default reasoning systems.
arxiv:1302.6848
( below, $ \ box $ means " perfect square " ) let $ p $ and $ q $ be non - zero integers. the lucas sequence $ \ { u _ n ( p, q ) \ } $ is defined by $ u _ 0 = 0 $, $ u _ 1 = 1 $, $ u _ n = p u _ { n - 1 } - q u _ { n - 2 } $, $ ( n \ geq 2 ) $. historically, there has been much interest in when the terms of such sequences are perfect squares ( or higher powers ). here, we summarize results on this problem, and investigate, for fixed $ k $, solutions of $ u _ n ( p, q ) = k \ box $, $ ( p, q ) = 1 $. we show finiteness of the number of solutions, and under certain hypotheses on $ n $, describe explicit methods for finding solutions. these involve solving finitely many thue - mahler equations. as an illustration of the methods, we find all solutions to $ u _ n ( p, q ) = k \ box $ where $ k = \ pm1, \ pm2 $, and $ n $ is a power of 2.
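for concreteness, the short python sketch below enumerates small solutions of $ u _ n ( p, q ) = k \ box $ by brute force from the defining recurrence ; the search ranges are arbitrary, and this is only an illustration, not the paper ' s thue - mahler method :

```python
from math import gcd, isqrt

def lucas_terms(p, q, nmax):
    """u_0 = 0, u_1 = 1, u_n = p*u_{n-1} - q*u_{n-2}."""
    u = [0, 1]
    for _ in range(2, nmax + 1):
        u.append(p * u[-1] - q * u[-2])
    return u

def is_square(m):
    return m >= 0 and isqrt(m) ** 2 == m

k = 2
for p in range(-5, 6):
    for q in range(-5, 6):
        if p == 0 or q == 0 or gcd(p, q) != 1:   # require nonzero, coprime (p, q)
            continue
        for n, un in enumerate(lucas_terms(p, q, 12)):
            if n >= 2 and un != 0 and un % k == 0 and is_square(un // k):
                print(f"u_{n}({p},{q}) = {un} = {k} * {isqrt(un // k)}^2")
```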
arxiv:math/0701252
galaxies located in the environment or on the line of sight towards gravitational lenses can significantly affect lensing observables, and can lead to systematic errors on the measurement of $ h _ 0 $ from the time - delay technique. we present the results of a systematic spectroscopic identification of the galaxies in the field of view of the lensed quasar he0435 - 1223, using the w. m. keck, gemini and eso - very large telescopes. our new catalog triples the number of known galaxy redshifts in the vicinity of the lens, expanding to 102 the number of measured redshifts for galaxies separated by less than 3 arcmin from the lens. we complement our catalog with literature data to gather redshifts up to 15 arcmin from the lens, and search for galaxy groups or clusters projected towards he0435 - 1223. we confirm that the lens is a member of a small group that includes at least 12 galaxies, and find 8 other group candidates near the line of sight of the lens. the flexion shift, namely the shift of lensed images produced by high order perturbation of the lens potential, is calculated for each galaxy / group and used to identify which objects produce the largest perturbation of the lens potential. this analysis demonstrates that i ) at most three of the five brightest galaxies projected within 12 arcsec of the lens need to be explicitly used in the lens models, and ii ) the groups can be treated in the lens model as an external tidal field ( shear ) contribution.
arxiv:1607.00382
directional transport - dominated particle separation presents major challenges in many technological applications. the feynman ratchet can convert random perturbations into directional transport of particles, offering innovative separation schemes. here, we propose the design of a dusty plasma ratchet system to accomplish the separation of micron - sized particles. the dust particles are charged and suspended at specific heights within the saw channel, depending on their sizes. bi - dispersed dust particles can flow along the saw channel in opposite directions, resulting in perfect separation purity. we discuss the underlying mechanism of particle separation, wherein dust particles of different sizes are suspended at distinctive heights and experience electric ratchet potentials with opposite orientations, leading to their contrary flows. our results demonstrate a feasible and highly efficient method for separating micron - sized particles.
arxiv:2311.02553
we extend cellular automata to time - varying discrete geometries. in other words we formalize, and prove theorems about, the intuitive idea of a discrete manifold which evolves in time, subject to two natural constraints : the evolution does not propagate information too fast ; and it acts everywhere the same. for this purpose we develop a correspondence between complexes and labeled graphs. in particular we reformulate the properties that characterize discrete manifolds amongst complexes, solely in terms of graphs. in dimensions $ n < 4 $, over bounded - star graphs, it is decidable whether a cellular automaton maps discrete manifolds into discrete manifolds.
arxiv:1805.10051
we prove that any continuous mapping $ f : e \ to y $ on a completely metrizable subspace $ e $ of a perfect paracompact space $ x $ can be extended to a lebesgue class one mapping $ g : x \ to y $ ( i. e. for every open set $ v $ in $ y $ the preimage $ g ^ { - 1 } ( v ) $ is an $ f _ \ sigma $ - set in $ x $ ) with values in an arbitrary topological space $ y $.
arxiv:1407.0503
the increasing importance of solar power for electricity generation leads to an increasing demand for probabilistic forecasting of local and aggregated pv yields. in this paper we use an indirect modeling approach for hourly medium to long term local pv yields based on publicly available irradiation data. we suggest a time series model for global horizontal irradiation for which it is easy to generate an arbitrary number of scenarios, thus allowing for multivariate probabilistic forecasts over arbitrary time horizons. in contrast to many simplified models considered in the literature so far, it features several important stylized facts. sharp time - dependent lower and upper bounds on global horizontal irradiation are estimated that improve on the often - used physical bounds. the parameters of the beta distributed marginals of the transformed data are allowed to be time dependent. a copula - based time series model is introduced for the hourly and daily dependence structure based on a simple graphical structure known from the theory of vine copulas. non - gaussian copulas like gumbel and bb1 copulas are used that allow for the important feature of so - called tail dependence. evaluation methods like the continuous ranked probability score ( crps ), the energy score ( es ) and the variogram score ( vs ) are used to compare the power of the model for multivariate probabilistic forecasting with other models used in the literature, showing that our model outperforms them in many respects.
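as a toy sketch of one ingredient, the python snippet below fits an hour - dependent beta marginal to normalized irradiation values with scipy ; the data are synthetic, and the paper ' s estimated sharp bounds and vine - copula dependence structure are not reproduced :

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
hours = rng.integers(8, 17, size=2000)        # daytime hours (synthetic)
ghi = rng.beta(2.0 + 0.2 * hours, 2.0)        # synthetic normalized irradiation in (0, 1)

# fit a beta distribution per hour of day, with location and scale fixed to (0, 1)
for h in (9, 12, 15):
    u = ghi[hours == h]
    a, b, loc, scale = stats.beta.fit(u, floc=0, fscale=1)
    print(f"hour {h:2d}: alpha = {a:.2f}, beta = {b:.2f}")
```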
arxiv:2002.09267
glasses have a wide range of technological applications. the recent discovery of ultrastable glasses that are obtained by depositing the vapor of a glass - forming liquid onto the surface of a cold substrate has sparked renewed interest in the effects of confinements on physicochemical properties of liquids and glasses. here we use molecular dynamics simulations to study the effect of substrate on thin films of a model glass - forming liquid, the kob - andersen binary lennard - jones system, and compute profiles of several thermodynamic and kinetic properties across the film. we observe that the substrate can induce large oscillations in profiles of thermodynamic properties such as density, composition and stress, and we establish a correlation between the oscillations in total density and the oscillations in normal stress. we also demonstrate that the kinetic properties of an atomic film can be readily tuned by changing the strength of interactions between the substrate and the liquid. most notably, we show that a weakly attractive substrate can induce the emergence of a highly mobile region in its vicinity. in this highly mobile region, structural relaxation is several times faster than in the bulk, and the exploration of the potential energy landscape is also more efficient. in the subsurface region near a strongly attractive substrate, however, the dynamics is decelerated and the sampling of the potential energy landscape becomes less efficient than the bulk. we explain these two distinct behaviors by establishing a correlation between the oscillations in kinetic properties and the oscillations in lateral stress. our findings offer interesting opportunities for designing better substrates for the vapor deposition process or developing alternative procedures for situations where vapor deposition is not feasible.
arxiv:1404.4092
the paper presents the graph signal processing ( gsp ) companion model that naturally replicates the basic tenets of classical signal processing ( dsp ) for gsp. the companion model shows that gsp can be made equivalent to dsp ' plus ' appropriate boundary conditions ( bc ) - this is shown under broad conditions and holds for arbitrary undirected or directed graphs. this equivalence suggests how to broaden gsp - extend naturally a dsp concept to the gsp companion model and then transfer it back to the common graph vertex and graph fourier domains. the paper shows that gsp unrolls as two distinct models that coincide in dsp, the companion model based on ( hadamard or pointwise ) powers of what we will introduce as the spectral frequency vector $ \ lambda $, and the traditional graph vertex model, based on the adjacency matrix and its eigenvectors. the paper expands gsp in several directions, including showing that convolution in the graph companion model can be achieved with the fft and that gsp modulation with appropriate choice of carriers exhibits the dsp translation effect that enables multiplexing by modulation of graph signals.
arxiv:2303.02480
due to their many unique transport properties, weyl semimetals are promising materials for modern electronics. we investigate the electrons in the strong coupling approximation near weyl points based on their representation as massless weyl fermions. we have constructed a new fluid model, based on the many - particle quantum hydrodynamics method, to describe the behavior of the electron gas with different chiralities near weyl points in the low - energy limit in external electromagnetic fields, starting from the many - particle weyl equation and the many - particle wave function. the derived system of equations forms a closed framework for describing the dynamics of the electron current, the spin density, and the spin current density. based on the proposed model, we consider small perturbations in the weyl fermion system in an external uniform magnetic field and predict a new type of eigenwave in systems of electrons near the weyl points.
arxiv:2108.06833
we investigate whether the affleck - dine ( ad ) mechanism works when the contribution of the two - loop thermal correction to the potential is negative in the gauge - mediated supersymmetry breaking models. the ad field is trapped far away from the origin by the negative thermal correction for a long time until the temperature of the universe becomes low enough. the most striking feature is that the hubble parameter becomes much smaller than the mass scale of the radial component of the ad field, during the trap. then, the amplitude of the ad field decreases so slowly that the baryon number is not fixed even after the onset of radial oscillation. the resultant baryon asymmetry crucially depends on whether the hubble parameter, $ h $, is larger than the mass scale of the phase component of the ad field, $ m _ \ theta $, at the beginning of oscillation. if $ h < m _ \ theta $ holds, the formation of q balls plays an essential role to determine the baryon number, which is found to be washed out due to the nonlinear dynamics of q - ball formation. on the other hand, if $ h > m _ \ theta $ holds, it is found that the dynamics of q - ball formation does not affect the baryon asymmetry, and that it is possible to generate the right amount of the baryon asymmetry.
arxiv:hep-ph/0302154
a large number of magnetic sensors, like superconducting quantum interference devices, optical pumping and nitrogen vacancy magnetometers, were shown to satisfy the energy resolution limit. this limit states that the magnetic sensitivity of the sensor, when translated into a product of energy with time, is bounded below by planck ' s constant, hbar. this bound implies a fundamental limitation as to what can be achieved in magnetic sensing. here we explore biological magnetometers, in particular three magnetoreception mechanisms thought to underly animals ' geomagnetic field sensing : the radical - pair, the magnetite and the magr mechanism. we address the question of how close these mechanisms approach the energy resolution limit. at the quantitative level, the utility of the energy resolution limit is that it informs the workings of magnetic sensing in model - independent ways, and thus can provide subtle consistency checks for theoretical models and estimated or measured parameter values, particularly needed in complex biological systems. at the qualitative level, the closer the energy resolution is to hbar, the more " quantum " is the sensor. this offers an alternative route towards understanding the quantum biology of magnetoreception. it also quantifies the room for improvement, illuminating what nature has achieved, and stimulating the engineering of biomimetic sensors exceeding nature ' s magnetic sensing performance.
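for reference, the energy resolution limit invoked above is commonly written in the form below ( a paraphrase from the literature ; here $ s _ b $ denotes the power spectral density of the field - equivalent noise, $ v $ the sensing volume, and $ \ mu _ 0 $ the vacuum permeability ) :

```latex
E_R \;\equiv\; \frac{S_B \, V}{2 \mu_0} \;\gtrsim\; \hbar
```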
arxiv:2410.07186
plasma boundary layers are susceptible to electrostatic instabilities driven by ion flows in presheaths and, when present, these instabilities can influence transport. in plasmas with a single species of positive ion, ion - acoustic instabilities are expected under conditions of low pressure and large electron - to - ion temperature ratio ( $ t _ e / t _ i \ gg 1 $ ). in plasmas with two species of positive ions, ion - ion two - stream instabilities can also be excited. the stability phase - space is characterized using the penrose criterion and approximate linear dispersion relations. predictions for how these instabilities affect ion and electron transport in presheaths, including rapid thermalization due to instability - enhanced collisions and an instability - enhanced ion - ion friction force, are also briefly reviewed. recent experimental tests of these predictions are discussed along with research needs required for further validation. the calculated stability boundaries provide a guide to determine the experimental conditions at which these effects can be expected.
arxiv:1510.00991
value adjustment of uncollateralized trades is determined within a risk - neutral pricing framework. when hedging such trades, investors cannot freely trade protection on their own name, thus facing an incomplete market. this fact is reflected in the non - uniqueness of the pricing measure, which is only constrained by the values of the hedging instruments tradable by the investor. uncollateralized trades should then be considered not as derivatives but as new primary assets in the investor ' s economy. different choices of the risk - neutral measure correspond to different completions of the market, based on the risk appetite of the investor, leading to different levels of value adjustments. we recover, in limiting cases, results well known in the literature.
arxiv:1409.6093
associative memory hamiltonian structure prediction potentials are not overly rugged, thereby suggesting their landscapes are like those of actual proteins. in the present contribution we show how basin - hopping global optimization can identify low - lying minima for the corresponding mildly frustrated energy landscapes. for small systems the basin - hopping algorithm succeeds in locating both lower minima and conformations closer to the experimental structure than does molecular dynamics with simulated annealing. for large systems the efficiency of basin - hopping decreases for our initial implementation, where the steps consist of random perturbations to the cartesian coordinates. we implemented umbrella sampling using basin - hopping to further confirm when the global minima are reached. we have also improved the energy surface by employing bioinformatic techniques for reducing the roughness or variance of the energy surface. finally, the basin - hopping calculations have guided improvements in the excluded volume of the hamiltonian, producing better structures. these results suggest a novel and transferable optimization scheme for future energy function development.
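basin - hopping itself is available off the shelf in scipy ; the minimal sketch below runs it on a toy 1 - d rugged landscape ( the protein work instead uses the associative memory hamiltonian as the energy function, which is not reproduced here ) :

```python
import numpy as np
from scipy.optimize import basinhopping

def rugged(x):
    """a 1-d landscape with many local minima and one global minimum."""
    return 0.1 * x[0] ** 2 + np.sin(3.0 * x[0])

# random perturbations followed by local minimization, as in the paper's
# initial implementation (step size and iteration count are arbitrary here)
res = basinhopping(rugged, x0=[4.0], niter=200, stepsize=1.0, seed=0)
print("lowest minimum found: x =", res.x, "f =", res.fun)
```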
arxiv:0806.3652
supersymmetric monojets may be produced at the large hadron collider by the process qg - > squark neutralino _ 1 - > q neutralino _ 1 neutralino _ 1, leading to a jet recoiling against missing transverse momentum. we discuss the feasibility and utility of the supersymmetric monojet signal. in particular, we examine the possible precision with which one can ascertain the neutralino _ 1 - squark - quark coupling via the rate for monojet events. such a coupling contains information on the composition of the neutralino _ 1 and helps bound dark matter direct detection cross - sections and the dark matter relic density of the neutralino _ 1. it also provides a check of the supersymmetric relation between gauge couplings and gaugino - quark - squark couplings.
arxiv:1010.4261
a self - energy - functional approach is applied to construct cluster approximations for correlated lattice models. it turns out that the cluster - perturbation theory ( senechal et al, prl 84, 522 ( 2000 ) ) and the cellular dynamical mean - field theory ( kotliar et al, prl 87, 186401 ( 2001 ) ) are limiting cases of a more general cluster method. results for the one - dimensional hubbard model are discussed with regard to boundary conditions, bath degrees of freedom and cluster size.
arxiv:cond-mat/0303136