text | source |
|---|---|
Here we report a hyperscaling analysis of thermodynamic measurements as a function of temperature and magnetic field for Ni$_{1-x}$Rh$_x$ with $x = 0.375$, where a ferromagnetic quantum critical point has recently been identified [Phys. Rev. Lett. $\textbf{124}$, 117203 (2020)]. The obtained critical exponents agree well with the theory proposed by Belitz, Kirkpatrick, and Vojta for a disorder-tuned quantum critical point in the preasymptotic region. | arxiv:2112.10997 |
Our main theorem characterizes the complete intersections of codimension 2 in a projective space of dimension 3 or more over an algebraically closed field of characteristic 0 as the subcanonical and self-linked subschemes. In order to prove this theorem, we will prove the Gherardelli linkage theorem, which asserts that a partial intersection of two hypersurfaces is subcanonical if and only if its residual intersection is, scheme-theoretically, the intersection of the two hypersurfaces with a third. | arxiv:math/0003075 |
The circumgalactic medium (CGM) is a crucial component of galaxy evolution, but thus far its physical properties are highly unconstrained. As of yet, no cosmological simulation has reached convergence when it comes to constraining the cold and dense gas fraction of the CGM. Such components are also challenging to observe, and require sub-millimeter instruments with a high sensitivity to extended, diffuse emission, like the proposed Atacama Large Aperture Sub-millimetre Telescope (AtLAST). We present a state-of-the-art theoretical effort at modeling the [CII], [CI](1-0), [CI](2-1), CO(3-2), and [OIII] line emissions of galaxies. We use the high-resolution cosmological zoom-in simulation Ponos, representing a star-forming galaxy system at z = 6.5 ($M_* = 2\times10^9~M_{\odot}$), undergoing a major merger. We adopt different modeling approaches based on the photoionisation code Cloudy. Our fiducial model uses radiative transfer post-processing with RamsesRT and Krome to create realistic FUV radiation fields, which we compare to sub-grid modeling approaches adopted in the literature. We find significant differences in the luminosity and in the contribution of different gas phases and galaxy components between the different modeling approaches. [CII] is the least model-dependent gas tracer, while [CI](1-0) and CO(3-2) are very model-sensitive. In all models, we find a significant contribution to the emission of [CII] (up to $\sim$10%) and [OIII] (up to $\sim$20%) from the CGM. [CII] and [OIII] trace different regions of the CGM: [CII] arises from an accreting filament and from tidal tails, while [OIII] traces a puffy halo surrounding the main disc, probably linked to SN feedback. We discuss our results in the context of current and future sub-mm observations with ALMA and AtLAST. | arxiv:2306.00583 |
The range of nucleon interaction 10^-4 cm - 1 cm is interesting because it corresponds to the mass range of an intermediate particle inside the so-named "axion window" that is not yet closed by experiment. Depolarization of ultracold neutrons (UCN) during their storage in material traps can be caused by CP-violating pseudo-magnetic precession of the neutron spin in the vicinity of the unpolarized substance surface. Using the experimental limits for UCN depolarization, new constraints were set for the product of the scalar and pseudo-scalar dimensionless constants g_s*g_p and the parameter lam_ps, determining the Yukawa-type nucleon interaction potential via a new pseudo-scalar boson (axion-like particle) with a mass of m_ps: g_s*g_p*lam_ps^2 <= 2.96*10^-21 [cm^2] for 10^-3 cm < lam_ps < 1 cm; g_s*g_p*lam_ps^2 <= 3.9*10^-22 [cm^2] for 10^-4 cm < lam_ps < 10^-3 cm. The improvement of the limit for g_s*g_p in the region of lam_ps from 0.1 cm to 1 cm amounts to 4-5 orders of magnitude in comparison with the previous limit. The prospects of increasing the accuracy of the search for CP-violating pseudo-magnetic precession are considered. Estimations of the possible effects of pseudo-magnetic precession in the frame of theoretical models with CP violation are discussed. | arxiv:0902.1056 |
The two-dimensional antiferromagnetic S = 1/2 Heisenberg model with random bond dilution is studied using quantum Monte Carlo simulation at the percolation threshold (50% of the bonds removed). Finite-size scaling of the staggered structure factor averaged over the largest connected clusters of sites on L*L lattices shows that long-range order exists within the percolating fractal clusters in the thermodynamic limit. This implies that the order-disorder transition driven by bond dilution occurs exactly at the percolation threshold and that the exponents are classical. This result should apply also to the site-diluted system. | arxiv:cond-mat/9909230 |
This paper deals with stratifying systems over hereditary algebras. In the case of tame hereditary algebras we obtain a bound for the size of the stratifying systems composed only of regular modules, and we conclude that such stratifying systems cannot be complete. For wild hereditary algebras with more than 2 vertices we show that there exists a complete stratifying system whose elements are regular modules. In the other case, we conclude that there is no stratifying system over them with regular modules. In one example we build all the stratifying systems, with a specific form, having the maximal number of regular summands. | arxiv:1308.5547 |
Along with current multi-scale based detectors, feature aggregation and enhancement (FAE) modules have shown superior performance gains for cutting-edge object detection. However, these hand-crafted FAE modules show inconsistent improvements on face detection, which is mainly due to the significant distribution difference between their training and applying corpora, COCO vs. WIDER Face. To tackle this problem, we essentially analyse the effect of data distribution, and consequently propose to search for an effective FAE architecture, termed AutoFAE, by a differentiable architecture search, which outperforms all existing FAE modules in face detection with a considerable margin. Upon the found AutoFAE and existing backbones, a supernet is further built and trained, which automatically obtains a family of detectors under different complexity constraints. Extensive experiments conducted on popular benchmarks, WIDER Face and FDDB, demonstrate the state-of-the-art performance-efficiency trade-off for the proposed Automatic and Scalable Face Detector (ASFD) family. In particular, our strong ASFD-D6 outperforms the best competitor with AP 96.7/96.2/92.1 on the WIDER Face test set, and the lightweight ASFD-D0 costs about 3.1 ms, more than 320 FPS, on a V100 GPU with VGA-resolution images. | arxiv:2201.10781 |
Metallic magnetic micro-calorimeters (MMCs) operated at millikelvin temperatures offer the possibility to achieve eV-scale energy resolution with high stopping power for X-rays and massive particles in an energy range up to several tens of keV. This motivates their use in a wide range of applications in fields such as particle physics and atomic and molecular physics. Present detector systems consist of MMC arrays read out by 32 two-stage SQUID read-out channels. In contrast to the design of the detector array, and consequently the design of the front-end SQUIDs, which need to be optimised for the physics case and the particles to be detected in a given experiment, the read-out chain can be standardised. We present our new standardised 32-channel parallel read-out for the operation of MMC arrays in a dilution refrigerator. The read-out system consists of a detector module, whose design depends on the particular application, an amplifier module, ribbon cables from room temperature to the millikelvin platform, and a data acquisition system. In particular, we describe the realisation of the read-out system prepared for the ECHo-1k experiment for the operation of two 64-pixel arrays. The same read-out concept is also used for the maXs detector systems, developed for the study of the de-excitation of highly charged heavy ions by X-rays, as well as for the MOCCA system, developed for the energy- and position-sensitive detection of neutral molecular fragments for the study of fragmentation when molecular ions recombine with electrons. The choice of standard modular components for the operation of 32-channel MMC arrays offers the flexibility to upgrade detector modules without the need for any changes in the read-out system, and the possibility to individually exchange parts in case of damage or failure. | arxiv:2102.11100 |
The characteristics of design work at a firm undergoing renovation, and the common directions of its informatization, are given. The implantation of a CAD system is selected as the key direction, and the requirements for a complex CAD system are stated. The methods of developing such a CAD system are described, and the connectedness of this development with the process of integrating the information space of the firm's design department is characterized. The basis of this review is the experience of developing and implanting TechnoCAD GlassX, a complex CAD system for the renovation of firms. | arxiv:cs/0412034 |
In this paper, we study the effects of using an algorithm-based risk assessment instrument to support the prediction of risk of criminal recidivism. The instrument we use in our experiments is a machine learning version of RiskEval (name changed for double-blind review), which is the main risk assessment instrument used by the Justice Department of Country (omitted for double-blind review). The task is to predict whether a person who has been released from prison will commit a new crime, leading to re-incarceration, within the next two years. We measure, among other variables, the accuracy of human predictions with and without algorithmic support. This user study is done with (1) general participants from diverse backgrounds recruited through a crowdsourcing platform, and (2) targeted participants who are students and practitioners of data science, criminology, or social work, and professionals who work with RiskEval. Among other findings, we observe that algorithmic support systematically leads to more accurate predictions from all participants, but that statistically significant gains are only seen in the performance of targeted participants with respect to that of crowdsourced participants. We also run focus groups with participants of the targeted study, including people who use RiskEval in a professional capacity, to interpret the quantitative results. Among other comments, professional participants indicate that they would not foresee using a fully automated system in criminal risk assessment, but do consider it valuable for training, standardization, and to fine-tune or double-check their predictions on particularly difficult cases. | arxiv:2201.11080 |
Magnon spintronics is a prosperous field that promises beyond-CMOS technology based on elementary excitations of the magnetic order that act as information carriers for future computational architectures. Unidirectional propagation of spin waves is key to the realization of magnonic logic devices. However, previous efforts to enhance the Damon-Eshbach-type nonreciprocity did not realize (let alone control) purely unidirectional propagation. Here we experimentally demonstrate excitation of unidirectional exchange spin waves by a nanoscale magnetic grating consisting of Co nanowires fabricated on an ultrathin yttrium iron garnet film. We explain and model the nearly perfect unidirectional excitation by the chirality of the magneto-dipolar interactions between the Kittel mode of the nanowires and the exchange spin waves of the film. Reversal of the magnetic configurations of film and nanowire array from parallel to antiparallel changes the direction of the excited spin waves. Our results raise the prospect of a chiral magnonic logic without the need for fragile surface states. | arxiv:1903.00638 |
Text classification is a fundamental task in natural language processing (NLP). Several recent studies show the success of deep learning on text processing. The convolutional neural network (CNN), as a popular deep learning model, has shown remarkable success in the task of text classification. In this paper, new baseline models are studied for text classification using CNNs. In these models, documents are fed to the network as a three-dimensional tensor representation to provide sentence-level analysis. Applying such a method enables the models to take advantage of the positional information of the sentences in the text. Besides, analysing adjacent sentences allows extracting additional features. The proposed models have been compared with state-of-the-art models using several datasets. The results show that the proposed models have better performance, particularly on longer documents. | arxiv:2301.11696 |
Transverse single-spin asymmetries (SSA) in inclusive reactions are now considered to be directly related to the transverse momentum ${\bf k}_{T}$ of the fundamental partons involved in the process. We find that the ideal probe to extract information on the gluon Sivers function is the transverse SSA of prompt photon production $p p^{\uparrow} \to \gamma X$ at large $p_T$. The following related processes, $p p^{\uparrow} \to \gamma + jet + X$, $p p^{\uparrow} \to \gamma^* + X \to \mu^+ \mu^- + X$ and $\bar{p} p^{\uparrow} \to \gamma + X$, are also briefly discussed. | arxiv:hep-ph/0503127 |
In this paper we report a systematic search for an emission line around 3.5 keV in the spectrum of the cosmic X-ray background using a total of $\sim$10 Ms of Chandra observations towards the COSMOS Legacy and CDFS survey fields. We find marginal evidence of a feature at an energy of $\sim$3.51 keV with a significance of 2.5-3$\sigma$, depending on the choice of statistical treatment. The line intensity is best fit at $8.8\pm2.9\times10^{-7}$ ph cm$^{-2}$ s$^{-1}$ when using a simple $\Delta\chi^2$, or $10.2^{+0.2}_{-0.4}\times10^{-7}$ ph cm$^{-2}$ s$^{-1}$ when MCMC is used. Based on our knowledge of Chandra, and the reported detection of the line by other instruments, an instrumental origin for the line remains unlikely. We cannot, though, rule out a statistical fluctuation, and in that case our results provide a 3$\sigma$ upper limit of 1.85$\times$10$^{-6}$ ph cm$^{-2}$ s$^{-1}$. We discuss the interpretation of this observed line in terms of the iron line background and S {\sc xvi} charge exchange, as well as potentially from sterile neutrino decay. We note that our detection is consistent with previous measurements of this line toward the Galactic center, and can be modeled as the result of sterile neutrino decay from the Milky Way for a dark matter distribution modeled as an NFW profile. For this case, we estimate a mass m$_{\nu}\sim$7.01 keV and a mixing angle sin$^2$(2$\theta$) = 0.83--2.75$\times 10^{-10}$. These derived values are in agreement with independent estimates from galaxy clusters, the Galactic center and M31. | arxiv:1701.07932 |
Architectures with multiple classes of memory media are becoming a common part of mainstream supercomputer deployments. So-called multi-level memories offer differing characteristics for each memory component, including variation in bandwidth, latency and capacity. This paper investigates the performance of sparse matrix multiplication kernels on two leading high-performance computing architectures -- Intel's Knights Landing processor and NVIDIA's Pascal GPU. We describe a data placement method and a chunking-based algorithm for our kernels that exploits the existence of the multiple memory spaces in each hardware platform. We evaluate the performance of these methods with respect to standard algorithms using the auto-caching mechanisms. Our results show that standard algorithms that exploit cache reuse performed as well as multi-memory-aware algorithms for architectures such as KNLs, where the memory subsystems have similar latencies. However, for architectures such as GPUs, where memory subsystems differ significantly in both bandwidth and latency, multi-memory-aware methods are crucial for good performance. In addition, our new approaches permit the user to run problems that require larger capacities than the fastest memory of each compute node without depending on the software-managed cache mechanisms. | arxiv:1804.00695 |
Any permutation-invariant function of data points $\vec{r}_i$ can be written in the form $\rho(\sum_i \phi(\vec{r}_i))$ for suitable functions $\rho$ and $\phi$. This form -- known in the machine-learning literature as deep sets -- also generates a map-reduce algorithm. The area of a triangle is a permutation-invariant function of the locations $\vec{r}_i$ of the three corners $1 \leq i \leq 3$. We find the polynomial formula for the area of a triangle that is explicitly in deep sets form. This project was motivated by questions about the fundamental computational complexity of $n$-point statistics in cosmology; that said, no insights of any kind were gained from these results. | arxiv:2503.22786 |
We present the asymptotic analysis of the quantum two-port interferometer in the $n \rightarrow \infty$ limit of $n$ partially indistinguishable photons. Using the unitary-unitary duality between port and inner-mode degrees of freedom, the probability distribution of output port counts can be decomposed as a sum of contributions from independent channels, each associated to a spin-$j$ representation of $SU(2)$ and, in this context, to $2j$ effectively indistinguishable photons in the channel. Our main result is that the asymptotic output distribution is dominated by the $O(\sqrt{n})$ channels around a certain $j^*$ that depends on the degree of indistinguishability. The asymptotic form is essentially the doubly-humped semi-classical envelope of the distribution that would arise from $2j^*$ indistinguishable photons, and which reproduces the corresponding classical intensity distribution. | arxiv:2312.16774 |
Analysis of asset liability management (ALM) strategies, especially over long time horizons, is a crucial issue for banks, funds and insurance companies. Modern economic models, investment strategies and optimization criteria make ALM studies a computationally very intensive task. This attracts attention to multiprocessor systems and especially to the cheapest ones: multi-core PCs and PC clusters. In this article we analyze the problem of parallel organization of portfolio optimization, the results of using clusters for optimization, and the most efficient cluster architecture for these kinds of tasks. | arxiv:0811.1504 |
In this short note we discuss recent results on hook length formulas of trees unifying some earlier results, and explain hook length formulas naturally associated to families of increasingly labelled trees. | arxiv:1004.1883 |
We study the Murray adaptation of the Noyes-Field five-step model of the Belousov-Zhabotinsky (BZ) reaction in the case when a tuning parameter $r$, which determines the level of the bromide ion far ahead of the propagating wave, is bigger than 1 and when the delay in generation of the bromous acid is taken into account. The existence of wavefronts in the delayed BZ system was previously established only in the monostable situation with $r \in (0, 1]$; the physically relevant bistable situation where $r > 1$ (in real experiments $r$ varies between 5 and 50) was left open. We complete the study by showing that the BZ system with $r > 1$ admits monotone traveling fronts. Note that one of the stable equilibria of the BZ model is not isolated. This circumstance does not allow the direct application of the topological or analytical methods previously elaborated for the analysis of the existence of bistable waves. | arxiv:2305.07823 |
We investigate a state estimation problem for a dynamical system described by an uncertain linear operator equation in Hilbert space. The uncertainty is supposed to admit a set-membership description. We present explicit expressions for the linear minimax estimate and error, provided that any pair of uncertain parameters belongs to a quadratic bounding set. We introduce a new notion of minimax directional observability and an index of non-causality for linear noncausal DAEs. Application of these notions to the state estimation problem for linear uncertain noncausal DAEs allows us to derive a new minimax recursive estimator for both continuous and discrete time. We illustrate the benefits of non-causality of the plant by applying our approach to a scalar nonlinear set-membership state estimation problem. A numerical example is presented. | arxiv:0810.3305 |
Face-morphing attacks have been a cause for concern for a number of years. Striving to remain one step ahead of attackers, researchers have proposed many methods of both creating and detecting morphed images. These detection methods, however, have generally proven to be inadequate. In this work we identify two new, GAN-based methods that an attacker may already have in his arsenal. Each method is evaluated against state-of-the-art facial recognition (FR) algorithms, and we demonstrate that improvements to the fidelity of FR algorithms do lead to a reduction in the success rate of attacks, provided morphed images are considered when setting operational acceptance thresholds. | arxiv:2012.10548 |
We present the luminosity function of 90um-selected galaxies from the European Large Area ISO Survey (ELAIS), extending to z = 0.3. Their luminosities are in the range 10^9 < h_65^-2 L/Lsun < 10^12, i.e. non-ultraluminous. From our sample of 37 reliably detected galaxies in the ELAIS S1 region from the Efstathiou et al. (2000) S_90 >= 100 mJy database, we found optical, 15um or 1.4 GHz identifications for 24 (65%). We have obtained 2dF and UK Schmidt FLAIR spectroscopy of 89% of IDs to rigid multivariate flux limits. We construct a luminosity function assuming (a) our spectroscopic subset is an unbiased sparse sample, and (b) there are no galaxies which would not be represented in our spectroscopic sample at {\it any} redshift. We argue that we can be confident of both assumptions. We find the luminosity function is well described by the local 100um luminosity function of Rowan-Robinson, Helou & Walker (1987). {\it Assuming} this local normalisation, we derive luminosity evolution of (1+z)^{2.45\pm0.85} (95% confidence). We argue that star formation dominates the bolometric luminosities of these galaxies, and we derive comoving star formation rates in broad agreement with the Flores et al. (1999) and Rowan-Robinson et al. (1997) mid-IR-based estimates. | arxiv:astro-ph/0010025 |
We investigate the effect of nucleon-nucleon correlations on the initial condition of ultra-central heavy-ion collisions at LHC energies. We calculate the eccentricities of the MC-Glauber and IP-Glasma models in the 0--1% centrality class and show that they are considerably affected by the inclusion of such correlations. For an IP-Glasma initial condition, we further demonstrate that this effect survives the fluid-dynamical evolution of the system and can be observed in its final-state azimuthal momentum anisotropy. | arxiv:1406.7792 |
The Hubble Space Telescope (HST) UV Legacy Survey of Galactic globular clusters (GCs) has investigated multiple stellar populations by means of the "chromosome map" (ChM), a diagnostic tool that maximises the separation between stars with different chemical compositions. One of the most challenging features revealed by the ChM analysis is the apparent inhomogeneity among stars belonging to the first population, a phenomenon largely attributed to He variations. However, this explanation is not supported by the uniformity in p-capture elements of these stars. The HST survey has revealed that the GC NGC 3201 shows an exceptionally wide coverage in the Delta(F275W, F814W) parameter of the ChM. We present a chemical abundance analysis of 24 elements in 18 giants belonging to the first population of this GC and spanning a wide range in Delta(F275W, F814W). As far as the p-capture elements are concerned, the chemical abundances are typical of 1G stars, as expected from the location of our targets in the ChM. Based on radial velocity and chemical abundance arguments, we find that the three stars with the lowest Delta(F275W, F814W) values are binary candidates. This suggests that at least those stars could be explained by binarity. These results are consistent with evidence inferred from multi-band photometry that evolved blue stragglers populate the bluest part of the 1G sequence in the ChM. The remaining 15 spectroscopic targets show a small range in the overall metallicity of ~0.10 dex, with stars at higher Delta(F275W, F814W) values having higher absolute abundances. We suggest that a small variation in metals and binarity govern the color spread of the 1G in the ChM, and that evolved blue stragglers contribute to the bluest tail of the 1G sequence. | arxiv:1910.02892 |
Off-diagonal parton distributions occur in several hard exclusive reactions. They extend the study of hadron structure beyond what can be learned from ordinary distributions and have a particularly rich spin structure. The hard scattering subprocesses in electroproduction of mesons and of real photons satisfy helicity selection rules, which provide powerful tools to test leading-twist dominance at a given value of the hard scale. | arxiv:hep-ph/9811220 |
We present a study of neutrino/antineutrino-induced charged and neutral current single pion production off the nucleon. For this, we have considered the $P_{33}(1232)$ resonance, non-resonant background terms, and other higher resonances like $P_{11}(1440)$, $S_{11}(1535)$, $D_{13}(1520)$, $S_{11}(1650)$ and $P_{13}(1720)$. For the non-resonant background terms a microscopic approach based on the SU(2) non-linear sigma model has been used. The vector form factors for the resonances are obtained by using the relationship between the electromagnetic resonance form factors and helicity amplitudes provided by MAID. The axial coupling $C_5^A(0)$ in the case of the $P_{33}(1232)$ resonance is obtained by fitting the ANL and BNL $\nu$-deuteron reanalyzed scattering data. The results are presented with and without the deuteron effect for the total scattering cross sections for all possible channels, viz. $\nu_l(\bar\nu_l) + N \rightarrow l^-(l^+) + N^\prime + \pi^i$; $\nu_l(\bar\nu_l) + N \rightarrow \nu_l(\bar\nu_l) + N^\prime + \pi^i$, where $N, N^\prime = p, n$, $\pi^i = \pi^\pm$ or $\pi^0$ and $l = e, \mu$. | arxiv:1509.08622 |
We report on full-polarization radio observations of the Perseus cluster (Abell 426) using the Westerbork Synthesis Radio Telescope (WSRT) at wavelengths from 81-95 cm. We have employed a novel technique, rotation measure synthesis (Brentjens and de Bruyn, 2005), to unravel the polarization properties of the emission across the full field of view and detect polarized emission over a wide range of RM from about 0 to 90 rad m^-2. The low-RM emission is associated with our Galaxy, while the high-RM emission is associated with the Perseus cluster. The latter reaches typical surface brightness levels of 0.5-1 mJy per beam and must be rather highly polarized. Most of the peripheral polarized emission appears too bright, by about 1-2 orders of magnitude, to be explainable as Thomson-scattered emission of the central radio source off the thermal electrons in the cluster. The bulk of the emission associated with the Perseus cluster is probably related to buoyant bubbles of relativistic plasma, probably relics from still active or now dormant AGN within the cluster. A lenticular-shaped structure measuring 0.5-1 Mpc is strikingly similar to the structures predicted by Ensslin et al. (1998). At the western edge of the cluster, we detect very long, linear structures that may be related to shocks caused by infall of gas into the Perseus cluster. | arxiv:astro-ph/0507351 |
Let F denote a field of characteristic different from two. In this paper we describe the mod 2 cohomology of a Galois group which is determined by the Witt ring WF. | arxiv:math/9812169 |
The Brillouin zone of the clean Weyl semimetal contains points at which the density of states (DOS) vanishes. Previous work suggested that below a certain critical concentration of impurities this feature is preserved, including in the presence of disorder. This result was criticized for its neglect of rare disorder fluctuations, which might bind quantum states and hence generate a finite DOS. We here show that in spite of their existence these states are so fragile that their contribution effectively vanishes when averaged over continuous disorder distributions. This means that the integrity of the nodal points remains protected for weak disorder. | arxiv:1805.00018 |
Since their introduction by Erd\H{o}s in 1950, covering systems (that is, finite collections of arithmetic progressions that cover the integers) have been extensively studied, and numerous questions and conjectures have been posed regarding the existence of covering systems with various properties. In particular, Erd\H{o}s asked if the moduli can be distinct and all arbitrarily large, Erd\H{o}s and Selfridge asked if the moduli can be distinct and all odd, and Schinzel conjectured that in any covering system there exists a pair of moduli, one of which divides the other. Another beautiful conjecture, proposed by Erd\H{o}s and Graham in 1980, states that if the moduli are distinct elements of the interval $[n, Cn]$, and $n$ is sufficiently large, then the density of integers uncovered by the union is bounded below by a constant (depending only on $C$). This conjecture was confirmed (in a strong form) by Filaseta, Ford, Konyagin, Pomerance and Yu in 2007, who moreover asked whether the same conclusion holds if the moduli are distinct and sufficiently large, and $\sum_{i=1}^k \frac{1}{d_i} < C$. Although this condition turns out not to be sufficiently strong to imply the desired conclusion, as the main result of this paper we will give an essentially best possible condition which is sufficient. Our method has a number of further applications. Most importantly, we prove the conjecture of Schinzel stated above, which was made in 1967. We moreover give an alternative (somewhat simpler) proof of a breakthrough result of Hough, who resolved Erd\H{o}s' minimum modulus problem, with an improved bound on the smallest difference. Finally, we make further progress on the problem of Erd\H{o}s and Selfridge. | arxiv:1811.03547 |
Using advanced first-principles calculations we predict that the non-polar SrTiO$_3$/SrZrO$_3$ (001) interface, designed as either a thin SrZrO$_3$ film deposited on SrTiO$_3$ or a short-period (SrTiO$_3$)$_m$/(SrZrO$_3$)$_n$ superlattice, hosts a 2-dimensionally confined electron gas. Mobile electron charge due to native impurities, field effect, or modulation doping remains tightly trapped at the interface. Key ingredients for this occurrence are a) the peculiar chemistry of 3d orbitals, and b) the large band offset at the titanate-zirconate interface. | arxiv:1309.4965 |
we study a set of two - loop non - planar master integrals needed for the nnlo qcd corrections to diphoton and dijet production at hadron colliders. the top - sector topology contains an internal massive fermion loop and is known to contain elliptic curves. leveraging the method of differential equations, we provide a comprehensive discussion for deriving an $ \ epsilon $ - factorized differential equation related to the most intricate sector within the feynman integral family. despite the dependence on multiple scales and the presence of two elliptic sectors, we demonstrate how to leverage the properties of their maximal cuts and the factorization of the picard - fuchs operator to deal with the complexity of the analytic computation. in particular, we construct a transformation matrix that brings the differential equations into a format enabling the convenient expression of analytic results in terms of chen ' s iterated integrals. | arxiv:2402.07311 |
we calculate explicitly the bethe vector states by the algebraic bethe ansatz method with the $ gl ( 2 ) $ - invariant $ r $ - matrix for the two - site bose - hubbard model. using a binomial expansion of the n - th power of a sum of two operators we obtain and solve a recursion relation. we calculate the scalar product and the norm of the bethe vector states. the form factors of the imbalance current operator are also computed. | arxiv:1503.07885 |
taking into account the mixing effects between left - and right - handed top - squarks, we calculate the genuine supersymmetric electroweak correction to top quark production at the tevatron in the minimal supersymmetric model. the analytic expressions of the corrections to both the parton level cross section and the total hadronic cross section are presented. some numerical examples are also given to show the size of the corrections. | arxiv:hep-ph/9603442 |
the existence of a mass gap between standard model ( sm ) fields and eventual beyond standard model ( bsm ) fields has been confirmed experimentally. therefore, the use of effective approaches to search for fingerprints of new physics is very appealing. a non - linear realization of electroweak symmetry breaking is considered here, where the higgs is a singlet with free couplings and the sm fields are also coupled to bosonic heavy resonances. a one - loop - level calculation of the oblique s and t parameters is presented here. this analysis allows us to constrain resonance masses to be above the tev scale, $ m _ r \! > \! 3 \, $ tev, in good agreement with our previous determinations, where these observables were computed with a more simplified lagrangian. | arxiv:2309.09741 |
the x - ray fluorescence remote sensing technique plays a significant role in research on the chemical composition of the moon. here we describe the data analysis method for china ' s chang ' e - 2 x - ray spectrometer ( ce2xrs ) in detail and present the preliminary results : the first global mg / si and al / si maps of the lunar surface. our results show that the distributions of mg / si and al / si correlate well with the terrains of the moon : higher mg / si ratios correspond to the mare regions, while lower values correspond to the highland terrains. the map of the al / si ratio shows a reverse relationship with the map of the mg / si ratio. | arxiv:1508.00678 |
the mass derived from gravitational lensing reflects the total mass contained in the lensing system, independent of the specific matter contents and states. a comparison of the dynamical masses from hydrostatic equilibrium with the gravitational masses from arc - like images of background galaxies is made for four clusters of galaxies at intermediate redshifts. it is found that virial analysis has underestimated the total cluster masses ( from lensing ) by a factor of $ 3 \ sim6 $ within a radius of $ \ sim0. 3 $ mpc $ h _ { 50 } ^ { - 1 } $ around the cluster centers, indicating that clusters of galaxies might not be regarded as well relaxed virialized systems. the increase of the total cluster masses obtained from lensing leads to a decrease of the baryon fractions of clusters of galaxies, which provides a clue for solving the " $ \ omega _ 0 $ discrepancy puzzle " in cosmology. | arxiv:astro-ph/9406036 |
ecological connectivity in coastal oceanic waters is mediated by dispersion of the early life stages of marine organisms and conditions the structure of biological communities and the provision of ecosystem services. integrated management strategies aimed at ensuring long - term service provision to society do not currently consider the importance of dispersal and larval connectivity. a spatial optimization model is introduced to maximise the potential provision of ecosystem services in coastal areas by accounting for the role of dispersal and larval connectivity. the approach combines a validated coastal circulation model that reproduces realistic patterns of larval transport along the coast, which ultimately conditions the biological connectivity and productivity of an area, with additional spatial layers describing potential ecosystem services. the spatial optimization exercise was tested along the coast of central chile, a highly productive area dominated by the humboldt current. results show it is unnecessary to relocate existing management areas, as increasing no - take areas by 10 % could maximise ecosystem service provision, while improving the spatial representativeness of protected areas and minimizing social conflicts. the location of protected areas was underrepresented in some sections of the study domain, principally due to the restriction of the model to rocky subtidal habitats. future model developments should encompass the diversity of coastal ecosystems and human activities to inform integrative spatial management. nevertheless, the spatial optimization model is innovative not only for its integrated ecosystem perspective, but also because it demonstrates that it is possible to incorporate time - varying biophysical connectivity within the optimization problem, thereby linking the dynamics of exploited populations produced by the spatial management regime. | arxiv:1903.10322 |
we propose a novel approach to modeling advertising dynamics for a firm operating over distributed market domain based on controlled partial differential equations of diffusion type. using our model, we consider a general type of finite - horizon profit maximization problem in a monopoly setting. by reformulating this profit maximization problem as an optimal control problem in infinite dimensions, we derive sufficient conditions for the existence of its optimal solutions under general profit functions, as well as state and control constraints, and provide general characterization of the optimal solutions. sharper, feedback - form, characterizations of the optimal solutions are obtained for two variants of the general problem. | arxiv:math/0406435 |
spectral clustering is one of the most prominent clustering approaches. distance - based similarity is the most widely used similarity measure for spectral clustering. however, it has long been noticed that this is not suitable for multi - scale data, as the distance varies a lot for clusters with different densities. the state of the art ( rosc and cast ) addresses this limitation by taking the reachability similarity of objects into account. however, we observe that in real - world scenarios, data in the same cluster tend to present in a smooth manner, and previous algorithms never take this into account. based on this observation, we propose a novel clustering algorithm, which considers the smoothness of data for the first time. we first divide objects into a great many tiny clusters. our key idea is to cluster tiny clusters, whose centers constitute smooth graphs. theoretical analysis and experimental results show that our clustering algorithm significantly outperforms the state of the art. although in this paper we focus solely on multi - scale situations, the idea of data smoothness can certainly be extended to any clustering algorithm. | arxiv:2009.04674 |
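As a minimal illustration of the distance-based similarity this row refers to, here is a bare-bones spectral bipartition (Gaussian affinity, unnormalised Laplacian, sign of the Fiedler vector). It is not the paper's algorithm, just the standard baseline it builds on, run on made-up data.

```python
import numpy as np

def spectral_bipartition(X, sigma=1.0):
    """Split points into two clusters: Gaussian (distance-based)
    similarity, unnormalised Laplacian L = D - W, then threshold the
    Fiedler vector (eigenvector of the 2nd-smallest eigenvalue) at zero."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W
    _, vecs = np.linalg.eigh(L)
    return (vecs[:, 1] > 0).astype(int)

rng = np.random.default_rng(0)
A = rng.normal([0.0, 0.0], 0.1, size=(20, 2))   # tight cluster at the origin
B = rng.normal([5.0, 5.0], 0.1, size=(20, 2))   # tight cluster far away
labels = spectral_bipartition(np.vstack([A, B]))
print(labels)
```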
global magnetohydrodynamic ( mhd ) models play an important role in the infrastructure of space weather forecasting. validating such models commonly utilizes in situ solar wind measurements made near the orbit of the earth. the purpose of this study is to test the performance of g3dmhd ( a data driven, time - dependent, 3 - d mhd model of the solar wind ) with parker solar probe ( psp ) measurements. since its launch in august 2018, psp has traversed the inner heliosphere at different radial distances sunward of the earth ( the closest approach ~ 13. 3 solar radii ), thus providing a good opportunity to study the evolution of the solar wind and to validate heliospheric models of the solar wind. the g3dmhd model simulation is driven by a sequence of maps of photospheric field extrapolated to the assumed source surface ( 2. 5 rs ) using the potential field model from 2018 to 2022, which covers the first 15 psp orbits. the pearson correlation coefficient ( cc ) and the mean absolute squared error ( mase ) are used as the metrics to evaluate the model performance. it is found that the model performs better for both magnetic intensity ( cc = 0. 75 ; mase = 0. 60 ) and the solar wind density ( cc = 0. 73 ; mase = 0. 50 ) than for the solar wind speed ( cc = 0. 15 ; mase = 1. 29 ) and temperature ( cc = 0. 28 ; mase = 1. 14 ). this is due primarily to lack of accurate boundary conditions. the well - known underestimate of the magnetic field in solar minimum years is also present. assuming that the radial magnetic field becomes uniformly distributed with latitude at or below 18 rs ( the inner boundary of the computation domain ), the agreement in the magnetic intensity significantly improves ( cc = 0. 83 ; mase = 0. 49 ). | arxiv:2410.23157 |
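The two validation metrics named in this row can be sketched as follows. The exact normalisation used for mase is not given in the abstract, so the scaled-error convention below (model error over the predict-the-mean baseline) is an assumption.

```python
import numpy as np

def pearson_cc(obs, model):
    """Pearson correlation coefficient between observation and model."""
    return float(np.corrcoef(np.asarray(obs, float), np.asarray(model, float))[0, 1])

def mase(obs, model):
    """Model error normalised by the trivial 'predict the observed mean'
    baseline; values below 1 beat the baseline.  (One common convention --
    the abstract does not spell out the paper's normalisation.)"""
    obs, model = np.asarray(obs, float), np.asarray(model, float)
    return float(np.mean(np.abs(model - obs)) / np.mean(np.abs(obs - obs.mean())))

obs = np.array([4.0, 5.0, 6.0, 5.0, 4.0])
model = obs + 0.5                       # constant offset: perfectly correlated
cc, err = pearson_cc(obs, model), mase(obs, model)
print(round(cc, 3), round(err, 5))      # 1.0 0.78125
```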
semantic parsing converts natural language queries into structured logical forms. the paucity of annotated training samples is a fundamental challenge in this field. in this work, we develop a semantic parsing framework with the dual learning algorithm, which enables a semantic parser to make full use of data ( labeled and even unlabeled ) through a dual - learning game. this game between a primal model ( semantic parsing ) and a dual model ( logical form to query ) forces them to regularize each other, and can achieve feedback signals from some prior - knowledge. by utilizing the prior - knowledge of logical form structures, we propose a novel reward signal at the surface and semantic levels which tends to generate complete and reasonable logical forms. experimental results show that our approach achieves new state - of - the - art performance on atis dataset and gets competitive performance on overnight dataset. | arxiv:1907.05343 |
using elementary ideas from tropical geometry, we assign a tropical curve to every $ q $ - holonomic sequence of rational functions. in particular, we assign a tropical curve to every knot which is determined by the jones polynomial of the knot and its parallels. the tropical curve explains the relation between the aj conjecture and the slope conjecture ( which relate the jones polynomial of a knot and its parallels to the $ \ sl ( 2, \ bc ) $ character variety and to slopes of incompressible surfaces ). our discussion predicts that the tropical curve is dual to a newton subdivision of the $ a $ - polynomial of the knot. we compute explicitly the tropical curve for the $ 4 _ 1 $, $ 5 _ 2 $ and $ 6 _ 1 $ knots and verify the above prediction. | arxiv:1003.4436 |
the 2d fermi surface of 1st stage pdal2cl8 acceptor - type graphite intercalation compounds ( gics ) has been investigated using the shubnikov - de haas ( sdh ) effect. one fundamental frequency is observed, the angular variation of which confirms its strongly 2d nature, as previously found through electrical conductivity measurements. the energy spectrum can be described by the 2d band structure model proposed by blinowski et al. we obtain the following parameter values : intraplane c - c interaction energy gamma _ 0 = 2. 7 ev, fermi energy e _ f = - 1. 1 ev and carrier density n _ sdh = 1. 1x10 ^ 27 m ^ - 3. somewhat fewer details are presented for the stage 2 and 3 materials. | arxiv:cond-mat/9910516 |
three algol - type binary systems ( io cep, im cep and tx ari ) showing cyclic orbital period changes are studied. the combination of time of minimum data from the ground - based observations together with high precision photometric data from the tess satellite enabled us to estimate the basic light curve elements of the binary systems and mass functions for distant components around the systems. the relation of mass ratio to the system geometry in semi - detached binary stars allowed us to determine the mass ratio of the binary components without using spectra. by using the color and distance information from the gaia edr3 and light contributions of the components from the light curve analysis, the astrophysical parameters of the binary components as well as the minimum masses of the distant components are obtained with an uncertainty of ~ 10 - 20 per cent, indicating that the method can be a good guide for those studying faint systems where spectra with sufficient resolution and s / n ratio are difficult to acquire. | arxiv:2112.05376 |
we investigate the magnetic instabilities of the two - dimensional model of interacting e _ g electrons for hole doping away from two electrons per site in the mean - field approximation. in particular, we address the occurrence of orbitally polarized states due to the inequivalent orbitals, and their interplay with ferromagnetic and antiferromagnetic spin order. the role played by the hund ' s exchange coupling j _ h and by the crystal field orbital splitting e _ z in stabilizing one of the competing phases is discussed in detail. | arxiv:cond-mat/0502158 |
f. : good morning hermann, i would like to talk with you about infinitesimals. g. : tell me pierre. f. : i ' m fed up with all these slanders about my being non - rigorous, so i ' ve started to study nonstandard analysis ( nsa ) and synthetic differential geometry ( sdg ). g. : yes, i ' ve read something... f. : ok, no problem about their rigour. but, when i ' ve seen that the sine of an infinite in nsa is infinitely near to a real number i was astonished : what is the intuitive meaning of this number, if any? then, i ' ve seen that to work in sdg i must learn to work in intuitionistic logic... you know, i love margins of books, and i don ' t want to lose too much time, i have many things to do... g. : in sdg they also say that every infinitesimal is at the same time positive and negative, what is the meaning of all this? and why does the square of a first order infinitesimal equal zero, whereas the product of two first order infinitesimals is not necessarily zero? and do you know that from any single infinitesimal in nsa it is possible to construct a non measurable set? without using the axiom of choice! f. : yes, i know, i know... ok, listen : why cannot we start from standard real functions of one real variable and use... this work is the ideal continuation of this dialogue : a theory of actual infinitesimals that do not need a background of formal logic to be understood, with a clear intuitive meaning and with non trivial applications to differential geometry of both finite and infinite dimensional spaces. | arxiv:0907.1872 |
in video action recognition, shortcut static features can interfere with the learning of motion features, resulting in poor out - of - distribution ( ood ) generalization. the video background is clearly a source of static bias, but the video foreground, such as the clothing of the actor, can also provide static bias. in this paper, we empirically verify the existence of foreground static bias by creating test videos with conflicting signals from the static and moving portions of the video. to tackle this issue, we propose a simple yet effective technique, stillmix, to learn robust action representations. specifically, stillmix identifies bias - inducing video frames using a 2d reference network and mixes them with videos for training, serving as effective bias suppression even when we cannot explicitly extract the source of bias within each video frame or enumerate types of bias. finally, to precisely evaluate static bias, we synthesize two new benchmarks, scuba for static cues in the background, and scufo for static cues in the foreground. with extensive experiments, we demonstrate that stillmix mitigates both types of static bias and improves video representations for downstream applications. code is available at https : / / github. com / lihaoxin05 / stillmix. | arxiv:2211.12883 |
in this work, we present the three - dimensional maxwell carroll gravity by considering the ultra - relativistic limit of the maxwell chern - simons gravity theory defined in three spacetime dimensions. we show that an extension of the maxwellian carroll symmetry is necessary in order for the invariant tensor of the ultra - relativistic maxwellian algebra to be non - degenerate. consequently, we discuss the origin of the aforementioned algebra and theory as a flat limit. we show that the theoretical setup with cosmological constant yielding the extended maxwellian carroll chern - simons gravity in the vanishing cosmological constant limit is based on an enlarged extended version of the carroll symmetry. indeed, the latter exhibits a non - degenerate invariant tensor allowing the proper construction of a chern - simons gravity theory which reproduces the extended maxwellian carroll gravity in the flat limit. | arxiv:2107.05716 |
we embed the space of totally real $ r $ - cycles of a totally real projective variety into the space of complex $ r $ - cycles by complexification. we provide a proof of the holomorphic taffy argument in the proof of the lawson suspension theorem by using chow forms, and this proof gives us an analogous result for totally real cycle spaces. we use sturm ' s theorem to derive a criterion for a real polynomial of degree $ d $ to have $ d $ distinct real roots and use it to prove the openness of some subsets of real divisors. this enables us to prove that the suspension map induces a weak homotopy equivalence between two enlarged spaces of totally real cycle spaces. | arxiv:math/0510548 |
we propose general non - accelerated and accelerated tensor methods under inexact information on the derivatives of the objective, and analyze their convergence rates. further, we provide conditions on the inexactness of each derivative that are sufficient for each algorithm to achieve the desired accuracy. as a corollary, we propose stochastic tensor methods for convex optimization and obtain sufficient mini - batch sizes for each derivative. | arxiv:2012.15636 |
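The setting of methods driven by inexact derivative information can be illustrated with a first-order toy: gradient descent with a delta-accurate gradient oracle reaches only a delta-sized neighbourhood of the minimiser. This is a first-order stand-in for the paper's tensor methods, with made-up parameters.

```python
import numpy as np

def inexact_gd(grad, x0, lr, delta, steps, rng):
    """Gradient descent with a delta-accurate gradient oracle: each
    query returns the true gradient plus an error bounded by delta,
    so the achievable accuracy is limited by the oracle error."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        error = delta * rng.uniform(-1.0, 1.0, size=x.shape)
        x = x - lr * (grad(x) + error)
    return x

rng = np.random.default_rng(1)
grad = lambda x: 2.0 * x                  # f(x) = ||x||^2, minimiser at 0
x = inexact_gd(grad, [5.0, -3.0], lr=0.1, delta=1e-3, steps=200, rng=rng)
print(np.linalg.norm(x))                  # stuck in a delta-sized neighbourhood of 0
```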
we provide a locally free resolution of the projectivized symmetric algebra of the ideal sheaf of a zero - dimensional scheme defined by n + 1 equations in an n - dimensional variety. the resolution is given in terms of the resolution of the ideal itself and of the eagon - northcott complex of the koszul hull. | arxiv:1805.05599 |
the notion of p - value is a fundamental concept in statistical inference and has been widely used for reporting outcomes of hypothesis tests. however, the p - value is often misinterpreted, misused or miscommunicated in practice. part of the issue is that existing definitions of p - value are often derived from constructions under specific settings, and a general definition that directly reflects the evidence of the null hypothesis is not yet available. in this article, we first propose a general and rigorous definition of p - value that fulfills two performance - based characteristics. the performance - based definition subsumes all existing construction - based definitions of the p - value, and justifies their interpretations. the paper further presents a specific approach based on confidence distribution to formulate and calculate p - values. this specific way of computing p - values has two main advantages. first, it is applicable to a wide range of hypothesis testing problems, including the standard one - and two - sided tests, tests with interval - type null, intersection - union tests, multivariate tests and so on. second, it can naturally lead to a coherent interpretation of p - value as evidence in support of the null hypothesis, as well as a meaningful measure of degree of such support. in particular, it gives meaning to a large p - value : e. g., a p - value of 0. 8 indicates stronger support for the null than 0. 5. numerical examples are used to illustrate the wide applicability and computational feasibility of our approach. we show that our proposal is effective and can be applied broadly, without further consideration of the form / size of the null space ; for existing testing methods, such solutions have either been unavailable or cannot be easily obtained. | arxiv:2001.11945 |
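One of the classical constructions the paper unifies can be sketched directly: the one-sided z-test p-value, read as the confidence the data place on the null region, so that a large value supports the null. The numbers are illustrative only.

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def p_value_one_sided(xbar, mu0, sigma, n):
    """p-value for H0: mu <= mu0 against H1: mu > mu0, known sigma.
    In confidence-distribution terms this is the confidence the data
    place on the null region: large values support H0."""
    z = (xbar - mu0) / (sigma / sqrt(n))
    return 1.0 - phi(z)

p_support = p_value_one_sided(xbar=-0.9, mu0=0.0, sigma=1.0, n=9)  # ≈ 0.997
p_against = p_value_one_sided(xbar=0.9, mu0=0.0, sigma=1.0, n=9)   # ≈ 0.003
print(p_support, p_against)
```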
a recent trend observed in traditionally challenging fields such as computer vision and natural language processing has been the significant performance gains shown by deep learning ( dl ). in many different research fields, dl models have been evolving rapidly and become ubiquitous. despite researchers ' excitement, unfortunately, most software developers are not dl experts and oftentimes have a difficult time following the booming dl research outputs. as a result, it usually takes a significant amount of time for the latest superior dl models to prevail in industry. this issue is further exacerbated by the common use of sundry incompatible dl programming frameworks, such as tensorflow, pytorch, theano, etc. to address this issue, we propose a system, called model asset exchange ( max ), that affords developers easy access to state - of - the - art dl models. regardless of the underlying dl programming frameworks, it provides an open source python library ( called the max framework ) that wraps dl models and unifies programming interfaces with our standardized restful apis. these restful apis enable developers to exploit the wrapped dl models for inference tasks without the need to fully understand different dl programming frameworks. using max, we have wrapped and open - sourced more than 30 state - of - the - art dl models from various research fields, including computer vision, natural language processing and signal processing, etc. in the end, we selectively demonstrate two web applications that are built on top of max, as well as the process of adding a dl model to max. | arxiv:1909.01606 |
the evaluation of the $ b $ value of the gutenberg - richter ( gr ) law, for a sample composed of $ n $ earthquakes, presents a systematic positive bias $ \ delta b $ which is proportional to $ 1 / n $, as already observed by ogata and yamashina ( 1986 ). in this study we show how to incorporate in $ \ delta b $ the bias introduced by deviations from the gr law. more precisely, we show that $ \ delta b $ is proportional to the square of the variability coefficient $ cv $, defined as the ratio between the standard deviation of the magnitude distribution and its mean value. when the magnitude distribution follows the gr law, $ cv = 1 $, and this allows us to introduce a new procedure, based on the dependence of $ b $ on $ n $, which identifies the incompleteness magnitude $ m _ c $ as the threshold magnitude leading to $ cv = 1 $. the method is tested on synthetic catalogs and applied to estimate $ m _ c $ in southern california, japan and new zealand. | arxiv:2307.03457 |
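The 1/n bias of the b-value estimate can be reproduced with a small Monte Carlo experiment using Aki's maximum-likelihood estimator (a standard choice; the abstract does not specify the estimator), with magnitudes drawn exactly from the GR law:

```python
import numpy as np

def b_value_aki(mags, m_c):
    """Aki's maximum-likelihood estimate of the GR b-value for
    continuous magnitudes above the completeness threshold m_c."""
    m = np.asarray(mags, float)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - m_c)

# Under the GR law, M - m_c is exponential with rate b*ln(10); the
# estimator's positive bias is delta_b = b/(n-1) ~ b/n, shrinking like
# 1/n as the abstract describes (small Monte Carlo check below).
rng = np.random.default_rng(2)
true_b, m_c, ests = 1.0, 2.0, {}
for n in (10, 100, 10000):
    scale = 1.0 / (true_b * np.log(10.0))
    ests[n] = np.mean([b_value_aki(m_c + rng.exponential(scale, n), m_c)
                       for _ in range(2000)])
    print(n, round(ests[n], 3))   # bias above true_b = 1.0 shrinks with n
```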
in probabilistic logic nilsson uses the device of a probability distribution over a set of possible worlds to assign probabilities to the sentences of a logical language. in his paper nilsson concentrated on inference and associated computational issues. this paper, on the other hand, examines the probabilistic semantics in more detail, particularly for the case of first - order languages, and attempts to explain some of the features and limitations of this form of probability logic. it is pointed out that the device of assigning probabilities to logical sentences has certain expressive limitations. in particular, statistical assertions are not easily expressed by such a device. this leads to certain difficulties with attempts to give probabilistic semantics to default reasoning using probabilities assigned to logical sentences. | arxiv:1304.2341 |
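Nilsson's device described in this row is easy to state in code: fix a probability distribution over possible worlds (truth assignments), and a sentence's probability is the total mass of the worlds where it holds. The distribution below is made up for illustration.

```python
from itertools import product

# a probability distribution over possible worlds (truth assignments
# to the atoms) induces probabilities on all sentences of the language.
atoms = ("rain", "wet")
worlds = list(product([False, True], repeat=len(atoms)))
P = {(False, False): 0.5, (False, True): 0.1,   # made-up distribution,
     (True, False): 0.05, (True, True): 0.35}   # keyed by (rain, wet)

def prob(sentence):
    """Probability of a sentence = total mass of worlds where it holds."""
    return sum(P[w] for w in worlds if sentence(dict(zip(atoms, w))))

p_rain = prob(lambda v: v["rain"])                       # 0.4
p_implies = prob(lambda v: (not v["rain"]) or v["wet"])  # 0.95: rain -> wet
print(p_rain, p_implies)
```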
in this paper we propose a tokenization algorithm of reversible hybrid type, as defined in pci dss guidelines for designing a tokenization solution, based on a block cipher with a secret key and ( possibly public ) additional input. we provide some formal proofs of security for it, which imply our algorithm satisfies the most significant security requirements described in pci dss tokenization guidelines. finally, we give an instantiation with concrete cryptographic primitives and fixed length of the pan, and we analyze its efficiency and security. | arxiv:1609.00151 |
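The "reversible hybrid" idea, a keyed, invertible map from a PAN to a same-format token, can be sketched with a toy balanced Feistel network. This is not the paper's construction (which builds on a block cipher with additional input), and a real PCI DSS deployment would need a vetted format-preserving mode such as NIST FF1 plus proper key management.

```python
import hashlib
import hmac

def _f(key, half, r):
    """Keyed round function: HMAC-SHA256, truncated to an integer."""
    mac = hmac.new(key, bytes([r]) + half.encode(), hashlib.sha256)
    return int.from_bytes(mac.digest()[:8], "big")

def tokenize(pan, key, rounds=8):
    """Reversible, keyed, format-preserving token for an even-length
    digit string, via a balanced Feistel network over the two halves."""
    assert pan.isdigit() and len(pan) % 2 == 0
    h, mod = len(pan) // 2, 10 ** (len(pan) // 2)
    left, right = pan[:h], pan[h:]
    for r in range(rounds):
        left, right = right, str((int(left) + _f(key, right, r)) % mod).zfill(h)
    return left + right

def detokenize(token, key, rounds=8):
    """Invert tokenize by running the Feistel rounds in reverse."""
    h, mod = len(token) // 2, 10 ** (len(token) // 2)
    left, right = token[:h], token[h:]
    for r in reversed(range(rounds)):
        left, right = str((int(right) - _f(key, left, r)) % mod).zfill(h), left
    return left + right

key = b"demo-secret-key"          # placeholder key, for illustration only
tok = tokenize("4111111111111111", key)
print(tok, detokenize(tok, key))  # 16-digit token that round-trips to the PAN
```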
the gradient force is the conservative component of many types of forces exerted by light on particles. when it is derived from a potential, there is no heat transferred to the particle interacting with the light field. however, most theoretical descriptions of the gradient force use simplified configurations of the light field and particle interactions which overlook small amounts of heating. it is known that quantum fluctuations contribute to a very small but measurable momentum diffusion of atoms and a corresponding increase in their temperature. this paper examines the contribution to momentum diffusion from a gradient force described as a quantum interaction between electron wave packets and a classical electromagnetic field. stimulated transfers of photons between interfering light beams produce a small amount of heating that is difficult to detect in laboratory experiments. however the solar corona, with its thermal electrons irradiated by an intense electromagnetic field, provides ideal conditions for such a measurement. heating from stimulated transfers is calculated to contribute a large fraction of the observed coronal heating. furthermore, the energy removed from the light field produces a wavelength shift of its spectrum as it travels through free electrons. theory predicts a stimulated transfer redshift comparable to the redshift of distant objects observed in astronomy. | arxiv:2410.02036 |
we present an experimental technique using orbital angular momentum ( oam ) in a fundamental laser field to drive high harmonic generation ( hhg ). the mixing of beams with different oam allows us to generate two laser foci tightly spaced to study the phase and amplitude of hhg produced in diatomic nitrogen. nitrogen is used as a well studied system to show the quality of oam based hhg interferometry. | arxiv:1907.09549 |
knowledge of the amount and distribution of radiogenic heating in the mantle is crucial for understanding the dynamics of the earth, including its thermal evolution, the style and planform of mantle convection, and the energetics of the core. although the flux of heat from the surface of the planet is robustly estimated, the contributions of radiogenic heating and secular cooling remain poorly defined. constraining the amount of heat - producing elements in the earth will provide clues to understanding nebula condensation and planetary formation processes in the early solar system. mantle radioactivity supplies power for mantle convection and plate tectonics, but estimates of mantle radiogenic heat production vary by a factor of more than 20. recent experimental results demonstrate the potential for direct assessment of mantle radioactivity through observations of geoneutrinos, which are emitted by naturally occurring radionuclides. predictions of the geoneutrino signal from the mantle exist for several established estimates of mantle composition. here we present novel analyses, illustrating surface variations of the mantle geoneutrino signal for models of the deep mantle structure, including those based on seismic tomography. these variations have measurable differences for some models, allowing new and meaningful constraints on the dynamics of the planet. an ocean based geoneutrino detector deployed at several strategic locations will be able to discriminate between competing compositional models of the bulk silicate earth. | arxiv:1207.0853 |
we study an eigenvalue problem for a spin matrix arising in the pauli - lubanski vector operator. entanglement of the eigenvectors and its connection with degeneracy is discussed. | arxiv:1703.02557 |
optimal control of molecular dynamics is commonly expressed from a quantum mechanical perspective. however, in most contexts the preponderance of molecular dynamics studies utilize classical mechanical models. this paper treats laser - driven optimal control of molecular dynamics in a classical framework. we consider the objective of steering a molecular system from an initial point in phase space to a target point, subject to the dynamic constraint of hamilton ' s equations. the classical control landscape corresponding to this objective is a functional of the control field, and the topology of the landscape is analyzed through its gradient and hessian with respect to the control. under specific assumptions on the regularity of the control fields, the classical control landscape is found to be free of traps that could hinder reaching the objective. the hessian associated with an optimal control field is shown to have finite rank, indicating the presence of an inherent degree of robustness to control noise. extensive numerical simulations are performed to illustrate the theoretical principles on a ) a model diatomic molecule, b ) two coupled morse oscillators, and c ) a chaotic system with a coupled quartic oscillator, confirming the absence of traps in the classical control landscape. we compare the classical formulation with the mathematically analogous state - to - state transition probability control landscape of n - level quantum systems. the absence of traps in both circumstances provides a broader basis to understand the growing number of successful control experiments with complex molecules, which can have dynamics that transcend the classical and quantum regimes. | arxiv:1108.3806 |
we try to find the relation between the three - flavor nambu - jona - lasinio model and qcd based on the hypothesis that the gluon momenta are sharply condensed around the qcd scale, \ mu _ g. we find that the effective four - and six - fermion interactions, g _ 4 and g _ 6, should be scaled as g _ 4 proportional to \ mu _ g ^ ( - 2 ) and g _ 6 proportional to \ mu _ g ^ ( - 5 ), consistent with the mass dimension counting in the obtained effective lagrangian. we then study the dependence of the phase diagram of the chiral phase transition at finite temperature and chemical potential, and of the location of the critical point, on \ mu _ g. we find that the location of the critical point is sensitively affected by the value of the introduced gluon energy scale. | arxiv:1602.09056 |
the partial linear cox model for interval - censoring is well - studied under the additive assumption but is still under - investigated without this assumption. in this paper, we propose to use a deep relu neural network to estimate the nonparametric components of a partial linear cox model for interval - censored data. this model not only retains the nice interpretability of the parametric component but also improves the predictive power compared to the partial linear additive cox model. we derive the convergence rate of the proposed estimator and show that it can break the curse of dimensionality under certain smoothness assumptions. based on this rate, the asymptotic normality and the semiparametric efficiency are also established. intensive simulation studies are carried out to demonstrate the finite sample performance on both estimation and prediction. the proposed estimation procedure is illustrated on a real dataset. | arxiv:2307.00195 |
for plane frameworks with reflection or rotational symmetries, where the group action is not necessarily free on the vertex set, we introduce a phase-symmetric orbit rigidity matrix for each irreducible representation of the group. we then use these generalised orbit rigidity matrices to provide necessary conditions for infinitesimal rigidity for frameworks that are symmetric with a cyclic group that acts freely or non-freely on the vertices. moreover, for the reflection, the half-turn, and the three-fold rotational group in the plane, we establish complete combinatorial characterisations of symmetry-generic infinitesimally rigid frameworks. this extends well-known characterisations for these groups to the case when the group action is not necessarily free on the vertices. the presence of vertices that are fixed by non-trivial group elements requires the introduction of generalised versions of group-labelled quotient graphs and leads to more refined types of combinatorial sparsity counts for characterising symmetry-generic infinitesimal rigidity. | arxiv:2407.13612 |
in this paper, we study brownian-type operators, which are upper triangular $2 \times 2$ block matrix operators with entries satisfying some algebraic constraints. we establish a lifting theorem stating that any brownian-type operator with subnormal $(2,2)$ entry lifts to a brownian-type operator with normal $(2,2)$ entry, where lifting is understood in the sense of extending entries of the block matrices representing the operators in question. the spectral inclusion and the filling-in-holes theorems are obtained for such operators. | arxiv:2304.07968 |
the study of engineering economics in civil engineering, also known generally as engineering economics, or alternatively engineering economy, is a subset of economics, more specifically, microeconomics. it is defined as a " guide for the economic selection among technically feasible alternatives for the purpose of a rational allocation of scarce resources. " its goal is to guide entities, private or public, that are confronted with the fundamental problem of economics. this fundamental problem of economics consists of two fundamental questions that must be answered, namely what objectives should be investigated or explored and how should these be achieved? economics as a social science answers those questions and is defined as the knowledge used for selecting among "... technically feasible alternatives for the purpose of a rational allocation of scarce resources. " correspondingly, all problems involving "... profit - maximizing or cost - minimizing are engineering problems with economic objectives and are properly described by the label " engineering economy ". as a subdiscipline practiced by civil engineers, engineering economics narrows the definition of the fundamental economic problem and related questions to that of problems related to the investment of capital, public or private in a broad array of infrastructure projects. civil engineers confront more specialized forms of the fundamental problem in the form of inadequate economic evaluation of engineering projects. civil engineers under constant pressure to deliver infrastructure effectively and efficiently confront complex problems associated with allocating scarce resources for ensuring quality, mitigating risk and controlling project delivery. civil engineers must be educated to recognize the role played by engineering economics as part of the evaluations occurring at each phase in the project lifecycle. 
thus, the application of engineering economics in the practice of civil engineering focuses on the decision-making process, its context, and environment in project execution and delivery. it is pragmatic by nature, integrating microeconomic theory with civil engineering practice, but it is also a simplified application of economic theory in that it avoids a number of microeconomic concepts such as price determination, competition and supply and demand. this poses new, underlying economic problems of resource allocation for civil engineers in delivering infrastructure projects and, specifically, resources for project management, planning and control functions. civil engineers address these fundamental economic problems using specialized engineering economics knowledge as a framework for continuously "... probing economic feasibility... using a stage-wise approach..." throughout the project lifecycle. the application of this specialized civil engineering knowledge can be in the form of engineering analyses of life-cycle cost, cost accounting, cost of capital and the economic feasibility of engineering solutions for design, construction and project management. | https://en.wikipedia.org/wiki/Engineering_economics_(civil_engineering) |
a concept of the total velocity, comprising velocity and oscillatory velocity, is proposed for the velocity solution of the dirac equation. it is shown that the electron's rest energy comes entirely from the oscillation of the electron itself. for this reason, the velocity solution of the dirac equation is taken as the definition of elementary particles. leptons, mesons and baryons appear in the results as the newly defined elementary particles, but a particle consisting of more than three quarks is ruled out. the results also show that a quark is not a particle but part of the hadron, or a partial particle, and that quark confinement may serve as evidence of this conclusion. | arxiv:0910.3286 |
despite significant advancements, segmentation based on deep neural networks in medical and surgical imaging faces several challenges, two of which we aim to address in this work. first, acquiring complete pixel-level segmentation labels for medical images is time-consuming and requires domain expertise. second, typical segmentation pipelines cannot detect out-of-distribution (ood) pixels, leaving them prone to spurious outputs during deployment. in this work, we propose a novel segmentation approach exploiting ood detection that learns only from sparsely annotated pixels from multiple positive-only classes. these multi-class positive annotations naturally fall within the in-distribution (id) set. unlabelled pixels may contain positive classes but also negative ones, including what is typically referred to as \emph{background} in standard segmentation formulations. here, we forgo the need for background annotation and consider these together with any other unseen classes as part of the ood set. our framework can integrate, at a pixel level, any ood detection approaches designed for classification tasks. to address the lack of existing ood datasets and established evaluation metrics for medical image segmentation, we propose a cross-validation strategy that treats held-out labelled classes as ood. extensive experiments on both multi-class hyperspectral and rgb surgical imaging datasets demonstrate the robustness and generalisation capability of our proposed framework. | arxiv:2411.09553 |
we consider linear ill-conditioned operator equations in a hilbert space setting. motivated by the aggregation method, we consider approximate solutions constructed from linear combinations of tikhonov regularizations, which amounts to finding solutions in a rational krylov space. by mixing these with usual krylov spaces, we consider a least-squares problem in these mixed rational spaces. applying the arnoldi method leads to a sparse, pentadiagonal representation of the forward operator, and we introduce the lanczos method for solving the least-squares problem by factorizing this matrix. finally, we present an equivalent conjugate-gradient-type method that does not rely on explicit orthogonalization but uses short-term recursions and tikhonov regularization in each second step. we illustrate the convergence and regularization properties with some numerical examples. | arxiv:2306.03670 |
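the tikhonov step that the mixed krylov spaces above are built from can be illustrated in a few lines. this is a generic sketch of tikhonov regularization applied to an ill-conditioned hilbert matrix (a standard test operator chosen here for illustration), not the paper's rational-krylov/arnoldi method:

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Tikhonov-regularized least squares: argmin ||A x - b||^2 + lam ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Hilbert matrix H_ij = 1/(i+j+1): a classic ill-conditioned test operator
n = 10
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
b_noisy = A @ x_true + 1e-6 * np.sin(np.arange(n))  # small deterministic "noise"

x_naive = np.linalg.solve(A, b_noisy)           # noise amplified by the conditioning
x_tik = tikhonov_solve(A, b_noisy, lam=1e-8)    # damped, stable reconstruction

err_naive = np.linalg.norm(x_naive - x_true)
err_tik = np.linalg.norm(x_tik - x_true)
```

even with noise at the 1e-6 level, the naive solve is useless because the hilbert matrix has condition number around 1e13, while the regularized solution stays close to the true one; choosing `lam` well is exactly the point of the iterated/aggregated constructions in the abstract.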
sparse coding represents a signal sparsely by using an overcomplete dictionary, and obtains promising performance in practical computer vision applications, especially for signal restoration tasks such as image denoising and image inpainting. in recent years, many discriminative sparse coding algorithms have been developed for classification problems, but they cannot naturally handle visual data represented by multiview features. in addition, existing sparse coding algorithms use graph laplacian to model the local geometry of the data distribution. it has been identified that laplacian regularization biases the solution towards a constant function which possibly leads to poor extrapolating power. in this paper, we present multiview hessian discriminative sparse coding ( mhdsc ) which seamlessly integrates hessian regularization with discriminative sparse coding for multiview learning problems. in particular, mhdsc exploits hessian regularization to steer the solution which varies smoothly along geodesics in the manifold, and treats the label information as an additional view of feature for incorporating the discriminative power for image annotation. we conduct extensive experiments on pascal voc ' 07 dataset and demonstrate the effectiveness of mhdsc for image annotation. | arxiv:1307.3811 |
we compute the automorphisms of the bousfield-kan completion at a prime p of the little two-disks operads and show that they are given by the pro-p grothendieck-teichm\"uller group. we also show that the grothendieck-teichm\"uller group acts faithfully on the p-complete stable little disks operad. | arxiv:1612.03420 |
purpose: the treatment of cardiovascular diseases requires complex and challenging navigation of a guidewire and catheter. this often leads to lengthy interventions during which the patient and clinician are exposed to x-ray radiation. deep reinforcement learning approaches have shown promise in learning this task and may be the key to automating catheter navigation during robotized interventions. yet, existing training methods show limited capabilities at generalizing to unseen vascular anatomies, requiring retraining each time the geometry changes. methods: in this paper, we propose a zero-shot learning strategy for three-dimensional autonomous endovascular navigation. using a very small training set of branching patterns, our reinforcement learning algorithm is able to learn a control that can then be applied to unseen vascular anatomies without retraining. results: we demonstrate our method on 4 different vascular systems, with an average success rate of 95% at reaching random targets on these anatomies. our strategy is also computationally efficient, allowing the training of our controller to be performed in only 2 hours. conclusion: our training method proved its ability to navigate unseen geometries with different characteristics, thanks to a nearly shape-invariant observation space. | arxiv:2403.02777 |
the effect of limiting the acceptance in rapidity on event - by - event multiplicity fluctuations in nucleus - nucleus collisions has been investigated. our analysis shows that the multiplicity fluctuations decrease when the rapidity acceptance is decreased. we explain this trend by assuming that the probability distribution of the particles in the smaller acceptance window follows binomial distribution. following a simple statistical analysis we conclude that the event - by - event multiplicity fluctuations for full acceptance are likely to be larger than those observed in the experiments, since the experiments usually have detectors with limited acceptance. we discuss the application of our model to simulated data generated using venus, a widely used event generator in heavy - ion collisions. we also discuss the results from our calculations in presence of dynamical fluctuations and possible observation of these in the actual data. | arxiv:nucl-ex/0108011 |
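the binomial-acceptance argument above admits a quick numerical check. assuming (for illustration only, not the paper's data) a negative-binomial full-acceptance multiplicity distribution, folding it with an independent acceptance probability f per particle reproduces the prediction that the scaled variance obeys omega_acc = f * omega_full + (1 - f), so fluctuations shrink toward the poisson value 1 as the acceptance window narrows:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy full-acceptance multiplicity: negative binomial with mean 50, variance 100,
# i.e. scaled variance omega_full = var/mean = 2 (over-dispersed)
N = rng.negative_binomial(50, 0.5, size=200_000)

def scaled_variance(x):
    return x.var() / x.mean()

results = {}
for f in (1.0, 0.5, 0.2):
    n_acc = rng.binomial(N, f)      # each produced particle kept with probability f
    results[f] = scaled_variance(n_acc)
    # binomial folding predicts omega_acc = f * omega_full + (1 - f)
```

the measured scaled variances drop from 2 toward 1 as f decreases, matching the claim that limited-acceptance detectors see smaller event-by-event fluctuations than full acceptance would.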
type ii quasars are luminous active galactic nuclei whose centers are obscured by large amounts of gas and dust. in this paper we present 3-band hst images of nine type ii quasars with redshifts 0.2 < z < 0.4 selected from the sloan digital sky survey based on their emission line properties. the intrinsic luminosities of these agn are estimated to be $-24 > m_b > -26$, but optical obscuration allows their host galaxies to be studied unencumbered by bright nuclei. each object has been imaged in three continuum filters (`uv', `blue' and `yellow') placed between the strong emission lines. the spectacular, high quality images reveal a wealth of details about the structure of the host galaxies and their environments. six of the nine galaxies in the sample are ellipticals with de vaucouleurs light profiles, one object has a well-defined disk component and the remaining two have marginal disks. stellar populations of type ii quasar hosts are more luminous (by a median of 0.3-0.7 mag, depending on the wavelength) and bluer (by about 0.4 mag) than are m* galaxies at the same redshift. when smooth fits to stellar light are subtracted from the images, we find both positive and negative residuals that become more prominent toward shorter wavelengths. we argue that the negative residuals are due to kpc-scale dust obscuration, while most positive residuals are due to the light from the nucleus scattered off interstellar material in the host galaxy. scattered light makes a significant contribution to the broad band continuum emission and can be the dominant component of the extended emission in the uv in extreme cases. | arxiv:astro-ph/0603625 |
compressed sensing (sparse signal recovery) often encounters nonnegative data (e.g., images). recently we developed the methodology of using (dense) compressed counting for recovering nonnegative k-sparse signals. in this paper, we adopt very sparse compressed counting for nonnegative signal recovery. our design matrix is sampled from a maximally-skewed p-stable distribution (0 < p < 1), and we sparsify the design matrix so that on average a (1-g)-fraction of the entries become zero. the idea is related to very sparse stable random projections (li et al 2006 and li 2007), the prior work for estimating summary statistics of the data. in our theoretical analysis, we show that, when p -> 0, it suffices to use m = k / (1 - exp(-gk)) log n measurements, so that all coordinates can be recovered in one scan of the coordinates. if g = 1 (i.e., dense design), then m = k log n. if g = 1/k or 2/k (i.e., very sparse design), then m = 1.58 k log n or m = 1.16 k log n. this means the design matrix can indeed be very sparse at only a minor inflation of the sample complexity. interestingly, as p -> 1, the required number of measurements is essentially m = 2.7 k log n, provided g = 1/k. it turns out that this result is a general worst-case bound. | arxiv:1401.0201 |
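the constants 1.58 and 1.16 quoted above follow directly from the stated bound m = k / (1 - exp(-gk)) log n; a quick check of the inflation factor over the dense-design cost k log n:

```python
import math

def sample_complexity_factor(g_times_k):
    """Inflation over k*log(n) in the bound m = k / (1 - exp(-g*k)) * log(n)."""
    return 1.0 / (1.0 - math.exp(-g_times_k))

# g = 1/k (about one nonzero per column) -> 1/(1 - e^-1) ~ 1.58
factor_1 = sample_complexity_factor(1.0)
# g = 2/k -> 1/(1 - e^-2) ~ 1.16
factor_2 = sample_complexity_factor(2.0)
# g*k large (dense design) -> factor -> 1, recovering m = k log n
factor_dense = sample_complexity_factor(20.0)
```

so sparsifying the design to roughly one or two expected nonzeros per column costs only a 58% or 16% inflation in the number of measurements, which is the abstract's "minor inflation" claim.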
linear programming (lp) decoding is emerging as an attractive alternative to decode low-density parity-check (ldpc) codes. however, the earliest lp decoders proposed for binary and nonbinary ldpc codes are not suitable for use at moderate and large code lengths. to overcome this problem, vontobel et al. developed an iterative low-complexity lp (lclp) decoding algorithm for binary ldpc codes. the variable and check node calculations of the binary lclp decoding algorithm are related to those of binary belief propagation (bp). the present authors generalized this work to derive an iterative lclp decoding algorithm for nonbinary linear codes. contrary to binary lclp, the variable and check node calculations of this algorithm are in general different from those of nonbinary bp. the overall complexity of nonbinary lclp decoding is linear in block length; however, the complexity of its check node calculations is exponential in the check node degree. in this paper, we propose a modified bcjr algorithm for efficient check node processing in the nonbinary lclp decoding algorithm. the proposed algorithm has complexity linear in the check node degree. we also introduce an alternative state metric to improve the run time of the proposed algorithm. simulation results are presented for $(504, 252)$ and $(1008, 504)$ nonbinary ldpc codes over $\mathbb{Z}_4$. | arxiv:1102.3390 |
we investigate the effects of charged-current (cc) nonstandard neutrino interactions (nsis) at the source and at the detector in the simulated data for the planned deep underground neutrino experiment (dune), while neglecting the neutral-current nsis at the propagation due to the fact that several solutions have been proposed to resolve the degeneracies posed by neutral-current nsis while no solution exists for the degeneracies due to the cc nsis. we study the effects of cc nsis on the simultaneous measurements of $\theta_{23}$ and $\delta_{cp}$ in dune. the analysis reveals that the 3$\sigma$ c.l. measurement of the correct octant of $\theta_{23}$ in the standard mixing scenario is spoiled if the cc nsis are taken into account. likewise, the cc nsis can deteriorate the uncertainty of the $\delta_{cp}$ measurement by a factor of two relative to that in the standard oscillation scenario. we also show that the source and the detector cc nsis can induce a significant amount of fake cp-violation and the cp-conserving case can be excluded by more than 80\% c.l. in the presence of fake cp-violation. we further find the potential of dune to constrain the relevant cc nsi parameters from the single parameter fits for both neutrino and antineutrino appearance and disappearance channels at both the near and far detectors. the results show that there could be improvement in the current bounds by at least one order of magnitude at the near and far detector of dune, except for a few parameters whose bounds remain weaker at the far detector. | arxiv:1607.00065 |
covid - 19 has disrupted normal life and has enforced a substantial change in the policies, priorities and activities of individuals, organisations and governments. these changes are proving to be a catalyst for technology and innovation. in this paper, we discuss the pandemic ' s potential impact on the adoption of the internet of things ( iot ) in various broad sectors namely healthcare, smart homes, smart buildings, smart cities, transportation and industrial iot. our perspective and forecast of this impact on iot adoption is based on a thorough research literature review, a careful examination of reports from leading consulting firms and interactions with several industry experts. for each of these sectors, we also provide the details of notable iot initiatives taken in wake of covid - 19. we also highlight the challenges that need to be addressed and important research directions that will facilitate accelerated iot adoption. | arxiv:2101.07196 |
density peak clustering ( dpc ), a popular density - based clustering approach, has received considerable attention from the research community primarily due to its simplicity and fewer - parameter requirement. however, the resultant clusters obtained using dpc are influenced by the sensitive parameter $ d _ c $, which depends on data distribution and requirements of different users. besides, the original dpc algorithm requires visiting a large number of objects, making it slow. to this end, this paper investigates index - based solutions for dpc. specifically, we propose two list - based index methods viz. ( i ) a simple list index, and ( ii ) an advanced cumulative histogram index. efficient query algorithms are proposed for these indices which significantly avoids irrelevant comparisons at the cost of space. for memory - constrained systems, we further introduce an approximate solution to the above indices which allows substantial reduction in the space cost, provided that slight inaccuracies are admissible. furthermore, owing to considerably lower memory requirements of existing tree - based index structures, we also present effective pruning techniques and efficient query algorithms to support dpc using the popular quadtree index and r - tree index. finally, we practically evaluate all the above indices and present the findings and results, obtained from a set of extensive experiments on six synthetic and real datasets. the experimental insights obtained can help to guide in selecting a befitting index. | arxiv:2002.03182 |
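for context, the baseline dpc computation that these indices accelerate can be sketched directly: a density rho per point from a gaussian kernel with cutoff $d_c$, a separation delta to the nearest denser point, and centers chosen where both are large. this is a generic O(n^2) sketch of the original dpc idea, not the proposed list-, histogram-, quadtree-, or r-tree-based algorithms:

```python
import numpy as np

def dpc(X, d_c, n_clusters):
    """Bare-bones density peak clustering; the O(n^2) cost is what the indices attack."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances
    rho = np.exp(-(D / d_c) ** 2).sum(axis=1) - 1.0   # Gaussian-kernel density (minus self)
    order = np.argsort(-rho)                          # indices in decreasing density
    delta = np.zeros(n)
    nearest_higher = np.full(n, -1)
    for rank, i in enumerate(order):
        if rank == 0:
            delta[i] = D[i].max()                     # convention for the global density peak
        else:
            higher = order[:rank]
            j = higher[np.argmin(D[i, higher])]
            nearest_higher[i] = j
            delta[i] = D[i, j]                        # distance to the nearest denser point
    centers = np.argsort(-(rho * delta))[:n_clusters] # peaks: large rho AND large delta
    labels = np.full(n, -1)
    labels[centers] = np.arange(n_clusters)
    for i in order:                                   # inherit label from the denser neighbour
        if labels[i] == -1:
            labels[i] = labels[nearest_higher[i]]
    return labels, centers
```

on two well-separated blobs this recovers one center per blob; note how the result hinges on $d_c$ through the kernel width, which is exactly the sensitivity the abstract points out.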
=== 0-player puzzles === conway's game of life, flexagon, polyominoes == references == == external links == historical math problems / puzzles at mathematical association of america convergence | https://en.wikipedia.org/wiki/Mathematical_puzzle |
the totem experiment will detect leading protons scattered in angles of microradians from the interaction point at the large hadron collider. this will be achieved using detectors with a minimized dead area at the edge. the collaboration has developed an innovative structure at the detector edge reducing the conventional dead width to less than 100 microns, still using standard planar fabrication technology. in this new development, the current of the surface is decoupled from the sensitive volume current within a few tens of micrometers. the basic working principle is explained in this paper. final size detectors have been produced using this approach. the current - voltage and current - temperature characteristics of the detectors were studied and the detectors were successfully tested in a coasting beam experiment. | arxiv:physics/0612105 |
knowledge tracing ( kt ) is concerned with predicting students ' future performance on learning items in intelligent tutoring systems. learning items are tagged with skill labels called knowledge concepts ( kcs ). many kt models expand the sequence of item - student interactions into kc - student interactions by replacing learning items with their constituting kcs. this approach addresses the issue of sparse item - student interactions and minimises the number of model parameters. however, we identified a label leakage problem with this approach. the model ' s ability to learn correlations between kcs belonging to the same item can result in the leakage of ground truth labels, which leads to decreased performance, particularly on datasets with a high number of kcs per item. in this paper, we present methods to prevent label leakage in knowledge tracing ( kt ) models. our model variants that utilize these methods consistently outperform their original counterparts. this further underscores the impact of label leakage on model performance. additionally, these methods enhance the overall performance of kt models, with one model variant surpassing all tested baselines on different benchmarks. notably, our methods are versatile and can be applied to a wide range of kt models. | arxiv:2403.15304 |
in this work, we explore the possibility of using artificial neural networks to impose constraints on teleparallel gravity and its $f(t)$ extensions. we use the available hubble parameter observations from cosmic chronometers and baryon acoustic oscillations from different galaxy surveys. we discuss the procedure for training a network model to reconstruct the hubble diagram. further, we describe the procedure to obtain $h'(z)$, the first order derivative of $h(z)$, using artificial neural networks, which is a novel approach to this method of reconstruction. these analyses are complemented with further studies on the impact of two priors which we put on $h_0$ to assess their impact on the analysis, which are the local measurements by the sh0es team ($h_0^{\text{r20}} = 73.2 \pm 1.3$ km mpc$^{-1}$ s$^{-1}$) and the updated trgb calibration from the carnegie supernova project ($h_0^{\text{trgb}} = 69.8 \pm 1.9$ km mpc$^{-1}$ s$^{-1}$), respectively. additionally, we investigate the validity of the concordance model through some cosmological null tests with these reconstructed data sets. finally, we reconstruct the allowed $f(t)$ functions for different combinations of the observational hubble data sets. results show that the $\lambda$cdm model lies comfortably within the 1$\sigma$ confidence level for all the examined cases. | arxiv:2209.01113 |
rapid advancements over the years have helped machine learning models reach previously hard - to - achieve goals, sometimes even exceeding human capabilities. however, to attain the desired accuracy, the model sizes and in turn their computational requirements have increased drastically. thus, serving predictions from these models to meet any target latency and cost requirements of applications remains a key challenge, despite recent work in building inference - serving systems as well as algorithmic approaches that dynamically adapt models based on inputs. in this paper, we introduce a form of dynamism, modality selection, where we adaptively choose modalities from inference inputs while maintaining the model quality. we introduce mosel, an automated inference serving system for multi - modal ml models that carefully picks input modalities per request based on user - defined performance and accuracy requirements. mosel exploits modality configurations extensively, improving system throughput by 3. 6 $ \ times $ with an accuracy guarantee and shortening job completion times by 11 $ \ times $. | arxiv:2310.18481 |
we formally define homological quantum rotor codes which use multiple quantum rotors to encode logical information. these codes generalize homological or css quantum codes for qubits or qudits, as well as linear oscillator codes which encode logical oscillators. unlike for qubits or oscillators, homological quantum rotor codes allow one to encode both logical rotors and logical qudits in the same block of code, depending on the homology of the underlying chain complex. in particular, a code based on the chain complex obtained from tessellating the real projective plane or a m\"{o}bius strip encodes a qubit. we discuss the distance scaling for such codes, which can be more subtle than in the qubit case due to the concept of logical operator spreading by continuous stabilizer phase-shifts. we give constructions of homological quantum rotor codes based on 2d and 3d manifolds as well as products of chain complexes. superconducting devices being composed of islands with integer cooper pair charges could form a natural hardware platform for realizing these codes: we show that the $0$-$\pi$-qubit as well as kitaev's current-mirror qubit -- also known as the m\"{o}bius strip qubit -- are indeed small examples of such codes and discuss possible extensions. | arxiv:2303.13723 |
a smooth scheme x over a field k of positive characteristic is said to be strongly liftable over w _ 2 ( k ), if x and all prime divisors on x can be lifted simultaneously over w _ 2 ( k ). in this paper, we first deduce the kummer covering trick over w _ 2 ( k ), which can be used to construct a large class of smooth projective varieties liftable over w _ 2 ( k ), and to give a direct proof of the kawamata - viehweg vanishing theorem on strongly liftable schemes. secondly, we generalize almost all of the results in [ xie10, xie11 ] to the case where everything is considered over w ( k ), the ring of witt vectors of k. | arxiv:1301.0857 |
the structure of the tate - shafarevich groups of a class of elliptic curves over global function fields is determined. these are known to be finite abelian groups from the monograph [ 1 ] and hence they are direct sums of finite cyclic groups where the orders of these cyclic components are invariants of the tate - shafarevich group. this decomposition of the tate - shafarevich groups into direct sums of finite cyclic groups depends on the behaviour of drinfeld - heegner points on these elliptic curves. these are points analogous to heegner points on elliptic curves over the rational numbers. | arxiv:1602.02932 |
it is a classical result that the category of finitely - generated free monoids serves as a prop for commutative bialgebras. attaching permutations to fix the order of multiplication, we construct an extension of this category that is equivalent to the prop for bialgebras. | arxiv:2106.13107 |
in micro - assembly applications, ensemble of chiplets immersed in a dielectric fluid are steered using dielectrophoretic forces induced by an array of electrode population. generalizing the finite population deterministic models proposed in prior works for individual chiplet position dynamics, we derive a controlled mean field model for a continuum of chiplet population in the form of a nonlocal, nonlinear partial differential equation. the proposed model accounts for the stochastic forces as well as two different types of nonlocal interactions, viz. chiplet - to - chiplet and chiplet - to - electrode interactions. both of these interactions are nonlinear functions of the electrode voltage input. we prove that the deduced mean field evolution can be expressed as the wasserstein gradient flow of a lyapunov - like energy functional. with respect to this functional, the resulting dynamics is a gradient descent on the manifold of joint population density functions with finite second moments that are supported on the position coordinates. | arxiv:2303.10564 |
we show that 't hooft's representation of (2+1)-dimensional gravity in terms of flat polygonal tiles is closely related to a gauge-fixed version of the covariant hamiltonian lattice theory. 't hooft's gauge is remarkable in that it leads to a hamiltonian which is a linear sum of vertex hamiltonians, each of which is defined modulo $2\pi$. a cyclic hamiltonian implies that ``time'' is quantized. however, it turns out that this hamiltonian is {\it constrained}. if one chooses an internal time and solves this constraint for the ``physical hamiltonian'', the result is not a cyclic function. even if one quantizes {\it a la dirac}, the ``internal time'' observable does not acquire a discrete spectrum. we also show that in euclidean 3-d lattice gravity, ``space'' can be either discrete or continuous depending on the choice of quantization. finally, we propose a generalization of 't hooft's gauge for hamiltonian lattice formulations of topological gravity in dimension 4. | arxiv:gr-qc/9601011 |
this paper introduces a network - based method to capture heterogeneity in consumer microdata. we develop a permutation - based approach that repeatedly combines random samples of all agents ' decisions, and partitions agents into jointly rational types. aggregating these partitions yields a network that captures unobserved heterogeneity, where edges measure how often two agents share the same type across partitions. to evaluate how observable characteristics align with the heterogeneity, we implement permutation tests that shuffle covariate labels across network nodes, thereby generating a null distribution of alignment. we show that this test is exact, with asymptotic power of one. we further propose network - based measures that quantify whether nodes with the same observable attributes are disproportionately linked or clustered, along with standardized effect sizes that gauge each covariate ' s global influence. this yields a flexible, nonparametric measure of the heterogeneity structure. finally, we apply our method to grocery expenditure data from the stanford basket dataset. | arxiv:2501.13721 |
the all - photonic quantum repeater scheme, utilizing a type of graph state called the repeater graph state ( rgs ), promises resilience to photon losses and operational errors, offering a fast bell pair generation rate limited only by the rgs creation time ( rather than enforced round - trip waits ). while existing research has predominantly focused on rgs generation and secret key sharing rate analysis, there is a need to extend investigations to encompass broader applications, such as distributed computation and teleportation, the main tasks envisioned for the quantum internet. here we propose a new emitter - photonic qubit building block and an rgs protocol that addresses several key considerations : end node involvement in connection establishment, decoding of logical qubits within the rgs, and computing the pauli frame corrections at each participating node to ensure the desired correct end - to - end bell pair state. our proposed building block significantly reduces the total number of emissive quantum memories required for end nodes and seamlessly integrates all - photonic and memory - based repeaters under the same communication protocol. we also present an algorithm for decoding logical measurement results, employing graphical reasoning based on graph state manipulation rules. | arxiv:2306.03748 |
We report a first-principles description of inelastic lifetimes of excited electrons in real Cu and Al, which we compute, within the GW approximation of many-body theory, from the knowledge of the self-energy of the excited quasiparticle. Our full band-structure calculations indicate that actual lifetimes are the result of a delicate balance between localization, density of states, screening, and Fermi-surface topology. A major contribution from $d$-electrons participating in the screening of electron-electron interactions yields lifetimes of excited electrons in copper that are larger than those of electrons in a free-electron gas with the electron density equal to that of valence ($4s^1$) electrons. In aluminum, a simple metal with no $d$-bands, splitting of the band structure over the Fermi level results in electron lifetimes that are smaller than those of electrons in a free-electron gas. | arxiv:cond-mat/9907490
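For context on the free-electron-gas baseline this abstract compares against: Fermi-liquid theory predicts the quasiparticle lifetime scales as $\tau \propto (E - E_F)^{-2}$. The sketch below checks only this scaling behavior; the prefactor is an arbitrary illustrative constant, not a value from the paper's GW calculation:

```python
def fel_lifetime(e_minus_ef_ev, prefactor=263.0):
    """Fermi-liquid-like lifetime (arbitrary fs units) scaling as
    (E - E_F)^-2; the prefactor is illustrative, not a GW result."""
    return prefactor / e_minus_ef_ev ** 2

# Doubling the excitation energy above E_F quarters the lifetime.
t1 = fel_lifetime(1.0)
t2 = fel_lifetime(2.0)
print(t1 / t2)  # 4.0
```

The paper's point is precisely that real band structures (d-screening in Cu, band splitting in Al) shift lifetimes above or below this idealized free-electron-gas curve.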
Faculty of Engineering of Sultan Ageng Tirtayasa University; Faculty of Engineering of University of Indonesia; Faculty of Engineering of Gadjah Mada University; Faculty of Engineering of Diponegoro University; Faculty of Engineering of Universitas Negeri Padang; Faculty of Engineering of Universitas Negeri Malang; Faculty of Engineering of Hasanuddin University; Faculty of Engineering of University of Surabaya === Malaysia === Activities on engineering education in Malaysia are spearheaded by the Society of Engineering Education Malaysia (SEEM). SEEM was established in 2008 and launched on 23 February 2009. The idea of establishing the society was initiated in April 2005 with the creation of a pro-tem committee for SEEM. The objectives of this society are to contribute to the development of education in the fields of engineering education and science and technology, including teaching and learning, counseling, research, service and public relations. Universiti Teknologi Malaysia Centre for Engineering Education (CEE); Universiti Tunku Abdul Rahman; Tunku Abdul Rahman University College; Southern University College; Universiti Malaysia Pahang === Pakistan === In Pakistan, engineering education is accredited by the Pakistan Engineering Council (PEC), a statutory body constituted under the PEC Act No. V of 1976 of the Constitution of Pakistan and amended vide Ordinance No. XXIII of 2006, to regulate the engineering profession in the country. It aims to achieve rapid and sustainable growth in all national, economic and social fields. The council is responsible for maintaining realistic and internationally relevant standards of professional competence and ethics for engineers in the country. PEC interacts with the government, both at the federal and provincial level, by participating in commissions, committees and advisory bodies. PEC is a fully representative body of the engineering community in the country. PEC has full signatory status with the Washington Accord. === Philippines === The Professional Regulation Commission is the regulating body for engineers in the Philippines. In the Philippines, the Center for Innovation in Engineering Education (CIEE) at Batangas State University, the National Engineering University, operates with the visionary goals of elevating the standard of engineering education in the country and cultivating individuals equipped to lead in the dynamic global knowledge economy. With a strategic focus on fostering academic and industry leaders, CIEE acts as a nucleus, fostering collaborations among interdisciplinary experts. This collective synergy promotes a seamless exchange of knowledge and resources, bridging the gap between academia and industry. CIEE's multifaceted support spans | https://en.wikipedia.org/wiki/Engineering_education
and Babylon 5 (1994–1999). Syfy, launched in 1992 as the Sci-Fi Channel, specializes in science fiction, supernatural horror, and fantasy. The space-western series Firefly premiered in 2002 on Fox. It is set in the year 2517, after the arrival of humans in a new star system, and follows the adventures of the renegade crew of Serenity, a "Firefly-class" spaceship. Orphan Black began its five-season run in 2013, about a woman who assumes the identity of one of her several genetically identical human clones. In late 2015, Syfy premiered The Expanse to great critical acclaim, an American TV series about humanity's colonization of the Solar System. Its later seasons would then be aired through Amazon Prime Video. == Social influence == Science fiction's rapid rise in popularity during the first half of the 20th century was closely tied to the popular respect paid to science at that time, as well as the rapid pace of technological innovation and new inventions. Science fiction has often predicted scientific and technological progress. Some works predict that new inventions and progress will tend to improve life and society, for instance the stories of Arthur C. Clarke and Star Trek. Others, such as H. G. Wells's The Time Machine and Aldous Huxley's Brave New World, warn about possible negative consequences. In 2001 the National Science Foundation conducted a survey on "Public Attitudes and Public Understanding: Science Fiction and Pseudoscience". It found that people who read or prefer science fiction may think about or relate to science differently than other people. They also tend to support the space program and the idea of contacting extraterrestrial civilizations. Carl Sagan wrote: "Many scientists deeply involved in the exploration of the solar system (myself among them) were first turned in that direction by science fiction." Science fiction has predicted several existing inventions, such as the atomic bomb, robots, and borazon. In the 2020 series Away, astronauts use a Mars rover called InSight to listen intently for a landing on Mars. In 2022, scientists used InSight to listen for the landing of a spacecraft. Science fiction can act as a vehicle to analyze and recognize a society's past, present, and potential future social relationships with the other. Science fiction offers a medium and representation of alterity and differences in social identity. Brian Aldiss described science fiction as "cultural wallpaper". This widespread influence can be found in trends for writers to employ science fiction as a tool for advocacy and generating cultural insights | https://en.wikipedia.org/wiki/Science_fiction
Packaging; packing problems; queueing theory; engineering economics; manufacturing engineering; cutting stock problem; bin packing problem == Notes == == Bibliography == Yam, K. L., "Encyclopedia of Packaging Technology", John Wiley & Sons, 2009, ISBN 978-0-470-08704-6; Hanlon, Kelsey, and Forcinio, "Handbook of Package Engineering", CRC Press, 1998 | https://en.wikipedia.org/wiki/Packaging_engineering
A simple Regge-eikonal model with the eikonal represented as a single-reggeon-exchange term is applied to the description of nucleon-nucleon elastic diffractive scattering at ultra-high energies. The range of validity of the proposed approximation is discussed. The model predictions for the proton-proton cross-sections at the collision energy of 14 TeV are given. | arxiv:1404.2851
Realization of an on-chip quantum network is a major goal in the field of integrated quantum photonics. A typical scalable on-chip network demands optical integration of single photon sources, optical circuitry and detectors for routing and processing of quantum information. Current solutions either notoriously experience considerable decoherence or suffer from extended footprint dimensions limiting their on-chip scaling. Here we propose and numerically demonstrate a robust on-chip quantum network based on an epsilon-near-zero (ENZ) material, whose dielectric function has a real part close to zero. We show that ENZ materials strongly protect quantum information against decoherence and losses during its propagation in the dense network. As an example, we model a feasible implementation of an ENZ network and demonstrate that quantum information can be reliably sent across a titanium nitride grid with a coherence length of 434 nm, operating at room temperature, which is more than 40 times larger than state-of-the-art plasmonic analogs. Our results facilitate practical realization of large multi-node quantum photonic networks and circuits on a chip. | arxiv:1808.04272
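A quick back-of-the-envelope illustration of why the reported coherence length matters: under a simple exponential-decay model, coherence surviving a fixed propagation distance differs dramatically between the paper's 434 nm ENZ figure and a ~10 nm plasmonic-scale figure. Both the exponential model and the 10 nm comparison value are illustrative assumptions, not taken from the paper:

```python
import math

def surviving_coherence(distance_nm, coherence_length_nm):
    """Toy exponential model of coherence after propagating a given distance."""
    return math.exp(-distance_nm / coherence_length_nm)

# 100 nm of propagation: ENZ grid (434 nm, from the abstract) vs an
# assumed ~10 nm plasmonic coherence length.
enz = surviving_coherence(100.0, 434.0)
plasmonic = surviving_coherence(100.0, 10.0)
print(enz, plasmonic)  # ENZ retains most coherence; plasmonic loses nearly all
```

Even over a distance well within the ENZ coherence length, the shorter plasmonic scale leaves essentially no usable coherence, which is the practical content of the ">40 times larger" comparison.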