text | source |
|---|---|
Responding to the current urgent need for low carbon emissions and high efficiency in manufacturing processes, the relationships between three machining factors (depth of cut, feed rate, and spindle rate) and power consumption and surface finish (roughness) were analysed by applying a Bayesian seemingly unrelated regressions (SUR) model. For the analysis, an optimization criterion was established and minimized using an optimization algorithm that combines evolutionary methods with a derivative-based (quasi-Newton) method to find the operating conditions that minimize energy consumption while obtaining good surface finish quality. A Bayesian ANOVA was also performed to identify the most important factors in terms of the variance explained in the observed outcomes. The data were obtained from a factorial experimental design performed on two computerized numerical control (CNC) vertical machining centers (Haas UMC-750 and Leadwell V-40iT). The results show that the feed rate is the most influential factor in power consumption, and the depth of cut is the factor with the strongest influence on roughness values. An optimal operating point is found for the three factors, with a predictive error of less than 0.01% and 0.03% for the Leadwell V-40iT machine and the Haas UMC-750 machine, respectively. | arxiv:2207.05243 |
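The hybrid optimization strategy described in the abstract above (a global evolutionary search refined by a derivative-based quasi-Newton step) can be sketched with SciPy. The criterion below is a toy surrogate and the factor bounds are invented for illustration; the paper's actual Bayesian-SUR-based criterion is not given in the abstract:

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

def criterion(x):
    """Stand-in optimization criterion combining power and roughness.
    (Toy surrogate; the paper's criterion is built from the SUR posterior.)"""
    depth, feed, speed = x
    power = 0.5 * feed**2 + 0.1 * depth + 0.01 * speed   # toy power model
    roughness = (depth - 1.0)**2 + 0.2 / (0.1 + feed)    # toy roughness model
    return power + roughness

# Illustrative bounds: depth of cut [mm], feed rate [mm/rev], spindle rate [rpm]
bounds = [(0.1, 2.0), (0.05, 1.0), (1000.0, 6000.0)]

# Global evolutionary search...
res = differential_evolution(criterion, bounds, seed=0, tol=1e-8)
# ...refined by a derivative-based (quasi-Newton) local step
res = minimize(criterion, res.x, method="L-BFGS-B", bounds=bounds)
```

The two-stage pattern matters when the criterion is multimodal: the evolutionary stage explores the factor space, and the quasi-Newton stage polishes the best candidate.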
We study the solutions of the equations of motion in the gauged (2+1)-dimensional nonlinear Schrödinger model. The contribution of Chern-Simons gauge fields leads to a significant decrease in the critical power of self-focusing. We also show that, under appropriate boundary conditions, the model admits a regime of turbulent motions of hydrodynamic type. | arxiv:hep-th/9608069 |
The process of the evolutionary emergence of purposeful adaptive behavior is investigated by means of computer simulations. The proposed model implies an evolving population of simple agents that have two natural needs: energy and reproduction. Each need is characterized quantitatively by a corresponding motivation. Motivations determine the goal-directed behavior of the agents. The model demonstrates that purposeful behavior does emerge in the simulated evolutionary processes. The emergence of purposefulness is accompanied by the origin of a simple hierarchy in the control system of the agents. | arxiv:cs/0110021 |
Strangeness enhancement and collective flow are considered signatures of quark-gluon plasma formation. These phenomena have been detected not only in relativistic heavy-ion collisions but also in high-energy, high-multiplicity events of proton-proton and proton-nucleus ("small system") scatterings. Indeed, a universal behavior emerges when the parton density in the transverse plane is taken as the dynamical quantity specifying the initial condition of the collisions. On the other hand, $e^+e^-$ annihilation data at LEP and lower energies indicate that there is no strangeness enhancement and no flow-like effect. We show that the parton density in the transverse plane generated in $e^+e^-$ annihilation at the available energies is too low for such effects to be expected. The event-by-event multiplicity at which strangeness suppression and flow-like phenomena could show up in $e^+e^-$ is evaluated. | arxiv:2011.06966 |
We theoretically study magnon-phonon hybrid excitations (magnon-polarons) in two-dimensional antiferromagnets on a honeycomb lattice. With an in-plane Dzyaloshinskii-Moriya interaction (DMI) allowed by mirror-symmetry breaking due to phonons, we find non-trivial Berry curvature around the anti-crossing rings between the magnon band and both the optical and acoustic phonon bands, which gives rise to finite Chern numbers. We show that the Chern numbers of the magnon-polaron bands can be manipulated by changing the magnetic field direction or strength. We evaluate the thermal Hall conductivity reflecting the non-trivial Berry curvatures of the magnon-polarons and propose a valley Hall effect resulting from spin-induced chiral phonons as a possible experimental signature. Our study complements prior work on magnon-phonon hybridized systems without optical phonons and suggests possible applications in spin caloritronics with topological magnons and chiral phonons. | arxiv:2107.11484 |
We present a self-consistent calculation of the finite-temperature effective potential for $\lambda\phi^4$ theory, using the composite operator effective potential in which an infinite series of the leading diagrams is summed. Our calculation establishes the proper form of the leading correction to the perturbative one-loop effective potential. | arxiv:hep-ph/9211211 |
Our goal in this paper is the adaptation of image-text models for long video retrieval. Recent works have demonstrated state-of-the-art performance in video retrieval by adopting CLIP, effectively hitchhiking on the image-text representation for video tasks. However, there has been limited success in learning temporal aggregation that outperforms mean-pooling of the image-level representations extracted per frame by CLIP. We find that the simple yet effective baseline of a weighted mean of frame embeddings via query scoring is a significant improvement over all prior temporal modelling attempts and over mean-pooling. In doing so, we provide an improved baseline for others to compare to, and demonstrate state-of-the-art performance of this simple baseline on a suite of long video retrieval benchmarks. | arxiv:2205.08508 |
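The query-scored weighted-mean baseline in the abstract above can be sketched compactly. The paper's exact scoring function is not specified in the abstract; the version below (a softmax over frame-query cosine similarities, with an assumed temperature parameter) is one plausible reading:

```python
import numpy as np

def query_scored_pool(frame_embs, query_emb, temperature=0.1):
    """Weighted mean of frame embeddings, weighted by similarity to the query.

    frame_embs: (n_frames, d) L2-normalized frame embeddings (e.g. from CLIP).
    query_emb:  (d,) L2-normalized text-query embedding.
    temperature: assumed smoothing hyperparameter (not from the paper).
    """
    sims = frame_embs @ query_emb          # (n_frames,) cosine similarities
    weights = np.exp(sims / temperature)
    weights /= weights.sum()               # softmax over frames
    return weights @ frame_embs            # (d,) pooled video embedding
```

With uniform similarities the weights are uniform and this reduces exactly to mean-pooling, which is why it is a strict generalization of that baseline.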
Statistical methods are usually applied in the processing of digital images for the analysis of the textures they display. Aiming to evaluate the urbanization of a given location from satellite or aerial images, here we consider a simple processing method to distinguish the 'urban' from the 'rural' texture in them. The method is based on the mean values and the standard deviations of the colour tones of the image pixels. The processing of the input images yields maps from which a quantitative evaluation of the textures can be obtained. | arxiv:1611.03469 |
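A minimal sketch of the kind of processing described in the abstract above, assuming grayscale tones and non-overlapping windows; the window size is an illustrative choice, not taken from the paper:

```python
import numpy as np

def texture_maps(gray, window=16):
    """Local mean and standard deviation of pixel tones over non-overlapping
    windows -- a crude texture descriptor: high local std suggests the
    'busier' texture of built-up areas, low std a flatter 'rural' texture.
    (Illustrative sketch only; the paper's exact pipeline is not specified
    in the abstract.)"""
    h, w = gray.shape
    h, w = h - h % window, w - w % window        # crop to a multiple of the window
    tiles = gray[:h, :w].reshape(h // window, window, w // window, window)
    tiles = tiles.swapaxes(1, 2)                 # (rows, cols, window, window)
    return tiles.mean(axis=(2, 3)), tiles.std(axis=(2, 3))
```

The two returned arrays are exactly the "maps" mentioned in the abstract: per-window mean-tone and tone-dispersion images at reduced resolution.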
SU(2) gauge theory coupled to massless fermions in the adjoint representation is quantized in light-cone gauge by imposing the equal-time canonical algebra. The theory is defined on a space-time cylinder with "twisted" boundary conditions: periodic for one color component (the diagonal 3-component) and antiperiodic for the other two. The focus of the study is on the non-trivial vacuum structure and the fermion condensate. It is shown that the indefinite-metric quantization of free gauge bosons is not compatible with the residual gauge symmetry of the interacting theory. A suitable quantization of the unphysical modes of the gauge field is necessary in order to guarantee the consistency of the subsidiary condition and to allow the quantum representation of the residual gauge symmetry of the classical Lagrangian: the 3-color component of the gauge field must be quantized in a space with an indefinite metric, while the other two components require a positive-definite metric. The contribution of the latter to the free Hamiltonian becomes highly pathological in this representation, but a larger portion of the interacting Hamiltonian can be diagonalized, thus allowing perturbative calculations to be performed. The vacuum is evaluated through second order in perturbation theory, and this result is used for an approximate determination of the fermion condensate. | arxiv:hep-th/0408006 |
In this paper, we study a class of generalized extensible beam equations with a superlinear nonlinearity \begin{equation*} \left\{ \begin{array}{ll} \Delta^{2}u - M\left(\Vert\nabla u\Vert_{L^{2}}^{2}\right)\Delta u + \lambda V(x)u = f(x,u) & \text{in } \mathbb{R}^{N}, \\ u \in H^{2}(\mathbb{R}^{N}), & \end{array} \right. \end{equation*} where $N \geq 3$, $M(t) = at^{\delta} + b$ with $a, \delta > 0$ and $b \in \mathbb{R}$, $\lambda > 0$ is a parameter, $V \in C(\mathbb{R}^{N}, \mathbb{R})$ and $f \in C(\mathbb{R}^{N} \times \mathbb{R}, \mathbb{R})$. Unlike most other papers on this problem, we allow the constant $b$ to be nonpositive, which has physical significance. Under some suitable assumptions on $V(x)$ and $f(x,u)$, when $a$ is small and $\lambda$ is large enough, we prove the existence of two nontrivial solutions $u_{a,\lambda}^{(1)}$ and $u_{a,\lambda}^{(2)}$, one of which will blow up as the nonlocal term vanishes. Moreover, $u_{a,\lambda}^{(1)} \rightarrow u_{\infty}^{(1)}$ and $u_{a,\lambda}^{(2)} \rightarrow u_{\infty}^{(2)}$ strongly in $H^{2}(\mathbb{R}^{N})$ as $\lambda \rightarrow \infty$, where $u_{\infty}$ | arxiv:1812.03043 |
Let $\ell$ be a prime, $k$ a finitely generated field of characteristic different from $\ell$, and $X$ a smooth geometrically connected curve over $k$. Say a semisimple representation of $\pi_1^{\mathrm{et}}(X_{\bar k})$ is arithmetic if it extends to a finite-index subgroup of $\pi_1^{\mathrm{et}}(X)$. We show that there exists an effective constant $N = N(X, \ell)$ such that any semisimple arithmetic representation of $\pi_1^{\mathrm{et}}(X_{\bar k})$ into $\mathrm{GL}_n(\bar{\mathbb{Z}_\ell})$ which is trivial mod $\ell^N$ is in fact trivial. This extends a previous result of the second author from characteristic zero to all characteristics. The proof relies on a new noncommutative version of Siegel's linearization theorem and the $\ell$-adic form of Baker's theorem on linear forms in logarithms. | arxiv:2107.02213 |
Several methods have been proposed in the spatial statistics literature for the analysis of big data sets in continuous domains. However, new methods for analyzing high-dimensional areal data are still scarce. Here, we propose a scalable Bayesian modeling approach for smoothing mortality (or incidence) risks in high-dimensional data, that is, when the number of small areas is very large. The method is implemented in the R add-on package bigDM. Model fitting and inference are based on the idea of "divide and conquer" and use integrated nested Laplace approximations and numerical integration. We analyze the proposal's empirical performance in a comprehensive simulation study that considers two model-free settings. Finally, the methodology is applied to the analysis of male colorectal cancer mortality in Spanish municipalities, showing its benefits with regard to the standard approach in terms of goodness of fit and computational time. | arxiv:2007.07724 |
Discrete mathematics is the study of mathematical structures that can be considered "discrete" (in a way analogous to discrete variables, having a bijection with the set of natural numbers) rather than "continuous" (analogously to continuous functions). Objects studied in discrete mathematics include integers, graphs, and statements in logic. By contrast, discrete mathematics excludes topics in "continuous mathematics" such as real numbers, calculus, or Euclidean geometry. Discrete objects can often be enumerated by integers; more formally, discrete mathematics has been characterized as the branch of mathematics dealing with countable sets (finite sets or sets with the same cardinality as the natural numbers). However, there is no exact definition of the term "discrete mathematics". The set of objects studied in discrete mathematics can be finite or infinite. The term finite mathematics is sometimes applied to parts of the field of discrete mathematics that deal with finite sets, particularly those areas relevant to business. Research in discrete mathematics increased in the latter half of the twentieth century, partly due to the development of digital computers, which operate in "discrete" steps and store data in "discrete" bits. Concepts and notations from discrete mathematics are useful in studying and describing objects and problems in branches of computer science, such as computer algorithms, programming languages, cryptography, automated theorem proving, and software development. Conversely, computer implementations are significant in applying ideas from discrete mathematics to real-world problems. Although the main objects of study in discrete mathematics are discrete objects, analytic methods from "continuous" mathematics are often employed as well. In university curricula, discrete mathematics appeared in the 1980s, initially as a computer science support course; its contents were somewhat haphazard at the time. The curriculum has thereafter developed in conjunction with efforts by the ACM and MAA into a course that is basically intended to develop mathematical maturity in first-year students; therefore, it is nowadays a prerequisite for mathematics majors in some universities as well. Some high-school-level discrete mathematics textbooks have appeared as well. At this level, discrete mathematics is sometimes seen as a preparatory course, like precalculus in this respect. The Fulkerson Prize is awarded for outstanding papers in discrete mathematics. == Topics == === Theoretical computer science === Theoretical computer science includes areas of discrete mathematics relevant to computing. It draws heavily on graph theory and mathematical logic. Included within theoretical computer science is the study of algorithms and data structures. Computability studies what can be computed in principle, and has close ties to logic, while | https://en.wikipedia.org/wiki/Discrete_mathematics |
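The characterization of discrete mathematics in terms of countable sets above can be made concrete with a small example: the integers are countable because they can be put in bijection with the natural numbers. A sketch of the standard pairing:

```python
def int_to_nat(z):
    """Standard bijection Z -> N: 0, -1, 1, -2, 2, ... maps to 0, 1, 2, 3, 4, ..."""
    return 2 * z if z >= 0 else -2 * z - 1

def nat_to_int(n):
    """Inverse map N -> Z."""
    return n // 2 if n % 2 == 0 else -(n + 1) // 2
```

Enumerations like this are exactly what "can be enumerated by integers" means: every integer appears at a definite position in the list 0, -1, 1, -2, 2, ...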
Some time ago, we extended our monogenity investigations and calculations of generators of power integral bases to the relative case. Up to now, we have considered (usually totally real) extensions of complex quartic fields. In the present paper, we consider power integral bases in relative extensions of totally real fields. Totally complex quartic extensions of totally real number fields seem to be the simplest case, which we detail here. As we shall see, even in this case we have to overcome several unexpected difficulties, which we can, however, solve by properly (but not trivially) adjusting standard methods. We demonstrate our general algorithm on an explicit example. We describe how the general methods for solving relative index form equations in quartic relative extensions are modified in this case. As a byproduct, we show that relative Thue equations in totally complex extensions of totally real fields can only have small solutions, and we construct a special method for the enumeration of small solutions of special unit equations. These statements can be applied to other Diophantine problems as well. | arxiv:2004.05393 |
We conduct numerical simulations of the interacting ejecta from an exploding CO white dwarf (WD) with a He WD donor in the double-detonation scenario for Type Ia supernovae (SNe Ia), and study the possibility of exploding the companion WD. We also study the long-time imprint of the collision on the supernova remnant. When the donor He WD has a low mass, M_WD = 0.2 Msun, it is at a distance of ~0.08 Rsun from the explosion, and helium is not ignited. The low-mass He WD casts an 'ejecta shadow' behind it. By evolving the ejecta for longer times, we find that the outer parts of the shadowed side are fainter and its boundary with the ambient gas is somewhat flat. More massive He WD donors, M_WD ~ 0.4 Msun, must be closer to the CO WD to transfer mass. At a distance of a < 0.045 Rsun, helium is detonated and the He WD explodes, leading to a triple-detonation scenario. In the explosion of the donor WD, approximately 0.15 Msun of unburned helium is ejected. This might be observed as a peculiar Type Ib supernova. | arxiv:1410.1153 |
The elongation of current gun tubes, such as the new German 120 mm L/55 introduced by Rheinmetall, is considered only an interim solution, as it does not offer the required increase in muzzle velocity. Even advanced kinetic energy ammunition such as the United States' M829A3 is considered only an interim solution against future threats. To that extent, the solid propellant is considered to have reached the end of its usefulness, although it will remain the principal propulsion method for at least the next decade until newer technologies mature. ETC technology offers a medium-risk upgrade and is developed to the point that further improvements are so minor that it can be considered mature. The lightweight American 120 mm XM291 came close to achieving 17 MJ of muzzle energy, which is the lower end of the muzzle-energy spectrum for a 140 mm gun. However, the success of the XM291 does not imply the success of ETC technology, as there are key parts of the propulsion system that are not yet understood or fully developed, such as the plasma ignition process. Nevertheless, there is substantial existing evidence that ETC technology is viable and worth the money to continue development. Furthermore, it can be integrated into current gun systems. == Operational principle == An electrothermal-chemical gun uses a plasma cartridge to ignite and control the ammunition's propellant, using electrical energy as a catalyst to begin the process. Originally researched by Dr. Jon Parmentola for the U.S. Army, it has grown into a very plausible successor to the standard solid-propellant tank gun. Since the beginning of research, the United States has funded the XM291 gun project with US$4,000,000, basic research with US$300,000, and applied research with US$600,000. Since then, it has been proven to work, although efficiency at the level required has not yet been accomplished. ETC increases the performance of conventional solid propellants, reduces the effect of temperature on propellant expansion, and allows more advanced, higher-density propellants to be used. It also reduces the pressure placed on the barrel in comparison with alternative technologies that offer the same muzzle energy, because it spreads the propellant's gas much more smoothly during ignition. Currently, there are two principal methods of plasma initiation: the flashboard large area emitter (FLARE) and the triple coaxial plasma igniter (TCPI). === Flashboard large area emitter === Flashboards run in several | https://en.wikipedia.org/wiki/Electrothermal-chemical_technology |
perform tasks that are either too dangerous or too precise for humans to perform economically, and to ensure better quality. Many companies employ assembly lines of robots, especially in the automotive industry, and some factories are so robotized that they can run by themselves. Outside the factory, robots have been employed in bomb disposal, space exploration, and many other fields. Robots are also sold for various residential applications, from recreation to domestic applications. === Structural analysis === Structural analysis is the branch of mechanical engineering (and also civil engineering) devoted to examining why and how objects fail, and to fixing the objects and their performance. Structural failures occur in two general modes: static failure and fatigue failure. Static structural failure occurs when, upon being loaded (having a force applied), the object being analyzed either breaks or deforms plastically, depending on the criterion for failure. Fatigue failure occurs when an object fails after a number of repeated loading and unloading cycles. Fatigue failure occurs because of imperfections in the object: a microscopic crack on the surface of the object, for instance, will grow slightly with each cycle (propagation) until the crack is large enough to cause ultimate failure. Failure is not simply defined as when a part breaks, however; it is defined as when a part does not operate as intended. Some systems, such as the perforated top sections of some plastic bags, are designed to break. If these systems do not break, failure analysis might be employed to determine the cause. Structural analysis is often used by mechanical engineers after a failure has occurred, or when designing to prevent failure. Engineers often use online documents and books, such as those published by ASM, to aid them in determining the type of failure and possible causes. Once theory is applied to a mechanical design, physical testing is often performed to verify calculated results. Structural analysis may be used in an office when designing parts, in the field to analyze failed parts, or in laboratories where parts might undergo controlled failure tests. === Thermodynamics and thermo-science === Thermodynamics is an applied science used in several branches of engineering, including mechanical and chemical engineering. At its simplest, thermodynamics is the study of energy, its use, and its transformation through a system. Typically, engineering thermodynamics is concerned with changing energy from one form to another. As an example, automotive engines convert chemical energy (enthalpy) from the fuel into heat, and then into mechanical work that eventually turns the wheels. Therm | https://en.wikipedia.org/wiki/Mechanical_engineering |
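The cycle-by-cycle crack-propagation picture described above is commonly modeled with Paris' law, da/dN = C * (dK)^m, where dK is the stress-intensity range at the crack tip. The sketch below integrates it numerically; the material constants C and m and the geometry factor Y are illustrative placeholders, not values for any particular alloy:

```python
import math

def cycles_to_failure(a0, a_crit, delta_sigma, C=1e-11, m=3.0, Y=1.0, da=1e-5):
    """Numerically integrate Paris' law, da/dN = C * (dK)^m, with
    dK = Y * delta_sigma * sqrt(pi * a), from initial crack length a0 [m]
    to critical length a_crit [m].  delta_sigma is the stress range [MPa].
    Returns the estimated number of load cycles until the crack reaches
    the critical length.  (Illustrative constants, not a design tool.)"""
    a, n = a0, 0.0
    while a < a_crit:
        dK = Y * delta_sigma * math.sqrt(math.pi * a)  # stress-intensity range
        dN = da / (C * dK ** m)                        # cycles to grow crack by da
        a += da
        n += dN
    return n
```

The model reproduces the qualitative behavior in the text: life shortens sharply as either the stress range or the initial flaw size grows.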
Particle-based methods are a practical tool in computational fluid dynamics, and novel types of methods have been proposed. However, widely developed Lagrangian-type formulations suffer from the nonuniform distribution of particles, which is enhanced over time and results in problems in computational efficiency and parallel computations. To mitigate these problems, a mesh-constrained discrete point (MCD) method was developed for stationary boundary problems (Matsuda et al., 2022). Although the MCD method is a meshless method that uses moving least-squares approximation, the arrangement of particles (or discrete points (DPs)) is specialized so that their positions are constrained in background meshes to obtain a closely uniform distribution. This achieves a reasonable approximation of spatial derivatives with compact stencils, without encountering any ill-posed condition, and leads to good performance in terms of computational efficiency. In this study, a novel meshless method based on the MCD method for incompressible flows with moving boundaries is proposed. To ensure the mesh constraint of each DP in moving boundary problems, a novel updating algorithm for the DP arrangement is developed so that the positions of DPs are not only rearranged but the DPs are also reassigned the role of being on the boundary or not. The proposed method achieved reasonable results in numerical experiments for well-known moving boundary problems. | arxiv:2404.17542 |
Using quantum-classical analogies, we find that the dynamical pictures of quantum mechanics have precise counterparts in classical mechanics. In particular, the Eulerian and Lagrangian descriptions of fluid dynamics in classical mechanics are the analogs of the Schrödinger and Heisenberg pictures in quantum mechanics, respectively. Similarities between classical and quantum dynamical pictures are explored within the framework of the Koopman-von Neumann formalism. These allow for a natural definition of various dynamical pictures in classical mechanics, as well as the application of classical concepts to quantum dynamics. As an illustration, we use the interaction picture to find the classical evolution of an ensemble of particles of equal initial momenta and arbitrary configuration density under the action of a constant force in one dimension. As a second example, we discuss the extension of the ideas of sensitivity to initial conditions and chaos in classical mechanics to quantum mechanics. | arxiv:1305.5272 |
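The constant-force example mentioned in the abstract above is easy to reproduce directly: with equal initial momenta, every particle follows the same uniformly accelerated trajectory x(t) = x0 + (p0/m) t + (F/2m) t^2, so the configuration density is transported rigidly. A minimal sketch (variable names are ours, not the paper's):

```python
import numpy as np

def evolve_ensemble(x0, p0, force, mass, t):
    """Positions at time t of particles with equal initial momentum p0
    under a constant force: x(t) = x0 + (p0/m) t + (F/2m) t^2.
    Because the shift is the same for every particle, the configuration
    density is translated rigidly, with its shape unchanged."""
    return x0 + (p0 / mass) * t + (force / (2 * mass)) * t ** 2
```

This is the classical statement that the abstract's interaction-picture calculation recovers: the density profile moves as a whole, without spreading.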
The aim of this paper is to discuss the constructivity of the method originally introduced by U. Bessi to approach the phenomenon of topological instability commonly known as Arnold's diffusion. By adapting results and proofs from existing works and introducing additional tools where necessary, it is shown how, at least for a (well-known) paradigmatic model, it is possible to obtain a rigorous proof on a suitable discrete space, which can be fully implemented on a computer. A selection of explicitly constructed diffusing trajectories for the system at hand is presented in the final section. | arxiv:2306.11834 |
The aim of this article is to establish a Toponogov-type triangle comparison theorem for Finsler manifolds, in the manner of radial curvature geometry. We consider the situation where the radial flag curvature is bounded below by the radial curvature function of a non-compact surface of revolution, the edge opposite the base point is contained in a Berwald-like region, and the Finsler metric is convex enough in the radial directions in that region. | arxiv:1205.3913 |
We report the discovery of an extremely close, eclipsing binary system. A white dwarf is orbited by a core-He-burning compact hot subdwarf star with a period as short as $\simeq 0.04987\,{\rm d}$, making this system the most compact hot subdwarf binary discovered so far. The subdwarf will start to transfer helium-rich material on short timescales of less than $50\,{\rm Myr}$. The ignition of He-burning at the surface may trigger carbon-burning in the core, although the WD is less massive than the Chandrasekhar limit ($>0.74\,M_{\rm\odot}$), making this binary a possible progenitor candidate for a supernova Type Ia event. | arxiv:1209.4740 |
The Daniell-Kolmogorov extension theorem is a fundamental result in the theory of stochastic processes, as it allows one to construct a stochastic process with prescribed finite-dimensional distributions. However, it is well known that the domain of the constructed probability measure - the product sigma-algebra on the set of all paths - is not sufficiently rich. This problem is usually dealt with through a modification of the stochastic process, essentially changing the sample paths so that they become càdlàg. Assuming a countable state space, we provide an alternative version of the Daniell-Kolmogorov extension theorem that does not suffer from this problem, in that the domain is sufficiently rich and we do not need a subsequent modification step: we assume a rather weak regularity condition on the finite-dimensional distributions, and directly obtain a probability measure on the product sigma-algebra on the set of all càdlàg paths. | arxiv:2301.07992 |
Non-overlapping domain decomposition methods necessarily have to exchange Dirichlet and Neumann traces at interfaces in order to be able to converge to the underlying mono-domain solution. Well-known such non-overlapping methods are the Dirichlet-Neumann method, the FETI and Neumann-Neumann methods, and optimized Schwarz methods. For all these methods, cross-points in the domain decomposition configuration, where more than two subdomains meet, do not pose any problem at the continuous level, but care must be taken when the methods are discretized. We show in this paper two possible approaches for the consistent discretization of Neumann conditions at cross-points in a finite element setting. | arxiv:1404.4698 |
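As a toy illustration of the Dirichlet-Neumann trace exchange mentioned in the abstract above (two subdomains only, so no cross-point arises; the subdomain solves are done analytically for -u'' = 0 on (0,1) with u(0) = 0, u(1) = 1, whose exact solution is u(x) = x):

```python
def dirichlet_neumann_1d(theta=0.5, iters=20, lam=0.0):
    """Dirichlet-Neumann iteration for -u'' = 0 on (0,1), u(0)=0, u(1)=1,
    split at the interface x = 0.5 (exact interface value: 0.5).
    lam is the current guess for u at the interface; theta is the
    relaxation parameter.  This sketches only the trace exchange, not
    the paper's finite-element cross-point treatment."""
    history = []
    for _ in range(iters):
        # Dirichlet solve on (0, 0.5): u1(x) = 2*lam*x  ->  flux u1'(0.5) = 2*lam
        flux = 2.0 * lam
        # Neumann solve on (0.5, 1): u2'(0.5) = flux, u2(1) = 1
        #   -> u2(x) = 1 + flux*(x - 1), so u2(0.5) = 1 - flux/2
        u2_at_interface = 1.0 - flux / 2.0
        # Relaxed Dirichlet-trace update at the interface
        lam = theta * u2_at_interface + (1.0 - theta) * lam
        history.append(lam)
    return lam, history
```

For this symmetric splitting the choice theta = 0.5 converges in one step; other relaxation values converge geometrically, which is the behavior the continuous theory predicts.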
Overlap coincidence in a self-affine tiling in $\mathbb{R}^d$ is equivalent to pure point dynamical spectrum of the tiling dynamical system. We interpret overlap coincidence in the setting of substitution Delone sets in $\mathbb{R}^d$ and find an efficient algorithm to check the pure point dynamical spectrum. This algorithm is easy to implement in a computer program. We give the program and apply it to several examples. In the course of the proof of the algorithm, we show a variant of the conjecture of Urbański (Solomyak \cite{solomyak:08}) on the Hausdorff dimension of the boundaries of fractal tiles. | arxiv:1003.2898 |
We consider single-field chaotic inflationary models plus a cosine modulation term, as in axion monodromy models, and augment them by a light scalar field with a similar cosine coupling. We show that the power spectrum of curvature perturbations of this model is dominated by the one-loop contribution to the inflaton two-point function, which is enhanced due to resonant interactions. This allows us to disentangle the scales of scalar and tensor perturbations and hence to suppress the ratio of tensor-to-scalar power spectra, and it alters the expression for the scalar spectral tilt from that of the simple chaotic models, thus opening the way to reconciling chaotic models with convex potentials with the Planck data. As in monodromy inflation models, we also have a cosine modulation in the spectral tilt. We note that the contribution of resonance effects to non-Gaussianity is small and remains within the current bounds. Resonant production of light particles toward the end of inflation may set the stage for a successful reheating model. | arxiv:1903.05120 |
For example, a phone with 26 to 40 decibels is generally sufficient for mild hearing loss, while a phone with 71 to 90 decibels is better for more severe hearing loss. == Augmentative and alternative communication == Augmentative and alternative communication (AAC) is an umbrella term that encompasses methods of communication for those with impairments or restrictions on the production or comprehension of spoken or written language. AAC systems are extremely diverse and depend on the capabilities of the user. They may be as basic as pictures on a board that are used to request food, drink, or other care; or they can be advanced speech generating devices, based on speech synthesis, that are capable of storing hundreds of phrases and words. == Cognitive impairments == Assistive technology for cognition (ATC) is the use of technology (usually high tech) to augment and assist cognitive processes such as attention, memory, self-regulation, navigation, emotion recognition and management, planning, and sequencing activity. Systematic reviews of the field have found that the number of ATC is growing rapidly but has focused on memory and planning, that there is emerging evidence for efficacy, and that a lot of scope exists to develop new ATC. Examples of ATC include: NeuroPage, which prompts users about meetings; Wakamaru, which provides companionship, reminds users to take medicine, and calls for help if something is wrong; and telephone reassurance systems. === Memory aids === Memory aids are any type of assistive technology that helps a user learn and remember certain information. Many memory aids are used for cognitive impairments such as reading, writing, or organizational difficulties. For example, a smartpen records handwritten notes by creating both a digital copy and an audio recording of the text. Users simply tap certain parts of their notes, and the pen saves it and reads it back to them. From there, the user can also download their notes onto a computer for increased accessibility. Digital voice recorders are also used to record "in the moment" information for fast and easy recall at a later time. A 2017 Cochrane review highlighted the current lack of high-quality evidence to determine whether assistive technology effectively supports people with dementia in managing memory issues. Thus, it is not presently clear whether or not assistive technology is beneficial for memory problems. === Educational software === Educational software is software that assists people with reading, learning, comprehension, and organizational difficulties. Any accommodation software such as text readers, notetake | https://en.wikipedia.org/wiki/Assistive_technology |
ground - based counts and colors of faint galaxies in the u and r bands in one field at high galactic latitude are presented. integrated over flux, a total of 1. 2x10 ^ 5 sources per square degree are found to u = 25. 5 mag and 6. 3x10 ^ 5 sources per square degree to r = 27 mag, with d log n / dm ~ 0. 5 in the u band and d log n / dm ~ 0. 3 in the r band. consistent with these number - magnitude curves, sources become bluer with increasing magnitude to median u - r = 0. 6 mag at 24 < u < 25 mag and u - r = 1. 2 mag at 25 < r < 26 mag. because the lyman break redshifts into the u band at z ~ 3, at least 1. 2x10 ^ 5 sources per square degree must be at redshifts z < 3. measurable u - band fluxes of 73 percent of the 6. 3x10 ^ 5 sources per square degree suggest that the majority of these also lie at z < 3. these results require an enormous space density of objects in any cosmological model. | arxiv:astro-ph/9702241 |
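The quoted slopes d log n / dm imply a simple power-law scaling of the cumulative source counts with magnitude. The sketch below idealizes the counts as a pure power law (the function name is ours; the reference counts and slopes are taken from the abstract):

```python
import math

def cumulative_counts(n_ref, m_ref, slope, m):
    """Cumulative source counts per square degree, assuming an
    idealized power-law number-magnitude relation with
    d(log10 N)/dm = slope, anchored at (m_ref, n_ref)."""
    return n_ref * 10 ** (slope * (m - m_ref))

# u-band: 1.2e5 sources/deg^2 to u = 25.5 mag, slope ~ 0.5;
# one magnitude brighter the counts drop by a factor 10^0.5 ~ 3.2
n_u_brighter = cumulative_counts(1.2e5, 25.5, 0.5, 24.5)
```

Under this idealization, each magnitude of depth in the u band multiplies the cumulative counts by about 3.2, versus about 2 in the r band.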
we present an analysis of the structures and dynamics of the merging cluster abell ~ 1201, which has two sloshing cold fronts around a cooling core, and an offset gas core approximately 500kpc northwest of the center. new chandra and xmm - newton data reveal a region of enhanced brightness east of the offset core, with breaks in surface brightness along its boundary to the north and east. this is interpreted as a tail of gas stripped from the offset core. gas in the offset core and the tail is distinguished from other gas at the same distance from the cluster center chiefly by having higher density, hence lower entropy. in addition, the offset core shows marginally lower temperature and metallicity than the surrounding area. the metallicity in the cool core is high and there is an abrupt drop in metallicity across the southern cold front. we interpret the observed properties of the system, including the placement of the cold fronts, the offset core and its tail in terms of a simple merger scenario. the offset core is the remnant of a merging subcluster, which first passed pericenter southeast of the center of the primary cluster and is now close to its second pericenter passage, moving at ~ 1000 km / s. sloshing excited by the merger gave rise to the two cold fronts and the disposition of the cold fronts reveals that we view the merger from close to the plane of the orbit of the offset core. | arxiv:1205.2095 |
though the statistical analysis of ranking data has been a subject of interest over the past centuries, especially in economics, psychology or social choice theory, it has been revitalized in the past 15 years by recent applications such as recommender or search engines and is now receiving increasing interest in the machine learning literature. numerous modern systems indeed generate ranking data, representing for instance ordered results to a query or user preferences. each such ranking usually involves a small but varying subset of the whole catalog of items only. the study of the variability of these data, i. e. the statistical analysis of incomplete rankings, is however a great statistical and computational challenge, because of their heterogeneity and the related combinatorial complexity of the problem. whereas many statistical methods are documented in the dedicated literature for analyzing full rankings ( orderings of all the items in the catalog ), partial rankings ( full rankings with ties ), or pairwise comparisons, only a few approaches are available today to deal with incomplete rankings, each relying on a strong specific assumption. it is the purpose of this article to introduce a novel general framework for the statistical analysis of incomplete rankings. it is based on a representation tailored to these specific data, whose construction is also explained here, which fits with the natural multi - scale structure of incomplete rankings and provides a new decomposition of rank information with a multiresolution analysis ( mra ) interpretation. we show that the mra representation naturally allows one to overcome both the statistical and computational challenges without any structural assumption on the data. it therefore provides a general and flexible framework to solve a wide variety of statistical problems, where data are of the form of incomplete rankings. | arxiv:1601.00399 |
quantum cloning is an essential operation in quantum information and quantum computing. similar to the ` copy ' operation in classical computing, the cloning of flying bits for further processing from the solid - state quantum bits in storage is an operation frequently used in quantum information processing. here we propose a high - fidelity and controllable quantum cloning scheme between solid bits and flying bits. in order to overcome the obstacles from the no - cloning theorem and the weak phonon - photon interaction, we introduce a hybrid optomechanical system that performs both probabilistic cloning and deterministic cloning close to the theoretical optimal limit with the help of a designed driving pulse in the presence of dissipation. in addition, our scheme allows a highly tunable switching between two cloning methods, namely the probabilistic and deterministic cloning, by simply changing the input laser pulse. this provides a promising platform for experimental executability. | arxiv:2302.05643 |
ngc 3311, the giant cd galaxy in the hydra cluster ( a1060 ), has one of the largest globular cluster systems known. we describe new gemini gmos ( g ', i ' ) photometry of the ngc 3311 field which reveals that the red, metal - rich side of its globular cluster population extends smoothly upward into the mass range associated with the new class of ultra - compact dwarfs ( ucds ). we identify 29 ucd candidates with estimated masses > 6x10 ^ 6 solar masses and discuss their characteristics. this ucd - like sequence is the most well defined one yet seen, and reinforces current ideas that the high - mass end of the globular cluster sequence merges continuously into the ucd sequence, which connects in turn to the e galaxy structural sequence. | arxiv:0708.1514 |
the aim of this paper is a characterization of great antipodal sets of complex grassmannian manifolds as certain designs with the smallest cardinalities. | arxiv:1303.5936 |
we construct an analytical model for two channel, two - body scattering amplitudes, and then apply it in the description of the three - body $ j / \ psi \ to k ^ + k ^ - \ pi ^ 0 $ decay. in the construction of the partial wave amplitudes, we combine the low energy resonance region with the regge asymptotic behavior determined from direct two - body production. we find that resonance production in the $ k \ pi $ channel in $ j / \ psi $ decays seems to differ from that observed in direct $ k \ pi $ production, while the mass distribution in the $ k \ bar k $ channel may be compatible. | arxiv:1112.3284 |
we study the interactions of axion - like particles ( alps ) with the standard model particles, aiming to probe their phenomenology via non - resonant searches at the lhc. these interactions are mediated by higher dimensional effective operators within two possible frameworks of linearly and non - linearly realised electroweak symmetry breaking. we consider the alps to be light enough to be produced on - shell and exploit their derivative couplings with the sm higgs boson and the gauge bosons. we will use the high momentum transfer processes, namely $ hz, z \ gamma, ww $ and $ ww \ gamma $ production from $ pp $ collisions. we derive upper limits on the gauge - invariant interactions of alps with the electroweak bosons and / or higgs boson that contribute to these processes, from the re - interpretation of the latest run 2 available lhc data. the constraints we obtain are strong for alp masses below 100 gev. these allowed effective interactions in the alp parameter space yield better significance at hl - lhc and thus, offer promising avenues for subsequent studies. furthermore, we augment our cut - based analysis with gradient - boosted decision trees, which improve the statistical significance distinctly across these interaction channels. we briefly compare the results with the complementary probe of these couplings via direct production of alps in association with the higgs boson or a vector boson. | arxiv:2312.05992 |
random numbers are essential for our modern information - based society, e. g. in cryptography. unlike frequently used pseudo - random generators, physical random number generators do not depend on complex algorithms but rather on a physical process to provide true randomness. quantum random number generators ( qrng ) do rely on a process, which can be described by a probabilistic theory only, even in principle. here we present a conceptually simple implementation, which offers a 100 % efficiency of producing a random bit upon request and simultaneously exhibits an ultra low latency. a careful technical and statistical analysis demonstrates its robustness against imperfections of the actual implemented technology and enables quick estimation of the randomness of very long sequences. generated random numbers pass standard statistical tests without any post - processing. the setup described, as well as the theory presented here, demonstrate the maturity and overall understanding of the technology. | arxiv:1407.4602 |
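The abstract does not name the statistical tests used, but the NIST SP 800-22 frequency (monobit) test is one standard check such sequences must pass. A minimal sketch:

```python
import math

def monobit_test(bits):
    """NIST SP 800-22 frequency (monobit) test: checks whether the
    proportion of ones in a bit sequence is consistent with an
    unbiased random source. Returns a p-value; values above the
    customary 0.01 significance level pass."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)          # map 0 -> -1, 1 -> +1
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

# a perfectly balanced sequence yields p-value 1.0;
# a constant sequence yields a p-value near 0 and fails
p_balanced = monobit_test([0, 1] * 500)
p_constant = monobit_test([1] * 1000)
```

This is only one test of the suite; a full assessment combines many such statistics over long output sequences.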
the popularity of cyber - physical systems is fueling the rapid growth of location - based services. this poses the risk of location privacy disclosure. effective privacy preservation is foremost for various mobile applications. recently, geo - indistinguishability and expected inference error are proposed for limiting location leakages. in this paper, we argue that personalization means regionalization for geo - indistinguishability, and we propose a regionalized location obfuscation mechanism called dpive with personalized utility sensitivities. this substantially corrects the differential and distortion privacy problem of pive framework proposed by yu et al. on ndss 2017. we develop dpive with two phases. in phase i, we determine disjoint sets by partitioning all possible positions such that different locations in the same set share the protection location set ( pls ). in phase ii, we construct a probability distribution matrix in which the rows corresponding to the same pls have their own sensitivity of utility ( pls diameter ). moreover, by designing qk - means algorithm for more search space in 2 - d space, we improve dpive with refined location partition and present fine - grained personalization, enabling each location to have its own privacy level endowed with a customized privacy budget. experiments with two public datasets demonstrate that our mechanisms have the superior performance, typically on skewed locations. | arxiv:2102.00654 |
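For context, the standard mechanism achieving plain (non-regionalized) geo-indistinguishability is the planar Laplace mechanism; DPIVE's contribution is layered on top of this idea with per-region protection location sets and personalized budgets. A sketch of the base mechanism, using the fact that its radial component follows a Gamma(2, 1/epsilon) distribution (coordinates are treated as planar, an idealization):

```python
import math
import random

def planar_laplace(x, y, epsilon):
    """Sample a noisy location from the planar Laplace distribution
    centered at (x, y), achieving epsilon-geo-indistinguishability.
    The angle is uniform; the radius is Gamma-distributed with
    shape 2 and scale 1/epsilon (mean displacement 2/epsilon)."""
    theta = random.uniform(0, 2 * math.pi)
    r = random.gammavariate(2, 1 / epsilon)
    return x + r * math.cos(theta), y + r * math.sin(theta)
```

Smaller epsilon means stronger privacy and larger expected displacement, which is the utility/privacy trade-off the regionalized mechanism tunes per location.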
the three - dimensional bimodal random - field ising model is studied via a new finite temperature numerical approach. the methods of wang - landau sampling and broad histogram are implemented in a unified algorithm by using the n - fold version of the wang - landau algorithm. the simulations are performed in dominant energy subspaces, determined by the recently developed critical minimum energy subspace technique. the random fields are obtained from a bimodal distribution, that is we consider the discrete $ ( \ pm \ delta ) $ case and the model is studied on cubic lattices with sizes $ 4 \ leq l \ leq 20 $. in order to extract information for the relevant probability distributions of the specific heat and susceptibility peaks, large samples of random field realizations are generated. the general aspects of the model ' s scaling behavior are discussed and the process of averaging finite - size anomalies in random systems is re - examined under the prism of the lack of self - averaging of the specific heat and susceptibility of the model. | arxiv:cond-mat/0509566 |
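The core of Wang-Landau sampling is a flat-histogram random walk that iteratively refines an estimate of the density of states g(E). The paper uses the N-fold variant restricted to dominant energy subspaces; the sketch below shows only the basic idea on a toy model whose exact density of states is known (E = number of up spins among n, so g(E) = C(n, E)):

```python
import math
import random

def wang_landau(n=8, flat=0.8, f_final=1e-3, batch=10000, seed=1):
    """Basic Wang-Landau estimate of ln g(E) for a toy model where
    the energy is the number of up spins among n independent spins.
    A move flips one spin and is accepted with prob min(1, g(E)/g(E')).
    The modification factor ln f is halved whenever the visit
    histogram is flat; the result is normalized so ln g(0) = 0."""
    rng = random.Random(seed)
    ln_g = [0.0] * (n + 1)
    state = [0] * n
    e = 0
    ln_f = 1.0
    while ln_f > f_final:
        hist = [0] * (n + 1)
        for _ in range(batch):
            i = rng.randrange(n)
            e_new = e + (1 if state[i] == 0 else -1)
            if math.log(rng.random() + 1e-300) < ln_g[e] - ln_g[e_new]:
                state[i] ^= 1
                e = e_new
            ln_g[e] += ln_f
            hist[e] += 1
        if min(hist) > flat * (sum(hist) / len(hist)):
            ln_f /= 2
    return [lg - ln_g[0] for lg in ln_g]
```

For n = 8 the exact answer is ln g(4) = ln C(8,4) = ln 70, which the estimate should reproduce to within the resolution set by the final modification factor.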
suppose there is a collection $ x _ 1, x _ 2, \ dots, x _ n $ of independent uniform $ [ 0, 1 ] $ random variables, and a hypergraph $ f $ of target structures on the vertex set $ \ { 1, \ dots, n \ } $. we would like to buy a target structure at small cost, but we do not know all the costs $ x _ i $ ahead of time. instead, we inspect the random variables $ x _ i $ one at a time, and after each inspection, choose to either keep the vertex $ i $ at cost $ x _ i $, or reject vertex $ i $ forever. in the present paper, we consider the case where $ \ { 1, \ dots, n \ } $ is the edge - set of some graph, and the target structures are the spanning trees of a graph, spanning arborescences of a digraph, the paths between a fixed pair of vertices, perfect matchings, hamilton cycles or the cliques of some fixed size. | arxiv:1605.06072 |
we present results on stripe formation in the swift - hohenberg equation with a directional quenching term. stripes are " grown " in the wake of a moving parameter step line, and we analyze how the orientation of stripes changes depending on the speed of the quenching line and on a lateral aspect ratio. we observe stripes perpendicular to the quenching line, but also stripes created at oblique angles, as well as periodic wrinkles created in an otherwise oblique stripe pattern. technically, we study stripe formation as traveling - wave solutions in the swift - hohenberg equation and in reduced cahn - hilliard and newell - whitehead - segel models, analytically, through numerical continuation, and in direct simulations. | arxiv:1810.08688 |
we show that simple models of scalar - field dark energy leave a generic enhancement in the weak - lensing power spectrum when compared to the lcdm prediction. in particular, we calculate the linear - scale enhancement in the convergence ( or cosmic - shear ) power spectrum for two best - fit models of scalar - field dark energy, namely, the ratra - peebles and sugra - type quintessence. our calculations are based on linear perturbation theory, using gauge - invariant variables with carefully defined adiabatic initial conditions. we find that geometric effects enhance the lensing power spectrum on a broad range of scales, whilst the clustering of dark energy gives rise to additional power on large scales. the dark - energy power spectrum for these models are also explicitly obtained. on degree scales, the total enhancement may be as large as 30 - 40 % for sources at redshift ~ 1. we argue that there are realistic prospects for detecting such an enhancement using the next generation of large telescopes. | arxiv:0910.1539 |
we study the dynamics of a two - degrees - of - freedom ( two dof ) nonlinear oscillator representing a quarter - car model excited by a road roughness profile. modelling the road profile by means of a harmonic function, we derive the melnikov criterion for a system transition to chaos or escape. the analytically obtained estimations are confirmed by numerical simulations. to analyze the transient vibrations we used recurrences. | arxiv:1206.5475 |
we show several relations between local moves on 1 - dimensional knots and those on high dimensional knots related by products of knots. | arxiv:1210.4667 |
first inclusive measurements of isolated prompt photons in photoproduction at the hera ep collider have been made with the zeus detector, using an integrated luminosity of 38. 4 pb $ ^ { - 1 } $. cross sections are given as a function of the pseudorapidity and the transverse energy ( $ \ eta ^ \ gamma $, $ e _ t ^ \ gamma $ ) of the photon, for $ e _ t ^ \ gamma > $ 5 gev in the $ \ gamma p $ centre - of - mass energy range 134 - 285 gev. comparisons are made with predictions from monte carlo models having leading - logarithm parton showers, and with next - to - leading - order qcd calculations, using currently available parameterisations of the photon structure. for forward $ \ eta ^ \ gamma $ ( proton direction ) good agreement is found, but in the rear direction all predictions fall below the data. | arxiv:hep-ex/9910045 |
we prove that the existence of an automorphism of finite order on a ( defined over a number field ) variety x implies the existence of algebraic linear relations between the logarithm of certain periods of x and the logarithm of special values of the gamma - function. this implies that a slight variation of results by anderson, colmez and gross on the periods of cm abelian varieties is valid for a larger class of cm motives. in particular, we prove a weak form of the period conjecture of gross - deligne. our proof relies on the arithmetic fixed point formula ( equivariant arithmetic riemann - roch theorem ) proved by k. koehler and the second author, and the vanishing of the equivariant analytic torsion for the dolbeault complex. | arxiv:math/0209177 |
learning rbms using standard algorithms such as cd ( k ) involves gradient descent on the negative log - likelihood. one of the terms in the gradient, which involves expectation w. r. t. the model distribution, is intractable and is obtained through an mcmc estimate. in this work we show that the hessian of the log - likelihood can be written in terms of covariances of hidden and visible units and hence, all elements of the hessian can also be estimated using the same mcmc samples with small extra computational costs. since inverting the hessian may be computationally expensive, we propose an algorithm that uses inverse of the diagonal approximation of the hessian, instead. this essentially results in parameter - specific adaptive learning rates for the gradient descent process and improves the efficiency of learning rbms compared to the standard methods. specifically we show that using the inverse of diagonal approximation of hessian in the stochastic dc ( difference of convex functions ) program approach results in very efficient learning of rbms. | arxiv:1810.10777 |
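The update rule described above amounts to a gradient step preconditioned by the inverse of the diagonal Hessian approximation, i.e. a parameter-specific adaptive learning rate. A generic sketch of that step (in the RBM setting both the gradient and the diagonal Hessian entries would be MCMC estimates computed from the same samples; the damping term and step size here are illustrative):

```python
import numpy as np

def adaptive_update(theta, grad, diag_hessian, eta=0.1, damping=1e-4):
    """One gradient-descent step preconditioned by the inverse of a
    diagonal Hessian approximation: each parameter's effective
    learning rate is eta / (|H_ii| + damping). The damping guards
    against division by near-zero curvature estimates."""
    return theta - eta * grad / (np.abs(diag_hessian) + damping)
```

On a quadratic objective 0.5 * a * x**2 a single step with eta = 1 jumps essentially to the minimum, which is the usual motivation for curvature-scaled steps.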
consider a regression model with infinitely many parameters and time series errors. we are interested in choosing weights for averaging across generalized least squares ( gls ) estimators obtained from a set of approximating models. however, gls estimators, depending on the unknown inverse covariance matrix of the errors, are usually infeasible. we therefore construct feasible generalized least squares ( fgls ) estimators using a consistent estimator of the unknown inverse matrix. based on this inverse covariance matrix estimator and fgls estimators, we develop a feasible autocovariance - corrected mallows model averaging criterion to select weights, thereby providing an fgls model averaging estimator of the true regression function. we show that the generalized squared error loss of our averaging estimator is asymptotically equivalent to the minimum one among those of gls model averaging estimators with the weight vectors belonging to a continuous set, which includes the discrete weight set used in hansen ( 2007 ) as its proper subset. | arxiv:1503.06401 |
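The starting point is Hansen's (2007) Mallows model averaging criterion, which the paper extends with an autocovariance correction for time series errors. A minimal sketch of the uncorrected criterion for two candidate models, minimized over the weight by grid search (function name, grid resolution, and the two-model restriction are ours):

```python
import numpy as np

def mallows_weight_two_models(y, yhat1, yhat2, k1, k2, sigma2):
    """Mallows model-averaging criterion for two candidate OLS fits
    yhat1, yhat2 with k1, k2 parameters:
        C(w) = ||y - w*yhat1 - (1-w)*yhat2||^2
               + 2*sigma2*(w*k1 + (1-w)*k2),
    minimized over w in [0, 1] by grid search."""
    best_w, best_c = 0.0, np.inf
    for w in np.linspace(0, 1, 1001):
        fit = w * yhat1 + (1 - w) * yhat2
        c = np.sum((y - fit) ** 2) + 2 * sigma2 * (w * k1 + (1 - w) * k2)
        if c < best_c:
            best_w, best_c = w, c
    return best_w
```

The penalty term trades fit against model size, so a larger model that fits perfectly still receives positive weight only insofar as the smaller model's bias outweighs the parameter penalty.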
the availability of challenging simulation environments is pivotal for advancing the field of multi - agent reinforcement learning ( marl ). in cooperative marl settings, the starcraft multi - agent challenge ( smac ) has gained prominence as a benchmark for algorithms following centralized training with decentralized execution paradigm. however, with continual advancements in smac, many algorithms now exhibit near - optimal performance, complicating the evaluation of their true effectiveness. to alleviate this problem, in this work, we highlight a critical issue : the default opponent policy in these environments lacks sufficient diversity, leading marl algorithms to overfit and exploit unintended vulnerabilities rather than learning robust strategies. to overcome these limitations, we propose smac - hard, a novel benchmark designed to enhance training robustness and evaluation comprehensiveness. smac - hard supports customizable opponent strategies, randomization of adversarial policies, and interfaces for marl self - play, enabling agents to generalize to varying opponent behaviors and improve model stability. furthermore, we introduce a black - box testing framework wherein agents are trained without exposure to the edited opponent scripts but are tested against these scripts to evaluate the policy coverage and adaptability of marl algorithms. we conduct extensive evaluations of widely used and state - of - the - art algorithms on smac - hard, revealing the substantial challenges posed by edited and mixed strategy opponents. additionally, the black - box strategy tests illustrate the difficulty of transferring learned policies to unseen adversaries. we envision smac - hard as a critical step toward benchmarking the next generation of marl algorithms, fostering progress in self - play methods for multi - agent systems. our code is available at https : / / github. com / devindeng94 / smac - hard. | arxiv:2412.17707 |
cooperative multi - agent reinforcement learning is a decentralized paradigm in sequential decision making where agents distributed over a network iteratively collaborate with neighbors to maximize global ( network - wide ) notions of rewards. exact computations typically involve a complexity that scales exponentially with the number of agents. to address this curse of dimensionality, we design a scalable algorithm based on the natural policy gradient framework that uses local information and only requires agents to communicate with neighbors within a certain range. under standard assumptions on the spatial decay of correlations for the transition dynamics of the underlying markov process and the localized learning policy, we show that our algorithm converges to the globally optimal policy with a dimension - free statistical and computational complexity, incurring a localization error that does not depend on the number of agents and converges to zero exponentially fast as a function of the range of communication. | arxiv:2109.11692 |
we present a novel algorithm for edge - coloring of multigraphs. the correctness of this algorithm for multigraphs with $ \ chi ' > \ delta + 1 $ ( $ \ chi ' $ is the chromatic edge number and $ \ delta $ is the maximum vertex degree ) would prove a long standing conjecture in edge - coloring of multigraphs. | arxiv:1706.04476 |
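For contrast with the exact bounds discussed above, the simplest baseline is greedy edge-coloring: give each edge the smallest color absent at both endpoints. This is not the paper's algorithm; it merely guarantees at most 2 * delta - 1 colors, far from the chi' values exact methods target:

```python
def greedy_edge_coloring(edges):
    """Proper edge-coloring of a multigraph (edges may repeat):
    each edge receives the smallest nonnegative color not already
    used on an edge incident to either endpoint. Uses at most
    2*Delta - 1 colors, where Delta is the maximum degree."""
    used = {}            # vertex -> set of colors on incident edges
    coloring = []
    for u, v in edges:
        c = 0
        while c in used.get(u, set()) or c in used.get(v, set()):
            c += 1
        coloring.append(c)
        used.setdefault(u, set()).add(c)
        used.setdefault(v, set()).add(c)
    return coloring
```

On a triangle the greedy method already needs three colors, and parallel edges (the multigraph case) each force a fresh color at their shared endpoints.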
knowledge graphs typically undergo open - ended growth of new relations. this cannot be well handled by relation extraction that focuses on pre - defined relations with sufficient training data. to address new relations with few - shot instances, we propose a novel bootstrapping approach, neural snowball, to learn new relations by transferring semantic knowledge about existing relations. more specifically, we use relational siamese networks ( rsn ) to learn the metric of relational similarities between instances based on existing relations and their labeled data. afterwards, given a new relation and its few - shot instances, we use rsn to accumulate reliable instances from unlabeled corpora ; these instances are used to train a relation classifier, which can further identify new facts of the new relation. the process is conducted iteratively like a snowball. experiments show that our model can gather high - quality instances for better few - shot relation learning and achieves significant improvement compared to baselines. codes and datasets are released on https : / / github. com / thunlp / neural - snowball. | arxiv:1908.11007 |
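The iterative accumulation loop can be sketched as follows. The real Neural Snowball scores candidates with a trained relational siamese network and a relation classifier; the cosine-to-centroid score, threshold, and round count below are stand-ins for illustration only:

```python
import numpy as np

def snowball(seed_vecs, unlabeled_vecs, threshold=0.8, rounds=3):
    """Skeleton of snowball bootstrapping: repeatedly pull unlabeled
    instance embeddings into the selected set when their similarity
    to the current set (here: cosine to the centroid) exceeds a
    threshold, then rescore the remaining pool."""
    selected = list(seed_vecs)
    pool = list(unlabeled_vecs)
    for _ in range(rounds):
        centroid = np.mean(selected, axis=0)
        centroid = centroid / np.linalg.norm(centroid)
        keep, rest = [], []
        for v in pool:
            sim = float(v @ centroid / np.linalg.norm(v))
            (keep if sim >= threshold else rest).append(v)
        if not keep:          # no new reliable instances: stop early
            break
        selected.extend(keep)
        pool = rest
    return selected
```

Each round enlarges the labeled set, which in the full method is then used to retrain the classifier before the next round.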
the present study applies a deep reinforcement learning ( drl ) algorithm to active flow control ( afc ) of a two - dimensional flow around a confined square cylinder. specifically, the soft actor - critic ( sac ) algorithm is employed to modulate the flow of a pair of synthetic jets placed on the upper and lower surfaces of the confined squared cylinder in flow configurations characterized by $ re $ of 100, 200, 300, and 400. the investigation starts with an analysis of the baseline flow in the absence of active control. it is observed that at $ re $ = 100 and $ re $ = 200, the vortex shedding exhibits mono - frequency characteristics. conversely, at $ re $ = 300 and $ re $ = 400, the vortex shedding is dominated by multiple frequencies, which is indicative of more complex flow features. with the application of the sac algorithm, we demonstrate the capability of drl - based control in effectively suppressing vortex shedding, while significantly diminishing drag and fluctuations in lift. quantitatively, the data - driven active control strategy results in a drag reduction of approximately 14. 4 %, 26. 4 %, 38. 9 %, and 47. 0 % for $ re $ = 100, 200, 300, and 400, respectively. to understand the underlying control mechanism, we also present detailed flow field comparisons, which showcase the adaptability of drl in devising distinct control strategies tailored to the dynamic conditions at varying $ re $. these findings substantiate the proficiency of drl in controlling chaotic, multi - frequency dominated vortex shedding phenomena, underscoring the robustness of drl in complex afc problems. | arxiv:2404.12123 |
in this work we use the vanishing complexity factor as a supplementary condition to construct uncharged and charged durgapal - like models. we provide the $ g _ { tt } $ component of the metric of the well - known durgapal iv and v solutions and a particular form for the anisotropy, related to the electric charge, to close the system of differential equations. the physical acceptance of the models is discussed. | arxiv:2208.10260 |
building upon existing signal processing techniques and open - source software, this paper presents a baseline design for an rf system - on - chip frequency division multiplexed readout for a spatio - spectral focal plane instrument based on low temperature detectors. a trade - off analysis of different fpga carrier boards is presented in an attempt to find an optimum next - generation solution for reading out larger arrays of microwave kinetic inductance detectors ( mkids ). the zcu111 rf soc fpga board from xilinx was selected, and it is shown how this integrated system promises to increase the number of pixels that can be read out ( per board ) which enables a reduction in the readout cost per pixel, the mass and volume, and power consumption, all of which are important in making mkid instruments more feasible for both ground - based and space - based astrophysics. the on - chip logic capacity is shown to form a primary constraint on the number of mkids which can be read, channelised, and processed with this new system. as such, novel signal processing techniques are analysed, including digitally down converted ( ddc ) - corrected sub - maximally decimated sampling, in an effort to reduce logic requirements without compromising signal to noise ratio. it is also shown how combining the zcu111 board with a secondary fpga board will allow all 8 adcs and 8 dacs to be utilised, providing enough bandwidth to read up to 8, 000 mkids per board - set, an eight - fold improvement over the state - of - the - art, and important in pursuing 100, 000 pixel arrays. finally, the feasibility of extending the operational frequency range of mkids to the 5 - 10 ghz regime ( or possibly beyond ) is investigated, and some benefits and consequences of doing so are presented. | arxiv:2212.07938 |
convolution - based and transformer - based vision backbone networks process images into the grid or sequence structures, respectively, which are inflexible for capturing irregular objects. though vision gnn ( vig ) adopts graph - level features for complex images, it has some issues, such as inaccurate neighbor node selection, expensive node information aggregation calculation, and over - smoothing in the deep layers. to address the above problems, we propose a progressive vision graph ( pvg ) architecture for vision recognition task. compared with previous works, pvg contains three main components : 1 ) progressively separated graph construction ( psgc ) to introduce second - order similarity by gradually increasing the channel of the global graph branch and decreasing the channel of local branch as the layer deepens ; 2 ) neighbor nodes information aggregation and update module by using max pooling and mathematical expectation ( maxe ) to aggregate rich neighbor information ; 3 ) graph error linear unit ( graphlu ) to enhance low - value information in a relaxed form to reduce the compression of image detail information for alleviating the over - smoothing. extensive experiments on mainstream benchmarks demonstrate the superiority of pvg over state - of - the - art methods, e. g., our pvg - s obtains 83. 0 % top - 1 accuracy on imagenet - 1k that surpasses gnn - based vig - s by + 0. 9 with the parameters reduced by 18. 5 %, while the largest pvg - b obtains 84. 2 % that has + 0. 5 improvement than vig - b. furthermore, our pvg - s obtains + 1. 3 box ap and + 0. 4 mask ap gains than vig - s on coco dataset. | arxiv:2308.00574 |
current technological progress is driving quantum key distribution towards commercial and worldwide expansion. its capability to deliver unconditionally secure communication will be a fundamental feature in the next generations of telecommunication networks. nevertheless, demonstrations of qkd implementation in a real operating scenario and its coexistence with the classical telecom infrastructure are of fundamental importance for reliable exploitation. here we present a quantum key distribution application implemented over a classical fiber - based infrastructure. by exploiting just a single fiber cable for both the quantum and the classical channel and by using a simplified receiver scheme with just one single - photon detector, we demonstrate the feasibility of low - cost and ready - to - use quantum key distribution systems compatible with standard classical infrastructure. | arxiv:2109.13558 |
in this work, we study the problem of learning a single model for multiple domains. unlike the conventional machine learning scenario where each domain can have the corresponding model, multiple domains ( i. e., applications / users ) may share the same machine learning model due to maintenance loads in cloud computing services. for example, a digit - recognition model should be applicable to hand - written digits, house numbers, car plates, etc. therefore, an ideal model for cloud computing has to perform well at each applicable domain. to address this new challenge from cloud computing, we develop a framework of robust optimization over multiple domains. in lieu of minimizing the empirical risk, we aim to learn a model optimized to the adversarial distribution over multiple domains. hence, we propose to learn the model and the adversarial distribution simultaneously with the stochastic algorithm for efficiency. theoretically, we analyze the convergence rate for convex and non - convex models. to our best knowledge, we first study the convergence rate of learning a robust non - convex model with a practical algorithm. furthermore, we demonstrate that the robustness of the framework and the convergence rate can be further enhanced by appropriate regularizers over the adversarial distribution. the empirical study on real - world fine - grained visual categorization and digits recognition tasks verifies the effectiveness and efficiency of the proposed framework. | arxiv:1805.07588 |
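The minimax objective min over theta, max over p of the p-weighted sum of domain losses can be sketched with simultaneous updates: gradient descent on the model and exponentiated-gradient ascent on the adversarial domain distribution. This is a toy deterministic version of the paper's stochastic algorithm, with a scalar model and illustrative step sizes:

```python
import numpy as np

def robust_train(loss_grads, losses, theta0, steps=500, lr=0.05, lr_p=0.05):
    """Simultaneous updates for min_theta max_p sum_k p_k * L_k(theta):
    a gradient-descent step on theta using the p-weighted gradient,
    then a multiplicative-weights (exponentiated-gradient) ascent
    step on the domain distribution p over the simplex."""
    k = len(losses)
    p = np.full(k, 1.0 / k)
    theta = float(theta0)
    for _ in range(steps):
        g = sum(pi * lg(theta) for pi, lg in zip(p, loss_grads))
        theta -= lr * g
        p = p * np.exp(lr_p * np.array([l(theta) for l in losses]))
        p /= p.sum()
    return theta, p
```

With two quadratic domain losses (theta - 1)**2 and (theta + 1)**2, the robust solution equalizes the losses at theta = 0 with p = (0.5, 0.5), and the coupled dynamics spiral into that saddle point.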
what are the key - features that enable an information diffusion model to explain the inherent dynamic, and often competitive, nature of real - world propagation phenomena? in this paper we aim to answer this question by proposing a novel class of diffusion models, inspired by the classic linear threshold model, and built around the following aspects : trust / distrust in the user relationships, which is leveraged to model different effects of social influence on the decisions taken by an individual ; changes in adopting one or alternative information items ; hesitation towards adopting an information item over time ; latency in the propagation ; time horizon for the unfolding of the diffusion process ; and multiple cascades of information that might occur competitively. to the best of our knowledge, the above aspects have never been unified into the same lt - based diffusion model. we also define different strategies for the selection of the initial influencers to simulate non - competitive and competitive diffusion scenarios, particularly related to the problem of limitation of misinformation spread. results on publicly available networks have shown the meaningfulness and uniqueness of our models. | arxiv:1805.11303 |
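The classic linear threshold rule that these models build on is easy to state: a node activates once the total weight of its active in-neighbors reaches its random threshold. A single-cascade sketch (the paper's models add trust/distrust, hesitation, latency, and competing items on top of this core):

```python
import random

def linear_threshold(adj, seeds, seed=0):
    """One cascade of the classic Linear Threshold model.
    adj maps node -> list of (in_neighbor, weight); each node draws
    a uniform threshold in [0, 1) and activates when the summed
    weight of its active in-neighbors reaches it. Iterates to a
    fixed point and returns the final active set."""
    rng = random.Random(seed)
    thresholds = {v: rng.random() for v in adj}
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v, in_edges in adj.items():
            if v in active:
                continue
            w = sum(wt for u, wt in in_edges if u in active)
            if w >= thresholds[v]:
                active.add(v)
                changed = True
    return active
```

On a directed chain with unit weights, seeding the first node activates the whole chain regardless of the sampled thresholds.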
deep generative models have become increasingly effective at producing realistic images from randomly sampled seeds, but using such models for controllable manipulation of existing images remains challenging. we propose the swapping autoencoder, a deep model designed specifically for image manipulation, rather than random sampling. the key idea is to encode an image with two independent components and enforce that any swapped combination maps to a realistic image. in particular, we encourage the components to represent structure and texture, by enforcing one component to encode co - occurrent patch statistics across different parts of an image. as our method is trained with an encoder, finding the latent codes for a new input image becomes trivial, rather than cumbersome. as a result, it can be used to manipulate real input images in various ways, including texture swapping, local and global editing, and latent code vector arithmetic. experiments on multiple datasets show that our model produces better results and is substantially more efficient compared to recent generative models. | arxiv:2007.00653 |
convolutional neural network ( cnn ) based deep learning ( dl ) has achieved great progress in many real - life applications. meanwhile, due to complex model structures and strict latency and memory restrictions, implementing cnn models on resource - limited platforms is becoming more challenging. this work proposes a solution, called compactnet \ footnote { project url : \ url { https : / / github. com / compactnet / compactnet } }, which automatically optimizes a pre - trained cnn model on a specific resource - limited platform given a specific target of inference speedup. guided by a simulator of the target platform, compactnet progressively trims a pre - trained network by removing certain redundant filters until the target speedup is reached and generates an optimal platform - specific model while maintaining the accuracy. we evaluate our work on two platforms : a mobile arm cpu and a machine learning accelerator npu ( cambricon - 1a isa ) on a huawei mate10 smartphone. for the state - of - the - art slim cnn model made for the embedded platform, mobilenetv2, compactnet achieves up to a 1. 8x kernel computation speedup with equal or even higher accuracy for image classification tasks on the cifar - 10 dataset. | arxiv:1905.11669 |
materials with very low dc magnetic susceptibility have many scientific applications. to our knowledge, however, relatively little research has been conducted with the goal of producing a totally nonmagnetic material. by this phrase we mean a material that, after spatial averaging over macroscopic volumes, possesses zero average dc magnetic susceptibility. we report measurements of the dc magnetic susceptibility of three different types of nonmagnetic materials at room temperature : ( i ) solutions of paramagnetic salts and diamagnetic liquids, ( ii ) liquid gallium - indium alloys and ( iii ) pressed powder mixtures of tungsten and bismuth. the lowest measured magnetic susceptibility among these candidate materials is on the order of 10 ^ - 9 cgs volume susceptibility units, about two orders of magnitude smaller than distilled water. in all cases, the measured concentration dependence of the magnetic susceptibility is consistent with that expected for the weighted sum of the susceptibilities of the separate components within experimental error. these results verify the wiedemann additivity law and thereby realize the ability to produce materials with small but tunable magnetic susceptibility. for our particular scientific application, we are also looking for materials with the largest possible number of neutrons and protons per unit volume. the gallium - indium alloys fabricated and measured in this work possess to our knowledge the smallest ratio of volume magnetic susceptibility to nucleon number density per unit volume for a room temperature liquid, and the tungsten - bismuth pressed powder mixtures possess to our knowledge the smallest ratio of volume magnetic susceptibility to nucleon number density per unit volume for a room temperature solid. this ratio is a figure of merit for a certain class of precision experiments that search for possible exotic spin - dependent forces of nature. | arxiv:1506.09199 |
the warm ionized medium ( wim ), also referred to as diffuse ionized gas, contains most of the mass of interstellar medium in ionized form, contributing as much as 30 % of the total atomic gas mass in the solar neighborhood. the advent of ccds has enabled unprecedented study of this medium in external galaxies, probing a variety of environments. in particular, we can derive the morphology of the wim, its distribution across disks, and the correlation with other population i material. spectroscopy of the wim makes it possible to test various ionization models. i will review here our current understanding of the properties of the wim in spiral galaxies. a perhaps unexpected result is that the h - alpha emission from the wim contributes about 40 % of the total observed h - alpha luminosity from spirals. this places severe constraints on possible sources of ionization, since only photo ionization by ob stars meets this requirement. spectroscopic measurements of forbidden line strengths appear in reasonable agreement with photo ionization models. it is not yet clear if the lyman continuum photons that ionize the wim are mostly from ob stars located inside traditional hii regions, or from field ob stars. | arxiv:astro-ph/9707287 |
we estimate the bounds of box - counting dimension of hidden variable fractal interpolation functions ( hvfifs ) and hidden variable bivariate fractal interpolation functions ( hvbfifs ) with four function contractivity factors and present analytic properties of hvfifs which are constructed to ensure more flexibility and diversity in modeling natural phenomena. firstly, we construct the hvfifs and analyze their smoothness and stability. secondly, we obtain the lower and upper bounds of box - counting dimension of the hvfifs. finally, in a similar way, we get the lower and upper bounds of box - counting dimension of hvbfifs constructed in [ 21 ]. | arxiv:1904.10617 |
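as a self - contained illustration of the box - counting dimension used above ( for a classical set, not for the hvfifs themselves ), the sketch below estimates the dimension of a depth - limited middle - thirds cantor set using exact integer arithmetic ; the depth and scale are arbitrary illustrative choices :

```python
import math
from itertools import product

def box_count(int_points, block):
    # Number of boxes of `block` consecutive integers needed to cover the set.
    return len({p // block for p in int_points})

# Depth-8 approximation of the middle-thirds Cantor set, with each point
# encoded as an integer whose base-3 digits all lie in {0, 2}.
depth = 8
cantor = [sum(d * 3 ** (depth - 1 - i) for i, d in enumerate(digits))
          for digits in product((0, 2), repeat=depth)]

# Boxes of side 3^-k correspond to integer blocks of size 3^(depth - k).
k = 6
n_boxes = box_count(cantor, 3 ** (depth - k))
estimate = math.log(n_boxes) / math.log(3 ** k)  # slope log N / log(1/eps)
```

for this set the estimate equals log 2 / log 3, the exact dimension of the cantor set, because exactly 2^k boxes of side 3^-k are occupied.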
( a, b ) = ] a, b [ = { x ∈ r | a < x < b }, [ a, b ) = [ a, b [ = { x ∈ r | a ≤ x < b }, ( a, b ] = ] a, b ] = { x ∈ r | a < x ≤ b }, [ a, b ] = { x ∈ r | a ≤ x ≤ b }. each interval ( a, a ), [ a, a ), and ( a, a ] represents the empty set, whereas [ a, a ] denotes the singleton set { a }. when a > b, all four notations are usually taken to represent the empty set. both notations may overlap with other uses of parentheses and brackets in mathematics. for instance, the notation ( a, b ) is often used to denote an ordered pair in set theory, the coordinates of a point or vector in analytic geometry and linear algebra, or ( sometimes ) a complex number in algebra. that is why bourbaki introduced the notation ] a, b [ to denote the open interval. the notation [ a, b ] too is occasionally used for ordered pairs, especially in computer science. some authors such as yves tille use ] a, b [ to denote the complement of the interval ( a, b ) ; namely, the set of all real numbers that are either less than or equal to a, or greater than or equal to b. = = = infinite endpoints = = = in some contexts, an interval may be defined as a subset of the extended real numbers, the set of all real | https://en.wikipedia.org/wiki/Interval_(mathematics) |
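the four bracket conventions above can be collapsed into a single membership predicate ; a minimal sketch :

```python
def in_interval(x, a, b, left_closed=False, right_closed=False):
    # The two flags select among (a, b), [a, b), (a, b], and [a, b].
    lo = a <= x if left_closed else a < x
    hi = x <= b if right_closed else x < b
    return lo and hi
```

the degenerate cases behave as stated in the text : [ a, a ] contains exactly a, while the other three variants with a = b are empty.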
unmanned aerial vehicles ( uavs ) are widely used for object detection. however, the existing uav - based object detection systems are subject to a serious challenge, namely, finite computation, energy and communication resources, which limit the achievable detection performance. in order to overcome this challenge, a uav cognitive semantic communication system is proposed by exploiting a knowledge graph. moreover, a multi - scale compression network is designed for semantic compression to reduce data transmission volume while guaranteeing the detection performance. furthermore, an object detection scheme is proposed by using the knowledge graph to overcome channel noise interference and compression distortion. simulation results conducted on the practical aerial image dataset demonstrate that compared to the benchmark systems, our proposed system has superior detection accuracy, communication robustness and computation efficiency even under high compression rates and low signal - to - noise ratio ( snr ) conditions. | arxiv:2401.13995 |
we investigate the complexity of $ r $ - approval control problems in $ k $ - peaked elections, where at most $ k $ peaks are allowed in each vote with respect to an order of the candidates. we show that most np - hardness results in general elections also hold in k - peaked elections even for $ k = 2, 3 $. on the other hand, we derive polynomial - time algorithms for some problems for $ k = 2 $. all our np - hardness results apply to approval and sincere - strategy preference - based approval as well. our study leads to many dichotomy results for the problems considered in this paper, with respect to the values of $ k $ and $ r $. in addition, we study $ r $ - approval control problems from the viewpoint of parameterized complexity and achieve both fixed - parameter tractability results and w - hardness results, with respect to the solution size. along the way exploring the complexity of control problems, we obtain two byproducts which are of independent interest. first, we prove that every graph of maximum degree 3 admits a specific 2 - interval representation where every 2 - interval corresponding to a vertex contains a trivial interval and, moreover, 2 - intervals may only intersect at the endpoints of the intervals. second, we develop a fixed - parameter tractable algorithm for a generalized $ r $ - set packing problem with respect to the solution size, where each element in the given universal set is allowed to occur in more than one r - subset in the solution. | arxiv:1304.4471 |
we prove langlands functoriality for the generic spectrum of general spin groups ( both odd and even ). contrary to other recent instances of functoriality, our resulting automorphic representations on the general linear group will not be self - dual. together with cases of classical groups, this completes the list of cases of split reductive groups whose l - groups have classical derived groups. the important transfer from gsp ( 4 ) to gl ( 4 ) follows from our result as a special case. | arxiv:math/0411035 |
based on high contrast images obtained with the gemini planet imager ( gpi ), we report the discovery of two point - like sources at angular separations of $ \ rho \ sim0. 18 ' ' $ and $ \ rho \ sim0. 80 ' ' $ from the stars hd 29992 and hd 196385. a combined analysis of the new gpi observations and images from the literature indicates that the source close to hd 29992 could be a companion to the star. concerning hd 196385, the small number of contaminants ( $ \ sim0. 5 $ ) suggests that the detected source may be gravitationally bound to the star. for both systems, we discarded the presence of other potential companions with $ m > 75 $ m $ _ { \ rm jup } $ at $ \ rho \ sim0. 3 - 1. 3 ' ' $. from stellar model atmospheres and low - resolution gpi spectra, we derive masses of $ \ sim0. 2 $ - $ 0. 3 $ m $ _ { \ odot } $ for these sources. using a markov - chain monte carlo approach, we performed a joint fit of the new astrometry measurements and published radial velocity data to characterize the possible orbits. for hd 196385b, the median dynamic mass is in agreement with that derived from model atmospheres, whilst for hd 29992b, the orbital fit favors masses close to the brown dwarf regime ( $ \ sim0. 08 $ m $ _ { \ odot } $ ). hd 29992 and hd 196385 might be two new binary systems with m - type stellar companions. however, new high angular resolution images would help to definitively confirm whether the detected sources are gravitationally bound to their respective stars, and permit tighter constraints on the orbital parameters of both systems. | arxiv:2207.07435 |
recent successful adversarial attacks on face recognition show that, despite the remarkable progress of face recognition models, they are still far behind human intelligence in perception and recognition. this reveals the vulnerability of deep convolutional neural networks ( cnns ), the state - of - the - art building block for face recognition models, to adversarial examples, which can have serious consequences for secure systems. gradient - based adversarial attacks have been widely studied and proved to be successful against face recognition models. however, finding an optimized perturbation for each face requires submitting a significant number of queries to the target model. in this paper, we propose a recursive adversarial attack on face recognition using automatic face warping, which needs an extremely limited number of queries to fool the target model. instead of a random face warping procedure, the warping functions are applied on specific detected regions of the face, like the eyebrows, nose, lips, etc. we evaluate the robustness of the proposed method in the decision - based black - box attack setting, where the attackers have no access to the model parameters and gradients, but hard - label predictions and confidence scores are provided by the target model. | arxiv:2207.01149 |
cellular networks, such as 5g systems, are becoming increasingly complex for supporting various deployment scenarios and applications. embracing artificial intelligence ( ai ) in 5g evolution is critical to managing the complexity and fueling the next quantum leap in 6g cellular networks. in this article, we share our experience and best practices in applying ai in cellular networks. we first present a primer on the state of the art of ai in cellular networks, including basic concepts and recent key advances. then we discuss 3gpp standardization aspects and share various design rationales influencing standardization. we also present case studies with real network data to showcase how ai can improve network performance and enable network automation. | arxiv:2111.10663 |
in this paper, we characterize the boundedness and compactness of differences of weighted composition operators from weighted bergman spaces $ a ^ p _ \ omega $ induced by a doubling weight $ \ omega $ to lebesgue spaces $ l ^ q _ \ mu $ on the unit ball for full $ 0 < p, q < \ infty $, which extend many results on the unit disk. as a byproduct, a new characterization of $ q $ - carleson the measure for $ a ^ p _ \ omega $ in terms of the bergman metric ball is also presented. | arxiv:2407.15096 |
in 1991, mckay and radziszowski proved that, however each 3 - subset of a 13 - set is assigned one of two colours, there is some 4 - subset whose four 3 - subsets have the same colour. more than 25 years later, this remains the only non - trivial classical ramsey number known for hypergraphs. in this article, we find all the extremal colourings of the 3 - subsets of a 12 - set and list some of their properties. using the catalogue, we provide an answer to a question of dudek, fleur, mubayi and r \ " odl about the size - ramsey numbers of hypergraphs. | arxiv:1608.07762 |
quasi - periodic eruptions ( qpes ) are recurrent x - ray bursts found so far in the nuclei of low - mass galaxies. their trigger mechanism is still unknown, but recent models involving one or two stellar - mass companions around the central massive ( $ \ approx10 ^ 5 - 10 ^ 6 $ solar masses ) black hole have gathered significant attention. while these have been compared only qualitatively with observations, the phenomenology of qpes is developing at a fast pace, with the potential to reveal new insights. here we report two new observational results found in ero - qpe1, the brightest qpe source discovered so far : i ) the eruptions in ero - qpe1 occur sometimes as single isolated bursts, and at others as chaotic mixtures of multiple overlapping bursts with very different amplitudes ; ii ) we confirm that qpes peak at later times and are broader at lower energies than at higher energies while, for the first time, we find that qpes also start earlier at lower energies. furthermore, eruptions appear to undergo an anti - clockwise hysteresis cycle in a plane of hardness ratio versus total count rate. behavior i ) was not found before in any other qpe source and implies that if a common trigger mechanism is in place for all qpes, it must be able to produce both types of timing properties, regular and complex. result ii ) implies that the x - ray emitting component does not have an achromatic evolution even during the start of qpes, and that the rise is harder than the decay at a given total count rate. this specific energy dependence could be qualitatively compatible with inward radial propagation during the rise within a compact accretion flow, the presence of which is suggested by the stable quiescence spectrum observed in general for qpe sources. | arxiv:2203.11939 |
in 1960, payne and weinberger proved that among all domains that lie within a wedge ( an angle whose measure is less than or equal to $ \ pi $ ), and have a given value of a certain integral the circular sector has the lowest fundamental eigenvalue of the dirichlet laplacian. here, it is shown that an analogue of this assertion is true for domains with a cut and for indented domains ; that is, for those located in a reflex angle ( its measure is between $ \ pi $ and $ 2 \ pi $ ). | arxiv:1602.07447 |
staphylococcus aureus nasal carriage increases risk of infection and has been associated with lifestyle behavior and biological host characteristics. we used social network analysis to evaluate whether contacts have the same s. aureus genotype, or whether contagiousness is an indirect effect of contacts sharing the same lifestyle or characteristics. the fit futures 1 study collected data on social contact among 1038 first level students in the same high school district in norway. s. aureus persistent carriage was determined from two nasal swab cultures and genotype from spa - typing of a positive throat swab culture. bootstrap, t - tests, logistic regression, and autocorrelation were used to evaluate social network influence on host risk factors and s. aureus carriage. both persistent carriage and spa - type were transmitted in the social network ( p < 0. 001 ). the probability of carriage increased by 3. 7 % and 5. 0 % for each additional s. aureus positive friend, in univariable regression and multivariable autocorrelation analysis respectively. male sex was associated with a 15 % lower risk of transmission compared to women, although the prevalence of carriage was higher for men ( 36 % versus 24 % ). medium physical activity, medium and high alcohol - use, and normal - weight students had a higher number of contacts, and increased risk of transmission ( p < 0. 002 ). we demonstrate direct social transmission of s. aureus in a general youth population. lifestyle factors are associated with risk of transmission, suggesting indirect social group effects from having more similar environmental exposures. the male predominance in carriage is determined by sex - specific predisposing host characteristics, as social transmission is less frequent in males than in females. better understanding of how social interactions influence s. aureus carriage dynamics in the population is important for developing new preventive measures. | arxiv:2202.08794 |
a major challenge in gravitational - wave multi - messenger astrophysics is the imprecise localization of gravitational - wave compact binary mergers. we investigate the use of a method to include galaxy catalog information in performing parameter estimation of these events. we test its effectiveness with the gravitational - wave events gw170817, gw190425, and gw190814, as well as with simulated binary neutron star mergers. for gw170817, we recover the true host galaxy as the most probable galaxy after a straightforward mass reweighting, with significantly decreased localization area and volume. on the simulated sample, however, we do not find improvement compared to performing a simple galaxy catalog crossmatch with a regular gravitational wave localization. future investigations into sampling methods may yield improvements that increase the viability of this method. | arxiv:2410.14663 |
analog coding decouples the tasks of protecting against erasures and noise. for erasure correction, it creates an " analog redundancy " by means of band - limited discrete fourier transform ( dft ) interpolation, or more generally, by an over - complete expansion based on a frame. we examine the analog coding paradigm for the dual setup of a source with " erasure " side - information ( si ) at the encoder. the excess rate of analog coding above the rate - distortion function ( rdf ) is associated with the energy of the inverse of submatrices of the frame, where each submatrix corresponds to a possible erasure pattern. we give partial theoretical as well as numerical evidence that a variety of structured frames, in particular dft frames with difference - set spectrum and more general equiangular tight frames ( etfs ), with a common manova limiting spectrum, minimize the excess rate over all possible frames. however, they do not achieve the rdf even in the limit as the dimension goes to infinity. | arxiv:1602.03498 |
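the " analog redundancy " of band - limited dft interpolation can be seen in the smallest possible example : if the top dft bin of a length - n signal is forced to zero, that one linear constraint lets a single erased sample be recovered exactly. a sketch with arbitrary illustrative coefficients ( not the paper's construction ) :

```python
import cmath

n = 8
w = cmath.exp(2j * cmath.pi / n)  # primitive n-th root of unity

# Band-limited signal: the highest DFT bin (f = n - 1) is set to zero.
coeffs = [1.0, 2.0 - 1.0j, 0.5j, -1.0, 0.25, 1.5j, -0.5, 0.0]
x = [sum(coeffs[f] * w ** (f * t) for f in range(n)) / n for t in range(n)]

def recover(samples, m):
    # The constraint sum_t x[t] * w**(-(n-1)*t) == 0 determines the
    # erased sample x[m] from the n - 1 surviving ones.
    s = sum(samples[t] * w ** (-(n - 1) * t) for t in range(n) if t != m)
    return -s * w ** ((n - 1) * m)
```

erasing more samples requires more zeroed bins ( a deeper band limit ), and the conditioning of the corresponding submatrix is exactly the " energy of the inverse " that governs the excess rate.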
for any solvable lie group whose exponential map $ \ exp _ g \ colon { \ mathfrak g } \ to g $ is bijective, we prove that the real rank of $ c ^ * ( g ) $ is equal to $ \ dim ( { \ mathfrak g } / [ { \ mathfrak g }, { \ mathfrak g } ] ) $. we also indicate a proof of a similar formula for the stable rank of $ c ^ * ( g ) $, as well as some estimates on the ideal generated by the projections in $ c ^ * ( g ) $. | arxiv:1511.05533 |
in 1990, j. l. krivine introduced the notion of storage operator to simulate, in $ \ lambda $ - calculus, the " call by value " in a context of a " call by name ". j. l. krivine has shown that, using g \ " odel translation from classical into intuitionistic logic, we can find a simple type for storage operators in af2 type system. in this present paper, we give a general type for storage operators in a slight extension of af2. we give at the end ( without proof ) a generalization of this result to other types. | arxiv:0905.0549 |
in this paper we address a model coupling viscoplasticity with damage in thermoviscoelasticity. the associated pde system consists of the momentum balance with viscosity and inertia for the displacement variable, at small strains, of the plastic and damage flow rules, and of the heat equation. it has a strongly nonlinear character and in particular features quadratic terms on the right - hand side of the heat equation and of the damage flow rule, which have to be handled carefully. we propose two weak solution concepts for the related initial - boundary value problem, namely ` entropic ' and ` weak energy ' solutions. accordingly, we prove two existence results by passing to the limit in a carefully devised time discretization scheme. finally, in the case of a prescribed temperature profile, and under a strongly simplifying condition, we provide a continuous dependence result, yielding uniqueness of weak energy solutions. | arxiv:1701.00139 |
the topology of the electronic band structure of solids can be described by its berry curvature distribution across the brillouin zone. we theoretically introduce and experimentally demonstrate a general methodology based on the measurement of energy - and momentum - resolved optical transition rates, which allows signatures of berry curvature texture in reciprocal space to be revealed. by performing time - and angle - resolved photoemission spectroscopy of atomically thin wse $ _ 2 $ using polarization - modulated excitations, we demonstrate that excitons become an asset in extracting the quantum geometrical properties of solids. we also investigate the resilience of our measurement protocol against ultrafast scattering processes following direct chiroptical transitions. | arxiv:2308.09634 |
we present a new signature by which one could potentially discriminate between a spectrum of gravitational radiation generated by a self - ordering scalar field and that of inflation, specifically a comparison of the magnitude of a flat spectrum at frequencies probed by future direct detection experiments to the magnitude of a possible polarization signal in the cosmic microwave background ( cmb ) radiation. in the process we clarify several issues related to the proper calculation of such modes, focusing on the effect of post - horizon - crossing evolution. | arxiv:1003.1735 |
for an element in $ bs ( 1, n ) = \ langle t, a | tat ^ { - 1 } = a ^ n \ rangle $ written in the normal form $ t ^ { - u } a ^ vt ^ w $ with $ u, w \ geq 0 $ and $ v \ in \ mathbb { z } $, we exhibit a geodesic word representing the element and give a formula for its word length with respect to the generating set $ \ { t, a \ } $. using this word length formula, we prove that there are sets of elements of positive density of positive, negative and zero conjugation curvature, as defined by bar natan, duchin and kropholler. | arxiv:2006.14525 |
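the normal form $ t ^ { - u } a ^ vt ^ w $ can be checked concretely in the standard faithful affine representation of bs ( 1, n ), where t acts by x ↦ n·x and a by x ↦ x + 1 ; the uppercase - for - inverse word encoding below is an ad - hoc convention, and this sketch does not compute the word length formula itself :

```python
from fractions import Fraction

def compose(f, g):
    # Affine maps are stored as pairs (p, q) meaning x -> p*x + q;
    # returns the composition f o g.
    (p1, q1), (p2, q2) = f, g
    return (p1 * p2, p1 * q2 + q1)

def evaluate(word, n):
    # Evaluate a word over {t, a} (uppercase = inverse) in the affine
    # representation of BS(1, n) = <t, a | t a t^-1 = a^n>.
    gens = {
        't': (Fraction(n), Fraction(0)), 'T': (Fraction(1, n), Fraction(0)),
        'a': (Fraction(1), Fraction(1)), 'A': (Fraction(1), Fraction(-1)),
    }
    m = (Fraction(1), Fraction(0))  # identity map
    for c in word:
        m = compose(m, gens[c])
    return m
```

for n = 3, the defining relation gives evaluate ( 'taT', 3 ) = evaluate ( 'aaa', 3 ), and the normal form $ t ^ { - 1 } a ^ 2 t ^ 2 $ evaluates to the map x ↦ 3x + 2 / 3, matching $ x \ mapsto n ^ { w - u } x + v n ^ { - u } $.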
accurate physical parameters have been determined for two early - type detached eclipsing binaries in the open cluster h persei ( ngc 869 ). masses accurate to 1. 5 % are derived from high - resolution spectroscopy and radii accurate to 4 - - 6 % have been obtained from fitting the existing light curves. the four stars are placed in the mass radius plane and compared to the theoretical stellar models of the granada group. the best - fitting models have a low metallicity of approximately z = 0. 01 and a high helium abundance of y = 0. 34. this is the first determination of the bulk metallicity of the perseus double cluster. recent studies have assumed a solar metallicity so their results should be reviewed. | arxiv:astro-ph/0312506 |
thermopower measurements have been carried out on the ni substituted samples of na0. 75coo2 in the temperature range 4. 2k to 300k. the room temperature tep increases by 20microv / k even with 1 % ni substitution and systematically increases with increasing ni content upto 5 %. the increase in tep is accompanied by a decrease in rho, thus increasing the ratio of s ^ ( 2 ) / rho on ni substitution. at low t, the tep shows an anomaly in the substituted samples, showing a peak at t ~ 20k. | arxiv:cond-mat/0510342 |
in recent years, the buildings where we spend most of our life are rapidly evolving. they are becoming fully automated environments where energy consumption, access control, heating and many other subsystems are all integrated within a single system commonly referred to as smart building ( sb ). to support the growing complexity of building operations, building automation systems ( bas ) powering sbs are integrating consumer range internet of things ( iot ) devices such as ip cameras alongside operational technology ( ot ) controllers and actuators. however, these changes pose important cybersecurity concerns since the attack surface is larger, attack vectors are increasing and attacks can potentially harm building occupants. in this paper, we analyze the threat landscape of bass by focusing on subsystems which are strongly affected by the advent of iot devices such as video surveillance systems and smart lighting. we demonstrate how bas operation can be disrupted by simple attacks to widely used network protocols. furthermore, using both known and 0 - day vulnerabilities reported in the paper and previously disclosed, we present the first ( to our knowledge ) bas - specific malware which is able to persist within the bas network by leveraging both ot and iot devices connected to the bas. our research highlights how bas networks can be considered as critical as industrial control systems and security concerns in bass deserve more attention from both industrial and scientific communities. even within a simulated environment, our proof - of - concept attacks were carried out with relative ease and a limited amount of budget and resources. therefore, we believe that well - funded attack groups will increasingly shift their focus towards bass with the potential of impacting the lives of thousands of people. | arxiv:1912.02480 |
corrections of order $ t ^ 4 $ to vector and axial current correlators in qcd at a finite temperature $ t < t _ c $ are obtained using dispersion relations for the amplitudes of deep inelastic scattering on pions. their relation with the operator product expansion is presented. an interpretation of the results in terms of $ t $ - dependent meson masses is given : masses of $ \ rho $ and $ a _ 1 $ start to move with temperature in order $ t ^ 4 $. | arxiv:hep-ph/9405371 |
we construct a model in which four dimensional chiral fermions arise on the boundaries of a five dimensional lattice with free boundary conditions in the fifth direction. the physical content is similar to kaplan ' s model of domain wall fermions, yet the present construction has several technical advantages. we discuss some aspects of perturbation theory, as well as possible applications of the model both for lattice qcd and for the on - going attempts to construct a lattice chiral gauge theory. | arxiv:hep-lat/9303005 |
using the quantum molecular dynamics model, we aim to investigate the emission of light complex particles and the degree of stopping reached in heavy - ion collisions. we took incident energies between 50 and 1000 mev / nucleon. in addition, central and peripheral collisions and different masses are also considered. we observe that the light complex particles act in almost the same manner as the anisotropic ratio. in other words, the multiplicity of light complex particles is an indicator of global stopping in heavy - ion collisions. we see that maximum light complex particle emission and stopping are obtained for heavier masses in central collisions. | arxiv:1007.1859 |
we investigate an electron in the plane interacting with the magnetic field due to an electric current forming a localized rotationally symmetric vortex. we show that independently of the vortex profile an electron with spin antiparallel to the magnetic field can be trapped if the vortex current is strong enough. in addition, the electron scattering on the vortex exhibits resonances for any spin orientation. on the other hand, in distinction to models with a localized flux tube the present situation exhibits no bound states for weak vortices. | arxiv:cond-mat/9803166 |
while more organizations have been trying to move their infrastructure to the cloud in recent years, there have been significant challenges in how identities and access are managed in a hybrid cloud setting. this paper showcases a novel identity and access management framework for shared resources in a multi - tenant hybrid cloud environment. the paper demonstrates a method to implement the " mirror " identities of on - premise identities in the cloud. following the best security practices, the framework ensures that only rightful users can use their mirror identities in the cloud. furthermore, the paper also proposes a technique in scaling the framework to accommodate large - scale enterprises. the framework exhibited in the paper provides a comprehensive and scalable solution for enterprises to implement identity and access control in their hybrid cloud infrastructure. although the paper focuses on implementing the framework in google cloud platform, it can be easily applied to any major public cloud platform. | arxiv:2203.11463 |
fourier neural operator ( fno ) is a popular operator learning framework. it not only achieves the state - of - the - art performance in many tasks, but also is efficient in training and prediction. however, collecting training data for the fno can be a costly bottleneck in practice, because it often demands expensive physical simulations. to overcome this problem, we propose multi - resolution active learning of fno ( mra - fno ), which can dynamically select the input functions and resolutions to lower the data cost as much as possible while optimizing the learning efficiency. specifically, we propose a probabilistic multi - resolution fno and use ensemble monte - carlo to develop an effective posterior inference algorithm. to conduct active learning, we maximize a utility - cost ratio as the acquisition function to acquire new examples and resolutions at each step. we use moment matching and the matrix determinant lemma to enable tractable, efficient utility computation. furthermore, we develop a cost annealing framework to avoid over - penalizing high - resolution queries at the early stage. the over - penalization is severe when the cost difference is significant between the resolutions, which renders active learning often stuck at low - resolution queries and inferior performance. our method overcomes this problem and applies to general multi - fidelity active learning and optimization problems. we have shown the advantage of our method in several benchmark operator learning tasks. the code is available at https : / / github. com / shib0li / mra - fno. | arxiv:2309.16971 |
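the utility - cost ratio acquisition can be sketched as follows; the candidate pool, utility, and cost functions below are toy stand - ins for the paper's posterior - based utility and per - resolution simulation cost.

```python
import numpy as np

def select_query(candidates, utility_fn, cost_fn):
    """Pick the (input, resolution) pair maximizing utility / cost."""
    ratios = [utility_fn(x, r) / cost_fn(r) for (x, r) in candidates]
    return candidates[int(np.argmax(ratios))]

# toy stand-ins: utility grows with resolution, cost grows quadratically,
# so the cheap low-resolution query on the more informative input wins
candidates = [((0.1,), 1), ((0.1,), 2), ((0.5,), 1)]
best = select_query(candidates,
                    utility_fn=lambda x, r: r * (1.0 + x[0]),
                    cost_fn=lambda r: float(r ** 2))
print(best)  # ((0.5,), 1)
```

the paper's cost annealing would further downweight the cost term early in training, so that high - resolution queries are not over - penalized before the model has learned enough to justify them.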
the ease of using optical gain / loss provides a fertile ground for experimental explorations of non - hermitian ( nh ) physics. without gain / loss, can we realize the nh effect in a hermitian system? the interface between coupled hermitian subsystems is a natural object for nh physics due to the nonconservative process on it. however, endowing the interface with rich nh physics remains an open challenge. here, a junction between a topological insulator and a conductor is considered, where the interface can be effectively described by an nh hamiltonian ; such nh character is ascribed to the self - energy of the conductor acting as a reservoir. as a consequence, we show that wave propagation along the interface exhibits dissipative non - reciprocity ( dubbed non - bloch transport ), which was believed to be unique to nh systems. moreover, the meta - materialization of tight - binding models is also studied by identifying their equivalent connectivity, enabling us to demonstrate the above exotic nh behavior of the interface experimentally. our work provides a conceptually rich avenue for constructing nh systems for both optics and electronics. | arxiv:2408.16290 |
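a minimal numerical sketch of the self - energy mechanism, assuming a wide - band ( energy - independent ) reservoir coupling of strength gamma on a single interface site of a hermitian tight - binding chain; the paper's self - energy is instead derived from the conductor's green's function and is energy - dependent.

```python
import numpy as np

N, t, gamma = 6, 1.0, 0.5

# Hermitian tight-binding chain with nearest-neighbor hopping -t
H = np.zeros((N, N), dtype=complex)
for i in range(N - 1):
    H[i, i + 1] = H[i + 1, i] = -t

# reservoir self-energy in the wide-band limit: -i*gamma on the interface site
H_eff = H.copy()
H_eff[N - 1, N - 1] += -1j * gamma

# the effective Hamiltonian is non-Hermitian: eigenvalues acquire negative
# imaginary parts, i.e. finite lifetimes from leakage into the reservoir
eigvals = np.linalg.eigvals(H_eff)
print(np.all(eigvals.imag <= 1e-12))  # True: every mode decays
```

since the anti - hermitian part is negative semi - definite, every eigenvalue lies in the lower half of the complex plane, and the imaginary parts sum to exactly - gamma ( trace invariance ).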
we introduce a regression model for data on non - linear manifolds. the model describes the relation between a set of manifold - valued observations, such as shapes of anatomical objects, and euclidean explanatory variables. the approach is based on stochastic development of euclidean diffusion processes onto the manifold. defining the data distribution as the transition distribution of the mapped stochastic process, the parameters of the model, the non - linear analogues of the design matrix and intercept, are found via maximum likelihood. the model is intrinsically related to the geometry encoded in the connection of the manifold. we propose an estimation procedure which applies the laplace approximation of the likelihood function. a simulation study of the performance of the model is performed and the model is applied to a real dataset of corpus callosum shapes. | arxiv:1703.00291 |
sports highlights are becoming increasingly popular on video - sharing platforms. yet, crafting sports highlight videos is challenging: it requires producing engaging narratives from different angles and conforming to different platform affordances with constantly changing audiences. many content creators therefore create derivative work of the original sports video in manga style to enhance its expressiveness. but manually creating and inserting tailored manga - style content can still be time - consuming. we introduce sportoonizer, a system embedding a pipeline for the automatic generation of manga - style animations for highlights in sports videos and their insertion into the original videos. it seamlessly merges dynamic manga sequences with live - action footage, enriching the visual tapestry and deepening the narrative scope. by leveraging generative ai, sportoonizer crafts compelling storylines encapsulating the intensity of sports moments and athletes ' personal journeys. our evaluation study demonstrates that integrating manga b - rolls significantly enhances viewer engagement, visual interest, and emotional connection to athletes ' stories in the viewing experience. | arxiv:2409.13443 |
we consider the problem of stochastic monotone submodular function maximization, subject to constraints. we give results on adaptivity gaps, and on the gap between the optimal offline and online solutions. we present a procedure that transforms a decision tree ( adaptive algorithm ) into a non - adaptive chain. we prove that this chain achieves at least $ { \ tau } $ times the utility of the decision tree, over a product distribution and binary state space, where $ { \ tau } = \ min _ { i, j } \ pr [ x _ i = j ] $. this proves an adaptivity gap of $ 1 / { \ tau } $ ( which is $ 2 $ in the case of a uniform distribution ) for the problem of stochastic monotone submodular maximization subject to state - independent constraints. for a cardinality constraint, we prove that a simple adaptive greedy algorithm achieves an approximation factor of $ ( 1 - 1 / e ^ { \ tau } ) $ with respect to the optimal offline solution ; previously, it has been proven that the algorithm achieves an approximation factor of $ ( 1 - 1 / e ) $ with respect to the optimal adaptive online solution. finally, we show that there exists a non - adaptive solution for the stochastic max coverage problem that is within a factor $ ( 1 - 1 / e ) $ of the optimal adaptive solution and within a factor of $ { \ tau } ( 1 - 1 / e ) $ of the optimal offline solution. | arxiv:1504.02146 |
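the quantities in the cardinality - constraint guarantee are easy to compute; in the sketch below, probs[i][j] stands for pr[x_i = j], and the function name is illustrative.

```python
import math

def adaptivity_factors(probs):
    """Return tau = min_{i,j} Pr[x_i = j] and the greedy guarantee 1 - 1/e^tau."""
    tau = min(p for row in probs for p in row)
    return tau, 1.0 - math.exp(-tau)

# uniform binary states: tau = 1/2, so the adaptivity gap 1/tau is 2 and
# adaptive greedy is a (1 - 1/sqrt(e))-approximation against the offline optimum
tau, guarantee = adaptivity_factors([[0.5, 0.5], [0.5, 0.5]])
print(tau)  # 0.5
```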
almost sure bounds are established on the uniform error of smoothing spline estimators in nonparametric regression with random designs. some results of einmahl and mason ( 2005 ) are used to derive uniform error bounds for the approximation of the spline smoother by an " equivalent " reproducing kernel regression estimator, as well as for proving uniform error bounds on the reproducing kernel regression estimator itself, uniformly in the smoothing parameter over a wide range. this admits data - driven choices of the smoothing parameter. | arxiv:math/0612776 |
we present a volumetric mesh - based algorithm for flattening the placenta to a canonical template to enable effective visualization of local anatomy and function. monitoring placental function in vivo promises to support pregnancy assessment and to improve care outcomes. we aim to alleviate visualization and interpretation challenges presented by the shape of the placenta when it is attached to the curved uterine wall. to do so, we flatten the volumetric mesh that captures placental shape to resemble the well - studied ex vivo shape. we formulate our method as a map from the in vivo shape to a flattened template that minimizes the symmetric dirichlet energy to control distortion throughout the volume. local injectivity is enforced via constrained line search during gradient descent. we evaluate the proposed method on 28 placenta shapes extracted from mri images in a clinical study of placental function. we achieve sub - voxel accuracy in mapping the boundary of the placenta to the template while successfully controlling distortion throughout the volume. we illustrate how the resulting mapping of the placenta enhances visualization of placental anatomy and function. our code is freely available at https : / / github. com / mabulnaga / placenta - flattening. | arxiv:1903.05044 |
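the distortion measure can be illustrated per tetrahedron: for the jacobian j of the map restricted to a tet, the symmetric dirichlet energy is || j ||_f^2 + || j^{-1} ||_f^2, which in 3 - d attains its minimum of 6 exactly at rotations ( zero distortion ). a minimal sketch:

```python
import numpy as np

def symmetric_dirichlet(J):
    """Symmetric Dirichlet energy ||J||_F^2 + ||J^{-1}||_F^2 of a 3x3 Jacobian.

    The energy blows up as J approaches a degenerate (non-injective) map,
    which is why the paper pairs gradient descent on it with a constrained
    line search that preserves local injectivity.
    """
    Jinv = np.linalg.inv(J)
    return float(np.sum(J ** 2) + np.sum(Jinv ** 2))

print(symmetric_dirichlet(np.eye(3)))                    # 6.0: no distortion
print(symmetric_dirichlet(np.diag([2.0, 1.0, 1.0])))     # 8.25: stretching costs
```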
the identification capacity region of the compound broadcast channel is determined under an average error criterion, where the sender has no channel state information. we give single - letter identification capacity formulas for discrete channels and multiple - input multiple - output gaussian channels under an average input constraint. the capacity theorems apply to general discrete memoryless broadcast channels. this is in contrast to the transmission setting, where the capacity is only known for special cases, notably the degraded broadcast channel and the multiple - input multiple - output broadcast channel with private messages. furthermore, the identification capacity region of the compound multiple - input multiple - output broadcast channel can be larger than the transmission capacity region. this is a departure from the single - user behavior of identification, since the identification capacity of a single - user channel equals the transmission capacity. | arxiv:2110.15101 |
we establish a coupled fixed point theorem for a meaningful class of mixed monotone multivalued operators and then use it to derive some results on the existence of quasisolutions and solutions to first - order functional differential equations with state - dependent deviating arguments. our results are very general and can be applied to functional equations featuring discontinuities with respect to all of their arguments, but we emphasize that they are new even for differential equations with continuously state - dependent delays. | arxiv:1104.2159 |