text: string (lengths 1 to 3.65k)
source: string (lengths 15 to 79)
We study the occurrence of negative differential conductance induced by resonance effects in a model for a multilayer heterostructure. In particular, we consider a system consisting of several correlated and non-correlated monoatomic layers, sandwiched between two metallic leads. The geometry confines electrons in wells within the heterostructures, which are connected to each other and to the leads by tunneling processes. The non-equilibrium situation is produced by applying a bias voltage to the leads. Our results show that for specific values of the parameters resonance tunneling takes place. We investigate in detail its influence on the current-voltage characteristics. Our results are obtained via non-equilibrium real-space dynamical mean-field theory. As an impurity solver we use the so-called auxiliary master equation approach, which addresses the impurity problem within an auxiliary system consisting of a correlated impurity, a small number of uncorrelated bath sites, and two Markovian environments described by a generalized master equation.
arxiv:1607.05115
We study the effect of spin injection into $s$- and $d$-wave superconductors, with an emphasis on the interplay between boundary and bulk spin transport properties. The quantities of interest include the amount of non-equilibrium magnetization ($M$), as well as the induced spin-dependent current ($I_s$) and boundary voltage ($V_s$). In general, the Andreev reflection makes each of the three quantities depend on a different combination of the boundary and bulk contributions. The situation simplifies either for half-metallic ferromagnets or in the strong barrier limit, where both $V_s$ and $M$ depend solely on the bulk spin transport/relaxation properties. The implications of our results for the on-going spin injection experiments in high-$T_c$ cuprates are discussed.
arxiv:cond-mat/9901004
Fluorescence is a powerful means to probe information processing in the mammalian brain. However, neuronal tissues are highly heterogeneous and thus opaque to light. A wide set of non-invasive or invasive techniques for scattered-light rejection, optical sectioning, or localized excitation have been developed, but non-invasive optical recording of activity through a highly scattering layer beyond the ballistic regime has to date been impossible. Here, we show that functional signals from fluorescent time-varying sources located below a highly scattering tissue can be retrieved efficiently by exploiting matrix factorization algorithms to demix this information from low-contrast fluorescence speckle patterns.
arxiv:1906.02604
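The entry above recovers time-varying source signals by factorizing low-contrast speckle movies. Below is a minimal sketch of that demixing step using scikit-learn's NMF on synthetic data; the choice of NMF, the data sizes, and all variable names are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch: demixing time-varying fluorescent sources from speckle
# movies with non-negative matrix factorization (NMF). Synthetic data and
# all dimensions are illustrative; this is not the authors' exact pipeline.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_pixels, n_frames, n_sources = 2500, 400, 3

# Each hidden source has a fixed (unknown) speckle fingerprint on the camera
fingerprints = rng.random((n_pixels, n_sources))          # spatial patterns
traces = np.abs(rng.normal(size=(n_sources, n_frames)))   # activity over time
traces *= (rng.random((n_sources, n_frames)) > 0.7)       # sparse transients

movie = fingerprints @ traces + 0.05 * rng.random((n_pixels, n_frames))

# Factor the movie into non-negative spatial components x temporal traces
model = NMF(n_components=n_sources, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(movie)   # (n_pixels, n_sources) recovered fingerprints
H = model.components_            # (n_sources, n_frames) recovered activity

# Compare each recovered trace with the best-matching ground-truth trace
for k in range(n_sources):
    corr = max(abs(np.corrcoef(H[k], traces[j])[0, 1]) for j in range(n_sources))
    print(f"component {k}: best correlation with a true trace = {corr:.2f}")
```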
We give a generalization of weighted Weil heights. These heights generalize both Weil's heights and Dobrowolski's height. We study Northcott numbers for our heights. Our results generalize the authors' former work on Vidaux and Videla's question about the Northcott number. As an application, we evaluate Northcott numbers for Talamanca's spectral height on matrices.
arxiv:2308.03981
The screening length in a quark-gluon plasma, the dispersion relations of the thermal gluon self-energy, and the quark potential at high temperature are studied within the thermo field dynamics (TFD) framework. By calculating the real and imaginary parts of the gluon self-energy at one-loop order in thermo field dynamics, we obtain an expression for the screening length in a quark-gluon plasma and the dispersion relation between the real and imaginary parts. At high temperature, using photon exchange between electron and positron in a skeleton expansion and ladder approximation, the screened Coulomb potential is obtained, and using one-gluon and two-gluon exchange between a quark and antiquark, we get an expression for the screened quark potential up to $O(g$ ... amplitudes of a generic process taking place in a many-body system in equilibrium at temperature $T$. The relationship of the scattering and decay amplitudes as calculated in thermo field dynamics to the conventional techniques is established. It is shown that in many cases the calculations are relatively easy in TFD.
arxiv:hep-ph/0011182
The detection of Alzheimer's disease (AD) from clinical MRI data is an active area of research in medical imaging. Recent advances in quantum computing, particularly the integration of parameterized quantum circuits (PQCs) with classical machine learning architectures, offer new opportunities to develop models that may outperform traditional methods. However, quantum machine learning (QML) remains in its early stages and requires further experimental analysis to better understand its behavior and limitations. In this paper, we propose an end-to-end hybrid classical-quantum convolutional neural network (CQ-CNN) for AD detection using clinically formatted 3D MRI data. Our approach involves developing a framework to make 3D MRI data usable for machine learning, designing and training a brain-tissue segmentation model (Skull Net), and training a diffusion model to generate synthetic images for the minority class. Our converged models exhibit potential quantum advantages, achieving higher accuracy in fewer epochs than classical models. The proposed Beta8 3-qubit model achieves an accuracy of 97.50%, surpassing state-of-the-art (SOTA) models while requiring significantly fewer computational resources. In particular, the architecture employs only 13k parameters (0.48 MB), reducing the parameter count by more than 99.99% compared to current SOTA models. Furthermore, the diffusion-generated data used to train our quantum models, in conjunction with real samples, preserve clinical structural standards, representing a notable first in the field of QML. We conclude that CQ-CNN-like architectures, with further improvements in gradient optimization techniques, could become a viable option and even a potential alternative to classical models for AD detection, especially in data-limited and resource-constrained clinical settings.
arxiv:2503.02345
We construct the Type Ia supernovae (SNe Ia) luminosity function (LF) using the Zwicky Transient Facility Bright Transient Survey (BTS) catalogue. While this magnitude-limited survey has an unprecedented number of objects, it suffers from large distance uncertainties and lacks an estimation of host extinction. We bypass these issues by calculating the intrinsic luminosities from the shape parameters of the light curve's $g$ and $r$ bands, with the luminosities calibrated from the well-observed SNe Ia sample of the Carnegie Supernova Project, allowing us to construct, for the first time, the intrinsic LF of SNe Ia. We then use a novel tight relation between the color stretch and the synthesized $^{56}$Ni mass, $M_\mathrm{Ni56}$, to determine the $M_\mathrm{Ni56}$ distribution of SNe Ia. We find that the LFs are unimodal, with their peaks in line with previous results, but have a much lower rate of dim events and luminous events. We show that the features on top of the unimodal LF-derived distributions are all compatible with statistical noise, consistent with a single progenitor channel for the explosions. We further derive, for the first time, the SNe Ia distribution of host-galaxy extinction, and find a mean selective extinction of $E(B-V) \approx 0.1$ and a non-negligible fraction with large, $>1\,\text{mag}$, extinction in the optical bands. The high extinction is typical for luminous SNe, supporting their young population origin.
arxiv:2109.06219
This companion paper supports the replication of the fashion trend forecasting experiments with the KERN (Knowledge Enhanced Recurrent Network) method that we presented at ICMR 2020. We provide an artifact that allows the replication of the experiments using a Python implementation. The artifact is easy to deploy, with simple installation, training and evaluation. We reproduce the experiments conducted in the original paper and obtain similar performance as previously reported. The replication results of the experiments support the main claims in the original paper.
arxiv:2105.11826
The temperature and pressure dependence of the thermal displacements and lattice parameters were obtained across the $\gamma \to \alpha$ phase transition of Ce using high-pressure, high-resolution neutron and synchrotron x-ray powder diffraction. The estimated vibrational entropy change per atom in the $\gamma \to \alpha$ phase transition, $\Delta S^{\gamma-\alpha}_{\rm vib} \approx (0.75 \pm 0.15)\,k_{\rm B}$, is about half of the total entropy change. The bulk modulus follows a power-law pressure dependence which is well described using the framework of electron-phonon coupling. These results clearly demonstrate the importance of lattice vibrations, in addition to the spin and charge degrees of freedom, for a complete description of the $\gamma \to \alpha$ phase transition in elemental Ce.
arxiv:cond-mat/0308416
We describe the influence of hard-wall confinement and lateral dimension on the low-temperature transport properties of long diffusive channels and ballistic crosses fabricated in an InSb/In$_x$Al$_{1-x}$Sb heterostructure. Partially diffuse boundary scattering is found to play a crucial role in the electron dynamics of ballistic crosses and substantially enhance the negative bend resistance. Experimental observations are supported by simulations using a classical billiard-ball model, for which good agreement is found when diffuse boundary scattering is included.
arxiv:1009.3823
... that creative technology tools, though "widely available", are difficult to use for young populations. The first major corporation to have a corporate officer bearing the title Creative Technology was the Walt Disney Company, which gave it first to the Imagineer Bran Ferren in 1993, who eventually became Disney's President of Creative Technology in 1998. At about the same time, the first educational research center in the United States was created to bridge these disciplines across industry, academia and the defense communities, designated the University of Southern California's Institute for Creative Technologies. The ICT was established with funding by the US Army. Marketers and advertisers are also looking toward the power of creative technology to re-engage customers. The UK's Marketing Agencies Association, which launched a creative technology initiative in early 2015, is promoting creative technology as a way to build a more connected and personalized engagement with prospective customers. Industry associations and developers, arts organizations and agency creatives alike call for more investment in technology, which has lagged behind the sea change in the industry that is introducing more technology into creative fields, such as at Google. Many advertising agencies and other businesses have begun to create internal labs for research in creative technology. For example, Unilever created its Foundry project as a way for the company to "embrace the mentality of hacking, deploying and scaling"; they share their discoveries and view the lab as a way to incorporate technology into the company, drive experimentation and engage with strategic partners. The Adobe Creative Technologies Lab collaborated with the MIT Media Lab, one of the most notable endeavors in the creative technology field, to give artists the ability to draw geometric designs with a computer without having to master text-based programming or math. == Examples == "CreativeApplications.Net (CAN) is a community of creative practitioners working at the intersection of art, media and technology." A pepper grinder that disabled WiFi in the household when twisted was introduced by the head of creative technology at the agency Clemenger BBDO. ZKM has an annual prize for creative apps, the App Art Awards. ITP (the Interactive Telecommunications Program) has a class in "creative computing". "The Eyeo Festival brings together a rich intersection of people doing fascinating things with technology. Artists, data designers, creative coders, AI & XR explorers, storytellers, researchers, technology & platform developers all cross paths and share inspiration at Eyeo." Artist Jake Lee-High created an interactive street experience for the premiere of Showtime (TV network)'s Penny Dreadful.
https://en.wikipedia.org/wiki/Creative_technology
We introduce low-regularity exponential-type integrators for nonlinear Schrödinger equations for which first-order convergence only requires the boundedness of one additional derivative of the solution. More precisely, we will prove first-order convergence in $H^r$ for solutions in $H^{r+1}$ ($r > d/2$) of the derived schemes. This allows us to make lower regularity assumptions on the data in the energy space than are, for instance, required for classical splitting or exponential integration schemes. For one-dimensional quadratic Schrödinger equations we can even prove first-order convergence without any loss of regularity. Numerical experiments underline the favorable error behavior of the newly introduced exponential-type integrators for low-regularity solutions compared to classical splitting and exponential integration schemes.
arxiv:1603.07746
We discuss the interplay between spectral shape and detector response beyond a simple $E^{-2}$ neutrino flux at neutrino telescopes, using the example of time-integrated point source searches with IceCube-40 data. We use a self-consistent model for the neutrino production, in which protons interact with synchrotron photons from co-accelerated electrons, and we fully take into account the relevant pion and kaon production modes, the flavor composition at the source, flavor mixing, and magnetic field effects on the secondaries (pions, muons, and kaons). Since some of the model parameters can be related to the Hillas parameters $R$ (size of the acceleration region) and $B$ (magnetic field), we relate the detector response to the Hillas plane. In order to compare the response to different spectral shapes, we use the energy flux density as a measure for the pion production efficiency times the luminosity of the source. We demonstrate that IceCube has a very good reach in this quantity for AGN nuclei and jets for all source declinations, while the spectra of sources with strong magnetic fields are found outside the optimal reach. We also demonstrate where neutrinos from kaon decays and muon tracks from tau decays can be relevant for the detector response. Finally, we point out the complementarity between IceCube and other experiments sensitive to high-energy neutrinos, using the example of 2004-2008 Earth-skimming neutrino data from Auger. We illustrate that Auger is, in principle, more sensitive to the parameter region in the Hillas plane from which the highest-energy cosmic rays may be expected in this model.
arxiv:1103.4266
In this paper we show that the space of nodal rational curves, the so-called Severi variety (of rational curves), on any non-singular projective surface is always equipped with a natural Einstein-Weyl structure, if the space is 3-dimensional. This is a generalization of the Einstein-Weyl structure on the space of smooth rational curves on a complex surface, given by N. Hitchin. As geometric objects naturally associated to the Einstein-Weyl structure, we investigate null surfaces and geodesics on the Severi varieties. We also see that if the projective surface has an appropriate real structure, then the real locus of the Severi variety becomes a positive definite Einstein-Weyl manifold. Moreover, we construct various explicit examples of rational surfaces having 3-dimensional Severi varieties of rational curves.
arxiv:0901.2264
The atmospheric parameters and chemical abundances of two neglected A-type stars, 28 Peg and HD 202240, were derived using high-resolution spectra obtained at the TÜBİTAK National Observatory. We determined the photospheric abundances of eleven elements for 28 Peg and twenty for HD 202240, using equivalent-width measurement and spectral synthesis methods. Their abundance patterns are in good agreement with those of chemically normal A-type stars having similar atmospheric parameters. We pinpoint the position of these stars on the H-R diagram and estimate their masses and ages as $2.60\pm0.10\,M_\odot$ and $650\pm50$ Myr for 28 Peg, and $4.50\pm0.09\,M_\odot$ and $150\pm10$ Myr for HD 202240. To compare our abundance determinations with those of stars having similar ages and atmospheric parameters, we select members of open clusters. We notice that our target stars exhibit abundance patterns similar to those of these members.
arxiv:1507.00475
Starting with the 2004 recall referendum, an important opposition sector to President Chávez has questioned the integrity of the Venezuelan electoral system and cast doubt on the legitimacy and impartiality of the upcoming 2012 presidential elections on October 7. After carrying out a forensic analysis of Venezuelan elections and referendums held from 1998 until 2012, we reach two controversial conclusions: on one hand, we cannot rule out the hypothesis of fraud in elections run by the current regime; on the other, if fraud has been committed, it has not been decisive in the results of past elections. In other words, the winner would have been the same in clean elections. Only in a scenario of tight results, as the 2012 elections could be, would fraud constitute a decisive factor.
arxiv:1209.3795
Exotic decays of the Standard Model-like Higgs boson into beyond-the-Standard-Model particles are predicted in a wide range of well-motivated theories. The enormous samples of Higgs bosons that have been and will be produced at the Large Hadron Collider thus constitute one of the key discovery opportunities at that facility, particularly in the upcoming high-statistics high-luminosity run. Here we review recent theoretical work on models that predict or accommodate exotic Higgs decays, review the status of current experimental searches, and look forward to future capabilities at dedicated Higgs factories and beyond.
arxiv:2111.12751
We give two results concerning the construction of modular invariant partition functions for conformal field theories constructed by tensoring together other conformal field theories. First we show how the possible modular invariants for the tensor product theory are constrained if the allowed modular invariants of the individual conformal field theory factors have been classified. We illustrate the use of these constraints for theories of the type $SU(2)_{k_A} \otimes SU(2)_{k_B}$, finding all consistent theories for $k_A, k_B$ odd. Second we show how known diagonal modular invariants can be used to construct some inherently asymmetric ones where the holomorphic and anti-holomorphic theories do not share the same chiral algebra. Some explicit examples are given.
arxiv:hep-th/9211073
Using the formula found by Noorbala and Sepehrinia, the wave deviation in an inhomogeneous medium with a continuous variation of the propagation velocity is deduced. For electromagnetic waves (light) that propagate in the gravitational field, the deduced deviation is identical to that calculated from general relativity. The method and its consequences are a good example verifying the Noorbala-Sepehrinia formula as well as the mechano-optical analogy (Hamilton's principle / principle of stationary action and Fermat's principle) for the motion of bodies in the gravitational field.
arxiv:1810.07029
The Variable Star Network (VSNET, http://www.kusastro.kyoto-u.ac.jp/vsnet/) is a global professional-amateur network of researchers in variable stars and related objects, particularly transient objects such as cataclysmic variables, black hole binaries, supernovae and gamma-ray bursts. VSNET has been playing a pioneering role in establishing the field of "transient object astronomy" by effectively incorporating modern advances in observational astronomy and the global electronic network, as well as collaborative progress in theoretical astronomy and astronomical computing. VSNET is now one of the best-featured global networks in this field of astronomy. We review the historical progress, design concept, associated technology, and a wealth of scientific achievements powered by VSNET.
arxiv:astro-ph/0310209
This article describes an approach to incorporating expert opinion on observable quantities through the use of a loss function which updates a prior belief, as opposed to specifying parameters on the priors. Eliciting information on observable quantities allows experts to provide meaningful information on a quantity familiar to them, in contrast to elicitation on model parameters, which may be subject to interactions with other parameters or non-linear transformations before an observable quantity is obtained. The approach to incorporating expert opinion described in this paper is distinctive in that we do not specify a prior to match an expert's opinion on the observed quantity; rather, we obtain a posterior by updating the model parameters through a loss function. This loss function contains the observable quantity, expressed as a function of the parameters, and is related to the expert's opinion, which is typically operationalized as a statistical distribution. Parameters which generate observable quantities that are further from the expert's opinion incur a higher loss, allowing the model parameters to be estimated based on their fidelity to both the data and the expert opinion, with the relative strength determined by the number of observations and the precision of the elicited belief. Including expert opinion in this fashion allows for a flexible specification of the opinion and in many situations is straightforward to implement with commonly used probabilistic programming software. We highlight this using three worked examples of varying model complexity, including survival models, a multivariate normal distribution and a regression problem.
arxiv:2302.06391
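The entry above penalizes parameters whose implied observable falls far from the expert's elicited distribution, alongside the data likelihood. A minimal sketch of that idea follows; the normal data model, the choice of the mean as the observable, and the equal weighting of the two terms are illustrative assumptions, not the paper's formulation.

```python
# Minimal sketch: fold an expert's opinion about an *observable* quantity
# into parameter estimation via a loss term, rather than via the prior.
# The model, the observable, and the weighting are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
data = rng.normal(loc=2.0, scale=1.0, size=30)

# Expert believes the *observable* mean response is about 3 +/- 0.5
expert = norm(loc=3.0, scale=0.5)

def loss(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    nll_data = -norm(mu, sigma).logpdf(data).sum()   # fidelity to the data
    observable = mu                                  # expert-facing quantity
    nll_expert = -expert.logpdf(observable)          # fidelity to the expert
    return nll_data + nll_expert

fit = minimize(loss, x0=[0.0, 0.0], method="Nelder-Mead")
print("point estimate of mu:", fit.x[0], " sigma:", np.exp(fit.x[1]))
```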
Representations of the Poincaré symmetry are studied by using a Hilbert space with a phase space content. The states are described by wave functions (quasi-amplitudes of probability) associated with Wigner functions (quasi-probability densities). The gauge symmetry analysis provides a realization of the Seiberg-Witten gauge theory for noncommutative fields.
arxiv:1402.1446
We prove that free pre-Lie algebras, when considered as Lie algebras, are free. Working in the category of S-modules, we define a natural filtration on the space of generators. We also relate the symmetric group action on generators with the structure of the anticyclic pre-Lie operad.
arxiv:0704.2153
We report on the hadron mass spectrum obtained on a $16^3 \times 40$ lattice in full QCD at $\beta = 5.7$ using two flavors of staggered fermions with $ma = 0.01$. We study the effective mass plateaus for different sized sources. Our mass results are slightly lighter than our earlier $16^3 \times 32$ calculation. The Landau gauge $\Delta$ is quite different from the Coulomb gauge $\Delta$.
arxiv:hep-lat/9412069
We consider group control by adding individuals (GCAI) in the setting of group identification for two procedural rules: the consensus-start-respecting rule and the liberal-start-respecting rule. It is known that GCAI for both rules is NP-hard, but whether the problems are fixed-parameter tractable with respect to the number of distinguished individuals remained open. We resolve both open problems in the affirmative. In addition, we strengthen the NP-hardness of GCAI by showing that, with respect to the natural parameter of the number of added individuals, GCAI for both rules is W[2]-hard. Notably, the W[2]-hardness for the liberal-start-respecting rule holds even when restricted to a very special case where the qualifications of individuals satisfy the so-called consecutive ones property. However, for the consensus-start-respecting rule, the problem becomes polynomial-time solvable in this special case. We also study a dual restriction where the disqualifications of individuals fulfill the consecutive ones property, and show that under this restriction GCAI for both rules turns out to be polynomial-time solvable. Our reductions for showing W[2]-hardness also imply several lower bounds concerning kernelization and exact algorithms.
arxiv:2203.16872
In this article we prove upper bounds for the Laplace eigenvalues $\lambda_k$ below the essential spectrum for strictly negatively curved Cartan-Hadamard manifolds. Our bound is given in terms of $k^2$ and specific geometric data of the manifold. This applies also to the particular case of non-compact manifolds whose sectional curvature tends to $-\infty$, where no essential spectrum is present due to a theorem of Donnelly and Li. The result stands in clear contrast to Laplacians on graphs, where such a bound fails to be true in general.
arxiv:1706.02437
Polarisation imaging is used to distinguish objects and surface characteristics that are otherwise not visible with black-and-white or colour imaging. Full-Stokes polarisation imaging allows complex image processing like water glint filtering, which is particularly useful for remote Earth observations. The relatively low cost of small satellites makes their use in remote sensing more accessible. However, their size and weight limitations cannot accommodate the bulky conventional optics needed for full-Stokes polarisation imaging. We present the modelling of an ultra-thin topology-optimised diffractive metasurface that encodes polarisation states in five different diffraction orders. Positioning the metasurface in a telescope's pupil plane allows the diffraction orders to be imaged onto a single detector, resulting in the capability to perform single-shot full-Stokes polarisation imaging of the Earth's surface. The five rectangular image swaths are designed to use the full width of the camera, and each successive frame can then be stitched together as the satellite moves over the Earth's surface, restoring the full field of view achievable with any chosen camera without compromising the on-ground resolution. Each set of four out of the five orders enables the reconstruction of the full polarisation state, and their simultaneous reconstructions allow for error monitoring. The lightweight design and compact footprint of the polarisation imaging optical system achievable with a metasurface is a novel approach to increasing the functionality of small satellites while working within their weight and volume constraints.
arxiv:2412.06132
An accurate understanding of a user's query intent can help improve the performance of downstream tasks such as query scoping and ranking. In the e-commerce domain, recent work in query understanding focuses on the query-to-product-category mapping. But a small yet significant percentage of queries (on our website, 1.5% or 33M queries in 2019) have non-commercial intent associated with them. These intents are usually associated with non-commercial information-seeking needs such as discounts, store hours, installation guides, etc. In this paper, we introduce Joint Query Intent Understanding (JointMap), a deep learning model to simultaneously learn two different high-level user intent tasks: 1) identifying a query's commercial vs. non-commercial intent, and 2) associating a set of relevant product categories in a taxonomy with a product query. The JointMap model works by leveraging the transfer bias that exists between these two related tasks through a joint-learning process. As curating a labeled data set for these tasks can be expensive and time-consuming, we propose a distant supervision approach in conjunction with an active learning model to generate high-quality training data sets. To demonstrate the effectiveness of JointMap, we use search queries collected from a large commercial website. Our results show that JointMap significantly improves both "commercial vs. non-commercial" intent prediction and product category mapping, by 2.3% and 10% on average over state-of-the-art deep learning methods. Our findings suggest a promising direction for modeling intent hierarchies in an e-commerce search engine.
arxiv:2005.13783
Temporal graphs are graphs with time-stamped edges. We study the problem of finding a small vertex set (the separator) with respect to two designated terminal vertices such that the removal of the set eliminates all temporal paths connecting one terminal to the other. Herein, we consider two models of temporal paths: paths that pass through arbitrarily many edges per time step (non-strict) and paths that pass through at most one edge per time step (strict). Regarding the number of time steps of a temporal graph, we show a complexity dichotomy (NP-hardness versus polynomial-time solvability) for both problem variants. Moreover, we prove both problem variants to be NP-complete even on temporal graphs whose underlying graph is planar. We further show that, on temporal graphs with a planar underlying graph, if additionally the number of time steps is constant, then the problem variant for strict paths is solvable in quasi-linear time. Finally, we introduce and motivate the notion of a temporal core (vertices whose incident edges change over time). We prove that the non-strict variant is fixed-parameter tractable when parameterized by the size of the temporal core, while the strict variant remains NP-complete even for constant-size temporal cores.
arxiv:1711.00963
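The entry above distinguishes strict temporal paths (time labels must increase along the path) from non-strict ones (labels may repeat within a time step). Below is a small reachability check contrasting the two notions; the edge format, function name, and the tiny example graph are my own illustrative choices.

```python
# Minimal sketch: temporal reachability under the two path models from the
# abstract. A temporal edge is (u, v, t); "strict" forces time labels to
# increase along a path, "non-strict" only forbids them from decreasing.
from collections import defaultdict

def temporal_reachable(edges, source, target, strict=True):
    by_node = defaultdict(list)
    for u, v, t in edges:
        by_node[u].append((v, t))
        by_node[v].append((u, t))          # undirected temporal graph
    # best[v] = earliest known arrival time at v
    best = {source: float("-inf")}
    frontier = [(source, float("-inf"))]
    while frontier:
        u, arrival = frontier.pop()
        for v, t in by_node[u]:
            ok = t > arrival if strict else t >= arrival
            if ok and t < best.get(v, float("inf")):
                best[v] = t
                frontier.append((v, t))
    return target in best

edges = [("s", "a", 1), ("a", "b", 1), ("b", "z", 2)]
print(temporal_reachable(edges, "s", "z", strict=False))  # True: s-a-b at t=1, b-z at t=2
print(temporal_reachable(edges, "s", "z", strict=True))   # False: a-b cannot reuse t=1
```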
Hirota's discrete Korteweg-de Vries equation (dKdV) is an integrable partial difference equation on the two-dimensional integer lattice, which approaches the Korteweg-de Vries equation in a continuum limit. We find new transformations to other equations, including a second-degree second-order partial difference equation, which provide an unusual embedding into a three-dimensional lattice. The consistency of the resulting system extends a property that has been widely used to study partial difference equations on multidimensional lattices.
arxiv:2102.00684
Condensed matter systems with topological order and metamaterials with left-handed chirality have recently attracted extensive interest in the fields of physics and optics. So far the two fields have been independent, and no work has addressed their connection. Here we propose to establish the relation between the topological order in condensed matter systems and the chirality in metamaterials, by mapping Maxwell's equations explicitly to the Dirac equation in one dimension. We report an experimental implementation of the band inversion in the Dirac equation, which accompanies a change of chirality of the electromagnetic wave in metamaterials, and the first microwave measurement of topological excitations and topological phases in one dimension. Our finding provides a proof-of-principle example that electromagnetic waves in metamaterials can be used to simulate the topological order in condensed matter systems and quantum phenomena in relativistic quantum mechanics in a controlled laboratory environment.
arxiv:1211.5413
The energy-based stochastic extension of the Schrödinger equation is a rather special nonlinear stochastic differential equation on Hilbert space, involving a single free parameter, that has been shown to be very useful for modelling the phenomenon of quantum state reduction. Here we construct a general closed-form solution to this equation, for any given initial condition, in terms of a random variable representing the terminal value of the energy and an independent Brownian motion. The solution is essentially algebraic in character, involving no integration, and is thus suitable as a basis for efficient simulation studies of state reduction in complex systems.
arxiv:quant-ph/0203035
We are concerned with the design of model predictive control (MPC) schemes such that asymptotic stability of the resulting closed loop is guaranteed even if the linearization at the desired set point fails to be stabilizable. Therefore, we propose to construct the stage cost based on the homogeneous approximation and rigorously show that applying MPC yields an asymptotically stable closed-loop behavior if the homogeneous approximation is asymptotically null controllable. To this end, we verify cost controllability - a condition relating the current state, the stage cost, and the growth of the value function w.r.t. time - for this class of systems in order to provide stability and performance guarantees for the proposed MPC scheme without stabilizing terminal costs or constraints.
arxiv:1906.05112
We study nonequilibrium phase transitions in a mass-aggregation model which allows for diffusion, aggregation on contact, dissociation, adsorption and desorption of unit masses. We analyse two limits explicitly. In the first case mass is locally conserved, whereas in the second case local conservation is violated. In both cases the system undergoes a dynamical phase transition in all dimensions. In the first case, the steady-state mass distribution decays exponentially for large mass in one phase, and develops an infinite aggregate in addition to a power-law mass decay in the other phase. In the second case, the transition is similar except that the infinite aggregate is missing.
arxiv:cond-mat/9806353
A ubiquitous feature of living cells is their growth over time followed by division into daughter cells. How isogenic cell populations maintain size homeostasis, i.e., a narrow distribution of cell size, is an intriguing fundamental problem. We model cell size using a stochastic hybrid system, where a cell grows exponentially in size (volume) over time and probabilistic division events are triggered at discrete time intervals. Moreover, whenever division events occur, size is randomly partitioned among daughter cells. We first consider a scenario where a timer (i.e., cell-cycle clock) that measures the time since the last division event regulates both the cellular growth and division rates. Analysis reveals that such a timer-controlled system cannot achieve size homeostasis, in the sense that the cell-to-cell size variation grows unboundedly with time. To explore biologically meaningful mechanisms for controlling size, we consider two classes of regulation: a size-dependent growth rate and a size-dependent division rate. Our results show that these strategies can provide bounded intercellular variation in cell size, and exact mathematical conditions on the form of regulation needed for size homeostasis are derived. Different known forms of size-control strategies, such as the adder and the sizer, are shown to be consistent with these results. Interestingly, for timer-based division mechanisms, the mean cell size depends on the noise in the cell-cycle duration but is independent of errors incurred in partitioning volume among daughter cells. In contrast, the mean cell size decreases with increasing partitioning errors for size-based division mechanisms. Finally, we discuss how organisms ranging from bacteria to mammalian cells have adopted different control approaches for maintaining size homeostasis.
arxiv:1606.00535
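The entry above contrasts timer-like division control, which lets cell-to-cell size variation grow without bound, with size-dependent control, which bounds it. The sketch below simulates exponential growth with stochastic division and noisy partitioning under the two controls; all rate forms, constants, and the Beta partitioning noise are illustrative assumptions, not the paper's exact model.

```python
# Minimal sketch of the growth/division picture in the abstract: exponential
# growth in size, probabilistic division, noisy ~1/2 partitioning.
# Division-rate forms and all rate constants are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)

def simulate(control, n_steps=200_000, dt=0.001, growth=np.log(2)):
    size, newborn_sizes = 1.0, []
    for _ in range(n_steps):
        size *= np.exp(growth * dt)               # exponential growth
        if control == "timer-like":
            rate = 1.0                            # division rate ignores size
        else:                                     # "sizer"-like control
            rate = size ** 2                      # larger cells divide faster
        if rng.random() < rate * dt:              # division event
            size *= rng.beta(20, 20)              # noisy ~1/2 partitioning
            newborn_sizes.append(size)
    return np.array(newborn_sizes)

for control in ("timer-like", "sizer"):
    s = simulate(control)
    print(f"{control:>10}: CV of newborn size = {s.std() / s.mean():.2f}")
```

With the size-independent rate the log-size performs a random walk, so the coefficient of variation keeps growing with simulation length, while the size-dependent rate keeps it bounded, mirroring the qualitative claim in the abstract.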
The nuclear waste problem is one of the main interests of rare-earth and actinide element chemistry. Studies of actinide-containing compounds are at the frontier of the applications of current theoretical methods due to the need to consider relativistic effects and approximations to the Dirac equation in them. Here, we employ four-component relativistic quantum calculations and scalar approximations to understand the contribution of f-type atomic orbitals to the chemical bonding of actinides (Ac) to organic ligands. We studied the relativistic quantum structure of an isostructural family made of plutonium (Pu), americium (Am), californium (Cf), and berkelium (Bk) atoms with the redox-active model ligand DOPO (2,4,6,8-tetra-tert-butyl-1-oxo-1H-phenoxazin-9-olate). Crystallographic structures were available to validate our calculations for all mentioned elements except for Cf. In short, state-of-the-art relativistic calculations were performed at different levels of theory to investigate the relativistic effects and electron correlations on the geometrical structures and bonding energies of $Ac$-DOPO$_3$ complexes ($Ac$ = Pu, Am, Cf, Bk): 1) the scalar relativistic zeroth-order regular approximation (ZORA) within hybrid density functional theory (DFT), and 2) the four-component Dirac equation with the Dirac-Hartree-Fock (4c-DHF) and Lévy-Leblond (LL) Hamiltonians. We show that scalar DFT-ZORA can be used as an efficient theoretical approximation to first approximate the geometry and electronic properties of actinides which are difficult to synthesize or characterize, bearing in mind that higher levels of theory, like 4c-DHF, give results closer to experiments than scalar DFT-ZORA. We also performed spin-free calculations of geometric parameters for the americium and berkelium compounds.
arxiv:2108.06057
We perform fully general-relativistic hydrodynamics simulations of binary neutron star mergers over $100\,\rm ms$ post-merger to investigate the dynamics of remnant massive neutron stars (NSs). Our focus is mainly on the analysis of the convective stability and mode characteristics of the massive NSs. We derive stability criteria for hot, differentially rotating relativistic stars that account for both buoyant and rotational restoring forces, and apply them for the first time to the post-merger massive NSs. Our results show no evidence of large-scale convective instability, as both the angle-averaged specific entropy and specific angular momentum increase outward within the massive NSs. Rotational effects significantly enhance stability for local regions that would otherwise be unstable by the Schwarzschild criterion. Additionally, our mode analysis of matter fields and gravitational waves reveals no excitation of inertial modes after the damping of the quadrupolar $f$-modes in the massive NSs, contrasting with previous studies. As in many previous works, we observe the excitation of an $m=1$ one-armed mode. However, we also find that the growth of the $m=1$ mode amplitude after the merger may correlate strongly with the violation of linear momentum conservation, indicating that we cannot reject the possibility that the excitation of the one-armed mode has a numerical origin.
arxiv:2501.19053
Due to the growing number of cyber attacks against computer systems, we need to pay special attention to the security of our software systems. In order to maximize effectiveness, excluding the human component from this process would be a huge breakthrough. The first step towards this is to automatically recognize the vulnerable parts of our code. Researchers have put a lot of effort into creating machine learning models that could determine if a given piece of code, or to be more precise a selected function, contains any vulnerabilities or not. We aim at improving the existing models, building on previous results in predicting vulnerabilities at the level of functions in JavaScript code using well-known static source code metrics. In this work, we propose to include several so-called process metrics (e.g., code churn, number of developers modifying a file, or the age of the changed source code) in the set of features, and examine how they affect the performance of function-level JavaScript vulnerability prediction models. We can confirm that process metrics significantly improve the prediction power of such models. On average, we observed an 8.4% improvement in terms of F-measure (from 0.764 to 0.848), a 3.5% improvement in terms of precision (from 0.953 to 0.988) and a 6.3% improvement in terms of recall (from 0.697 to 0.760).
arxiv:2105.07527
The initial conditions for Newtonian $N$-body simulations are usually generated by applying the Zel'dovich approximation to the initial displacements of the particles, using an initial power spectrum of density fluctuations generated by an Einstein-Boltzmann solver. We show that in most gauges the initial displacements generated in this way receive a first-order relativistic correction. We define a new gauge, the $N$-body gauge, in which this relativistic correction vanishes, and show that a conventional Newtonian $N$-body simulation includes all first-order relativistic contributions (in the absence of radiation) if we identify the coordinates in Newtonian simulations with those in the relativistic $N$-body gauge.
arxiv:1505.04756
Let $R$ be a Noetherian standard graded ring, and $M$ and $N$ two finitely generated graded $R$-modules. We introduce $\mathrm{reg}_R(M,N)$ by using the notion of generalized local cohomology instead of local cohomology in the definition of regularity. We prove that $\mathrm{reg}_R(M,N)$ is finite in several cases. In the case that the base ring is a field, we show that $\mathrm{reg}_R(M,N) = \mathrm{reg}(N) - \mathrm{indeg}(M)$. This formula, together with a graded version of duality for generalized local cohomology, gives a formula for the minimum of the initial degrees of some Ext modules (in the case $R$ is Cohen-Macaulay), of which the three usual definitions of regularity are special cases. Bounds for the regularity of certain Ext modules are obtained, using the same circle of ideas.
arxiv:math/0701509
Data augmentation is a widely used trick when training deep neural networks: in addition to the original data, properly transformed data are also added to the training set. However, to the best of our knowledge, a clear mathematical framework to explain the performance benefits of data augmentation is not available. In this paper, we develop such a theoretical framework. We show data augmentation is equivalent to an averaging operation over the orbits of a certain group that keeps the data distribution approximately invariant. We prove that it leads to variance reduction. We study empirical risk minimization, and the examples of exponential families, linear regression, and certain two-layer neural networks. We also discuss how data augmentation could be used in problems with symmetry where other approaches are prevalent, such as in cryo-electron microscopy (cryo-EM).
arxiv:1907.10905
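The entry above frames augmentation as averaging a statistic over a group orbit, which cannot increase (and typically reduces) its variance. A tiny numerical illustration follows, using the sign-flip group on synthetic Gaussian data; the toy statistic and all sizes are my own illustrative choices.

```python
# Minimal sketch of the "augmentation = orbit averaging" idea from the
# abstract: if the data distribution is invariant under a group (here,
# sign flips), averaging a statistic over the orbit reduces its variance.
# Entirely synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(3)

def estimate(sample):
    # toy statistic: an invariant part (mean of x**2) plus a non-invariant
    # part (the plain mean), which the orbit average kills exactly
    return np.mean(sample ** 2) - np.mean(sample)

n_rep, n = 5000, 50
plain, augmented = [], []
for _ in range(n_rep):
    x = rng.normal(size=n)                 # distribution invariant to x -> -x
    plain.append(estimate(x))
    orbit = np.concatenate([x, -x])        # orbit of the sign-flip group
    augmented.append(estimate(orbit))

print("variance, plain     :", np.var(plain))
print("variance, augmented :", np.var(augmented))   # smaller
```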
It is shown that the minimal left-right symmetric model admits cosmic string, domain wall and, conditionally, monopole solutions. The strings arise when the $SU(2)_R$ is broken and can either be destabilized at the electroweak scale or remain stable through the subsequent breakdown to $U(1)_{\rm em}$. The monopole and domain wall configurations exist in the $SU(2)_L \otimes U(1)_Y$ symmetric phase and disappear after subsequent symmetry breaking. Their destabilization provides new sources of non-equilibrium effects below the electroweak scale. Several defect-mediated mechanisms for low-energy baryogenesis are shown to be realisable in this model.
arxiv:hep-ph/9805276
Relying on the recent work of Liu-Székelyhidi, we give a weak asymptotic estimate for the Bergman kernels of polarized Kähler manifolds with a Ricci lower bound and a Sobolev constant upper bound. We also give a simple proof of the partial $C^0$ estimate along the (generalized) Kähler-Ricci flow on Fano manifolds.
arxiv:1911.11328
A new class of rings, the class of weakly left localizable rings, is introduced. A ring $R$ is called weakly left localizable if each non-nilpotent element of $R$ is invertible in some left localization $S^{-1}R$ of the ring $R$. Explicit criteria are given for a ring to be a weakly left localizable ring provided the ring has only finitely many maximal left denominator sets (e.g., this is the case if a ring has a left Artinian left quotient ring). It is proved that a ring with finitely many maximal left denominator sets that satisfies some natural conditions is a weakly left localizable ring iff its left quotient ring is a direct product of finitely many local rings such that their radicals are nil ideals.
arxiv:1408.5608
Properly defining a reward signal to efficiently train a reinforcement learning (RL) agent is a challenging task. Designing balanced objective functions from which a desired behavior can emerge requires expert knowledge, especially for complex environments. Learning rewards from human feedback or using large language models (LLMs) to directly provide rewards are promising alternatives, allowing non-experts to specify goals for the agent. However, black-box reward models make it difficult to debug the reward. In this work, we propose Object-Centric Assessment with Language Models (OCALM) to derive inherently interpretable reward functions for RL agents from natural language task descriptions. OCALM uses the extensive world knowledge of LLMs while leveraging the object-centric nature common to many environments to derive reward functions focused on relational concepts, providing RL agents with the ability to derive policies from task descriptions.
arxiv:2406.16748
The nuclear symmetry energy is a fundamental quantity important for studying the structure of systems as diverse as the atomic nucleus and the neutron star. Considerable efforts are being made to experimentally extract the symmetry energy and its dependence on nuclear density and temperature. In this article, we review the experimental studies carried out to date and their current status.
arxiv:1002.0313
... can be used anywhere it is necessary to ensure the energy supply to a machine is interrupted before the machine is entered for adjustment or maintenance. == Mechanical == Interlocks may be strictly mechanical. An example of a mechanical interlock is the steering wheel of a car. Nowadays, most cars have an anti-theft feature that restricts the turning of the steering wheel if the key is not inserted in the ignition. This prevents an individual from pushing the car, since the mechanical interlock restricts the directional motion of the front wheels. In the operation of a device such as a press or cutter that is hand-fed or whose workpiece is removed by hand, the use of two buttons to actuate the device, one for each hand, greatly reduces the possibility of operation endangering the operator. No such system is fool-proof, and such systems are often augmented by the use of cable-pulled gloves worn by the operator; these are retracted away from the danger area by the stroke of the machine. A major problem in engineering operator safety is the tendency of operators to ignore safety precautions or even outright disable forced interlocks due to work pressure and other factors. Therefore, such safeties require, and perhaps must facilitate, operator cooperation. == Electrical == Many people use generators to supplement power to a home or business in the event that main (municipal) power has gone offline. In order to safely transfer the power source from a generator (and back to the main), a safety interlock is often employed. The interlock consists of one or more switches that prevent both main power and generator power from powering the dwelling simultaneously. Without this safeguard, both power sources running at once could cause an overload condition, or generator power back-feed onto the main could cause dangerous voltage to reach a lineman repairing the main feed far outside the building. An interlock device is designed to allow a generator to provide backup power in such a way that it (a) prevents main and generator power from being connected at the same time, and (b) allows circuit breakers to operate normally without interference in the event of an overload condition. Most interlock devices for electrical systems employ a mechanical device to manage the movement of circuit breakers. Some also allow for the use of padlocks to prevent someone from accidentally activating the main power system without authorization. == Defeatable == Interlocks prevent injuries by preventing direct contact with energized parts of electrical equipment. Only qualified personnel, who must use a tool ...
https://en.wikipedia.org/wiki/Interlock_(engineering)
We present an effective immunization strategy for computer networks and populations with broad and, in particular, scale-free degree distributions. The proposed strategy, acquaintance immunization, calls for the immunization of random acquaintances of random nodes (individuals). The strategy requires no knowledge of the node degrees or any other global knowledge, as do targeted immunization strategies. We study analytically the critical threshold for complete immunization. We also study the strategy with respect to the susceptible-infected-removed epidemiological model. We show that the immunization threshold is dramatically reduced with the suggested strategy, for all studied cases.
arxiv:cond-mat/0207387
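The entry above describes a strategy that needs no degree information: immunize a random neighbour of a random node. Below is a minimal sketch of that rule on a scale-free graph, compared with uniform-random immunization by the size of the remaining largest cluster; the graph model, sizes, and comparison metric are illustrative choices, not the paper's analysis.

```python
# Minimal sketch of acquaintance immunization on a scale-free network:
# pick random nodes and immunize one random neighbour of each, then compare
# the remaining largest connected cluster with uniform-random immunization.
# Graph model, sizes and the comparison metric are illustrative choices.
import random
import networkx as nx

random.seed(4)
G = nx.barabasi_albert_graph(n=5000, m=3, seed=4)
budget = 500   # number of immunized (removed) nodes

def largest_cluster_after(removed):
    H = G.copy()
    H.remove_nodes_from(removed)
    return max(len(c) for c in nx.connected_components(H))

# Uniform-random immunization
uniform = set(random.sample(list(G.nodes), budget))

# Acquaintance immunization: immunize a random neighbour of a random node
acquaintance = set()
while len(acquaintance) < budget:
    node = random.choice(list(G.nodes))
    acquaintance.add(random.choice(list(G.neighbors(node))))

print("largest cluster, uniform      :", largest_cluster_after(uniform))
print("largest cluster, acquaintance :", largest_cluster_after(acquaintance))
```

Because random neighbours are biased toward high-degree hubs, the acquaintance strategy typically leaves a much smaller connected cluster for the same budget.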
In tabular biomedical data analysis, tuning models to high accuracy is considered a prerequisite for discussing feature importance, as medical practitioners expect the validity of feature importance to correlate with performance. In this work, we challenge the prevailing belief, showing that low-performing models may also be used for feature importance. We propose experiments to observe changes in feature rank as performance degrades sequentially. Using three synthetic datasets and six real biomedical datasets, we compare the rank of features from full datasets to those with reduced sample sizes (data cutting) or fewer features (feature cutting). In synthetic datasets, feature cutting does not change feature rank, while data cutting shows higher discrepancies with lower performance. In real datasets, feature cutting shows similar or smaller changes than data cutting, though some datasets exhibit the opposite. When feature interactions are controlled by removing correlations, feature cutting consistently shows better stability. By analyzing the distribution of feature importance values and theoretically examining the probability that the model cannot distinguish feature importance between features, we reveal that models can still distinguish feature importance despite performance degradation through feature cutting, but not through data cutting. We conclude that the validity of feature importance can be maintained even at low performance levels if the data size is adequate, which is a significant factor contributing to suboptimal performance in tabular medical data analysis. This paper demonstrates the potential of utilizing feature importance analysis alongside statistical analysis to compare features relatively, even when classifier performance is not satisfactory.
arxiv:2409.13342
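The entry above compares how feature rankings move when a model is degraded by dropping samples (data cutting) versus dropping features (feature cutting). A small sketch of that comparison follows, using random-forest importances and a Spearman rank correlation on synthetic data; the dataset, model, cut sizes, and the stability measure are illustrative assumptions, not the paper's protocol.

```python
# Minimal sketch of the "data cutting" vs "feature cutting" comparison:
# degrade a classifier either by dropping samples or by dropping features,
# and measure how much the importance ranking of the surviving features
# moves. Dataset, model and cut sizes are illustrative assumptions.
import numpy as np
from scipy.stats import spearmanr
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           random_state=0)

def importances(X, y):
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    return clf.feature_importances_

full = importances(X, y)

# Data cutting: keep only 10% of the samples, all features
idx = np.random.default_rng(0).choice(len(y), size=200, replace=False)
data_cut = importances(X[idx], y[idx])

# Feature cutting: keep all samples, but only the first 10 features
feat_cut = importances(X[:, :10], y)

print("rank stability, data cutting   :", spearmanr(full, data_cut)[0])
print("rank stability, feature cutting:", spearmanr(full[:10], feat_cut)[0])
```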
We show that the specialized quantum D-module of the equivariant quantum cohomology ring of the minimal resolution of an ADE singularity is isomorphic to the D-module of graded traces on the minimal nilpotent orbit in the Lie algebra of the same type. This generalizes a recent result of Shlykov [Hikita conjecture for the minimal nilpotent orbit, to appear in Proc. AMS, https://doi.org/10.1090/proc/15281] and hence verifies in this case the quantum version of Hikita's conjecture, proposed by Kamnitzer, McBreen and Proudfoot [The quantum Hikita conjecture, Advances in Mathematics 390 (2021) 107947]. We also show analogous isomorphisms for singularities of BCFG type.
arxiv:2302.13249
A finite-dimensional matrix model for the nucleon-nucleon cross section operator is used to calculate the dispersive correction to nucleon-nucleus total cross sections, and the leading terms in its expansion in the number of inelastic transitions in the high-energy limit, where the longitudinal momentum transfers can be ignored.
arxiv:nucl-th/9705038
In this paper, a generalized long-wave short-wave resonance interaction system, which describes the nonlinear interaction between a short wave and a long wave in fluid dynamics, plasma physics and nonlinear optics, is considered. Using the Hirota bilinear method, the general $N$-bright and $N$-dark soliton solutions are deduced and their Gram determinant forms are obtained. A special feature of the fundamental bright soliton solution is that, in general, it behaves like the Korteweg-de Vries soliton. However, under a special condition, it also behaves akin to the nonlinear Schrödinger soliton when it loses the amplitude-dependent velocity property. The fundamental dark soliton solution admits anti-dark, grey, and completely black soliton profiles in the short-wave component, depending on the choice of wave parameters. On the other hand, a bright-soliton-like profile always occurs in the long-wave component. The asymptotic analysis shows that both the bright and dark solitons undergo an elastic collision with a finite phase shift. In addition to these, by tuning the phase-shift regime, we point out the existence of resonance interactions among the bright solitons. Furthermore, under a special velocity resonance condition, we bring out the various types of bright and dark soliton bound states. Also, by fixing the phase factor and the system parameter $\beta$, corresponding to the interaction between the long- and short-wave components, the different types of profiles associated with the obtained breather solution are demonstrated.
arxiv:2206.10159
We employ the conformal bootstrap to re-examine the problem of finding the critical behavior of four-fermion theory at its strong coupling fixed point. Existence of a solution of the bootstrap equations indicates self-consistency of the assumption that, in space-time dimensions less than four, the renormalization group flow of the coupling constant of a four-fermion interaction has a nontrivial fixed point which is generally out of the perturbative regime. We exploit the hypothesis of conformal invariance at this fixed point to reduce the set of the Schwinger-Dyson bootstrap equations for four-fermion theory to three equations which determine the scale dimension of the fermion field $\psi$, the scale dimension of the composite field $\bar{\psi}\psi$ and the critical value of the Yukawa coupling constant. We solve the equations assuming this critical value to be small. We show that this solution recovers the fixed point for the four-fermion interaction with $N$-component fermions in the limit of large $N$ at (Euclidean) dimensions $d$ between two and four. We perform a detailed analysis of the $1/N$-expansion in $d=3$ and demonstrate full agreement with the conformal bootstrap. We argue that this is a useful starting point for more sophisticated computations of the critical indices.
arxiv:hep-th/9301069
The style-based GAN architecture (StyleGAN) yields state-of-the-art results in data-driven unconditional generative image modeling. We expose and analyze several of its characteristic artifacts, and propose changes in both model architecture and training methods to address them. In particular, we redesign the generator normalization, revisit progressive growing, and regularize the generator to encourage good conditioning in the mapping from latent codes to images. In addition to improving image quality, this path length regularizer yields the additional benefit that the generator becomes significantly easier to invert. This makes it possible to reliably attribute a generated image to a particular network. We furthermore visualize how well the generator utilizes its output resolution, and identify a capacity problem, motivating us to train larger models for additional quality improvements. Overall, our improved model redefines the state of the art in unconditional image modeling, both in terms of existing distribution quality metrics as well as perceived image quality.
arxiv:1912.04958
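the path length regularizer mentioned above can be sketched as a penalty on deviations of the generator jacobian norm from a running target, computed with a vector - jacobian product. this is a minimal sketch of the idea with an assumed toy generator, batch size and moving - average constant, not the authors ' reference implementation.

```python
# minimal sketch of a path-length-style regularizer (toy generator and constants are assumptions)
import torch
import torch.nn as nn

g = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 3 * 16 * 16))  # toy generator

def path_length_penalty(generator, w, target_a):
    """penalize deviation of ||J_w^T y|| from a running target a, with y a random image-space direction."""
    images = generator(w)                                     # (batch, 3*16*16)
    y = torch.randn_like(images) / (images.shape[1] ** 0.5)   # random unit-scale direction
    # J_w^T y via a vector-Jacobian product (backprop of <images, y> w.r.t. w)
    (grad,) = torch.autograd.grad((images * y).sum(), w, create_graph=True)
    lengths = grad.norm(dim=1)                                # per-sample path lengths
    return ((lengths - target_a) ** 2).mean(), lengths.mean().detach()

w = torch.randn(8, 64, requires_grad=True)
a = torch.tensor(0.0)                        # running mean of path lengths (exponential moving average)
penalty, mean_len = path_length_penalty(g, w, a)
a = 0.99 * a + 0.01 * mean_len               # update the moving target
penalty.backward()                           # gradients flow into the generator for training
```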
virtual laboratories ( v labs ) have in the recent past become part and parcel of remote teaching in practical hands - on approaches, particularly in cybersecurity distance courses. their potential is meant to assist learners with hands - on practical laboratory exercises irrespective of geographical location. nevertheless, adopting v labs in didactic approaches in higher education has seen both merits and demerits. based on this premise, this study investigates the impact of v labs on active learning ( al ) and engagement in cybersecurity distance education. a survey with a limited number of learners and educators who have had experience with cybersecurity distance courses that leveraged v labs in their practical lab assignments was conducted at blekinge tekniska h \ " ogskola, sweden, to assess the impact of v labs on al and engagement in cybersecurity distance education. 29 % and 73 % of the learners and educators, respectively, responded to the survey, which was administered remotely and showed good internal consistency of the questionnaires based on cronbach ' s alpha ; the results showed that learners and educators had a positive perception of using v labs to enhance al in cybersecurity distance education. the key concentration of the study was on al and engagement and problem - solving abilities when v labs are used. both the learners and educators found the v labs to be engaging, interactive, and effective in improving their understanding of cybersecurity concepts.
arxiv:2404.04952
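since the survey ' s internal consistency is reported via cronbach ' s alpha, a short sketch of how that statistic is computed may be useful. this is a minimal illustration ; the item scores below are made - up placeholders, not the study ' s data.

```python
# cronbach's alpha for a small questionnaire: alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
import numpy as np

# rows = respondents, columns = likert items (made-up placeholder data, not the study's responses)
scores = np.array([
    [4, 5, 4, 3],
    [3, 4, 4, 4],
    [5, 5, 4, 5],
    [2, 3, 3, 2],
    [4, 4, 5, 4],
], dtype=float)

k = scores.shape[1]
item_variances = scores.var(axis=0, ddof=1)      # per-item sample variance
total_variance = scores.sum(axis=1).var(ddof=1)  # variance of each respondent's total score
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"cronbach's alpha = {alpha:.3f}")  # values above ~0.7 are usually read as acceptable consistency
```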
it is well known that the branching process approach to the study of the random graph $ g _ { n, p } $ gives a very simple way of understanding the size of the giant component when it is fairly large ( of order $ \ theta ( n ) $ ). here we show that a variant of this approach works all the way down to the phase transition : we use branching process arguments to give a simple new derivation of the asymptotic size of the largest component whenever $ ( np - 1 ) ^ 3n \ to \ infty $.
arxiv:1207.6209
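the branching - process heuristic behind this result can be checked numerically : for $ np = c > 1 $ the giant component fraction solves $ \rho = 1 - e ^ { - c \rho } $, and a simulated poisson ( c ) branching process survives with roughly the same probability. the sketch below illustrates that correspondence under these standard assumptions ; it is not the authors ' argument.

```python
# compare the fixed point rho = 1 - exp(-c*rho) with the survival frequency of a Poisson(c) branching process
import numpy as np

rng = np.random.default_rng(0)
c = 1.5  # mean offspring number, i.e. n*p in G(n, p)

# fixed-point iteration for the giant component fraction
rho = 0.5
for _ in range(200):
    rho = 1.0 - np.exp(-c * rho)

# crude survival estimate: a tree that grows very large is counted as "surviving"
def survives(max_generations=30, cap=10_000):
    population = 1
    for _ in range(max_generations):
        if population == 0:
            return False
        if population > cap:          # almost surely escapes to infinity once this large
            return True
        population = rng.poisson(c, size=population).sum()
    return population > 0

trials = 2000
survival = sum(survives() for _ in range(trials)) / trials
print(f"fixed point rho = {rho:.3f}, simulated survival ~ {survival:.3f}")
```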
in this paper we construct a visualization of abel ' s impossibility theorem, also known as the abel - ruffini theorem. using the canvas object in javascript along with the p5. js library, and given any expression that uses analytic functions and radicals, one can always construct closed paths such that the expression evaluated at the coefficients of a general polynomial returns to its initial position, while the roots of the polynomial undergo a non - trivial permutation. hence, such an expression does not reconstruct the roots from the coefficients. using the visualization we begin by considering the necessity of radicals to solve second degree polynomial equations and build towards degree five polynomial equations. ultimately, our program shows that there is no formula for an arbitrary fifth degree polynomial equation that uses analytic functions, finite field operations, and radicals and reconstructs the roots of the polynomial from its coefficients. this theorem was partially proved by paolo ruffini in 1799 and completed by niels abel in 1824.
arxiv:1908.00972
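the monodromy phenomenon the visualization relies on can also be reproduced numerically : drive a coefficient around a closed loop and track the roots by continuity, and they can return permuted. the sketch below uses $ x ^ 2 - c $ with $ c $ circling the origin, a standard toy case chosen for illustration, not the authors ' p5. js program.

```python
# track the roots of x^2 - c(t) as c(t) = exp(2*pi*i*t) traverses a closed loop around 0;
# the two roots swap, so no single-valued expression in the coefficients can follow one root continuously
import numpy as np

steps = 400
roots = np.roots([1.0, 0.0, -1.0])          # roots of x^2 - 1 at t = 0, i.e. {+1, -1}
start = roots.copy()

for t in np.linspace(0.0, 1.0, steps)[1:]:
    c = np.exp(2j * np.pi * t)
    new = np.roots([1.0, 0.0, -c])
    # continuity: match each previous root to the nearest new root
    if abs(new[0] - roots[0]) + abs(new[1] - roots[1]) > abs(new[1] - roots[0]) + abs(new[0] - roots[1]):
        new = new[::-1]
    roots = new

print("start:", np.round(start, 3))
print("end:  ", np.round(roots, 3))          # same set of roots, but in swapped order: a nontrivial permutation
```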
fix $ n \ geq 5 $ general points $ p _ 1, \ dots, p _ n \ in \ mathbb { p } ^ 1 $, and a weight vector $ \ mathcal { a } = ( a _ { 1 }, \ dots, a _ { n } ) $ of real numbers $ 0 \ leq a _ { i } \ leq 1 $. consider the moduli space $ \ mathcal { m } _ { \ mathcal { a } } $ parametrizing rank two parabolic vector bundles with trivial determinant on $ \ big ( \ mathbb { p } ^ 1, p _ 1, \ dots, p _ n \ big ) $ which are semistable with respect to $ \ mathcal { a } $. under some conditions on the weights, we determine and give a modular interpretation for the automorphism group of the moduli space $ \ mathcal { m } _ { \ mathcal { a } } $. it is isomorphic to $ \ left ( \ frac { \ mathbb { z } } { 2 \ mathbb { z } } \ right ) ^ { k } $ for some $ k \ in \ { 0, \ dots, n - 1 \ } $, and is generated by admissible elementary transformations of parabolic vector bundles. the largest of these automorphism groups, with $ k = n - 1 $, occurs for the central weight $ \ mathcal { a } _ { f } = \ left ( \ frac { 1 } { 2 }, \ dots, \ frac { 1 } { 2 } \ right ) $. the corresponding moduli space $ { \ mathcal m } _ { \ mathcal { a } _ f } $ is a fano variety of dimension $ n - 3 $, which is smooth if $ n $ is odd, and has isolated singularities if $ n $ is even.
arxiv:1902.04136
several recent unsupervised learning methods use probabilistic approaches to solve combinatorial optimization ( co ) problems based on the assumption of statistically independent solution variables. we demonstrate that this assumption imposes performance limitations in particular on difficult problem instances. our results corroborate that an autoregressive approach which captures statistical dependencies among solution variables yields superior performance on many popular co problems. we introduce subgraph tokenization in which the configuration of a set of solution variables is represented by a single token. this tokenization technique alleviates the drawback of the long sequential sampling procedure which is inherent to autoregressive methods without sacrificing expressivity. importantly, we theoretically motivate an annealed entropy regularization and show empirically that it is essential for efficient and stable learning.
arxiv:2311.14156
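the annealed entropy regularization mentioned above can be illustrated on a tiny mean - field example : minimize the expected cost of a small qubo under an independent bernoulli distribution while subtracting a temperature - weighted entropy term, and anneal the temperature toward zero. this is a simplified factorized sketch ( the paper ' s point is precisely to go beyond the independence assumption with an autoregressive model ), and the problem instance and schedule are made up.

```python
# mean-field sketch of annealed entropy regularization for a toy QUBO: loss = E[x^T Q x] - T * H(p)
import torch

torch.manual_seed(0)
n = 8
Q = torch.randn(n, n); Q = (Q + Q.t()) / 2           # made-up symmetric cost matrix
logits = torch.zeros(n, requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.05)

for step in range(500):
    T = 1.0 * (1.0 - step / 500)                      # linear annealing of the temperature
    p = torch.sigmoid(logits)
    # for independent Bernoulli variables: E[x_i x_j] = p_i p_j (i != j) and E[x_i^2] = p_i
    expected_cost = p @ Q @ p - (p * p * Q.diag()).sum() + (p * Q.diag()).sum()
    entropy = -(p * torch.log(p + 1e-9) + (1 - p) * torch.log(1 - p + 1e-9)).sum()
    loss = expected_cost - T * entropy
    opt.zero_grad(); loss.backward(); opt.step()

x = (torch.sigmoid(logits) > 0.5).float()
print("solution:", x.tolist(), "cost:", float(x @ Q @ x))
```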
this paper discusses a novel fully implicit formulation for a 1d electrostatic particle - in - cell ( pic ) plasma simulation approach. unlike earlier implicit electrostatic pic approaches ( which are based on a linearized vlasov - poisson formulation ), ours is based on a nonlinearly converged vlasov - amp \ ` ere ( va ) model. by iterating particles and fields to a tight nonlinear convergence tolerance, the approach features superior stability and accuracy properties, avoiding most of the accuracy pitfalls in earlier implicit pic implementations. in particular, the formulation is stable against temporal ( cfl ) and spatial ( aliasing ) instabilities. it is charge - and energy - conserving to numerical roundoff for arbitrary implicit time steps. while momentum is not exactly conserved, errors are kept small by an adaptive particle sub - stepping orbit integrator, which is instrumental to prevent particle tunneling. the va model is orbit - averaged along particle orbits to enforce an energy conservation theorem with particle sub - stepping. as a result, very large time steps, constrained only by the dynamical time scale of interest, are possible without accuracy loss. algorithmically, the approach features a jacobian - free newton - krylov solver. a main development in this study is the nonlinear elimination of the new - time particle variables ( positions and velocities ). such nonlinear elimination, which we term particle enslavement, results in a nonlinear formulation with memory requirements comparable to those of a fluid computation, and affords us substantial freedom in regards to the particle orbit integrator. numerical examples are presented that demonstrate the advertised properties of the scheme. in particular, long - time ion acoustic wave simulations show that numerical accuracy does not degrade even with very large implicit time steps, and that significant cpu gains are possible.
arxiv:1101.3701
we investigate the threshold - enhanced qcd corrections to the cross sections for direct top quark productions induced by model - independent flavor changing neutral current couplings at hadron colliders. we use the soft - collinear effective theory to describe the incoming massless partons and use the heavy quark effective theory to treat the top quark. then we construct the flavor changing operator based on the above effective theories, and resum the large logarithms near threshold arising from soft gluon emission. our results show that the resummed qcd corrections further enhance the next - to - leading order cross sections significantly. moreover, the resummation effects vastly reduce the dependence of the cross sections on the renormalization and factorization scales, especially in cases where the next - to - leading order results behave worse than the leading order results. our results are more sensitive to the new physics effects. if signals of direct top quark production are found in future experiments, it is more appropriate to use our results as the theoretical inputs for extracting the anomalous couplings.
arxiv:hep-ph/0601180
in this paper we establish some ( presumably new ) interesting expressions for the composition of some well known fractional integral operators $ i ^ { \ mu } _ { a + }, d ^ { \ mu } _ { a + } $, $ i ^ { \ gamma, \ mu } _ { a + } $ and also derive an integral operator $ \ mathcal { h } ^ { w ; m, n ; \ alpha } _ { a + ; p, q ; \ beta } $ whose kernel involves fox ' s $ h - $ function. by suitably specializing the coefficients and the parameters in these functions we can get a large number of ( new and known ) interesting expressions for the composition formulae, which occur rather frequently in many problems of engineering and mathematical analysis, but here we mention only those which follow as particular cases of the results of srivastava et al. \ cite { zt }.
arxiv:1703.03922
techniques for coherent control of electron spin - nuclear spin interactions in quantum dots can be directly applied in spintronics and in quantum information processing. in this work we study numerically the interaction of electron and nuclear spins in the context of storing the spin - state of an electron in a collective state of nuclear spins. we take into account the errors inherent in a realistic system : the incomplete polarization of the bath of nuclear spins and the different hyperfine interactions between the electron and individual nuclei in the quantum dot. although these imperfections deteriorate the fidelity of the quantum information retrieval, we find reasonable fidelities are achievable for modest bath polarizations.
arxiv:cond-mat/0602499
we use multi - pulse dynamical decoupling to increase the coherence lifetime ( t2 ) of large numbers of nitrogen - vacancy ( nv ) electronic spins in room temperature diamond, thus enabling scalable applications of multi - spin quantum information processing and metrology. we realize an order - of - magnitude extension of the nv multi - spin t2 for diamond samples with widely differing spin environments. for samples with nitrogen impurity concentration < ~ 1 ppm, we find t2 > 2 ms, comparable to the longest coherence time reported for single nv centers, and demonstrate a ten - fold enhancement in nv multi - spin sensing of ac magnetic fields.
arxiv:1201.5686
we investigate the collective spin dynamics of a self - rephasing bosonic ensemble of $ ^ { 87 } $ rb trapped in a 1d vertical optical lattice. we show that the combination of the frequency shifts induced by atomic interactions and inhomogeneous dephasing, together with the spin self - rephasing mechanism leads to the existence of a ` magic density ' : \ textit { i. e } a singular operating point where the clock transition is first - order insensitive to density fluctuations. this feature is very appealing for improving the stability of quantum sensors based on trapped pseudo - spin - 1 / 2 ensembles. ramsey spectroscopy of the $ | f = 1, m _ { f } = 0 \ rangle \ rightarrow | f = 2, m _ { f } = 0 \ rangle $ hyperfine transition is in qualitative agreement with a numerical model based on coupled bloch equations of motion for energy dependent spin vectors.
arxiv:1807.06877
the magnetocrystalline anisotropy energy $ e _ { anis } $ for free - standing chains ( quantum wires ) and rings ( quantum corrals ) of fe - adatoms $ n = $ ( 2... 48 ) is determined using an electronic tight - binding theory. treating spin - orbit coupling non - perturbatively, we analyze the relationship between the electronic structure of the fe $ d $ - electrons and $ e _ { anis } ( n _ { d } ) $, for both the chain and ring conformations. we find that $ e _ { anis } ( n ) $ is larger for wires than for rings or infinite monolayers. generally $ e _ { anis } ( n _ { d } ) $ decreases in chains upon increasing $ n $, while for rings $ e _ { anis } ( n _ { d } ) $ is essentially independent of $ n $. for increasing $ n $, $ e _ { anis } ( n _ { d } ) $ in corrals approaches the results for freestanding monolayers. small rings exhibit clear odd - even oscillations of $ e _ { anis } ( n ) $. within our theoretical framework we are able to explain the experimentally observed oscillations of $ e _ { anis } ( n _ { d } ) $ during film growth with a period of one monolayer. finally, a generalization of hund ' s third rule on spin - orbit coupling to itinerant ferromagnets is proposed.
arxiv:cond-mat/9605137
robotic surgery promises enhanced precision and adaptability over traditional surgical methods. it also offers the possibility of automating surgical interventions, resulting in reduced stress on the surgeon, better surgical outcomes, and lower costs. cholecystectomy, the removal of the gallbladder, serves as an ideal model procedure for automation due to its distinct and well - contrasted anatomical features between the gallbladder and liver, along with standardized surgical maneuvers. dissection is a frequently used subtask in cholecystectomy where the surgeon delivers the energy on the hook to detach the gallbladder from the liver. hence, dissection along tissue boundaries is a good candidate for surgical automation. for the da vinci surgical robot to perform the same procedure as a surgeon automatically, it needs to have the ability to ( 1 ) recognize and distinguish between the two different tissues ( e. g. the liver and the gallbladder ), ( 2 ) understand where the boundary between the two tissues is located in the 3d workspace, ( 3 ) locate the instrument tip relative to the boundary in the 3d space using visual feedback, and ( 4 ) move the instrument along the boundary. this paper presents a novel framework that addresses these challenges through ai - assisted image processing and vision - based robot control. we also present the ex - vivo evaluation of the automated procedure on chicken and pork liver specimens that demonstrates the effectiveness of the proposed framework.
arxiv:2310.09669
few - shot learning ( fsl ) is a central problem in meta - learning, where learners must efficiently learn from few labeled examples. within fsl, feature pre - training has recently become an increasingly popular strategy to significantly improve generalization performance. however, the contribution of pre - training is often overlooked and understudied, with limited theoretical understanding of its impact on meta - learning performance. further, pre - training requires a consistent set of global labels shared across training tasks, which may be unavailable in practice. in this work, we address the above issues by first showing the connection between pre - training and meta - learning. we discuss why pre - training yields more robust meta - representation and connect the theoretical analysis to existing works and empirical results. secondly, we introduce meta label learning ( mela ), a novel meta - learning algorithm that learns task relations by inferring global labels across tasks. this allows us to exploit pre - training for fsl even when global labels are unavailable or ill - defined. lastly, we introduce an augmented pre - training procedure that further improves the learned meta - representation. empirically, mela outperforms existing methods across a diverse range of benchmarks, in particular under a more challenging setting where the number of training tasks is limited and labels are task - specific. we also provide extensive ablation study to highlight its key properties.
arxiv:2212.11702
traditionally, software quality is thought to depend on sound software engineering and development methodologies such as structured programming and agile development. however, high quality software depends just as much on high quality collaboration within the team. since the success rate of software development projects is low ( wateridge, 1995 ; the standish group, 2009 ), it is important to understand which characteristics of interactions within software development teams significantly influence performance. hoegl and gemuenden ( 2001 ) reported empirical evidence for the relation between teamwork quality and software quality, using a six - factor teamwork quality ( twq ) model. this article extends the work of hoegl and gemuenden ( 2001 ) with the aim of finding additional factors that may influence software team performance. we introduce three new twq factors : trust, value sharing, and coordination of expertise. the relationship between twq and team performance and the improvement of the model are tested using data from 252 team members and stakeholders. results show that teamwork quality is significantly related to team performance, as rated by both team members and stakeholders : twq explains 81 % of the variance of team performance as rated by team members and 61 % as rated by stakeholders. this study shows that trust, shared values, and coordination of expertise are important factors for team leaders to consider in order to achieve high quality software team work.
arxiv:1701.06146
the sars - cov - 2 pandemic has emphasised the importance of developing a universal vaccine that can protect against current and future variants of the virus. the present study proposes a novel conditional protein language model architecture, called vaxformer, which is designed to produce natural - looking antigenicity - controlled sars - cov - 2 spike proteins. we evaluate the generated protein sequences of the vaxformer model using ddgun protein stability measure, netmhcpan antigenicity score, and a structure fidelity score with alphafold to gauge its viability for vaccine development. our results show that vaxformer outperforms the existing state - of - the - art conditional variational autoencoder model to generate antigenicity - controlled sars - cov - 2 spike proteins. these findings suggest promising opportunities for conditional transformer models to expand our understanding of vaccine design and their role in mitigating global health challenges. the code used in this study is available at https : / / github. com / aryopg / vaxformer.
arxiv:2305.11194
deep learning ( dl ) techniques have achieved remarkable successes in recent years. however, their ability to generalize and execute reasoning tasks remains a challenge. a potential solution to this issue is neuro - symbolic integration ( nesy ), where neural approaches are combined with symbolic reasoning. most of these methods exploit a neural network to map perceptions to symbols and a logical reasoner to predict the output of the downstream task. these methods exhibit superior generalization capacity compared to fully neural architectures. however, they suffer from several issues, including slow convergence, learning difficulties with complex perception tasks, and convergence to local minima. this paper proposes a simple yet effective method to ameliorate these problems. the key idea involves pretraining a neural model on the downstream task. then, a nesy model is trained on the same task via transfer learning, where the weights of the perceptual part are injected from the pretrained network. the key observation of our work is that the neural network fails to generalize only at the level of the symbolic part while being perfectly capable of learning the mapping from perceptions to symbols. we have tested our training strategy on various sota nesy methods and datasets, demonstrating consistent improvements in the aforementioned problems.
arxiv:2402.14047
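the pretraining - then - injection strategy described above can be sketched as a simple weight transfer : train a fully neural model on the downstream task, then load its perception backbone into the neuro - symbolic model before training the latter. module names, shapes and the placeholder reasoner below are illustrative assumptions.

```python
# sketch of injecting pretrained perception weights into a neuro-symbolic model (module names are assumptions)
import torch
import torch.nn as nn

class Perception(nn.Module):
    """maps raw inputs to symbol logits (e.g. digit classes)."""
    def __init__(self, in_dim=64, n_symbols=10):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, n_symbols))
    def forward(self, x):
        return self.net(x)

class NeuralBaseline(nn.Module):
    """fully neural model trained end-to-end on the downstream task (the pretraining stage)."""
    def __init__(self):
        super().__init__()
        self.perception = Perception()
        self.head = nn.Linear(10, 5)         # downstream task labels (made up)
    def forward(self, x):
        return self.head(self.perception(x))

class NeSyModel(nn.Module):
    """perception + a (stub) symbolic reasoner; only the perception part receives the pretrained weights."""
    def __init__(self, reasoner):
        super().__init__()
        self.perception = Perception()
        self.reasoner = reasoner
    def forward(self, x):
        return self.reasoner(self.perception(x).softmax(dim=-1))

baseline = NeuralBaseline()
# ... pretrain `baseline` on the downstream task here ...
nesy = NeSyModel(reasoner=lambda probs: probs)   # placeholder reasoner
nesy.perception.load_state_dict(baseline.perception.state_dict())  # the key step: weight injection
```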
the covid19 pandemic has globally and significantly affected the life and health of many communities. the early detection of infected patients is effective in fighting covid19. using radiology ( x - ray ) images is perhaps the fastest way to diagnose patients. therefore, deep convolutional neural networks ( cnns ) can be considered as applicable tools to diagnose covid19 positive cases. due to the complicated architecture of a deep cnn, its real - time training and testing become a challenging problem. this paper proposes using the extreme learning machine ( elm ) instead of the last fully connected layer to address this deficiency. however, the stochastic tuning of the parameters in elm ' s supervised section makes the final model unreliable. therefore, to cope with this problem and maintain network reliability, the sine - cosine algorithm was utilized to tune the elm ' s parameters. the designed network is then benchmarked on the covid - xray - 5k dataset, and the results are verified by a comparative study with canonical deep cnn, elm optimized by cuckoo search, elm optimized by genetic algorithm, and elm optimized by whale optimization algorithm. the proposed approach outperforms comparative benchmarks with a final accuracy of 98. 83 % on the covid - xray - 5k dataset, leading to a relative error reduction of 2. 33 % compared to a canonical deep cnn. even more critically, the designed network ' s training time is only 0. 9421 milliseconds and the overall detection test time for 3100 images is 2. 721 seconds.
arxiv:2105.14192
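the extreme learning machine head used in place of the last fully connected layer can be sketched as follows : hidden weights are drawn at random ( in the paper they are further tuned by the sine - cosine algorithm ), and only the output weights are solved in closed form via a pseudo - inverse. the feature dimensions and data below are placeholders, not the covid - xray - 5k setup.

```python
# minimal ELM sketch: random hidden layer + least-squares output weights (placeholder data)
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features, n_hidden, n_classes = 200, 512, 100, 2

X = rng.standard_normal((n_samples, n_features))        # e.g. CNN features feeding the ELM head
y = rng.integers(0, n_classes, size=n_samples)
T = np.eye(n_classes)[y]                                 # one-hot targets

W = rng.standard_normal((n_features, n_hidden))          # random input weights (tuned by SCA in the paper)
b = rng.standard_normal(n_hidden)
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))                   # hidden activations (sigmoid)
beta = np.linalg.pinv(H) @ T                             # closed-form output weights, no backprop needed

pred = np.argmax(H @ beta, axis=1)
print("training accuracy:", (pred == y).mean())
```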
in this paper we propose a novel bayesian solution for nonlinear regression in complex fields. previous solutions for kernel methods usually assume a complexification approach, where the real - valued kernel is replaced by a complex - valued one. this approach is limited. based on results in complex - valued linear theory and gaussian random processes we show that a pseudo - kernel must be included. this is the starting point to develop the new complex - valued formulation for gaussian process regression ( cgpr ). we address the design of the covariance and pseudo - covariance based on a convolution approach for several scenarios. only in the particular case where the outputs are proper does the pseudo - kernel cancel. also, the hyperparameters of the covariance can be learnt by maximizing the marginal likelihood using wirtinger ' s calculus and patterned complex - valued matrix derivatives. in the experiments included, we show how cgpr successfully solves systems where real and imaginary parts are correlated. besides, we successfully solve the nonlinear channel equalization problem by developing a recursive solution with basis removal. we report remarkable improvements compared to previous solutions : a 2 - 4 db reduction of the mse with just a quarter of the training samples used by previous approaches.
arxiv:1511.05710
by considering the 3 - 3 - 1 and the left - right symmetric models as low energy effective theories of the $ su ( 3 ) _ c \ otimes su ( 3 ) _ l \ otimes su ( 3 ) _ r $ ~ ( for short $ [ su ( 3 ) ] ^ 3 $ ) gauge group, alternative versions of these models are found. the new neutral gauge bosons of the universal 3 - 3 - 1 model and its flipped versions are presented ; also, the left - right symmetric model and its flipped variants are studied. our analysis shows that there are two flipped versions of the universal 3 - 3 - 1 model, with the particularity that both of them have the same weak charges. for the left - right symmetric model we also found two flipped versions ; one of them new in the literature which, unlike those of the 3 - 3 - 1, requires a dedicated study of its electroweak properties. for all the models analyzed, the couplings of the $ z ' $ bosons to the standard model fermions are reported. the explicit form of the null space of the vector boson mass matrix for an arbitrary higgs tensor and gauge group is also presented. in the general framework of the $ [ su ( 3 ) ] ^ 3 $ gauge group, and by using the lhc experimental results and ew precision data, limits on the $ z ' $ mass and the mixing angle between $ z $ and the new gauge bosons $ z ' $ are obtained. the general results call for very small mixing angles in the range $ 10 ^ { - 3 } $ radians and $ m _ { z ' } > $ 2. 5 tev.
arxiv:1605.00575
a series of mgb2 thin films were fabricated by pulsed laser deposition ( pld ), doped with various amounts of si up to a level of 18wt %. si was introduced into the pld mgb2 films by sequential ablation of a stoichiometric mgb2 target and a si target. the doped films were deposited at 250 c and annealed in situ at 685 c for 1min. up to a si doping level of ~ 11wt %, the superconducting transition temperature ( tc ) of the film does not change significantly, as compared to the control, undoped film. the magnetic critical current density ( jc ) of the film at 5k was increased by 50 % for a si doping level of ~ 3. 5wt %, as compared to the control film. also, the irreversibility field of si - doped mgb2 films ( hirr ) at low temperature is higher than for the undoped film.
arxiv:cond-mat/0311055
for g a semisimple algebraic group, we revisit the description of the components of the affine springer fiber given by ts, with s a regular semisimple element. we then compute the fixed points of each component of a particular affine springer fiber for type a.
arxiv:1910.04780
bialgebroids, separable bialgebroids, and weak hopf algebras are compared from a categorical point of view. then properties of weak hopf algebras and their applications to finite index and finite depth inclusions of von neumann algebras are shortly reviewed. a hint is given at a duality between bialgebroid actions and abstract inclusions in 2 - categories.
arxiv:math/0011036
a second order accurate numerical scheme is proposed and implemented for the landau - lifshitz - gilbert equation, which models magnetization dynamics in ferromagnetic materials, with large damping parameters. the main advantages of this method are associated with the following features : ( 1 ) it only solves linear systems of equations with constant coefficients where fast solvers are available, so that the numerical efficiency has been greatly improved, in comparison with the existing gauss - seidel project method. ( 2 ) the second - order accuracy in time is achieved, and it is unconditionally stable for large damping parameters. moreover, both the second - order accuracy and the great efficiency improvement will be verified by several numerical examples in the 1d and 3d simulations. in the presence of large damping parameters, it is observed that this method is unconditionally stable and finds physically reasonable structures while many existing methods have failed. for the domain wall dynamics, the linear dependence of wall velocity with respect to the damping parameter and the external magnetic field will be obtained through the reported simulations.
arxiv:2105.03576
we present a spectral algorithm for solving the full nonlinear vacuum einstein field equations in the bondi framework. developed within the spectral einstein code ( spec ), we demonstrate spectral characteristic evolution as a technical precursor to cauchy characteristic extraction ( cce ), a rigorous method for obtaining gauge - invariant gravitational waveforms from existing and future astrophysical simulations. we demonstrate the new algorithm ' s stability, convergence, and agreement with existing evolution methods. we explain how an innovative spectral approach enables a two orders of magnitude improvement in computational efficiency.
arxiv:1406.7029
superatomic crystals are composed of discrete modular clusters that emulate the role of atoms in traditional atomic solids $ ^ { 1 - 4 } $. owing to their unique hierarchical structures, these materials are promising candidates to host exotic phenomena, such as superconductivity and magnetism that can be revealed through doping $ ^ { 5 - 10 } $. low - dimensional superatomic crystals hold great promise as electronic components $ ^ { 11, 12 } $, enabling these properties to be applied to nanocircuits, but the impact of doping in such compounds remains unexplored. here we report the electrical transport properties of re $ _ 6 $ se $ _ 8 $ cl $ _ 2 $, a two - dimensional superatomic semiconductor $ ^ { 13, 14 } $. using an in situ current annealing technique, we find that this compound can be n - doped through cl dissociation, drastically altering the transport behaviour from semiconducting to metallic and giving rise to superconductivity below $ \ sim $ 9 k. this work is the first example of superconductivity in a van der waals ( vdw ) superatomic crystal ; more broadly, it establishes a new chemical strategy to manipulate the electronic properties of vdw materials with labile ligands.
arxiv:1906.10785
to reveal the origins of diffuse h - alpha emissions observed around the herbig star mwc 1080, we have performed a high - resolution near - infrared ( nir ) spectroscopic observation using the immersion grating infrared spectrograph ( igrins ). in the nir h and k bands, we detected various emission lines ( six hydrogen brackett lines, seven h2 lines, and an [ fe ii ] line ) and compared their spatial locations with the optical ( h - alpha and [ s ii ] ) and radio ( 13co and cs ) line maps. the shock - induced h2 and [ fe ii ] lines indicate the presence of multiple outflows, consisting of at least three, associated young stars in this region. the kinematics of h2 and [ fe ii ] near the northeast ( ne ) cavity edge supports that the ne main outflow from mwc 1080a is the blueshifted one with a low inclination angle. the h2 and [ fe ii ] lines near the southeast molecular region newly reveal that additional highly - blueshifted outflows originate from other young stars. the fluorescent h2 lines were found to trace photodissociation regions formed on the cylindrical surfaces of the main outflow cavity, which are expanding outward with a velocity of about 10 - 15 km / s. for the h - alpha emission, we identify its components associated with two stellar outflows and two young stars in addition to the dominant component of mwc 1080a scattered by dust. we also report a few faint h - alpha features located ~ 0. 4 pc away in the southwest direction from mwc 1080a, which lie near the axes of the ne main outflow and one of the newly - identified outflows.
arxiv:2105.01453
reinforcement learning ( rl ), with its ability to explore and optimize policies in complex, dynamic decision - making tasks, has emerged as a promising approach to addressing motion planning ( mop ) challenges in autonomous driving ( ad ). despite rapid advancements in rl and ad, a systematic description and interpretation of the rl design process tailored to diverse driving tasks remains underdeveloped. this survey provides a comprehensive review of rl - based mop for ad, focusing on lessons from task - specific perspectives. we first outline the fundamentals of rl methodologies, and then survey their applications in mop, analyzing scenario - specific features and task requirements to shed light on their influence on rl design choices. building on this analysis, we summarize key design experiences, extract insights from various driving task applications, and provide guidance for future implementations. additionally, we examine the frontier challenges in rl - based mop, review recent efforts to address these challenges, and propose strategies for overcoming unresolved issues.
arxiv:2503.23650
this article is devoted to developing robust but simple correction techniques and efficient algorithms for a class of second - order time stepping methods, namely the shifted fractional trapezoidal rule ( sftr ), for subdiffusion problems to resolve the initial singularity and nonlocality. the stability analysis and sharp error estimates in terms of the smoothness of the initial data and source term are presented. as a byproduct in numerical tests, we find amazingly that the crank - nicolson scheme ( $ \ theta = \ frac { 1 } { 2 } $ ) without initial corrections can restore the optimal convergence rate for the subdiffusion problem with smooth initial data and source terms. to deal with the nonlocality, fast algorithms are considered to reduce the computational cost from $ o ( n ^ 2 ) $ to $ o ( n \ log n ) $ and save the memory storage from $ o ( n ) $ to $ o ( \ log n ) $, where $ n $ denotes the number of time levels. numerical tests are performed to verify the sharpness of the theoretical results and confirm the efficiency and accuracy of the initial corrections and the fast algorithms.
arxiv:2010.12242
we analyse a class of four - dimensional heterotic ground states with n = 2 space - time supersymmetry. from the ten - dimensional perspective, such models can be viewed as compactifications on a six - dimensional manifold with su ( 2 ) holonomy, which is locally but not globally k3 x t ^ 2. the maximal n = 4 supersymmetry is spontaneously broken to n = 2. the masses of the two massive gravitinos depend on the ( t, u ) moduli of t ^ 2. we evaluate the one - loop threshold corrections of gauge and r ^ 2 couplings and we show that they fall in several universality classes, in contrast to what happens in usual k3 x t ^ 2 compactifications, where the n = 4 supersymmetry is explicitly broken to n = 2, and where a single universality class appears. these universality properties follow from the structure of the elliptic genus. the behaviour of the threshold corrections as functions of the moduli is analysed in detail : it is singular across several rational lines of the t ^ 2 moduli because of the appearance of extra massless states, and suffers only from logarithmic singularities at large radii. these features differ substantially from the ordinary k3 x t ^ 2 compactifications, thereby reflecting the existence of spontaneously - broken n = 4 supersymmetry. although our results are valid in the general framework defined above, we also point out several properties, specific to orbifold constructions, which might be of phenomenological relevance.
arxiv:hep-th/9807067
we present 18 introductory lectures on k - theory covering its basic three branches, namely topological, analytic ( k - homology ) and higher algebraic k - theory, 6 lectures on each branch. the skeleton of these notes was provided by the author ' s personal notes from a graduate summer school on k - theory organised by the london mathematical society ( lms ) back in 1995 in lancaster, uk.
arxiv:1008.1346
supernovae are nature ' s high - energy, high density laboratory experiments, reaching densities in excess of nuclear densities and temperatures above 10mev. astronomers have built up a suite of diagnostics to study these supernovae. if we can utilize these diagnostics, and tie them together with a theoretical understanding of supernova physics, we can use these cosmic explosions to study the nature of matter at these extreme densities and temperatures. capitalizing on these diagnostics will require understanding a wide range of additional physics. here we review the diagnostics and the physics needed to use them to learn about the supernova engine, and ultimate nuclear physics.
arxiv:1403.3619
the study of continuous phase transitions triggered by spontaneous symmetry breaking has brought revolutionary ideas to physics. recently, through the discovery of symmetry protected topological phases, it is realized that continuous quantum phase transition can also occur between states with the same symmetry but different topology. here we study a specific class of such phase transitions in 1 + 1 dimensions - - the phase transition between bosonic topological phases protected by $ z _ n \ times z _ n $. we find in all cases the critical point possesses two gap opening relevant operators : one leads to a landau - forbidden symmetry breaking phase transition and the other to the topological phase transition. we also obtained a constraint on the central charge for general phase transitions between symmetry protected bosonic topological phases in 1 + 1d.
arxiv:1701.00834
in this paper we introduce a functor, called the simplicial nerve of an a - infinity category, defined on the category of ( small ) a - infinity categories with values in simplicial sets. we prove that the simplicial nerve of any a - infinity category is an infinity category. this construction extends functorially the nerve construction for differential graded categories proposed by j. lurie in higher algebra. we prove that if a differential graded category is pretriangulated in the sense of a. i. bondal - m. kapranov, then its nerve is a stable infinity category in the sense of j. lurie.
arxiv:1312.2127
in this paper, we introduce the notions of tight closure of ideals on witt rings and quasi - tightly closedness of systems of parameters. using these notions, we obtain a characterization of quasi - $ f $ - rationality. furthermore, we study the relationship between the closure operator and the integral closure.
arxiv:2409.06459
a prominent characteristic of write operation in phase - change memory ( pcm ) is that its latency and energy are sensitive to the data to be written as well as the content that is overwritten. we observe that overwriting unknown memory content can incur significantly higher latency and energy compared to overwriting known all - zeros or all - ones content. this is because all - zeros or all - ones content is overwritten by programming the pcm cells only in one direction, i. e., using either set or reset operations, not both. in this paper, we propose data content aware pcm writes ( datacon ), a new mechanism that reduces the latency and energy of pcm writes by redirecting these requests to overwrite memory locations containing all - zeros or all - ones. datacon operates in three steps. first, it estimates how much a pcm write access would benefit from overwriting known content ( e. g., all - zeros, or all - ones ) by comprehensively considering the number of set bits in the data to be written, and the energy - latency trade - offs for set and reset operations in pcm. second, it translates the write address to a physical address within memory that contains the best type of content to overwrite, and records this translation in a table for future accesses. we exploit data access locality in workloads to minimize the address translation overhead. third, it re - initializes unused memory locations with known all - zeros or all - ones content in a manner that does not interfere with regular read and write accesses. datacon overwrites unknown content only when it is absolutely necessary to do so. we evaluate datacon with workloads from state - of - the - art machine learning applications, spec cpu2017, and nas parallel benchmarks. results demonstrate that datacon significantly improves system performance and memory system energy consumption compared to the best of performance - oriented state - of - the - art techniques.
arxiv:2005.04753
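the three - step mechanism described above can be summarized with a small, purely illustrative sketch : estimate the benefit of overwriting all - zeros versus all - ones content from the popcount of the data, redirect the write to a pre - initialized location of the better type, and remember the translation. the cost constants and table structure below are assumptions, not the paper ' s parameters.

```python
# illustrative model of content-aware write redirection (cost constants and structures are assumptions)
SET_COST, RESET_COST = 2.0, 1.0          # assumed per-bit programming costs (SET is slower in PCM)

free_zero_lines = [0x1000, 0x1008]       # pre-initialized all-zeros locations
free_one_lines = [0x2000, 0x2008]        # pre-initialized all-ones locations
translation_table = {}                   # logical address -> redirected physical address

def write_cost(data: int, width: int, background: str) -> float:
    ones = bin(data).count("1")
    zeros = width - ones
    if background == "zeros":            # only the 1-bits must be SET
        return ones * SET_COST
    if background == "ones":             # only the 0-bits must be RESET
        return zeros * RESET_COST
    return ones * SET_COST + zeros * RESET_COST  # unknown content: worst case, both directions

def redirect_write(addr: int, data: int, width: int = 64) -> int:
    cost_zero = write_cost(data, width, "zeros")
    cost_one = write_cost(data, width, "ones")
    if cost_zero <= cost_one and free_zero_lines:
        phys = free_zero_lines.pop()
    elif free_one_lines:
        phys = free_one_lines.pop()
    else:
        phys = addr                      # fall back to overwriting in place
    translation_table[addr] = phys
    return phys

print(hex(redirect_write(0xABCD, data=0x0F)))  # few set bits -> an all-zeros line is cheaper
```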
motivated by recent proposals of majorana qubits and the read - out of their quantum state we investigate a qubit setup formed by two parallel topological wires shunted by a superconducting bridge. the wires are further coupled to two quantum dots, which are also linked directly, thus creating an interference loop. the transport current through this system shows an interference pattern which distinguishes two basis states of the qubit in a qnd measurement. we analyze various properties of the interference current and the read - out process, including the resulting dephasing and relaxation. we also analyze the effects of varying control parameters such as gate voltages on the current. the characteristic dependencies could serve as a signature of majorana bound states.
arxiv:1901.08312
the conductance and the fano factor in a graphene sheet in the ballistic regime are calculated. the electrostatic potential in the sheet is modeled by a trapezoid barrier, which allows us to use the exact solution of the dirac equation in a uniform electric field in the slope areas ( the two lateral sides of the trapezoid ). special attention is devoted to the asymmetry with respect to the sign of the gate voltage, which is connected with the difference between klein tunneling and over - barrier reflection. the comparison of the developed theory with experiment supports the conclusion that klein tunneling was revealed experimentally.
arxiv:0902.3622
endometrial cancer is the fourth most common cancer in females in the united states, with a lifetime risk of approximately 2. 8 % in women. precise histologic evaluation and molecular classification of endometrial cancer are important for effective patient management and determining the best treatment modalities. this study introduces endonet, which uses convolutional neural networks for extracting histologic features and a vision transformer for aggregating these features and classifying slides based on their visual characteristics into high - and low - grade. the model was trained on 929 digitized hematoxylin and eosin - stained whole - slide images of endometrial cancer from hysterectomy cases at dartmouth - health. it classifies these slides into low - grade ( endometroid grades 1 and 2 ) and high - grade ( endometroid carcinoma figo grade 3, uterine serous carcinoma, carcinosarcoma ) categories. endonet was evaluated on an internal test set of 110 patients and an external test set of 100 patients from the public tcga database. the model achieved a weighted average f1 - score of 0. 91 ( 95 % ci : 0. 86 - 0. 95 ) and an auc of 0. 95 ( 95 % ci : 0. 89 - 0. 99 ) on the internal test, and 0. 86 ( 95 % ci : 0. 80 - 0. 94 ) for f1 - score and 0. 86 ( 95 % ci : 0. 75 - 0. 93 ) for auc on the external test. pending further validation, endonet has the potential to support pathologists, without the need for manual annotations, in classifying the grades of gynecologic pathology tumors.
arxiv:2312.08479
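the cnn - feature - extraction plus transformer - aggregation design can be sketched at a high level as follows ; the tile count, feature dimension and pooling choice are assumptions made for illustration, not endonet ' s actual configuration.

```python
# high-level sketch of aggregating per-tile CNN features with a transformer encoder (dimensions are assumptions)
import torch
import torch.nn as nn

class SlideAggregator(nn.Module):
    def __init__(self, feat_dim=512, n_classes=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, tile_features):            # (batch, n_tiles, feat_dim) from a CNN backbone
        encoded = self.encoder(tile_features)    # tiles attend to each other
        slide_embedding = encoded.mean(dim=1)    # simple mean pooling over tiles (an assumption)
        return self.classifier(slide_embedding)  # low-grade vs high-grade logits

features = torch.randn(1, 300, 512)              # e.g. 300 tiles from one whole-slide image
logits = SlideAggregator()(features)
print(logits.shape)                              # torch.Size([1, 2])
```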
we investigated the nature of the hitherto unresolved elliptical infrared emission in the centre of the ~ 20000 au disc silhouette in m 17. we combined high - resolution jhksl ' m ' band imaging carried out with naos / conica at the vlt with [ fe ii ] narrow band imaging using sofi at the ntt. the analysis is supported by spitzer / glimpse archival data and by already published sinfoni / vlt integral field spectroscopy data. for the first time, we resolve the elongated central infrared emission into a point - source and a jet - like feature that extends to the northeast in the opposite direction of the recently discovered collimated h2 jet. they are both orientated almost perpendicular to the disc plane. in addition, our images reveal a curved southwestern emission nebula whose morphology resembles that of the previously detected northeastern one. both nebulae are located at a distance of 1500 au from the disc centre. we describe the infrared point - source in terms of a protostar that is embedded in circumstellar material producing a visual extinction of 60 < = av < = 82. the observed ks band magnitude is equivalent to a stellar mass range of 2. 8 msun < = mstar < = 8 msun adopting conversions for a main - sequence star. altogether, we suggest that the large m 17 accretion disc is forming an intermediate to high - mass protostar. part of the accreted material is expelled through a symmetric bipolar jet / outflow.
arxiv:0801.1578
cross - lingual word embeddings encode the meaning of words from different languages into a shared low - dimensional space. an important requirement for many downstream tasks is that word similarity should be independent of language - i. e., word vectors within one language should not be more similar to each other than to words in another language. we measure this characteristic using modularity, a network measure that quantifies the strength of clusters in a graph. modularity has a moderate to strong correlation with three downstream tasks, even though modularity is based only on the structure of embeddings and does not require any external resources. we show through experiments that modularity can serve as an intrinsic validation metric to improve unsupervised cross - lingual word embeddings, particularly on distant language pairs in low - resource settings.
arxiv:1906.01926
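as a rough illustration of the metric, one can build a nearest - neighbour graph over the joint embedding space and compute the modularity of the partition given by language labels ; high modularity means vectors cluster by language, which is undesirable. the embeddings, neighbour count and library calls below are an assumed sketch, not the paper ' s exact procedure.

```python
# modularity of a kNN graph over cross-lingual embeddings, partitioned by language (illustrative sketch)
import numpy as np
import networkx as nx
from networkx.algorithms.community import modularity
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
emb_l1 = rng.standard_normal((100, 50))          # placeholder vectors for language 1
emb_l2 = rng.standard_normal((100, 50)) + 0.5    # placeholder vectors for language 2 (slightly shifted)
X = np.vstack([emb_l1, emb_l2])
languages = [0] * 100 + [1] * 100

k = 5
nbrs = NearestNeighbors(n_neighbors=k + 1).fit(X)  # +1 because each point is its own nearest neighbour
_, idx = nbrs.kneighbors(X)

G = nx.Graph()
G.add_nodes_from(range(len(X)))
for i, neighbours in enumerate(idx):
    for j in neighbours[1:]:
        G.add_edge(i, int(j))

communities = [{i for i, lang in enumerate(languages) if lang == 0},
               {i for i, lang in enumerate(languages) if lang == 1}]
print("modularity by language:", modularity(G, communities))  # closer to 0 is better for cross-lingual use
```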
in this paper, we propose a novel self - distillation method for fake speech detection ( fsd ), which can significantly improve the performance of fsd without increasing the model complexity. for fsd, some fine - grained information is very important, such as spectrogram defects, mute segments, and so on, which is often perceived by shallow networks. however, shallow networks contain much noise and cannot capture this information very well. to address this problem, we propose using the deepest network to instruct the shallow networks and thereby enhance them. specifically, the network for fsd is divided into several segments, with the deepest network used as the teacher model, and all shallow networks become multiple student models by adding classifiers. meanwhile, the distillation path between the deepest network features and the shallow network features is used to reduce the feature difference. a series of experimental results on the asvspoof 2019 la and pa datasets show the effectiveness of the proposed method, with significant improvements compared to the baseline.
arxiv:2303.01211
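the self - distillation objective described above can be sketched as a combination of per - student classification losses and a divergence that pulls each shallow classifier ' s predictions toward the deepest ( teacher ) output ; the loss weights and the use of a kl divergence are assumptions made for illustration.

```python
# sketch of a self-distillation loss: deepest output teaches the shallow-segment classifiers (weights are assumptions)
import torch
import torch.nn.functional as F

def self_distillation_loss(student_logits_list, teacher_logits, labels, alpha=0.5, temperature=2.0):
    """student_logits_list: outputs of the shallow-segment classifiers; teacher_logits: deepest network output."""
    teacher_probs = F.softmax(teacher_logits.detach() / temperature, dim=-1)
    total = F.cross_entropy(teacher_logits, labels)           # the teacher still learns from the labels
    for student_logits in student_logits_list:
        ce = F.cross_entropy(student_logits, labels)
        kl = F.kl_div(F.log_softmax(student_logits / temperature, dim=-1),
                      teacher_probs, reduction="batchmean") * temperature ** 2
        total = total + alpha * ce + (1 - alpha) * kl
    return total

# toy usage with random tensors
labels = torch.randint(0, 2, (16,))
teacher = torch.randn(16, 2, requires_grad=True)
students = [torch.randn(16, 2, requires_grad=True) for _ in range(3)]
loss = self_distillation_loss(students, teacher, labels)
loss.backward()
```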
we present a new convolutional neural network - based time - series model. typical convolutional neural network ( cnn ) architectures rely on the use of max - pooling operators in between layers, which leads to reduced resolution at the top layers. instead, in this work we consider a fully convolutional network ( fcn ) architecture that uses causal filtering operations, and allows for the rate of the output signal to be the same as that of the input signal. we furthermore propose an undecimated version of the fcn, which we refer to as the undecimated fully convolutional neural network ( ufcnn ), and is motivated by the undecimated wavelet transform. our experimental results verify that using the undecimated version of the fcn is necessary in order to allow for effective time - series modeling. the ufcnn has several advantages compared to other time - series models such as the recurrent neural network ( rnn ) and long short - term memory ( lstm ), since it does not suffer from either the vanishing or exploding gradients problems, and is therefore easier to train. convolution operations can also be implemented more efficiently compared to the recursion that is involved in rnn - based models. we evaluate the performance of our model in a synthetic target tracking task using bearing only measurements generated from a state - space model, a probabilistic modeling of polyphonic music sequences problem, and a high frequency trading task using a time - series of ask / bid quotes and their corresponding volumes. our experimental results using synthetic and real datasets verify the significant advantages of the ufcnn compared to the rnn and lstm baselines.
arxiv:1508.00317
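the causal, undecimated filtering described above can be sketched with dilated 1d convolutions that pad only on the left, so the output keeps the input ' s rate and never looks into the future ; the channel counts and dilation schedule are illustrative assumptions, not the paper ' s configuration.

```python
# causal, undecimated (a trous) convolution stack: output length equals input length, no pooling
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation          # left padding only -> causal filtering
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

    def forward(self, x):                                # x: (batch, channels, time)
        return self.conv(F.pad(x, (self.pad, 0)))

class UFCNNSketch(nn.Module):
    """stack of causal convolutions with growing dilation, in the spirit of an undecimated wavelet filter bank."""
    def __init__(self, channels=32, levels=4):
        super().__init__()
        self.layers = nn.ModuleList(
            [CausalConv1d(1 if i == 0 else channels, channels, kernel_size=3, dilation=2 ** i)
             for i in range(levels)])
        self.out = nn.Conv1d(channels, 1, kernel_size=1)

    def forward(self, x):
        for layer in self.layers:
            x = torch.relu(layer(x))
        return self.out(x)

signal = torch.randn(2, 1, 500)                          # (batch, channel, time)
print(UFCNNSketch()(signal).shape)                       # torch.Size([2, 1, 500]) - same rate as the input
```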
graininess noise is a common artifact in inkjet printing. while current inkjet printing technologies attempt to control graininess in single color images, the results are often less than optimal for multi - color images. this is due to fluidic interactions between inks of different colors. this paper will describe a color decomposition methodology that can be used to study ink flow patterns in multi - color inkjet printed images at a microscopic scale. this technique is used to decompose multi - color images into several independent color components. the ink patterns in these components are analyzed to relate them to visually perceptible graininess noise.
arxiv:1912.08780
embedded applications are widely used in portable devices such as wireless phones, personal digital assistants, laptops, etc. high throughput and real time requirements are especially important in such data - intensive tasks. therefore, architectures that provide the required performance are the most desirable. on the other hand, processor performance is strongly related to the average memory access delay, the number of processor registers, and the size of the instruction window and other superscalar parameters. therefore, cache, register file and superscalar parameters are the major architectural concerns in designing a superscalar architecture for embedded processors. although increasing cache and register file size leads to performance improvements in high performance embedded processors, the increased area, power consumption and memory delay are the overheads of these techniques. this paper explores the effect of cache, register file and superscalar parameters on processor performance in order to specify the optimum size of these parameters for embedded applications. experimental results show that although increasing the size of these parameters is one approach to improving performance in embedded processors, increasing some parameters beyond a threshold value saturates the performance improvement ; for the cache size in particular, increments beyond this threshold actually decrease performance.
arxiv:1204.2809
here, using some methods of combinatorial set theory, particularly the ones related to the construction of independent families of sets and a modified version of the notion of small sets originally introduced by riecan, riecan and neubrunn, we give an abstract and generalized formulation of a remarkable theorem of kakutani and oxtoby relating to the nonseparable extension of lebesgue measure in spaces with transformation groups.
arxiv:1908.08277