We derive a family of singular iterated maps (closely related to Poincaré maps) that describe chaotic interactions between colliding solitary waves. The chaotic behavior of such solitary wave collisions depends on the transfer of energy to a secondary mode of oscillation, often an internal mode of the pulse. Unlike previous analyses, this map allows one to understand the interactions in the case when this mode is excited prior to the first collision. The map is derived using Melnikov integrals and matched asymptotic expansions; it generalizes a "multi-pulse" Melnikov integral and allows one to find not only multipulse heteroclinic orbits but also exotic periodic orbits. The family of maps derived exhibits singular behavior, including regions of infinite winding. This problem is shown to be a singular version of the conservative Ikeda map from laser physics, and connections are made with problems from celestial mechanics and fluid mechanics.
arxiv:0710.3209
Self-organization of a long-lived structure is one of the remarkable characteristics of macroscopic systems governed by long-range interactions. In a homogeneous magnetic field, a non-neutral plasma creates a "thermal equilibrium," which is a Boltzmann distribution in a rigidly rotating frame. Here, we study how a non-neutral plasma self-organizes in an inhomogeneous magnetic field; as a typical system we consider a dipole magnetic field. In this generalized setting, the plasma exhibits the fundamental mechanism that determines the relaxed state. The scale hierarchy of adiabatic invariants is the determinant: the Boltzmann distribution under the topological constraint imposed by the robust adiabatic invariants (hence, the homogeneous distribution with respect to the fragile invariant) is the relevant relaxed state, which turns out to be a rigidly rotating clump of particles (just as in a homogeneous magnetic field), while the density is no longer homogeneous.
arxiv:1502.07826
We observe Wannier-Stark localization in curved photonic lattices, realized using arrays of evanescently coupled optical waveguides. By correctly tuning the strength of inter-site coupling in the lattice, we observe that Wannier-Stark states become increasingly localized, and eventually fully localized to one site, as the curvature of the lattice is increased. We then demonstrate that tunneling can be successfully restored in the lattice by applying a sinusoidal modulation to the lattice position, an effect that is a direct analogue of photon-assisted tunneling. This precise tuning of the tunneling matrix elements, through laser-fabricated on-site modulations, opens a novel route for the creation of gauge fields in photonic lattices.
arxiv:1505.05217
Transformer-based pre-trained language models like BERT and its variants have recently achieved promising performance in various natural language processing (NLP) tasks. However, the conventional paradigm constructs the backbone by purely stacking manually designed global self-attention layers, introducing an inductive bias and thus leading to sub-optimal architectures. In this work, we make the first attempt to automatically discover a novel pre-trained language model (PLM) backbone from scratch, on a flexible search space containing the most fundamental operations. Specifically, we propose a well-designed search space which (i) contains primitive math operations at the intra-layer level to explore novel attention structures, and (ii) leverages convolution blocks as a supplement to attention at the inter-layer level to better learn local dependency. To enhance the efficiency of finding promising architectures, we propose an operation-priority neural architecture search (OP-NAS) algorithm, which optimizes both the search algorithm and the evaluation of candidate models. Specifically, we propose an operation-priority (OP) evolution strategy to facilitate model search by balancing exploration and exploitation. Furthermore, we design a bi-branch weight-sharing (BIWS) training strategy for fast model evaluation. Extensive experiments show that the searched architecture (named AutoBERT-Zero) significantly outperforms BERT and its variants of different model capacities in various downstream tasks, proving the architecture's transfer and scaling abilities. Remarkably, AutoBERT-Zero-base outperforms RoBERTa-base (which uses much more data) and BERT-large (which has a much larger model size) by 2.4 and 1.4 points, respectively, on the GLUE test set.
arxiv:2107.07445
Sensor fusion is critical to perception systems for task domains such as autonomous driving and robotics. Recently, transformers integrated with CNNs have demonstrated high performance in sensor fusion for various perception tasks. In this work, we introduce a method for fusing data from camera and LiDAR. By employing transformer modules at multiple resolutions, the proposed method effectively combines local and global contextual relationships. The performance of the proposed method is validated by extensive experiments on two adversarial benchmarks with lengthy routes and high-density traffic. The proposed method outperforms previous approaches on the most challenging benchmarks, achieving significantly higher driving and infraction scores. Compared with TransFuser, it achieves 8% and 19% improvements in driving score on the Longest6 and Town05 Long benchmarks, respectively.
arxiv:2308.10707
We present a general quantum instanton approach to calculating reaction rates for systems with two electronic states and arbitrary values of the electronic coupling. This new approach, which we call the non-adiabatic quantum instanton (NAQI) approximation, reduces to Wolynes theory in the golden rule limit and to a recently proposed projected quantum instanton (PQI) method in the adiabatic limit. As in both of these earlier theories, the NAQI approach is based on making a saddle point approximation to the time integral of a reactive flux autocorrelation function, although with a generalised definition of the projection operator onto the product states. We illustrate the accuracy of the approach by comparison with exact rates for one-dimensional scattering problems and discuss its applicability to more complex reactions.
arxiv:2005.13894
Sparsity is a basic property of real vectors that is exploited in a wide variety of applications. In this work, we describe property testing algorithms for sparsity that observe a low-dimensional projection of the input. We consider two settings. In the first setting, for a given design matrix $A \in \mathbb{R}^{d \times m}$, we test whether an input vector $y \in \mathbb{R}^d$ equals $Ax$ for some $k$-sparse unit vector $x$. Our algorithm projects the input onto $O(k \epsilon^{-2} \log m)$ dimensions, accepts if the property holds, rejects if $\|y - Ax\| > \epsilon$ for any $O(k/\epsilon^2)$-sparse vector $x$, and runs in time polynomial in $m$. Our algorithm is based on the approximate Carathéodory theorem. Previously known algorithms that solve the problem for arbitrary $A$ with qualitatively similar guarantees run in exponential time. In the second setting, the design matrix $A$ is unknown. Given input vectors $y_1, y_2, \ldots, y_p \in \mathbb{R}^d$ whose concatenation as columns forms $Y \in \mathbb{R}^{d \times p}$, the goal is to decide whether $Y = AX$ for matrices $A \in \mathbb{R}^{d \times m}$ and $X \in \mathbb{R}^{m \times p}$ such that each column of $X$ is $k$-sparse, or whether $Y$ is "far" from having such a decomposition. We give such a testing algorithm which projects the input vectors to $O(\log p / \epsilon^2)$ dimensions and assumes that the unknown $A$ satisfies the $k$-restricted isometry property. Our analysis gives a new robust characterization of Gaussian width in terms of sparsity.
arxiv:1608.01275
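As a rough illustration of the question the first setting asks (does $y$ equal $Ax$ for some $k$-sparse $x$?), here is a minimal sketch using orthogonal matching pursuit as a heuristic checker. This is not the paper's projection-based tester; the function name, tolerance, and random instance are all assumptions for demonstration only.

```python
import numpy as np

def omp_sparse_check(A, y, k, tol=1e-8):
    """Heuristic check whether y = A x for some k-sparse x,
    via orthogonal matching pursuit (NOT the paper's tester)."""
    d, m = A.shape
    residual = y.copy()
    support = []
    x = np.zeros(m)
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit on the current support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - A @ x
        if np.linalg.norm(residual) < tol:
            break
    return np.linalg.norm(residual) < tol, x

# synthetic instance: a 3-sparse x with a random Gaussian design matrix
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))
x_true = np.zeros(200)
x_true[[3, 77, 150]] = [1.0, -2.0, 0.5]
y = A @ x_true
ok, x_hat = omp_sparse_check(A, y, k=3)
```

Unlike the paper's tester, this heuristic reads the full $d$-dimensional input and offers no formal rejection guarantee; it only conveys what "accept" means in this setting.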
Coherent states are introduced and their properties are discussed for all simple quantum compact groups. The multiplicative form of the canonical element for the quantum double is used to introduce holomorphic coordinates on a general quantum dressing orbit and to interpret the coherent state as a holomorphic function on this orbit with values in the carrier Hilbert space of an irreducible representation of the corresponding quantized enveloping algebra. Using the Gauss decomposition, the commutation relations for the holomorphic coordinates on the dressing orbit are derived explicitly and given in a compact R-matrix formulation (generalizing in this way the $q$-deformed Grassmann and flag manifolds). The antiholomorphic realization of the irreducible representations of a compact quantum group (the analogue of the Borel-Weil construction) is described using the concept of coherent state. A relation between representation theory and non-commutative differential geometry is suggested.
arxiv:hep-th/9403114
In this article we prove highly improved and flexible Strichartz-type estimates that allow us to generalize the asymptotics we obtained for a stratified and rotating incompressible Navier-Stokes system: for large (and less regular) initial data, we obtain global well-posedness, asymptotics (as the Rossby number $\epsilon$ goes to zero), and convergence rates as a power of the small parameter $\epsilon$. Our approach is led by the special structure of the limit system: the 3D quasi-geostrophic system.
arxiv:1902.10609
A radio frequency (RF) transition is used to convert a pure $F=2, m_F=2$ $^{87}$Rb Bose-Einstein condensate confined in a TOP trap to a mixture of $F=2, m_F=2$ and $F=2, m_F=1$ states. We show that the nature of this coupling process is strongly influenced by the presence of the time-varying field of the TOP trap, and complicated by the presence of multiple Zeeman substates. In particular, the effective Rabi frequency associated with the applied RF field is not constant across the spatial extent of the cloud, leading to a complex geometry for atom-laser output coupling and an 'averaging out' of Rabi oscillations. Further, a time-varying detuning can give rise to complex spatial structures.
arxiv:cond-mat/9912045
The internal/external synchrotron shock scenario has proved very successful in interpreting the key observations of gamma-ray bursts. There remain, however, some big uncertainties. The hottest issue concerns the nature of the progenitor, but there are also other problems concerning the global energetics, coupled with the issue of the degree of collimation of the fireball. To be efficient, internal shocks within the relativistic wind must occur with large contrasts between their bulk Lorentz factors, and the role of the Compton drag process in limiting the velocity differences is not yet clear. The fireball itself can be "hot" or "cold" according to what accelerates it to ultrarelativistic bulk speeds. In this respect, the recent observations of a black-body shape in the early phases of a few bursts shed new light on this issue. The most popular radiation process invoked to explain the prompt emission is synchrotron, but it faces severe problems when the expected spectrum is compared with observations. Alternatives are called for. Emission features in the X-ray afterglow and absorption features in the prompt spectra are a powerful diagnostic tool. Besides shedding light on the nature of the progenitor, they can constrain the total energy release in a beaming-independent way.
arxiv:astro-ph/0301256
In this paper, we propose a variational multiphase image segmentation model based on fuzzy membership functions and L1-norm fidelity. We then apply the alternating direction method of multipliers (ADMM) to solve an equivalent problem. All the subproblems can be solved efficiently; in particular, we propose a fast method to calculate the fuzzy median. Experimental results and comparisons show that the L1-norm based method is more robust to outliers such as impulse noise and keeps better contrast than its L2-norm counterpart. Theoretically, we prove the existence of a minimizer and analyze the convergence of the algorithm.
arxiv:1504.02206
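L1-fidelity subproblems of this kind typically reduce to (weighted) median computations, which is what makes them cheap to solve. A minimal sketch of a weighted median, the minimizer of $\sum_i w_i\,|t - v_i|$ over $t$ (the paper's fuzzy median step is of this general type, though its exact formulation may differ):

```python
import numpy as np

def weighted_median(values, weights):
    """Weighted median: a minimizer of sum_i w_i * |t - v_i| over t."""
    order = np.argsort(values)
    v = np.asarray(values, dtype=float)[order]
    w = np.asarray(weights, dtype=float)[order]
    cum = np.cumsum(w)
    # first sorted value where cumulative weight reaches half the total
    idx = int(np.searchsorted(cum, 0.5 * cum[-1]))
    return v[idx]

# a heavy weight on the last value pulls the median there
m = weighted_median([1.0, 2.0, 3.0, 4.0], [1.0, 1.0, 1.0, 10.0])
```

With unit weights this reduces to the ordinary median, which is why L1 models are robust to impulse noise: a few outliers barely move the median.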
We present an exact formula for the dynamics of $n$ hard spheres of radius $r > 0$ on an infinite line which evolve under the assumption that the total linear momentum and kinetic energy of the system are conserved for all times. This model is commonly known as the one-dimensional Tonks gas or the hard rod gas model. Our exact formula is expressed as a sum over the Weyl group associated to the root system $A_{n-1}$ and is valid for all initial data in a full-measure subset of the tangent bundle of the hard sphere table. As an application of our explicit formula, we produce a simple proof that the associated billiard flow admits the Liouville measure on the tangent bundle of the hard sphere table as an invariant measure.
arxiv:2311.00446
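The classical fact underlying such formulas is that an elastic collision between equal masses in 1D simply exchanges velocities, so the set of trajectories of a point-particle gas coincides with free flight with particle labels reassigned by order. A minimal sketch of this observation for point particles, i.e. the radius $r \to 0$ limit (the paper treats positive radius via the Weyl-group sum):

```python
import numpy as np

def hard_point_gas(x0, v0, t):
    """Positions of equal-mass elastic point particles at time t.
    Collisions exchange velocities, so the ordered particle positions
    equal the sorted free-flight positions (labels follow the order)."""
    free = np.asarray(x0) + t * np.asarray(v0)
    return np.sort(free)

# two outer particles head toward each other through a stationary one
x0 = np.array([0.0, 1.0, 2.0])
v0 = np.array([1.0, 0.0, -1.0])
xt = hard_point_gas(x0, v0, t=2.0)
```

Because the positions are just a permutation of free flight, conservation of momentum and kinetic energy is manifest, which mirrors the invariance of the Liouville measure mentioned in the abstract.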
We consider differential delay equations of the form $\partial_t x(t) = X_t(x(t-\tau))$ in $\mathbb{R}^n$, where $(X_t)_{t \in S^1}$ is a time-dependent family of smooth vector fields on $\mathbb{R}^n$ and $\tau$ is a delay parameter. If there is a (suitably non-degenerate) periodic solution $x_0$ of this equation for $\tau = 0$, that is, without delay, there are good reasons to expect the existence of a family of periodic solutions for all sufficiently small delays, smoothly parametrized by the delay. However, it seems difficult to prove this using the classical implicit function theorem, since the equation above is not smooth in the delay parameter. In this paper, we show how to use the M-polyfold implicit function theorem of Hofer-Wysocki-Zehnder [HWZ09, HWZ17] to overcome this problem in a natural setup.
arxiv:2011.14828
This paper proposes a novel electric vehicle (EV) classification scheme for a photovoltaic (PV) powered EV charging station (CS) that reduces the effect of intermittency of the electricity supply as well as the cost of energy trading for the CS. Since not all EV drivers would like to be environmentally friendly, all vehicles in the CS are divided into three categories according to their charging behavior: 1) premium, 2) conservative, and 3) green. Premium and conservative EVs are considered to be interested only in charging their batteries, with a noticeably higher rate of charging for premium EVs. Green vehicles are more environmentally friendly, and thus assist the CS in reducing its cost of energy trading by allowing the CS to use their batteries as distributed storage. A different charging scheme is proposed for each type of EV, which is adopted by the CS to encourage more EVs to be green. A basic mixed integer programming (MIP) technique is used to facilitate the proposed classification scheme. It is shown that the uncertainty in PV generation can be effectively compensated, along with minimization of the total cost of energy trading to the CS, by consolidating more green EVs. Real solar and pricing data are used for performance analysis of the system. It is demonstrated that the total cost to the CS reduces considerably as the percentage of green vehicles increases, and also that the contributions of green EVs in winter are greater than those in summer.
arxiv:1507.07994
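The cost-minimization idea behind such scheduling can be illustrated with a toy linear program (the paper uses a full MIP with EV classes; here the PV, demand, and price numbers are invented, and only grid purchases are scheduled so that PV plus grid covers demand in every slot):

```python
import numpy as np
from scipy.optimize import linprog

# hypothetical hourly data for 4 time slots
pv     = np.array([0.0, 3.0, 5.0, 1.0])      # PV generation (kWh)
demand = np.array([2.0, 2.0, 2.0, 4.0])      # charging demand (kWh)
price  = np.array([0.30, 0.20, 0.20, 0.40])  # grid price ($/kWh)

# minimize sum_t price_t * g_t  subject to  g_t >= demand_t - pv_t,  g_t >= 0
res = linprog(c=price,
              A_ub=-np.eye(4),       # -g_t <= -(demand_t - pv_t)
              b_ub=-(demand - pv),
              bounds=[(0.0, None)] * 4)
grid = res.x                         # optimal grid purchase per slot
```

The optimum simply buys the shortfall, `g_t = max(0, demand_t - pv_t)`; adding green-EV storage to this model is what lets the CS shift purchases away from expensive slots.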
There is a mathematical analogy between the propagation of fields in a general relativistic space-time and long (shallow water) surface waves on moving water. Hawking argued that black holes emit thermal radiation via a quantum spontaneous emission. Similar arguments predict the same effect near wave horizons in fluid flow. By placing a streamlined obstacle into an open channel flow we create a region of high velocity over the obstacle that can include wave horizons. Long waves propagating upstream towards this region are blocked and converted into short (deep water) waves. This is the analogue of the stimulated emission by a white hole (the time inverse of a black hole), and our measurements of the amplitudes of the converted waves demonstrate the thermal nature of the conversion process for this system. Given the close relationship between stimulated and spontaneous emission, our findings attest to the generality of the Hawking process.
arxiv:1008.1911
One consequence of the cosmic censorship conjecture is that any topological structure will ultimately collapse to within the horizons of a set of black holes, and as a result, an external classical observer will be unable to probe it. However, a single two-level quantum system (UDW detector) that remains outside of the horizon has been shown to distinguish between a black hole and its associated geon counterpart via its different response rates. Here we extend this investigation of the quantum vacuum outside of an $\mathbb{RP}^2$ geon by considering the entanglement structure of the vacuum state of a quantum scalar field in this spacetime, and how this differs from its BTZ black hole counterpart. Employing the entanglement harvesting protocol, where field entanglement is swapped to a pair of UDW detectors, we find that the classically hidden topology of the geon can make an appreciable difference in the amount of entanglement harvested in the two spacetimes for sufficiently small mass. In this regime, we find that detectors with a small energy gap harvest more entanglement in the BTZ spacetime; however, as the energy gap increases, the detectors harvest more entanglement in the geon spacetime. The energy gap at the crossover depends on the black hole mass, occurring at lower values for lower masses. This also impacts the size of the entanglement shadow, the region near the horizon where the detectors cannot harvest entanglement. Small-gap detectors experience a larger entanglement shadow in the geon spacetime, whereas for large-gap detectors the shadow is larger in the BTZ spacetime.
arxiv:2201.11130
Deciding whether a graph can be embedded in a grid using only unit-length edges is NP-complete, even when restricted to binary trees. However, it is not difficult to devise a number of graph classes for which the problem is polynomial, even trivial. A natural step, outstanding thus far, was to provide a broad classification of graphs that make for polynomial or NP-complete instances. We provide such a classification based on the set of allowed vertex degrees in the input graphs, yielding a full dichotomy on the complexity of the problem. As byproducts, the previous NP-completeness result for binary trees is strengthened to strictly binary trees, and the three-dimensional version of the problem is for the first time proven to be NP-complete. Our results were made possible by introducing the concepts of consistent orientations and robust gadgets, and by showing how the former allows NP-completeness proofs by local replacement even in the absence of the latter.
arxiv:1006.3541
The launch of the Fermi satellite in 2008, with its Large Area Telescope (LAT) on board, has opened a new era for the study of gamma-ray sources at GeV ($10^9$ eV) energies. Similarly, the commissioning of the third generation of imaging atmospheric Cherenkov telescopes (IACTs), namely H.E.S.S., MAGIC, and VERITAS, in the mid-2000s has firmly established the field of TeV ($10^{12}$ eV) gamma-ray astronomy. Together, these instruments have revolutionised our understanding of the high-energy gamma-ray sky, and they continue to provide access to it over more than six decades in energy. In recent years, the ground-level particle detector arrays HAWC, Tibet, and LHAASO have opened a new window to gamma rays of the highest energies, beyond 100 TeV. Soon, next-generation facilities such as CTA and SWGO will provide even better sensitivity, thus promising a bright future for the field. In this chapter, we provide a brief overview of methods commonly employed for the analysis of gamma-ray data, focusing on those used for Fermi-LAT and IACT observations. We describe the standard data formats, explain event reconstruction and selection algorithms, and cover in detail high-level analysis approaches for imaging and extraction of spectra, including aperture photometry as well as advanced likelihood techniques.
arxiv:2309.02966
Nickel titanium (NiTi) is a prototypical shape-memory alloy used in a range of biomedical and engineering devices, but direct molecular dynamics simulations of the martensitic B19′ → B2 phase transition driving its shape-memory behavior are rare and have relied on classical force fields with limited accuracy. Here, we train four machine-learned force fields for equiatomic NiTi based on the LDA, PBE, PBEsol, and SCAN DFT functionals. The models are trained on the fly during NPT molecular dynamics, with DFT calculations and model updates performed automatically whenever the uncertainty of a local energy prediction exceeds a chosen threshold. The models achieve accuracies of 1-2 meV/atom during training and are shown to closely track DFT predictions of B2 and B19′ elastic constants and phonon frequencies. Surprisingly, in large-scale molecular dynamics simulations, only the SCAN model predicts a reversible B19′ → B2 phase transition, with the LDA, PBE, and PBEsol models predicting a reversible transition to a previously uncharacterized low-volume phase, which we hypothesize to be a new stable high-pressure phase. We examine the structure of the new phase and estimate its stability on the temperature-pressure phase diagram. This work establishes an automated active learning protocol for studying displacive transformations, reveals important differences between DFT functionals that can only be detected in large-scale simulations, provides an accurate force field for NiTi, and identifies a new phase.
arxiv:2401.05568
Ionizing radiation is known to have a destructive impact on biology by causing damage to DNA and cells and producing reactive oxygen species (ROS), among other things. While direct exposure to a high radiation dose is indeed not favorable for biological activity, ionizing radiation can, and in some cases is known to, produce a number of biologically useful products. One such mechanism is the production of biologically useful products via charged particle-induced radiolysis. Energetic charged particles interact with the surfaces of planetary objects such as Mars, Europa, and Enceladus without much shielding from their rarefied atmospheres. Depending on the energy of said particles, they can penetrate several meters below the surface and initiate a number of chemical reactions along the way. Some of the byproducts are impossible to produce with lower-energy radiation (such as sunlight), opening up new avenues for life to utilize them. For each of these cases, we calculate the energy deposition rate as a function of depth, and estimate the energy availability for potential metabolic activity. We discuss various mechanisms through which life could support itself utilizing the byproducts of these ionizing radiation-induced reactions, such as chemoautotrophs using solvated electrons, extracellular electron transfer, and indirect electrotrophy, to facilitate processes like carbon fixation, nitrogen fixation, and sulfate reduction, and possibly ATP production.
arxiv:2207.14675
Attractive ultra-cold fermions trapped in a one-dimensional, periodically shaken optical lattice are considered. For an appropriate resonant shaking, the system realizes the paradigmatic dimer physics described by the Rice-Mele model. An important feature of our system is the possible presence of controlled defects, which result in the creation of topologically protected localized modes carrying fractional particle number. Their possible experimental signatures are discussed.
arxiv:1407.6533
Nonlinear interferometers allow spectroscopy in the mid-infrared range by detecting correlated visible light, for which non-cooled detectors with higher specific detectivity and lower dark count rates are available. We present a new approach for the registration of spectral information, which combines a nonlinear interferometer using non-degenerate spontaneous parametric down-conversion (SPDC) with a Fourier-transform spectroscopy concept. In order to increase the spectral coverage, we use broadband non-collinear SPDC in periodically poled LiNbO$_3$. Without the need for spectrally selective detection, continuous spectra with a spectral bandwidth of more than 100$\,$cm$^{-1}$ are achieved. We demonstrate transmission spectra of a polypropylene sample measured with ~6$\,$cm$^{-1}$ resolution in the spectral range between 3.2$\,\mu$m and 3.9$\,\mu$m.
arxiv:1909.06864
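The Fourier-transform step at the heart of such a scheme is the standard one: the spectrum is recovered from the interferogram recorded versus optical path difference via an FFT. A schematic sketch with synthetic data (a two-line spectrum; all numbers invented):

```python
import numpy as np

# synthetic interferogram: two spectral lines at wavenumbers k1, k2 (cm^-1)
k1, k2 = 2600.0, 3100.0                 # hypothetical line positions
delay = np.linspace(0.0, 0.02, 4096)    # optical path difference (cm)
interferogram = (np.cos(2 * np.pi * k1 * delay)
                 + 0.5 * np.cos(2 * np.pi * k2 * delay))

# the FFT maps the delay axis to a wavenumber axis
spectrum = np.abs(np.fft.rfft(interferogram))
wavenumbers = np.fft.rfftfreq(delay.size, d=delay[1] - delay[0])

peak = wavenumbers[np.argmax(spectrum)]  # strongest recovered line
```

Note how the maximum delay sets the resolution (here roughly $1/0.02\,\mathrm{cm} = 50\,\mathrm{cm}^{-1}$), which is why the paper quotes its resolution in cm$^{-1}$.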
The Weather4cast competition (hosted at NeurIPS 2022) required competitors to predict super-resolution rain movies in various regions of Europe given low-resolution satellite contexts covering wider regions. In this paper, we show that a general baseline 3D U-Net can be significantly improved with region-conditioned layers as well as orthogonality regularizations on 1x1x1 convolutional layers. Additionally, we facilitate generalization with a bag of training strategies: mixup data augmentation, self-distillation, and feature-wise linear modulation (FiLM). The presented modifications outperform the baseline algorithm (3D U-Net) by up to 19.54% with less than 1% additional parameters, which won 4th place on the core test leaderboard.
arxiv:2212.02059
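Feature-wise linear modulation (FiLM) conditions a network by scaling and shifting each feature channel with parameters predicted from auxiliary input, and the region-conditioned layers mentioned above follow a similar idea. A minimal numpy sketch (shapes and names are illustrative, not the paper's implementation):

```python
import numpy as np

def film(features, gamma, beta):
    """FiLM: per-channel affine modulation, out[c] = gamma[c]*x[c] + beta[c].
    features: (channels, depth, height, width); gamma, beta: (channels,)."""
    return gamma[:, None, None, None] * features + beta[:, None, None, None]

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 16, 16))  # a feature map from a 3D U-Net block
# in practice gamma/beta come from a small conditioning network (e.g. region id)
gamma, beta = np.ones(8), np.zeros(8)
assert np.allclose(film(x, gamma, beta), x)  # identity modulation
```

The appeal is its tiny parameter cost: two scalars per channel per conditioned layer, consistent with the paper's "less than 1% additional parameters" claim.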
This trend has been reversed somewhat (dubbed the reverse brain drain) as hundreds of IIT graduates, who have pursued further studies in the US, started returning to India in the 1990s. The extent of intellectual loss receded substantially over the 1990s and 2000s, with the percentage of students going abroad dropping from as high as 70% at one time to around 30% in 2005. This is largely attributed to the liberalization of the Indian economy and the opening of previously closed markets. Government initiatives are encouraging IIT students into entrepreneurship programs and are increasing foreign investment. Emerging scientific and manufacturing industries, and the outsourcing of technical jobs from North America and Western Europe, have created opportunities for aspiring graduates in India. Additionally, IIT alumni are giving back generously to their parent institutions. === Entrance competition === The highly competitive examination in the form of the JEE-Advanced has led to the establishment of a large number of coaching institutes throughout the country that provide intensive and specific preparation for the JEE-Advanced for substantial fees. It is argued that this favours students from specific regions and richer backgrounds. Some coaching institutes say that they have individually coached nearly 800 successful candidates year after year. According to some estimates, nearly 95% of all students who clear the JEE-Advanced had joined coaching classes. Indeed, this was the case regarding preparation for IIT entrance exams even decades ago. In a January 2010 lecture at the Indian Institute of Science, the 2009 Nobel laureate in Chemistry, Venkatraman Ramakrishnan, revealed that he failed to get a seat at any of the Indian engineering and medical colleges. He also said that his parents, being old-fashioned, did not believe in coaching classes to prepare for the IIT entrance exam and considered them to be "nonsense".
In a documentary aired by CBS, Vinod Khosla, co-founder of Sun Microsystems, states, "The IITs probably are the hardest schools in the world to get into, to the best of my knowledge". The documentary further concludes, "Put Harvard, MIT, and Princeton together, and you begin to get an idea of the status of IIT in India", to depict the competition for, as well as the demand for, the elite institutes. Not all children are of a similar aptitude level, and they may be skilled in different paradigms and fields. This has led to criticism of the way the examinations are conducted and of the pressure placed on students in the Indian community. The IIT-JEE (now JEE-Advanced) format was restructured in 2006.
https://en.wikipedia.org/wiki/Indian_Institutes_of_Technology
We investigate disorder-driven topological phase transitions in quantized electric quadrupole insulators. We show that chiral symmetry can protect the quantization of the quadrupole moment $Q_{xy}$, such that the higher-order topological invariant is well-defined even when disorder has broken all crystalline symmetries. Moreover, a nonvanishing $Q_{xy}$ and consequent corner modes can be induced from a trivial insulating phase by disorder that preserves chiral symmetry. The critical points of such topological phase transitions are marked by the occurrence of extended boundary states even in the presence of strong disorder. We provide a systematic characterization of these disorder-driven topological phase transitions from both bulk and boundary descriptions.
arxiv:2008.00513
We classify the finite time blow-up profiles for the following reaction-diffusion equation with unbounded weight: $$\partial_t u = \Delta u^m + |x|^{\sigma} u^p,$$ posed in any space dimension $x \in \mathbb{R}^N$, for $t \geq 0$, and with exponents $m > 1$, $p \in (0,1)$ and $\sigma > 2(1-p)/(m-1)$. We prove that blow-up profiles in backward self-similar form exist for the indicated range of parameters, showing thus that the unbounded weight has a strong influence on the dynamics of the equation, merging with the nonlinear reaction in order to produce finite time blow-up. We also prove that all the blow-up profiles are \emph{compactly supported} and might present two different types of interface behavior and three different possible \emph{good behaviors} near the origin, with direct influence on the blow-up behavior of the solutions. We classify all these profiles with respect to these different local behaviors depending on the magnitude of $\sigma$. This paper generalizes to dimension $N > 1$ previous results by the authors in dimension $N = 1$, and it also includes a finer classification of the profiles for large $\sigma$ that is new even in dimension $N = 1$.
arxiv:2108.09088
Adapting the powerful integrability-based formalism invented previously for the calculation of gluon scattering amplitudes at strong coupling, we develop a method for computing the holographic three-point functions for the large spin limit of Gubser-Klebanov-Polyakov (GKP) strings. Although many of the ideas from the gluon scattering problem can be transplanted with minor modifications, the fact that the information about the external states is now encoded in the singularities at the vertex insertion points necessitates several new techniques. Notably, we develop a new generalized Riemann bilinear identity, which allows one to express the area integral in terms of appropriate contour integrals in the presence of such singularities. We also give some general discussions on how semiclassical vertex operators for heavy string states should be constructed systematically from the solutions of the Hamilton-Jacobi equation.
arxiv:1110.3949
The segmentation of lesions in moderate to severe traumatic brain injury (msTBI) presents a significant challenge in neuroimaging due to the diverse characteristics of these lesions, which vary in size, shape, and distribution across brain regions and tissue types. This heterogeneity complicates traditional image processing techniques, resulting in critical errors in tasks such as image registration and brain parcellation. To address these challenges, the AIMS-TBI Segmentation Challenge 2024 aims to advance innovative segmentation algorithms specifically designed for T1-weighted MRI data, the most widely utilized imaging modality in clinical practice. Our proposed solution leverages a large-scale multi-dataset supervised pretraining approach inspired by the MultiTalent method. We train a ResEnc-L network on a comprehensive collection of datasets covering various anatomical and pathological structures, which equips the model with a robust understanding of brain anatomy and pathology. Following this, the model is fine-tuned on msTBI-specific data to optimize its performance for the unique characteristics of T1-weighted MRI scans, outperforming the baseline without pretraining by up to 2 Dice points.
arxiv:2504.06741
we prove that a standard realization of the direct image complex via the so - called douady - barlet morphism associated with a smooth complex analytic surface admits a natural decomposition in the form of an injective quasi - isomorphism of complexes. this is a more precise form of a special case of the decomposition theorems of beilinson - bernstein - deligne - gabber and m. saito. the proof hinges on the special case of the bi - disk in the complex affine plane where we make explicit use of a construction of nakajima ' s and of the corresponding representation - theoretic interpretation foreseen by vafa - witten. some consequences of the decomposition theorem : göttsche formula holds for complex surfaces ; interpretation of the rational cohomologies of douady spaces as a kind of fock space ; new proofs of results of briançon and ellingsrud - stromme on punctual hilbert schemes ; computation of the mixed hodge structure of the douady spaces in the kähler case. we also derive a natural connection with equivariant k - theory for which, in the case of algebraic surfaces, bezrukavnikov - ginzburg have proposed a different approach.
arxiv:math/9811159
this work proposes a data - driven surrogate modeling framework for cost - effectively inferring the torque of a permanent magnet synchronous machine under geometric design variations. the framework is separated into a reduced - order modeling part and an inference part. given a dataset of torque signals, each corresponding to a different set of design parameters, torque dimension is first reduced by post - processing a discrete fourier transform and keeping a reduced number of frequency components. this takes advantage of torque periodicity and preserves the physical information contained in the frequency components. next, a response surface model is computed by means of machine learning regression, which maps the design parameters to the reduced frequency components. the response surface models of choice are polynomial chaos expansions, feedforward neural networks, and gaussian processes. torque inference is performed by evaluating the response surface model for new design parameters and then inverting the dimension reduction. numerical results show that the resulting surrogate models lead to sufficiently accurate torque predictions for previously unseen design configurations. the framework is found to be significantly advantageous compared to approximating the original ( not reduced ) torque signal directly, as well as slightly advantageous compared to using principal component analysis for dimension reduction. the combination of discrete fourier transform - based dimension reduction with gaussian process - based response surfaces yields the best - in - class surrogate model for this use case. the surrogate models replace the original, high - fidelity model in monte carlo - based uncertainty quantification studies, where they provide accurate torque statistics estimates at significantly reduced computational cost.
arxiv:2412.06485
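the dft - plus - regression pipeline described in the abstract above can be sketched as follows. this is a minimal illustration on a synthetic torque model, with an ordinary least - squares response surface standing in for the paper's polynomial chaos / neural network / gaussian process regressors; all variable names, rates, and dimensions are our own assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_angles, n_keep = 50, 64, 5
theta = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)

# Synthetic "designs" (2 parameters) and periodic torque signals depending on them.
designs = rng.uniform(-1.0, 1.0, size=(n_samples, 2))
torque = (1.0 + designs[:, :1]) * np.cos(theta) + 0.3 * designs[:, 1:] * np.cos(3 * theta)

# 1) Dimension reduction: keep only the first few rFFT components.
coeffs = np.fft.rfft(torque, axis=1)[:, :n_keep]      # (n_samples, n_keep), complex
features = np.hstack([coeffs.real, coeffs.imag])      # real-valued regression targets

# 2) Response surface: least squares from design parameters to frequency components
#    (a stand-in for the paper's machine learning regressors).
X = np.hstack([designs, np.ones((n_samples, 1))])
beta, *_ = np.linalg.lstsq(X, features, rcond=None)

# 3) Inference for a new design: predict components, then invert the reduction.
x_new = np.array([[0.2, -0.5, 1.0]])
pred = x_new @ beta
c = pred[:, :n_keep] + 1j * pred[:, n_keep:]
full = np.zeros((1, n_angles // 2 + 1), dtype=complex)
full[:, :n_keep] = c
torque_pred = np.fft.irfft(full, n=n_angles, axis=1)
```

because the synthetic torque depends affinely on the design parameters, the linear response surface recovers the signal essentially exactly; real machine data would of course require the richer regressors named in the abstract.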
gene regulatory circuits show significant stochastic fluctuations in their circuit signals due to the low copy number of transcription factors. when a gene circuit component is connected to an existing circuit, the dynamic properties of the existing circuit can be affected by the connected component. in this paper, we investigate modularity in the dynamics of the gene circuit based on stochastic fluctuations in the circuit signals. we show that the noise in the output signal of the existing circuit can be affected significantly when the output is connected to the input of another circuit component. more specifically, the output signal noise can show significantly longer correlations when the two components are connected. this equivalently means that the noise power spectral density becomes narrower. we define the relative change in the correlation time or the spectrum bandwidth as stochastic retroactivity, which is shown to be directly related to the retroactivity defined in the deterministic framework by del vecchio et al. this provides insight into how to measure retroactivity : by investigating stochastic fluctuations in gene expression levels, more specifically, by obtaining an autocorrelation function of the fluctuations. we also discuss an interesting aspect of the frequency response of the circuit. we show that, depending on the magnitude of the operating frequencies, different kinds of signals should preferably be chosen for a modular circuit description : at low enough frequencies, the expression level of transcription factors that are not bound to their specific promoter region should be chosen, and at high enough frequencies, that of the total transcription factor, both bound and unbound.
arxiv:0910.5522
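the proposed measurement, reading a correlation time off the autocorrelation function of expression - level fluctuations, can be sketched on an isolated birth - death gene model. the model, the rates, and the 1/e - crossing estimator are illustrative assumptions; the stochastic retroactivity of the abstract would be the relative change of this correlation time once a downstream component is connected.

```python
import math
import random

random.seed(1)
k_prod, k_deg = 50.0, 1.0        # production and per-molecule degradation rates
t_end, dt = 2000.0, 0.05
n, t = int(k_prod / k_deg), 0.0  # start at the steady-state mean copy number
samples, next_t = [], 0.0

# Gillespie simulation of the isolated birth-death process, sampled on a grid.
while t < t_end:
    a_prod, a_deg = k_prod, k_deg * n
    a_tot = a_prod + a_deg
    t += random.expovariate(a_tot)
    while next_t <= t and next_t < t_end:
        samples.append(n)        # state held constant between reaction events
        next_t += dt
    n += 1 if random.random() < a_prod / a_tot else -1

mean = sum(samples) / len(samples)
dev = [x - mean for x in samples]
var = sum(d * d for d in dev) / len(dev)

def autocorr(lag):
    """Normalized autocorrelation of the fluctuations at a given lag."""
    return sum(a * b for a, b in zip(dev, dev[lag:])) / (var * (len(dev) - lag))

# For a birth-death process the autocorrelation decays as exp(-k_deg * tau),
# so the lag where it crosses 1/e estimates the correlation time 1/k_deg.
tau = next(lag * dt for lag in range(1, len(dev)) if autocorr(lag) < math.exp(-1))
```

running the same estimator on the connected circuit and forming the relative change of `tau` would give a numerical stand - in for the stochastic retroactivity defined in the paper.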
originally developed to image the shadow region of the central black hole in sagittarius a * and in the nearby galaxy m87, the event horizon telescope ( eht ) provides deep, very high angular resolution data on other agn sources too. the challenges of working with eht data have spurred the development of new image reconstruction algorithms. this work briefly reviews the status of the eht and its utility for observing agn sources, with emphasis on novel imaging techniques that offer the promise of better reconstructions at 1. 3 mm and other wavelengths.
arxiv:1607.03034
we consider the problem of counting lattice points contained in domains in $ \ mathbb { r } ^ d $ defined by products of linear forms and we show that the normalized discrepancies in these counting problems satisfy non - degenerate central limit theorems, provided that $ d \ geq 9 $. we also study more refined versions pertaining to " spiraling of approximations ". our techniques are dynamical in nature and exploit effective exponential mixing of all orders for actions of higher - rank abelian groups on the space of unimodular lattices.
arxiv:2101.04931
component & connector ( c & c ) architecture description languages ( adls ) combine component - based software engineering and model - driven engineering to increase reuse and to abstract from implementation details. applied to robotics application development, current c & c adls often require domain experts to provide component behavior descriptions as programming language artifacts or as models of a - priori mixed behavior modeling languages. they are limited to specific target platforms or require extensive handcrafting to transform platform - independent software architecture models into platform - specific implementations. we have developed the montiarcautomaton framework that combines structural extension of c & c concepts with integration of application - specific component behavior modeling languages, seamless transformation from logical into platform - specific software architectures, and a - posteriori black - box composition of code generators for different robotics platforms. this paper describes the roles and activities for tailoring montiarcautomaton to application - specific demands.
arxiv:1511.05364
the dark matter particle explorer ( dampe ) is one of the four satellites within the strategic pioneer research program in space science of the chinese academy of sciences ( cas ). dampe can detect electrons and photons in a wide energy range ( 5 gev to 10 tev ) and ions up to iron ( 100 gev to 100 tev ). the silicon - tungsten tracker ( stk ) is one of the four subdetectors in dampe, providing photon - electron conversion, track reconstruction and charge identification for ions. an ion beam test was carried out at cern with 60 gev / u lead primary beams. the charge reconstruction and charge resolution of the stk detectors were investigated.
arxiv:1705.09791
we incorporate the color - screening effect due to light quark pair creation into the heavy quark - antiquark potential, and investigate the effects of screened potential on the spectrum of higher charmonium. we calculate the masses, electromagnetic decays, and e1 transitions of charmonium states in the screened potential model, and propose possible assignments for the newly discovered charmonium or charmonium - like $ " x, y, z " $ states. we find the masses of higher charmonia with screened potential are considerably lower than those with unscreened potential. the $ \ chi _ { c2 } ( 2p ) $ mass agrees well with that of the z ( 3930 ), and the mass of $ \ psi ( 4415 ) $ is compatible with $ \ psi ( 5s ) $ rather than $ \ psi ( 4s ) $. in particular, the discovered four $ y $ states in the isr process, i. e., $ y ( 4008 ), y ( 4260 ), y ( 4320 / 4360 ), y ( 4660 ) $ may be assigned as the $ \ psi ( 3s ), \ psi ( 4s ), \ psi ( 3d ), \ psi ( 6s ) $ states respectively. the x ( 3940 ) and x ( 4160 ) found in the double charmonium production in $ e ^ + e ^ - $ annihilation may be assigned as the $ \ eta _ c ( 3s ) $ and $ \ chi _ { c0 } ( 3p ) $ states. based on the calculated e1 transition widths for $ \ chi _ { c1 } ( 2p ) \ to \ gamma j / \ psi $ and $ \ chi _ { c1 } ( 2p ) \ to \ gamma \ psi ( 2s ) $ and other results, we argue that the x ( 3872 ) may be a $ \ chi _ { c1 } ( 2p ) $ dominated charmonium state with some admixture of the $ d ^ 0 \ bar { d } ^ { * 0 } $ component. possible problems encountered in these assignments and comparisons with other interpretations for these $ x, y, z $ states are discussed in detail. we emphasize that more theoretical and experimental investigations are urgently needed to clarify these assignments and other interpretations.
arxiv:0903.5506
the excitation spectrum in single - layered nd ( 0. 33 ) sr ( 1. 67 ) mno4 and pr ( 0. 33 ) ca ( 1. 67 ) mno4 resembles the hourglass - like excitation dispersion seen in various cuprate superconductors. however, the spin - wave dispersion in pr ( 0. 33 ) ca ( 1. 67 ) mno4, which exhibits a large correlation length of the magnetic order, shows outward - dispersing branches starting from the incommensurate zone - centres. the magnetic correlation length is identified as the decisive parameter that suppresses these branches and generates a correct hourglass shape.
arxiv:1112.1799
this paper deals with denial of service attacks. an overview of existing attacks and defence methods is given. a classification scheme for different denial of service attacks is presented. an agent - based intrusion detection system architecture is considered, along with the main components and working principles of systems of this kind.
arxiv:0904.4174
we discuss a model of dark matter consisting of high energy anti - electron - neutrinos with leptonic force, which is produced by the conserved leptonic charge $ g _ \ ell $ associated with lee - yang ' s $ u _ 1 $ gauge symmetry. based on the particle cosmology of the early universe, the high energy neutrino ( hen ) model of dark matter assumes that the neutron decay processes, $ n \ to p ^ + + e ^ - + \ ov { \ nu } _ e $, dominate the epoch after the creation, collision and confinement processes of quarks and antiquarks in the beginning. the hen model implies the following results : there are almost equal numbers of electrons, protons and anti - electron - neutrinos dominating the matter cosmos. there are unobservable and ubiquitous anti - electron - neutrinos $ \ ov { \ nu } _ e $ with leptonic charge $ g _ \ ell $ in the universe. although the total mass of anti - electron - neutrino dark matter is negligible in the universe, its enhanced gravitational and leptonic forces could lead to the observed flat rotation curves due to relativistic $ \ ov { \ nu } _ e $, whose static force involves a factor $ e _ \ nu / m _ \ nu \ approx 10 ^ 6 $. we estimate the leptonic charge to be $ g _ \ ell \ approx 7 \ times 10 ^ { - 21 } $. the model predicts that the anti - electron - neutrino dark matter can interact with cosmic - ray protons to produce positrons, i. e. $ \ ov { \ nu } _ e + p ^ + \ to e ^ + + n $, through the weak interaction of the unified electroweak theory. the anti - electron - neutrino dark matter sheds light on the alpha magnetic spectrometer ( ams ) experiment, which has detected an intriguing excess of cosmic - ray positrons over what is expected. the hen model of dark matter suggests an experimental test of the new lee - yang force between electrons by using a modern precision cavendish experiment.
arxiv:2109.09235
we get asymptotics for the volume of large balls in an arbitrary locally compact group g with polynomial growth. this is done via a study of the geometry of g and a generalization of p. pansu ' s thesis. in particular, we show that any such g is weakly commensurable to some simply connected solvable lie group s, the lie shadow of g. we also show that large balls in g have an asymptotic shape, i. e. after a suitable renormalization, they converge to a limiting compact set which can be interpreted geometrically. we then discuss the speed of convergence, treat some examples and give an application to ergodic theory. we also answer a question of burago about left invariant metrics and recover some results of stoll on the irrationality of growth series of nilpotent groups.
arxiv:0704.0095
recent diffusion models provide a promising zero - shot solution to noisy linear inverse problems without retraining for specific inverse problems. in this paper, we reveal that recent methods can be uniformly interpreted as employing a gaussian approximation with hand - crafted isotropic covariance for the intractable denoising posterior to approximate the conditional posterior mean. inspired by this finding, we propose to improve recent methods by using more principled covariance determined by maximum likelihood estimation. to achieve posterior covariance optimization without retraining, we provide general plug - and - play solutions based on two approaches specifically designed for leveraging pre - trained models with and without reverse covariance. we further propose a scalable method for learning posterior covariance prediction based on representation with orthonormal basis. experimental results demonstrate that the proposed methods significantly enhance reconstruction performance without requiring hyperparameter tuning.
arxiv:2402.02149
ℝ that are both open and closed. a degenerate interval is any set consisting of a single real number ( i. e., an interval of the form [ a, a ] ). some authors include the empty set in this definition. a real interval that is neither empty nor degenerate is said to be proper, and has infinitely many elements. an interval is said to be left - bounded or right - bounded, if there is some real number that is, respectively, smaller than or larger than all its elements. an interval is said to be bounded, if it is both left - and right - bounded ; and is said to be unbounded otherwise. intervals that are bounded at only one end are said to be half - bounded. the empty set is bounded, and the set of all reals is the only interval that is unbounded at both ends. bounded intervals are also commonly known as finite intervals. bounded intervals are bounded sets, in the sense that their diameter ( which is equal to the absolute difference between the endpoints ) is finite. the diameter may be called the length, width, measure, range, or size of the interval. the size of unbounded intervals is usually defined as + ∞, and the size of the empty interval may be defined as 0 ( or left undefined ). the centre ( midpoint ) of a bounded interval with endpoints a and b is ( a + b ) / 2, and its radius is the half - length | a − b | / 2. these concepts are undefined for empty or unbounded intervals. an interval is said to be left - open if and only if it contains no minimum ( an element that is smaller than all other elements ) ; right - open if it contains no maximum ; and open if it contains neither. the interval [ 0, 1 ) = { x | 0 ≤ x < 1 }, for example, is left - closed and right - open. the empty set and the set of all reals are both open and closed intervals, while the set of non - negative reals is a closed interval that is right - open but not left - open.
the open intervals are open sets of the real line in its standard topology, and form a base of the open sets. an interval is said to be left - closed if it has a minimum element or is left - unbounded, right - closed if it has a maximum or is right unbounded ; it is
https://en.wikipedia.org/wiki/Interval_(mathematics)
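the quantities defined in the extract above for a bounded interval with endpoints a and b can be illustrated numerically; whether the endpoints are open or closed does not affect any of these values.

```python
def diameter(a, b):
    # length / width / measure / size of the interval: |a - b|
    return abs(a - b)

def centre(a, b):
    # midpoint of the interval: (a + b) / 2
    return (a + b) / 2

def radius(a, b):
    # half-length: |a - b| / 2
    return abs(a - b) / 2

# For the interval [0, 1): diameter 1, centre 0.5, radius 0.5.
vals = (diameter(0, 1), centre(0, 1), radius(0, 1))
# A degenerate interval [a, a] has diameter 0.
deg = diameter(3, 3)
```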
the classification of textual data often yields important information. most classifiers work in a closed world setting where the classifier is trained on a known corpus, and then it is tested on unseen examples that belong to one of the classes seen during training. despite the usefulness of this design, often there is a need to classify unseen examples that do not belong to any of the classes on which the classifier was trained. this paper describes the open set scenario where unseen examples from previously unseen classes are handled while testing. this further examines a process of enhanced open set classification with a deep neural network that discovers new classes by clustering the examples identified as belonging to unknown classes, followed by a process of retraining the classifier with the newly recognized classes. through this process the model moves to an incremental learning model where it continuously finds and learns from novel classes of data that have been identified automatically. this paper also develops a new metric that measures multiple attributes of clustering open set data. multiple experiments across two author attribution data sets demonstrate the creation of an incremental model that produces excellent results.
arxiv:1910.12944
the inverse tangent function can be bounded by different inequalities, for example by shafer ' s inequality. in this publication, we propose a new sharp double inequality, consisting of a lower and an upper bound, for the inverse tangent function. in particular, we sharpen shafer ' s inequality and calculate the best corresponding constants. the maximum relative errors of the obtained bounds are approximately smaller than 0. 27 % and 0. 23 % for the lower and upper bound, respectively. furthermore, we determine an upper bound on the relative errors of the proposed bounds in order to describe their tightness analytically. moreover, some important properties of the obtained bounds are discussed in order to describe their behavior and achieved accuracy.
arxiv:1307.4983
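the kind of bound discussed in the abstract above can be checked numerically for the classic shafer lower bound arctan(x) > 3x / (1 + 2 sqrt(1 + x^2)) for x > 0, which the publication sharpens; the sharpened constants and the stated 0.27 % / 0.23 % error levels are not reproduced here, only the classic inequality and the order of magnitude of its error.

```python
import math

def shafer_lower(x):
    """Classic Shafer lower bound for arctan(x), valid for x > 0."""
    return 3 * x / (1 + 2 * math.sqrt(1 + x * x))

# Relative error of the bound across several decades of x.
xs = [10 ** k for k in range(-3, 4)]
errs = [(math.atan(x) - shafer_lower(x)) / math.atan(x) for x in xs]
```

the relative error stays positive (it really is a lower bound) and remains below a few percent over this range; the bound is tightest near x = 0 and loosest as x grows, where it approaches the limit value (pi/2 - 3/2) / (pi/2).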
in einstein - maxwell - chern - simons theory the extremal reissner - nordström solution is no longer the single extremal solution with vanishing angular momentum, when the chern - simons coupling constant reaches a critical value. instead a whole sequence of rotating extremal j = 0 solutions arises, labeled by the node number of the magnetic u ( 1 ) potential. associated with the same near horizon solution, the mass of these radially excited extremal solutions converges to the mass of the extremal reissner - nordström solution. on the other hand, not all near horizon solutions are also realized as global solutions.
arxiv:1308.0548
we study the theory of safety and liveness in a reversible calculus where reductions are totally ordered and rollbacks lead the systems to past states. similar to previous work on communicating transactions, liveness and safety respectively correspond to the should - testing and inverse may - testing preorders. we develop fully abstract models for these preorders in a reversible calculus, which are based only on forward transitions, thus providing a simple proof technique for refinement of such systems. we show that with respect to safety, total reversibility is a conservative extension to ccs. with respect to liveness, however, adding total reversibility to ccs distinguishes more systems. to our knowledge, this work provides the first characterisations of safety and liveness, and the first testing theory for a reversible calculus.
arxiv:1604.05555
we explore the information geometry and asymptotic behaviour of estimators for kronecker - structured covariances, in both growing - $ n $ and growing - $ p $ scenarios, with a focus on the quadratic form or partial trace estimator proposed by linton and tang. it is shown that the partial trace estimator is asymptotically inefficient. an explanation for this inefficiency is that the partial trace estimator does not scale sub - blocks of the sample covariance matrix optimally. to correct for this, an asymptotically efficient, rescaled partial trace estimator is proposed. motivated by this rescaling, we introduce an orthogonal parameterization for the set of kronecker covariances. high - dimensional consistency results using the partial trace estimator are obtained that demonstrate a blessing of dimensionality. in settings where an array has at least order three, it is shown that as the array dimensions jointly increase, it is possible to consistently estimate the kronecker covariance matrix, even when the sample size is one.
arxiv:2308.02260
an acoustic topological insulator ( ti ) is synthesized using topology optimization, a free material inverse design method. the ti appears spontaneously from the optimization process without imposing requirements on the existence of pseudo spin - 1 / 2 states at the ti interface edge, or the chern number of the topological phases. the resulting ti is passive ; consisting of acoustically hard members placed in an air background and has an operational bandwidth of $ \ approx $ 12. 5 \ % showing high transmission. further analysis demonstrates confinement of more than 99 \ % of the total field intensity in the ti within at most six lattice constants from the ti interface. the proposed design hereby outperforms a reference from recent literature regarding energy transmission, field confinement and operational bandwidth.
arxiv:1904.02771
we study the entanglement of closed strings degrees of freedom in order to investigate the microscopic structure and statistics of objects as d - branes. by considering the macroscopic pure state ( mps ) limit, whenever the entanglement entropy goes to zero ( in such a way that the macroscopic properties of the state are preserved ), we show that boundary states may be recovered in this limit and, furthermore, the description through closed string ( perturbative ) degrees of freedom collapses. we also show how the thermal properties of branes and closed strings could be described by this model, and it requires that dissipative effects be taken into account. extensions of the mps analysis to more general systems at finite temperature are finally emphasized.
arxiv:0906.3049
the elementary vortex pinning potential is studied in unconventional superconductors within the framework of the quasiclassical theory of superconductivity. numerical results are presented for d -, anisotropic s -, and isotropic s - wave superconductors to show explicitly that in unconventional superconductors the vortex pinning potential is determined mainly by the loss of the condensation energy in bulk due to the presence of the pinning center, i. e., by the breakdown of anderson ' s theorem. it is found that the vortex pinning energy in the d - wave pairing case is 4 - - 13 times larger than those in the s - wave pairing cases. this means that an enhancement of the pinning effect in unconventional superconductors occurs due to the breakdown of anderson ' s theorem. the case of a chiral p - wave superconductor is also investigated in terms of the vortex core states subject to andreev reflection, where what matters is whether the vorticity and chirality are parallel or antiparallel.
arxiv:cond-mat/0108154
a \ emph { periodic graph } $ { \ cal g } = ( g _ 0, g _ 1, g _ 2, \ dots ) $ with period $ p $ is an infinite periodic sequence of graphs $ g _ i = g _ { i + p } = ( v, e _ i ) $, where $ i \ geq 0 $. the graph $ g = ( v, \ cup _ i e _ i ) $ is called the footprint of $ { \ cal g } $. recently, the arena where the cops and robber game is played has been extended from a graph to a periodic graph ; in this case, the \ emph { cop number } is also the minimum number of cops sufficient for capturing the robber. we study the connections and distinctions between the cop number $ c ( { \ cal g } ) $ of a periodic graph $ { \ cal g } $ and the cop number $ c ( g ) $ of its footprint $ g $ and establish several facts. for instance, we show that the smallest periodic graph with $ c ( { \ cal g } ) = 3 $ has at most $ 8 $ nodes ; in contrast, the smallest graph $ g $ with $ c ( g ) = 3 $ has $ 10 $ nodes. we push this investigation by generating multiple examples showing how the cop numbers of a periodic graph $ { \ cal g } $, the subgraphs $ g _ i $ and its footprint $ g $ can be loosely tied. based on these results, we derive upper bounds on the cop number of a periodic graph from properties of its footprint such as its treewidth.
arxiv:2310.13616
consider a fully connected network where up to $ t $ processes may crash, and all processes start in an arbitrary memory state. the self - stabilizing firing squad problem consists of eventually guaranteeing simultaneous response to an external input. this is modeled by requiring that the non - crashed processes " fire " simultaneously if some correct process received an external " go " input, and that they only fire as a response to some process receiving such an input. this paper presents firealg, the first self - stabilizing firing squad algorithm. the firealg algorithm is optimal in two respects : ( a ) once the algorithm is in a safe state, it fires in response to a go input as fast as any other algorithm does, and ( b ) starting from an arbitrary state, it converges to a safe state as fast as any other algorithm does.
arxiv:0908.2295
in 1980 kowal and drake found that in december 1612 and january 1613 galileo observed the planet neptune. at that time, according to these authors, galileo was able to measure angular separations with an accuracy of about 10 seconds of arc. however, as noticed by kowal and drake, the position of neptune reported by galileo is wrong with respect to the position computed with the modern ephemeris of about 1 minute of arc. this led kowal and drake to speculate on the possible errors of modern ephemeris of neptune and sparked some debate about neptune ' s ephemeris and / or possible errors in galileo ' s measures. until today this anomaly has remained without a conclusive answer. here we show that, in addition to the random errors, there are other significant measurement errors present in galileo ' s observations. these errors may help clarify the origin of the alleged anomalies in the position of neptune.
arxiv:2207.06097
and they found square roots efficiently using division and averaging. problems of this type included finding the dimensions of a rectangle given its area and the amount by which the length exceeds the width. tables of values of $n^3 + n^2$ were used to solve certain cubic equations. for example, consider the equation: $ax^3 + bx^2 = c$. multiplying the equation by $a^2$ and dividing by $b^3$ gives: $\left(\frac{ax}{b}\right)^3 + \left(\frac{ax}{b}\right)^2 = \frac{ca^2}{b^3}$. substituting $y = ax/b$ gives: $y^3 + y^2 = \frac{ca^2}{b^3}$, which could now be solved by looking up the $n^3 + n^2$ table to find the value closest to the right - hand side. the babylonians accomplished this without algebraic notation, showing a remarkable depth of understanding. however, they did not have a method for solving the general cubic equation. = = = growth = = = babylonians modeled exponential growth, constrained growth ( via a form of sigmoid functions ), and doubling time, the latter in the context of interest on loans. clay tablets from c. 2000 bc include the exercise " given an interest rate of 1 / 60 per month ( no compounding ), compute the doubling time. " this yields an annual interest rate of 12 / 60 = 20 %, and hence a doubling time of 100 % growth / 20 % growth per year = 5 years. = = = plimpton 322 = = = the plimpton 322 tablet contains a list of " pythagorean triples ", i. e., integers $(a, b, c)$ such that $a^2 + b^2 = c^2$. the triples
https://en.wikipedia.org/wiki/Babylonian_mathematics
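the table - lookup method and the doubling - time exercise described in the extract above can be worked through directly. the specific cubic solved here is our own illustration, not an equation from a tablet.

```python
# Babylonian-style table of n^3 + n^2 for lookup.
table = {n: n**3 + n**2 for n in range(1, 60)}

def solve_cubic(a, b, c):
    """Solve a*x^3 + b*x^2 = c by reducing to y^3 + y^2 = c*a^2/b^3."""
    rhs = c * a * a / b**3
    y = min(table, key=lambda n: abs(table[n] - rhs))  # closest table entry
    return y * b / a                                   # undo y = a*x/b

# x^3 + 2x^2 = 1200  ->  y^3 + y^2 = 1200/8 = 150  ->  table gives y = 5, so x = 10.
x = solve_cubic(1, 2, 1200)

# Doubling time at simple interest of 1/60 per month: 12/60 = 20% per year,
# hence 100% growth / 20% per year = 5 years, as on the clay tablet.
doubling_years = 1.0 / (12 / 60)
```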
we show that the differential - geometric description of matter by differential structures of spacetime leads to a unifying model of the three types of energy in the cosmos : matter, dark matter and dark energy. using this model we are able to calculate the ratio of dark energy to the total energy of the cosmos.
arxiv:0710.1562
the subject of time - band - limiting, originating in signal processing, is dominated by the miracle that a naturally appearing integral operator admits a commuting differential one allowing for a numerically efficient way to compute its eigenfunctions. bispectrality is an effort to dig into the reasons behind this miracle and goes back to joint work with h. duistermaat. this search has revealed unexpected connections with several parts of mathematics, including integrable systems. here we consider a matrix valued version of bispectrality and give a general condition under which we can display a constructive and simple way to obtain the commuting differential operator. furthermore, we build an operator that commutes with both the time - limiting operator and the band - limiting operators.
arxiv:1801.10261
we report on x - ray measurements constraining the spectral energy distribution ( sed ) of the high - redshift $ z = 5. 18 $ blazar sdss j013127. 34 $ - $ 032100. 1 with new xmm - newton and nustar exposures. the blazar ' s x - ray spectrum is well fit by a power law with $ \ gamma = 1. 9 $ and $ n _ { \ rm h } = 1. 1 \ times10 ^ { 21 } \ rm \ cm ^ { - 2 } $, or a broken power law with $ \ gamma _ l = 0. 5 $, $ \ gamma _ h = 1. 8 $, and a break energy $ e _ b = 0. 7 $ kev for an expected absorbing column density of $ n _ { \ rm h } = 3. 6 \ times 10 ^ { 20 } \ rm \ cm ^ { - 2 } $, supported by spectral fitting of a nearby bright source. no additional spectral break is found at higher x - ray energies ( 1 - 30 kev ). we supplement the x - ray data with lower - energy radio - to - optical measurements and fermi - lat gamma - ray upper limits, construct broadband seds of the source, and model the seds using a synchro - compton scenario. this modeling constrains the bulk doppler factor of the jets to $ \ ge $ 7 and $ \ ge $ 6 ( 90 % ) for the low - and high - $ n _ { \ rm h } $ seds, respectively. the corresponding beaming implies $ \ ge $ 130 ( low $ n _ { \ rm h } $ ) or $ \ ge $ 100 ( high $ n _ { \ rm h } $ ) high - spin supermassive black holes similar to j0131 exist at similar redshifts.
arxiv:2009.11450
how cooperation emerges in human societies is both an evolutionary enigma, and a practical problem with tangible implications for societal health. population structure has long been recognized as a catalyst for cooperation because local interactions enable reciprocity. analysis of this phenomenon typically assumes bi - directional social interactions, even though real - world interactions are often uni - directional. uni - directional interactions - - where one individual has the opportunity to contribute altruistically to another, but not conversely - - arise in real - world populations as the result of organizational hierarchies, social stratification, popularity effects, and endogenous mechanisms of network growth. here we expand the theory of cooperation in structured populations to account for both uni - and bi - directional social interactions. even though directed interactions remove the opportunity for reciprocity, we find that cooperation can nonetheless be favored in directed social networks and that cooperation is provably maximized for networks with an intermediate proportion of directed interactions, as observed in many empirical settings. we also identify two simple structural motifs that allow efficient modification of interaction directionality to promote cooperation by orders of magnitude. we discuss how our results relate to the concepts of generalized and indirect reciprocity.
arxiv:2105.01167
nanoparticles with anti-stokes emissions enable many sensing applications, but their efficiencies are considerably low. the key to enabling anti-stokes emission is to create phonons that assist the excited photons in being pumped from a lower energy state onto a higher one. increasing the temperature will generate more phonons, but it unavoidably quenches the luminescence. here, by quantifying the number of phonons generated from the host crystal and at the surface of yb3+/nd3+ co-doped nanoparticles, we systematically investigated the mechanisms behind the large enhancements of the phonon-assisted anti-stokes emissions from 980 nm to 750 nm and 803 nm. moreover, we provided direct evidence that moisture release from the nanoparticle surface at high temperature was not a main reason. we further demonstrated that the brightness of 10 nm nanoparticles was enhanced by more than two orders of magnitude, standing in stark contrast to the thermal quenching effect.
arxiv:1905.03503
we show that the new qcd production mechanisms which were proposed by s. j. brodsky, p. hoyer, a. h. mueller and the author can explain at least some of the anomalous behavior of open and/or closed charm production at large $x_f$.
arxiv:hep-ph/9408372
we study the differential uniformity of the wan-lidl polynomials over finite fields. a general upper bound, independent of the order of the field, is established. additional bounds are established in settings where one of the parameters is restricted. in particular, we establish a class of permutation polynomials which have differential uniformity at most 5 over fields of order $3 \bmod 4$, irrespective of the field size. computational results are also given.
arxiv:2211.04527
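for small fields, the differential uniformity discussed above can be checked by brute force directly from its definition. the sketch below is illustrative only: it assumes a prime field gf(p) and uses the power permutation $x^5$ over gf(7) as a stand-in, not the wan-lidl family itself.

```python
# brute-force differential uniformity of a map over a prime field gf(p):
# the maximum over a != 0 and b of #{x : f(x + a) - f(x) = b (mod p)}.
# f(x) = x^5 over gf(7) is a permutation since gcd(5, 6) = 1; it is only
# an illustrative example, not a wan-lidl polynomial.

def differential_uniformity(f, p):
    best = 0
    for a in range(1, p):
        counts = {}
        for x in range(p):
            b = (f((x + a) % p) - f(x)) % p
            counts[b] = counts.get(b, 0) + 1
        best = max(best, max(counts.values()))
    return best

p = 7
f = lambda x: pow(x, 5, p)
assert len({f(x) for x in range(p)}) == p  # f is a permutation of gf(7)
print(differential_uniformity(f, p))
```

lower values mean better resistance to differential attacks; a permutation with differential uniformity at most 5, as in the abstract, would print 5 or less under this check.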
we consider a random process on recursive trees, with three types of events. vertices give birth at a constant rate (growth), each edge may be removed independently (fragmentation of the tree) and clusters (or trees) are frozen with a rate proportional to their sizes (isolation of connected component). a phase transition occurs when the isolation is able to stop the growth-fragmentation process and cause extinction. when the process survives, the number of clusters increases exponentially and we prove that the normalized empirical measure of clusters a.s. converges to a limit law on recursive trees. we exploit the branching structure associated with the size of clusters, which is inherited from the splitting property of random recursive trees. this work is motivated by the control of epidemics and contact tracing, where clusters correspond to trees of infected individuals that can be identified and isolated. we complement this work by providing results on the malthusian exponent to describe the effect of control policies on epidemics.
arxiv:2109.05760
this paper addresses the problem of group target tracking (gtt), wherein multiple closely spaced targets within a group exhibit coordinated motion. to improve the tracking performance, the labeled random finite sets (lrfss) theory is adopted, and this paper develops a new kind of lrfs, i.e., augmented lrfss, which introduces group information into the definition of lrfss. specifically, each element in an lrfs incorporates the kinetic states, track label, and the corresponding group information of its represented target. furthermore, by means of the labeled multi-bernoulli (lmb) filter with the proposed augmented lrfss, the group structure is iteratively propagated and updated during the tracking process, which achieves the simultaneous estimation of the kinetic states, track labels, and the corresponding group information of multiple group targets, and further improves gtt performance. finally, simulation experiments are provided, which demonstrate the effectiveness of the labeled multi-bernoulli filter with the proposed augmented lrfss for gtt.
arxiv:2403.13562
a josephson junction, formed between two phase-biased superconductors and a normal metal, hosts a discrete spectrum of andreev bound states (abs). in this paper, we develop a theory for long ballistic andreev interferometers in two-dimensional metals. we consider three frameworks in our theoretical analysis: (i) perturbation theory in the tunneling amplitudes; (ii) non-perturbative transport theory; and (iii) physically motivated approximations to visualize the conductance maps in the (flux, voltage) plane. we find a non-standard phase-sensitive andreev reflection process in ballistic interferometers that couples the supercurrent to the non-equilibrium populations of the abs in the normal region. furthermore, our model shows that conductance spectroscopy follows the spectrum of the abs in long junctions. we also discuss our results in terms of the semiclassical theory, the classical orbits being the one-dimensional andreev tubes. our theoretical analysis captures the results of recent experiments by the penn state and harvard groups.
arxiv:2403.13669
realistic single-photon sources do not generate single photons with certainty. instead, they produce statistical mixtures of photons in fock state $\ket{1}$ and vacuum (noise). we describe how to eliminate the noise in the output of the sources by means of another noisy source or a coherent state and cross-phase modulation (xpm). we present a scheme which announces the production of pure single photons and thus eliminates the vacuum contribution. this is done by verifying an xpm-related phase shift with a mach-zehnder interferometer.
arxiv:quant-ph/0602225
let phi: p^1 --> p^1 be a rational map defined over a field k. we construct the moduli space m_d(n) parameterizing conjugacy classes of degree-d maps with a point of formal period n and present an algebraic proof that m_2(n) is geometrically irreducible for n > 1. restricting ourselves to maps phi of arbitrary degree d >= 2 such that the composition h^{-1} phi h = phi for some nontrivial h in pgl_2, we show that the moduli space parameterizing these maps with a point of formal period n is geometrically reducible for infinitely many n.
arxiv:0902.1813
motivated by applications where impatience is pervasive and service times are uncertain, we study a scheduling model where jobs may depart at an unknown point in time and service times are stochastic. initially, we have access to a single server and $n$ jobs with known non-negative values: these jobs have unknown stochastic service and departure times with known distributional information, which we assume to be independent. when the server is free, we can run an available job which occupies the server for an unknown amount of time, and collect its value. the objective is to maximize the expected total value obtained from jobs run on the server. natural formulations of this problem suffer from the curse of dimensionality. in fact, this problem is np-hard even in the deterministic case. hence, we focus on efficiently computable approximation algorithms that can provide high expected reward compared to the optimal expected value. towards this end, we first provide a compact linear programming (lp) relaxation that gives an upper bound on the expected value obtained by the optimal policy. then we design a polynomial-time algorithm that is nearly a $(1/2)\cdot(1-1/e)$-approximation to the optimal lp value (so also to the optimal expected value). we next shift our focus to the case of independent and identically distributed (i.i.d.) service times. in this case, we show that the greedy policy that always runs the highest-valued job whenever the server is free obtains a $1/2$-approximation to the optimal expected value. our approaches extend effortlessly and we demonstrate their flexibility by providing approximations to natural extensions of our problem. finally, we evaluate our lp-based policies and the greedy policy empirically on synthetic and real datasets.
arxiv:2406.15691
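the greedy policy from the abstract is simple to prototype. the following toy simulation is a hedged sketch only: it assumes discrete time, geometric service times (the i.i.d. case), per-step departures, and illustrative parameter values, none of which come from the paper's continuous-time model.

```python
import random

# toy discrete-time sketch of the greedy policy: whenever the server is free,
# run the highest-valued job still present and collect its value. service
# times are geometric(q); each waiting job independently departs each step
# with probability r. all parameters are illustrative assumptions.

def greedy_reward(values, q=0.5, r=0.1, seed=0):
    rng = random.Random(seed)
    waiting = sorted(values, reverse=True)
    reward, busy = 0.0, 0          # busy = remaining steps of the running job
    for _ in range(10_000):
        if busy == 0 and waiting:
            reward += waiting.pop(0)      # start the highest-valued job
            while rng.random() >= q:      # sample a geometric service time
                busy += 1
        busy = max(busy - 1, 0)
        waiting = [v for v in waiting if rng.random() >= r]  # departures
        if busy == 0 and not waiting:
            break
    return reward

vals = [10, 8, 5, 3, 1]
est = sum(greedy_reward(vals, seed=s) for s in range(200)) / 200
print(round(est, 2))
```

since the highest-valued job is always started first, every run collects at least that value; the monte carlo average lies between it and the sum of all values.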
we study near-infrared (jhk) and x-ray light curves of cyg x-3 obtained with the 2.5-m telescope of the caucasian mountain observatory of msu sai and collected from rxte asm and maxi archives. the light curves in the x-ray and ir domains are strongly affected by irregular variations. however, the mean curves are remarkably stable and qualitatively similar in both domains. this means that the ir flux of the system originates not only from the free-free radiation of the wr wind but also from a compact ir source located near the relativistic companion. the shape of the mean x-ray and ir light curves suggests the existence of two additional structures in the wr wind: a bow shock near the relativistic companion and a so-called "clumpy trail" (vilhu et al. 2013). modeling of the mean x-ray and ir light curves allowed us to obtain important system parameters: the orbital phase of the superior conjunction of the relativistic companion $\phi_0 = -0.066 \pm 0.006$, the orbital inclination angle $i = 29.5^\circ \pm 1.2^\circ$, and the wr mass-loss rate $\dot{m} = (0.96 \pm 0.14)\times10^{-5}\,\rm m_\odot\,yr^{-1}$. by using relations between $\dot{m}$ and the rate of the period change and between $\dot{m}$ and the wr mass, we estimated the probable mass of the relativistic companion $m_{\rm c} \simeq 7.2\,\rm m_\odot$, which points towards the black hole hypothesis. however, this estimate is based on the assumption of a smooth wr wind. considering the uncertainty associated with clumping, the mass-loss rate can be lower, which leaves room for the neutron star hypothesis.
arxiv:2112.04805
the presence of quantum vortices determines the electromagnetic response of superconducting materials and devices. controlling the vortex motion and their pinning on intrinsic and artificial defects is therefore essential for superconducting electronics. here we take advantage of the attractive force between a magnetic cantilever of the magnetic force microscope and a single quantum vortex to spatially map the pinning force inside 50-240 nm thick magnetron-sputtered nb films, commonly used in advanced superconducting electronics. the revealed pinning nano-network is related to the thickness-dependent granular structure of the films as well as to the characteristic microscopic scales of superconductivity. our approach is general and can be directly applied to other type ii granular superconducting materials and nanodevices.
arxiv:2403.20125
a fruitful approach to the study of concentration of laplacian eigenfunctions on a compact manifold as the eigenvalue grows to infinity is to bound their restriction to submanifolds. in this paper we take this approach in the setting of a compact lie group, and provide sharp restriction bounds of general laplacian eigenfunctions as well as important special ones such as sums of matrix coefficients and in particular characters of irreducible representations of the group. we deal with two classes of submanifolds, namely, maximal flats and all of their submanifolds, and the conjugation-invariant submanifolds. we prove conjecturally sharp asymptotic $l^p$ bounds of restriction of general laplacian eigenfunctions to maximal flats and all of their submanifolds for all $p \geq 2$. we also prove sharp asymptotic $l^p$ bounds of restriction of characters to maximal tori and all of their submanifolds for all $p > 0$ and to the conjugation-invariant submanifolds for all $p \geq 2$, and of general sums of matrix coefficients to maximal flats and all of their submanifolds for all $p \geq 2$. in the appendix we present similar results for products of compact rank-one symmetric spaces.
arxiv:2402.03178
an approach to lesion recognition is described that uses an ensemble of segmentation techniques for lesion localization and an exhaustive structural analysis for lesion classification. for localization, candidate regions are obtained from global thresholding of the chromatic maps and from applying the k-means algorithm to the rgb image; the candidate regions are then integrated. for classification, a relatively exhaustive structural analysis of contours and regions is carried out.
arxiv:1807.06905
oxypnictide superconductor ndfeaso0.85 sample was irradiated with 2 gev ta ions at a fluence of 5x10^10 ions/cm2. high resolution transmission electron microscopy study revealed that the irradiation produced columnar-like defects. the effect of these defects on the irreversible magnetisation in polycrystalline randomly oriented fragments was studied as a function of field angle and field sweep rate. we find that the critical current density is enhanced at fields below the matching field (~1 tesla) but only marginally. the pinning enhancement is anisotropic and maximum along the defect direction at high temperatures but the pinning then becomes more isotropic at low temperatures. the creep rate is suppressed at high temperatures and at fields below the matching field, indicating the columnar defects are efficient pinning sites at these h and t conditions.
arxiv:0907.0217
we introduce a stochastic lattice model to investigate the effects of pore formation in a passive layer grown with products of metal corrosion. it considers that an anionic species diffuses across that layer and reacts at the corrosion front (metal-oxide interface), producing a random distribution of compact regions and large pores, respectively represented by o (oxide) and p (pore) sites. o sites are assumed to have very small pores, so that the fraction $\phi$ of p sites is an estimate of the porosity, and the ratio between anion diffusion coefficients in those regions is $d_{\text{r}} < 1$. simulation results without the large pores ($\phi = 0$) are similar to those of a formerly studied model of corrosion and passivation and are explained by a scaling approach. if $\phi > 0$ and $d_{\text{r}} \ll 1$, significant changes are observed in passive layer growth and corrosion front roughness. for small $\phi$, a slowdown of the growth rate is observed, which is interpreted as a consequence of the confinement of anions in isolated pores for long times. however, the presence of large pores near the corrosion front increases the frequency of reactions at those regions, which leads to an increase in the roughness of that front. this model may be a first step to represent defects in a passive layer which favor pitting corrosion.
arxiv:1710.01548
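the role of the diffusion ratio $d_{\text{r}}$ can be illustrated with a minimal one-dimensional caricature of the model: an anion random-walks from the layer surface to the corrosion front, and moves out of a compact o site succeed only with probability $d_{\text{r}}$. layer size, composition and parameter values below are illustrative assumptions, not the paper's two-dimensional lattice.

```python
import random

# 1d caricature: an anion walks from the surface (site 0) to the corrosion
# front (last site). a move attempt from a large-pore 'p' site always
# succeeds; from a compact 'o' site it succeeds with probability d_r < 1.

def first_passage_time(layer, d_r, seed=0):
    rng = random.Random(seed)
    pos, t = 0, 0
    while pos < len(layer) - 1:
        t += 1
        p_move = 1.0 if layer[pos] == "p" else d_r
        if rng.random() < p_move:
            pos = max(pos + rng.choice((-1, 1)), 0)  # reflecting surface
    return t

def make_layer(n, phi, seed=1):
    rng = random.Random(seed)
    return ["p" if rng.random() < phi else "o" for _ in range(n)]

layer = make_layer(20, 0.0)  # purely compact layer ($\phi = 0$)
slow = sum(first_passage_time(layer, 0.1, s) for s in range(50)) / 50
fast = sum(first_passage_time(layer, 1.0, s) for s in range(50)) / 50
print(slow > fast)  # smaller d_r means slower transport across the layer
```

setting `phi > 0` in `make_layer` mixes in fast p sites, which is the regime where the abstract reports changes in growth rate and front roughness.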
in this note we investigate the simplicial volume of fiber bundles with connected structure group. we show that if the structure group is either compact or a lie group, or if the fiber is aspherical, then the simplicial volume of the total space agrees with the simplicial volume of the trivial bundle.
arxiv:2404.14818
we present three-dimensional astrochemical simulations and synthetic observations of magnetised, turbulent, self-gravitating molecular clouds. we explore various galactic interstellar medium environments, including cosmic-ray ionization rates in the range of $\zeta_{\rm cr} = 10^{-17}$-$10^{-14}\,{\rm s}^{-1}$, far-uv intensities in the range of $g_0 = 1$-$10^3$ and metallicities in the range of $z = 0.1$-$2\,{\rm z}_{\odot}$. the simulations also probe a range of densities and levels of turbulence, including cases where the gas has undergone recent compression due to cloud-cloud collisions. we examine: i) the column densities of carbon species across the cycle of cii, ci and co, along with oi, in relation to the hi-to-h$_2$ transition; ii) the velocity-integrated emission of [cii]~$158\,\mu$m, [$^{13}$cii]~$158\,\mu$m, [ci]~$609\,\mu$m and $370\,\mu$m, [oi]~$63\,\mu$m and $146\,\mu$m, and of the first ten $^{12}$co rotational transitions; iii) the corresponding spectral line energy distributions; iv) the usage of [cii] and [oi]~$63\,\mu$m to describe the dynamical state of the clouds; v) the behavior of the most commonly used ratios between transitions of co and [ci]; and vi) the conversion factors for using co and ci as h$_2$-gas tracers. we find that enhanced cosmic-ray energy densities enhance all aforementioned line intensities. at low metallicities, the emission of [cii] is well connected with the h$_2$ column, making it a promising new h$_2$ tracer in metal-poor environments. the conversion factors of $x_{\rm co}$ and $x_{\rm ci}$ depend on metallicity and the cosmic-ray ionization rate, but not on fuv intensity. in the era of alma, sofia and the forthcoming ccat-prime telescope, our results can be used to understand better
arxiv:2012.06773
science degrees in child studies. the basc tends to focus more on the application of the engineering sciences. in australia and new zealand, this degree is awarded in various fields of study and is considered a highly specialized professional degree. in the united kingdom's educational system, applied science refers to a suite of "vocational" science qualifications that run alongside "traditional" general certificate of secondary education or a-level sciences. applied science courses generally contain more coursework (also known as portfolio or internally assessed work) compared to their traditional counterparts. these are an evolution of the gnvq qualifications offered up to 2005. these courses regularly come under scrutiny and are due for review following the wolf report 2011; however, their merits are argued elsewhere. in the united states, the college of william & mary offers an undergraduate minor as well as master of science and doctor of philosophy degrees in "applied science". courses and research cover varied fields, including neuroscience, optics, materials science and engineering, nondestructive testing, and nuclear magnetic resonance. university of nebraska–lincoln offers a bachelor of science in applied science, an online completion bachelor of science in applied science, and a master of applied science. coursework is centered on science, agriculture, and natural resources with a wide range of options, including ecology, food genetics, entrepreneurship, economics, policy, animal science, and plant science. in new york city, the bloomberg administration awarded the consortium of cornell-technion $100 million in city capital to construct the universities' proposed applied sciences campus on roosevelt island. == see also == applied mathematics basic research exact sciences hard and soft science invention secondary research == references == == external links == media related to applied sciences at wikimedia commons
https://en.wikipedia.org/wiki/Applied_science
we present a new approach to an equivariant version of topological complexity, called symmetric topological complexity. it seems that the presented approach is more adequate for the analysis of the impact of symmetry on the motion planning algorithm than the one introduced and studied by colman and grant. we show many bounds for the symmetric topological complexity, comparing it with already known invariants, and prove that in the case of a free action it is equal to farber's topological complexity of the orbit space. we define the whitehead version of it.
arxiv:1303.0171
optical computing uses photons as information carriers, opening up the possibility of ultrahigh-speed and ultrawide-band information processing. integrated all-optical logic devices are indispensable core components of optical computing systems. however, up to now, little experimental progress has been made in nanoscale all-optical logic discriminators, which have the function of discriminating and encoding incident light signals according to wavelength. here, we report a strategy to realize a nanoscale all-optical logic discriminator based on plasmonic bandgap engineering in a planar plasmonic microstructure. light signals falling within different operating wavelength ranges are differentiated and endowed with different logic state encodings. compared with values previously reported, the operating bandwidth is enlarged by one order of magnitude. also, the spp light source is integrated with the logic device while retaining its ultracompact size. this opens up a way to construct on-chip all-optical information processors and artificial intelligence systems.
arxiv:1309.4554
we explore the behavior of periodic arrays of magnetic nanowires by micromagnetic simulations using the nmag modeling package. a large number of modeling studies on such arrays of nanowires have been performed using finite size models. we show that these finite size micromagnetic descriptions can only be used in specific situations. we perform a systematic study of more or less dense 1d and 2d arrays of nanowires using either finite size or infinite size models, and we show that finite size models fail to capture some of the features of real infinite systems. we show that the mean field model scaled to the system porosity is valid. this work can be used as a basis for the extension of micromagnetic calculations of the magnetization dynamics in arrays of nanowires.
arxiv:1008.0172
the fast aerosol spectrometer (fasp) is a device for spectral aerosol measurements. its purpose is to safely monitor the atmosphere inside a reactor containment. first we describe the fasp and explain its basic physical laws. then we introduce our reconstruction methods for aerosol particle size distributions designed for the fasp. we extend known existence results for constrained tikhonov regularization by uniqueness criteria and use those to generate reasonable models for the size distributions. we apply a bayesian model-selection framework to these pre-generated models. we compare our algorithm with classical inversion methods using simulated measurements. we then extend our reconstruction algorithm to two-component aerosols, so that we can simultaneously retrieve their particle-size distributions and the unknown volume fractions of their two components. finally we present the results of a numerical study for the extended algorithm.
arxiv:1606.01293
diamond light source is the uk's national synchrotron facility and as such provides access to world-class experimental services for uk and international researchers. as a user facility, that is, one that focuses on providing a good user experience to our varied visitors, diamond invests heavily in software infrastructure and staff. over 100 members of the 600-strong workforce consider software development a significant tool to help them achieve their primary role. these staff work on a diverse number of different software packages, providing support for installation and configuration, maintenance and bug fixing, as well as additional research and development of software when required. this talk focuses on one of the software projects undertaken to unify and improve the user experience of several experiments. the "mapping project" is a large two-year, multi-group project targeting collection and processing for experiments which involve scanning an x-ray beam over a sample and building up an image of that sample, similar to the way that google maps brings together small pieces of information to produce a full map of the world. the project itself is divided into several work packages, ranging from teams of one to five or six in size, with varying levels of time commitment to the project. this paper aims to explore one of these work packages as a case study, highlighting the experiences of the project team, the methodologies employed, their outcomes, and the lessons learnt from the experience.
arxiv:1703.00958
we introduce a novel approach to unsupervised and semi-supervised domain adaptation for semantic segmentation. unlike many earlier methods that rely on adversarial learning for feature alignment, we leverage contrastive learning to bridge the domain gap by aligning the features of structurally similar label patches across domains. as a result, the networks are easier to train and deliver better performance. our approach consistently outperforms state-of-the-art unsupervised and semi-supervised methods on two challenging domain adaptive segmentation tasks, particularly with a small number of target domain annotations. it can also be naturally extended to weakly supervised domain adaptation, where accepting only a minor drop in accuracy can save up to 75% of the annotation cost.
arxiv:2104.11056
recent approaches in generative adversarial networks (gans) can automatically synthesize realistic images from descriptive text. despite the overall fair quality, the generated images often expose visible flaws that lack structural definition for an object of interest. in this paper, we aim to extend the state of the art for gan-based text-to-image synthesis by improving the perceptual quality of generated images. differentiated from previous work, our synthetic image generator optimizes on perceptual loss functions that measure pixel, feature activation, and texture differences against a natural image. we present visually more compelling synthetic images of birds and flowers generated from text descriptions in comparison to some of the most prominent existing work.
arxiv:1708.09321
we provide experimental evidence for confinement of water molecules in the pores of the hexagonal structure of ypo4 at elevated temperatures up to 600 k using powder neutron diffraction. in order to avoid the large incoherent scattering from hydrogen, deuterated samples of doped ypo4:ce-eu were used for the diffraction measurements. the presence of water molecules in the triangular and hexagonal pores of the hexagonal structure was established by detailed simulation of the diffraction pattern and rietveld refinement of the experimental data. it was observed that the presence of water leads specifically to suppression of the intensity of a peak around q = 1.04 Å$^{-1}$, while the intensity of peaks around q = 1.83 Å$^{-1}$ is enhanced in the neutron diffraction pattern. we estimate the number of water molecules as 2.36(6) per formula unit at 300 k and the sizes of the hexagonal and triangular pores as 7.2(1) Å and 4.5(1) Å, respectively. with increase in temperature, the water content in both pores decreases above 450 k and vanishes around 600 k. analysis of the powder diffraction data reveals that the hexagonal structure with the pores persists up to 1273 k, and transforms to another structure at 1323 k. the high temperature phase is found to have neither the zircon nor the monazite type structure, but a monoclinic structure (space group p2/m) with lattice parameters am = 6.826(4) Å, bm = 6.645(4) Å, cm = 10.435(9) Å, and $\beta$ = 107.21(6)°. the monoclinic structure has about 14% smaller volume than the hexagonal structure, which essentially reflects the collapse of the pores. the phase transition and the change in the volume are also confirmed by x-ray diffraction measurements. the hexagonal to monoclinic phase transition is found to be irreversible on cooling to room temperature.
arxiv:1705.06540
recent years have seen a rapid increase in research activity in the field of dram-based processing-in-memory (pim) accelerators, where the analog computing capability of dram is employed, with minimal changes to the inherent structure of dram peripherals, to accelerate various data-centric applications. several dram-based pim accelerators for convolutional neural networks (cnns) have also been reported. among these, the accelerators leveraging in-dram stochastic arithmetic have shown manifold improvements in processing latency and throughput, due to the ability of stochastic arithmetic to convert multiplications into simple bit-wise logical and operations. however, the use of in-dram stochastic arithmetic for cnn acceleration requires frequent stochastic-to-binary number conversions. for that, prior works employ full-adder-based or serial-counter-based in-dram circuits. these circuits consume large area and incur long latency. their in-dram implementations also require heavy modifications in dram peripherals, which significantly diminishes the benefits of using stochastic arithmetic in these accelerators. to address these shortcomings, this paper presents a new substrate for in-dram stochastic-to-binary number conversion called agni. agni makes minor modifications in dram peripherals using pass transistors, capacitors, encoders, and charge pumps, and re-purposes the sense amplifiers as voltage comparators, to enable in-situ binary conversion of input stochastic operands of different sizes with iso latency.
arxiv:2302.07746
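the stochastic arithmetic that such accelerators exploit can be shown in a few lines: a value $p \in [0, 1]$ is encoded as a bit-stream whose fraction of 1s is $p$, and multiplying two independent streams reduces to a bit-wise and. stream length and operand values below are illustrative; agni's circuit-level conversion (sense amplifiers as comparators, charge pumps, etc.) is not modeled.

```python
import random

# stochastic-computing multiplication: encode values as random bit-streams,
# and the product of two independent streams is their bit-wise and, since
# pr(x_i = 1 and y_i = 1) = a * b. decoding is just counting the 1s, which
# is the stochastic-to-binary conversion step the paper accelerates.

def encode(p, n, rng):
    return [1 if rng.random() < p else 0 for _ in range(n)]

def decode(bits):
    return sum(bits) / len(bits)

rng = random.Random(42)
n = 100_000
a, b = 0.6, 0.5
prod = decode([x & y for x, y in zip(encode(a, n, rng), encode(b, n, rng))])
print(round(prod, 2))
```

the estimate converges to $a \cdot b$ at the usual monte carlo rate, which is why long streams, and hence fast conversion circuits, matter.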
we present the deep convolutional gaussian mixture model (dcgmm), a new probabilistic approach for image modeling capable of density estimation, sampling and tractable inference. dcgmm instances exhibit a cnn-like layered structure, in which the principal building blocks are convolutional gaussian mixture (cgmm) layers. a key innovation w.r.t. related models like sum-product networks (spns) and probabilistic circuits (pcs) is that each cgmm layer optimizes an independent loss function and therefore has an independent probabilistic interpretation. this modular approach permits intervening transformation layers to harness the full spectrum of (potentially non-invertible) mappings available to cnns, e.g., max-pooling or half-convolutions. dcgmm sampling and inference are realized by a deep chain of hierarchical priors, where a sample generated by a given cgmm layer defines the parameters of sampling in the next-lower cgmm layer. for sampling through non-invertible transformation layers, we introduce a new gradient-based sharpening technique that exploits redundancy (overlap) in, e.g., half-convolutions. dcgmms can be trained end-to-end by sgd from random initial conditions, much like cnns. we show that dcgmms compare favorably to several recent pc and spn models in terms of inference, classification and sampling, the latter particularly for challenging datasets such as svhn. we provide a public tf2 implementation.
arxiv:2203.11034
measurements performed by the cms experiment of the cross section for inclusive b-quark production in proton-proton collisions at sqrt(s) = 7 tev are presented. the measurements are based on different methods, such as inclusive jet measurements with secondary vertex tagging, or selecting a sample of events containing jets and at least one muon, where the transverse momentum of the muon with respect to the closest jet axis discriminates b events from the background. the results are compared with predictions based on perturbative qcd calculations at leading and next-to-leading order.
arxiv:1109.2003
instantaneous heat transfer between different phases is a common assumption for modeling heat transfer in porous media, known as local thermal equilibrium (lte). this assumption may not hold in certain technical and environmental applications, especially in systems with large temperature gradients, large differences in thermal properties, or high velocities. local thermal non-equilibrium (ltne) models aim to describe heat transfer processes when the lte assumption may fail. in this work, we compare three continuum-scale models from the pore to the representative elementary volume (rev) scale. specifically, dual-network and rev-scale models are evaluated against a pore-resolved model, which we perceive as a reference in the absence of experimental results. different effective models are used to obtain upscaled properties on the rev scale and to compare the resulting temperature profiles. the systems investigated are fully saturated, consisting of one fluid and one solid phase. this study focuses on purely conductive systems without significant differences in thermal properties. results show that lte then holds for low interfacial resistances. however, for large interfacial resistances, solid and fluid temperatures differ. the rev-scale model with effective parameters obtained by homogenization leads to similar results as the pore-resolved model, whereas the dual-network model shows greater deviation due to its fixed spatial resolution. among the evaluated effective parameter formulations for the rev-scale model, only the homogenization-based approach captures the ltne behavior, as it incorporates the interfacial heat transfer coefficient. convection is relevant for most practical applications, and its impact will be addressed in a follow-up article.
arxiv:2504.05920
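the ltne behavior described above can be sketched with a toy two - temperature conduction model, where solid and fluid fields are coupled by an interfacial exchange term; everything below ( grid, diffusivities, exchange coefficient, boundary conditions ) is an illustrative stand-in, not one of the models compared in the paper.

```python
import numpy as np

def ltne_conduction(h_sf, n=50, steps=2000, dt=1e-5):
    """explicit two-temperature (LTNE) conduction on the unit interval.

    h_sf is an illustrative interfacial exchange coefficient coupling
    the solid (Ts) and fluid (Tf) temperature fields.
    """
    dx = 1.0 / (n - 1)
    Ts = np.zeros(n)
    Tf = np.zeros(n)
    alpha_s, alpha_f = 1.0, 0.5          # illustrative thermal diffusivities
    Ts[0] = Tf[0] = 1.0                  # hot left wall, cold right wall
    for _ in range(steps):
        lap_s = np.zeros(n)
        lap_f = np.zeros(n)
        lap_s[1:-1] = (Ts[2:] - 2 * Ts[1:-1] + Ts[:-2]) / dx**2
        lap_f[1:-1] = (Tf[2:] - 2 * Tf[1:-1] + Tf[:-2]) / dx**2
        # interfacial exchange h_sf * (Ts - Tf) couples the two fields
        dTs = alpha_s * lap_s - h_sf * (Ts - Tf)
        dTf = alpha_f * lap_f + h_sf * (Ts - Tf)
        Ts = Ts + dt * dTs
        Tf = Tf + dt * dTf
        Ts[0] = Tf[0] = 1.0
        Ts[-1] = Tf[-1] = 0.0
    return Ts, Tf
```

with a large exchange coefficient the two fields stay close ( the lte limit ), while decoupled fields with different diffusivities drift apart, mirroring the low - versus high - interfacial - resistance regimes discussed above.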
many physical systems share the property of scale invariance. most of them show ordinary power - law scaling, where quantities can be expressed as a leading power law times a scaling function which depends on scale - invariant ratios of the parameters. however, some systems do not obey power - law scaling; instead, there is numerical evidence for a logarithmic scaling form, in which the scaling function depends on ratios of the logarithms of the parameters. based on previous ideas by c. tang, we propose that this type of logarithmic scaling can be explained by a concept of local scale invariance with continuously varying exponents. the functional dependence of the exponents is constrained by a homomorphism, which can be expressed as a set of partial differential equations. solving these equations, we obtain logarithmic scaling as a special case. the other solutions lead to scaling forms in which logarithmic and power - law scaling are mixed.
arxiv:cond-mat/0208277
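the distinction between the two scaling forms can be written out explicitly; the following is a generic illustration with an arbitrary observable $f$ and parameters $a$, $b$, not a formula taken from the paper.

```latex
% ordinary power-law scaling: a leading power law times a scaling
% function of a scale-invariant ratio of the parameters
f(a,b) = a^{\alpha}\,\Phi\!\left(\frac{b}{a^{\beta}}\right)

% logarithmic scaling: the scaling function instead depends on a
% ratio of the logarithms of the parameters
f(a,b) = \Psi\!\left(\frac{\ln b}{\ln a}\right)
```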
the physicochemical characterization of trivalent ions is limited due to a lack of accurate force fields. by leveraging the latest machine learning force field to model aqueous $\text{alcl}_3$, we discover that upon dissolution of $\text{al}^{3+}$, water molecules beyond the second hydration shell are involved in the hydration process. a combination of scissoring of coordinating water is followed by synchronized secondary motion of water in the second solvation shell due to hydrogen bonding. consequently, water beyond the second solvation shell penetrates through the second solvation shell and coordinates to the $\text{al}^{3+}$. our study reveals a novel microscopic understanding of solvation dynamics for trivalent ions.
arxiv:2407.16178
the sum - product or belief propagation ( bp ) algorithm is a widely used message - passing technique for computing approximate marginals in graphical models. we introduce a new technique, called stochastic orthogonal series message - passing ( sosmp ), for computing the bp fixed point in models with continuous random variables. it is based on a deterministic approximation of the messages via orthogonal series expansion, and a stochastic approximation via monte carlo estimates of the integral updates of the basis coefficients. we prove that the sosmp iterates converge to a $\delta$ - neighborhood of the unique bp fixed point for any tree - structured graph, and for any graph with cycles in which the bp updates satisfy a contractivity condition. in addition, we demonstrate how to choose the number of basis coefficients as a function of the desired approximation accuracy $\delta$ and the smoothness of the compatibility functions. we illustrate our theory with both simulated examples and an application to optical flow estimation.
arxiv:1212.3850
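on a tree with discrete states, the bp fixed point that sosmp approximates can be computed exactly; the sketch below runs sum - product on a 3 - node chain and checks the middle - node marginal against brute - force enumeration. the potentials are random placeholders, and the continuous - variable, orthogonal - series machinery of the paper is not shown.

```python
import numpy as np

# sum-product on a chain x0 - x1 - x2 with k discrete states; in the
# continuous case treated by SOSMP, the sums below become integrals
# approximated by an orthogonal series plus Monte Carlo estimates.
rng = np.random.default_rng(0)
k = 4
psi = [rng.random(k) + 0.1 for _ in range(3)]        # node potentials
edge = [rng.random((k, k)) + 0.1 for _ in range(2)]  # pairwise potentials

def bp_marginal_mid():
    m01 = edge[0].T @ psi[0]   # message x0 -> x1: sum_x0 psi0 * psi01
    m21 = edge[1] @ psi[2]     # message x2 -> x1: sum_x2 psi12 * psi2
    b = psi[1] * m01 * m21     # belief at the middle node
    return b / b.sum()

def brute_marginal_mid():
    # full joint p(x0,x1,x2) up to normalization, then marginalize
    joint = np.einsum('i,j,l,ij,jl->ijl',
                      psi[0], psi[1], psi[2], edge[0], edge[1])
    b = joint.sum(axis=(0, 2))
    return b / b.sum()
```

on a tree the messages are exact after one inward sweep, which is what makes the bp fixed point a meaningful target for the stochastic approximation.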
we address an essential problem in computer vision, that of unsupervised object segmentation in video, where a main object of interest in a video sequence should be automatically separated from its background. an efficient solution to this task would enable large - scale video interpretation at a high semantic level in the absence of the costly manually labeled ground truth. we propose an efficient unsupervised method for generating foreground object soft - segmentation masks based on automatic selection and learning from highly probable positive features. we show that such features can be selected efficiently by taking into consideration the spatio - temporal, appearance and motion consistency of the object during the whole observed sequence. we also emphasize the role of the contrasting properties between the foreground object and its background. our model is created in two stages : we start from pixel level analysis, on top of which we add a regression model trained on a descriptor that considers information over groups of pixels and is both discriminative and invariant to many changes that the object undergoes throughout the video. we also present theoretical properties of our unsupervised learning method, that under some mild constraints is guaranteed to learn a correct discriminative classifier even in the unsupervised case. our method achieves competitive and even state of the art results on the challenging youtube - objects and segtrack datasets, while being at least one order of magnitude faster than the competition. we believe that the competitive performance of our method in practice, along with its theoretical properties, constitute an important step towards solving unsupervised discovery in video.
arxiv:1704.05674
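the "learning from highly probable positive features" step can be illustrated with a toy sketch: score samples with a noisy consistency cue, keep only the most and least confident ones as pseudo - labels, and fit a simple classifier on their descriptors. the cue, descriptors, and logistic model below are synthetic stand - ins for the spatio - temporal features and regression model of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
labels = rng.random(n) < 0.3                 # hidden ground truth (never used for training)
desc = labels[:, None] + 0.4 * rng.normal(size=(n, 2))   # per-sample descriptors
cue = labels + 0.3 * rng.normal(size=n)      # noisy consistency score

# select highly probable positives/negatives from the cue alone
pos = np.argsort(cue)[-100:]
neg = np.argsort(cue)[:100]
X = np.r_[desc[pos], desc[neg]]
y = np.r_[np.ones(100), np.zeros(100)]

# fit logistic regression on the pseudo-labeled subset by gradient descent
w = np.zeros(2)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

# evaluate the learned classifier on all samples against the hidden truth
pred = (1.0 / (1.0 + np.exp(-(desc @ w + b)))) > 0.5
accuracy = np.mean(pred == labels)
```

the point of the construction, as in the paper's theoretical analysis, is that a classifier trained only on confidently selected samples can still generalize to the full set.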
in the field of multi - object tracking ( mot ), traditional methods often rely on the kalman filter for motion prediction, leveraging its strengths in linear motion scenarios. however, the inherent limitations of these methods become evident when confronted with the complex, nonlinear motions and occlusions prevalent in dynamic environments like sports and dance. this paper explores the possibility of replacing the kalman filter with a learning - based motion model that effectively enhances tracking accuracy and adaptability beyond the constraints of kalman filter - based trackers. our proposed methods, mambamot and mambamot +, demonstrate advanced performance on challenging mot datasets such as dancetrack and sportsmot, showcasing their ability to handle intricate, non - linear motion patterns and frequent occlusions more effectively than traditional methods.
arxiv:2403.10826
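for context, the kalman - filter motion model that the paper proposes to replace can be written in a few lines; below is a generic constant - velocity filter over a scalar position with illustrative noise levels, not the exact configuration of any particular tracker.

```python
import numpy as np

class KalmanCV:
    """constant-velocity Kalman filter over a scalar position.

    this is the linear-motion prior that works well for steady targets
    but struggles with the nonlinear motion discussed above.
    """
    def __init__(self, q=1e-2, r=1e-1):
        self.F = np.array([[1.0, 1.0], [0.0, 1.0]])  # state: [position, velocity]
        self.H = np.array([[1.0, 0.0]])              # we observe position only
        self.Q = q * np.eye(2)                       # process noise (illustrative)
        self.R = np.array([[r]])                     # measurement noise (illustrative)
        self.x = np.zeros(2)
        self.P = np.eye(2)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[0]                             # predicted position

    def update(self, z):
        S = self.H @ self.P @ self.H.T + self.R      # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + (K @ (np.array([z]) - self.H @ self.x)).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
```

after a handful of predict/update cycles on linear motion the filter locks on; it is exactly this hand - designed recurrence that a learned motion model generalizes.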
we discuss a model comprising two coupled nonlinear oscillators ( kerr - like nonlinear coupler ) with one of them pumped by an external coherent excitation. applying the method of nonlinear quantum scissors we show that the quantum evolution of the coupler can be closed within a finite set of n - photon fock states. moreover, we show that the system is able to generate bell - like states and, as a consequence, the coupler discussed behaves as a two - qubit system. we also analyze the effects of dissipation on entanglement of formation parametrized by concurrence.
arxiv:quant-ph/0408024
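the concurrence used to parametrize entanglement of formation has a closed form for pure two - qubit states, $c = 2\,|c_{00}c_{11} - c_{01}c_{10}|$; the sketch below evaluates it for a bell - like state and a product state ( the amplitudes are illustrative, not the coupler's actual evolution ).

```python
import numpy as np

def concurrence(c):
    """concurrence of a pure two-qubit state |psi> = sum_ij c_ij |ij>.

    equivalent to |<psi| sigma_y (x) sigma_y |psi*>| for a normalized state.
    """
    c = np.asarray(c, dtype=complex).reshape(2, 2)
    c = c / np.linalg.norm(c)
    return 2.0 * abs(c[0, 0] * c[1, 1] - c[0, 1] * c[1, 0])

bell = [1, 0, 0, 1]      # (|00> + |11>)/sqrt(2) after normalization: maximally entangled
product = [1, 0, 0, 0]   # |00>: separable, zero concurrence
```

the bell - like state reaches the maximal value 1, while any product state gives 0, which is the scale on which the dissipation - degraded entanglement of the coupler is measured.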
in this paper, we analyze a scheme for the time - dependent variable density navier - stokes equations. the algorithm is implicit in time, and the space approximation is based on a low - order staggered non - conforming finite element, the so - called rannacher - turek element. the convection term in the momentum balance equation is discretized by a finite volume technique, in such a way that a solution obeys a discrete kinetic energy balance, and the mass balance is approximated by an upwind finite volume method. we first show that the scheme preserves the stability properties of the continuous problem ( $l^\infty$ - estimate for the density, $l^\infty(l^2)$ - and $l^2(h^1)$ - estimates for the velocity ), which yields, by a topological degree technique, the existence of a solution. then, invoking compactness arguments and passing to the limit in the scheme, we prove that any sequence of solutions ( obtained with a sequence of discretizations whose space and time steps tend to zero ) converges, up to the extraction of a subsequence, to a weak solution of the continuous problem.
arxiv:1603.07221
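the upwind finite volume idea used above for the mass balance can be sketched in one dimension: the face flux takes the density from the cell the flow comes from, which conserves mass discretely and keeps the density nonnegative under a cfl condition. the periodic grid and data below are illustrative, not the paper's staggered scheme.

```python
import numpy as np

def upwind_step(rho, u_face, dt, dx):
    """one explicit upwind finite-volume step for rho_t + (rho u)_x = 0.

    u_face[i] is the velocity at the face between cells i-1 and i on a
    periodic domain; the upwind density is taken from the cell the flow
    comes from.
    """
    rho_left = np.roll(rho, 1)                     # density in cell i-1
    flux = np.where(u_face >= 0.0, u_face * rho_left, u_face * rho)
    # cell i balance: flux in at face i, flux out at face i+1
    return rho - dt / dx * (np.roll(flux, -1) - flux)
```

because the update only moves mass between neighboring cells, the total discrete mass is conserved exactly, which is the property the scheme's compactness argument builds on.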
the nature of the fractional quantum hall state at quarter filling in a wide quantum well is still under debate. both one - component non - abelian and two - component abelian orders have been proposed to describe the system. interestingly, these candidates received support from different experiments under disparate conditions. in this article, we focus on non - abelian orders arising from cooper pairing between composite fermions and on the abelian halperin - ( 5, 5, 3 ) order. we discuss and systematically predict different experimental signatures to identify them in future experiments. in particular, we address the mach - zehnder interferometry experiment and show that it can identify the recently proposed 22111 parton order.
arxiv:1909.04265
although radar and communications signal classification are usually treated separately, they share similar characteristics, and methods applied in one domain can potentially be applied in the other. we propose a simple and unified scheme for the classification of radar and communications signals using long short - term memory ( lstm ) neural networks. this proposal improves on the state of the art for radar signals, where lstm models are only starting to be applied within schemes of higher complexity. to date, there is no standard public dataset for radar signals. therefore, we propose deepradar2022, a radar dataset used in our systematic evaluations that is publicly available and will facilitate a standard comparison between methods.
arxiv:2305.03192
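the recurrence at the heart of an lstm classifier is compact enough to write out; below is a single numpy lstm cell applied to a random sequence ( think i/q samples ), with random weights standing in for a trained model. in a full classifier the final hidden state would feed a softmax over signal classes.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """one LSTM cell step; gates stacked as [input, forget, candidate, output]."""
    z = W @ x + U @ h + b
    H = h.size
    i = 1.0 / (1.0 + np.exp(-z[:H]))          # input gate
    f = 1.0 / (1.0 + np.exp(-z[H:2 * H]))     # forget gate
    g = np.tanh(z[2 * H:3 * H])               # cell candidate
    o = 1.0 / (1.0 + np.exp(-z[3 * H:]))      # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# run a random length-16 sequence of 2-d samples (e.g. I/Q pairs) through
# the cell; weights are random placeholders, not a trained classifier
rng = np.random.default_rng(0)
D, H = 2, 8
W = rng.normal(size=(4 * H, D))
U = rng.normal(size=(4 * H, H))
b = np.zeros(4 * H)
h = np.zeros(H)
c = np.zeros(H)
for x in rng.normal(size=(16, D)):
    h, c = lstm_step(x, h, c, W, U, b)
```

the gating keeps the hidden state bounded, which is part of why the recurrence trains stably on long signal sequences.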
for an $r$-uniform hypergraph $h$ and a family of $r$-uniform hypergraphs $\mathcal{f}$, the relative tur\'{a}n number $\mathrm{ex}(h, \mathcal{f})$ is the maximum number of edges in an $\mathcal{f}$-free subgraph of $h$. in this paper we give lower bounds on $\mathrm{ex}(h, \mathcal{f})$ for certain families of hypergraph cycles $\mathcal{f}$ such as berge cycles and loose cycles. in particular, if $\mathcal{c}_\ell^3$ denotes the set of all $3$-uniform berge $\ell$-cycles and $h$ is a 3-uniform hypergraph with maximum degree $\delta$, we prove \[ \mathrm{ex}(h, \mathcal{c}_4^{3}) \ge \delta^{-3/4-o(1)} e(h), \] \[ \mathrm{ex}(h, \mathcal{c}_5^{3}) \ge \delta^{-3/4-o(1)} e(h), \] and these bounds are tight up to the $o(1)$ term.
arxiv:2012.11061
from a data perspective, the materials mechanics field is characterized by sparsity of available data, mainly due to the strong microstructure - sensitivity of properties like strength, fracture toughness, and fatigue limit. this requires testing specimens with different thermo - mechanical histories, even when the composition is similar. experimental data on mechanical behavior is rare, as mechanical testing is destructive and requires significant material and effort. furthermore, mechanical behavior is typically characterized in simplified tests under uniaxial loading conditions, whereas a complete characterization requires multiaxial testing. to address this data sparsity, simulation methods like micromechanical modeling can contribute to microstructure - sensitive data collections. this work introduces a novel data schema integrating both metadata and mechanical data, following the workflows of the material modeling processes by which the data has been generated. each workflow run produces unique data objects by incorporating user, system, and job - specific information correlated with mechanical properties. this approach can be applied to any type of workflow as long as it is well - defined. this integrated format provides a sustainable way of generating findable, accessible, interoperable, and reusable ( fair ) data objects. the metadata elements focus on key features required to characterize microstructure - specific data, simplifying the collection of purpose - specific datasets by search algorithms.
arxiv:2408.03965
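a workflow - centric data object of the kind described might look as follows; every field name here is hypothetical, chosen only to illustrate how metadata ( user, system, job ) and mechanical results could travel together in one fair - style record, and none of it is the paper's actual schema.

```python
import json

# illustrative record combining workflow metadata with the mechanical
# data it produced; all keys and values are hypothetical placeholders
record = {
    "workflow": {
        "name": "micromechanical_simulation",
        "run_id": "run-0001",
        "user": "jdoe",
        "system": "cluster-a",
    },
    "microstructure": {"phase": "ferrite", "grain_size_um": 12.5},
    "loading": {"type": "uniaxial", "strain_rate_per_s": 1e-3},
    "results": {"yield_strength_mpa": 310.0, "uts_mpa": 540.0},
}

# serializing to a standard format is what makes the object findable
# and interoperable for downstream search algorithms
serialized = json.dumps(record, indent=2)
```

keeping provenance ( the "workflow" block ) next to the results is what lets a search algorithm assemble purpose - specific datasets without re - reading simulation inputs.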