| text | source |
|---|---|
Using the photon-ion merged-beams technique at a synchrotron light source, we have measured relative cross sections for single and up to five-fold photoionization of Fe$^{2+}$ ions in the energy range 690--920 eV. This range contains thresholds and resonances associated with ionization and excitation of $2p$ and $2s$ electrons. Calculations were performed to simulate the total absorption spectra. The theoretical results show very good agreement with the experimental data if overall energy shifts of up to 2.5 eV are applied to the calculated resonance positions and assumptions are made about the initial experimental population of the various levels of the Fe$^{2+}$ ([Ar]$3d^6$) ground configuration. Furthermore, we performed extensive calculations of the Auger cascades that result when an electron is removed from the $2p$ subshell of Fe$^{2+}$. These computations lead to better agreement with the measured product-charge-state distributions as compared to earlier work. We conclude that the $L$-shell absorption features of low-charged iron ions are useful for identifying gas-phase iron in the interstellar medium and for discriminating against the various forms of condensed-phase iron bound to composite interstellar dust grains. | arxiv:2010.00473 |
Cyber-physical systems (CPS) combine software and physical components. Specification-driven trace-checking tools for CPS usually provide users with a specification language to express the requirements of interest, and an automatic procedure to check whether these requirements hold on the execution traces of a CPS. Although several specification languages for CPS exist, they are often not sufficiently expressive to allow the specification of complex CPS properties related to the software and physical components and their interactions. In this paper, we propose (i) the Hybrid Logic of Signals (HLS), a logic-based language that allows the specification of complex CPS requirements, and (ii) ThEodorE, an efficient SMT-based trace-checking procedure. This procedure reduces the problem of checking a CPS requirement over an execution trace to checking the satisfiability of an SMT formula. We evaluated our contributions using a representative industrial case study in the satellite domain. We assessed the expressiveness of HLS by considering 212 requirements of our case study; HLS could express all 212 requirements. We also assessed the applicability of ThEodorE by running the trace-checking procedure for 747 trace-requirement combinations; ThEodorE was able to produce a verdict in 74.5% of the cases. Finally, we compared HLS and ThEodorE with other specification languages and trace-checking tools from the literature. Our results show that, from a practical standpoint, our approach offers a better trade-off between expressiveness and performance. | arxiv:2009.12250 |
a small switched Ethernet LAN, where each client connects to a central network switch, and logically in a wireless LAN, where each wireless client associates with the central wireless access point. Ring network: each node is connected to its left and right neighbor node, such that all nodes are connected and each node can reach every other node by traversing nodes leftwards or rightwards. Token Ring networks and the Fiber Distributed Data Interface (FDDI) made use of such a topology. Mesh network: each node is connected to an arbitrary number of neighbors in such a way that there is at least one traversal from any node to any other. Fully connected network: each node is connected to every other node in the network. Tree network: nodes are arranged hierarchically. This is the natural topology for a larger Ethernet network with multiple switches and without redundant meshing. The physical layout of the nodes in a network may not necessarily reflect the network topology. As an example, with FDDI, the network topology is a ring, but the physical topology is often a star, because all neighboring connections can be routed via a central physical location. Physical layout is not completely irrelevant, however, as common ducting and equipment locations can represent single points of failure due to issues like fires, power failures and flooding. === Overlay network === An overlay network is a virtual network that is built on top of another network. Nodes in the overlay network are connected by virtual or logical links. Each link corresponds to a path, perhaps through many physical links, in the underlying network. The topology of the overlay network may (and often does) differ from that of the underlying one. For example, many peer-to-peer networks are overlay networks. They are organized as nodes of a virtual system of links that run on top of the Internet. Overlay networks have been used since the early days of networking, back when computers were connected via telephone lines using modems, even before data networks were developed. The most striking example of an overlay network is the Internet itself, which was initially built as an overlay on the telephone network. Even today, each Internet node can communicate with virtually any other through an underlying mesh of sub-networks of wildly different topologies and technologies. Address resolution and routing are the means that allow mapping of a fully connected IP overlay network to its underlying network. Another example of an overlay network is a distributed hash table, which maps keys to nodes in the network. In | https://en.wikipedia.org/wiki/Computer_network |
We consider the recently introduced model of \emph{low ply graph drawing}, in which the ply-disks of the vertices do not have many common overlaps, which results in a good distribution of the vertices in the plane. The \emph{ply-disk} of a vertex in a straight-line drawing is the disk centered at it whose radius is half the length of its longest incident edge. The largest number of ply-disks having a common overlap is called the \emph{ply-number} of the drawing. We focus on trees. We first consider drawings of trees with constant ply-number, proving that they may require exponential area, even for stars, and that they may not even exist for bounded-degree trees. Then, we turn our attention to drawings with logarithmic ply-number and show that trees with maximum degree $6$ always admit such drawings in polynomial area. | arxiv:1608.08538 |
We investigate the role of superconducting phase fluctuations in generic 2D superconductors. For nodal d-wave superconductors, it is found that the thermal (static) phase fluctuations in the normal state significantly broaden the d-wave nodes and lead to the pseudogap and accompanying Fermi arcs in a quite general manner. The formation of Fermi arcs can be depicted by the intertwinement between d-wave superconductivity and the scattering of Cooper pairs. To support our theoretical findings, we numerically report the observation of Fermi arcs in a concrete lattice model, proposed originally by X. Y. Xu and T. Grover in Phys. Rev. Lett. 126, 217002 (2021), using the determinant quantum Monte Carlo (DQMC) method without the infamous sign problem. To our knowledge, this is the first time that Fermi arcs have been identified in a highly correlated model with unbiased DQMC simulations. Despite the conciseness of our theory, our simulation results confirm the theoretical predictions qualitatively. | arxiv:2310.05376 |
A method to identify the parity of unconventional superconductors is proposed based on tunneling spectroscopy. As a model for calculation, we adopt a ferromagnet/superconductor (F/S) junction in which the tunneling current is spin polarized. The tunneling conductance spectra are shown to be quite sensitive to the direction of the magnetization axis in the ferromagnet only when the superconductor has odd parity. Therefore, it is possible to distinguish the parity of the superconductor by measuring the tunneling spectroscopy in F/S junctions. | arxiv:cond-mat/0105468 |
For an explicitly time-dependent mass density which satisfies a continuity equation, it is shown that Maxwell-like equations for the gravitational field follow naturally, without any need of general relativity theory approximation or related assumptions. As a consequence, it is shown that several features already known in electrodynamics (Poynting vector, energy density, stress tensor, radiation) are fully reproduced for the gravitational field. | arxiv:1705.11067 |
We examine the commonly explored beyond-Standard-Model physics scenario of secret neutrino forces, and point out a model prediction that appears to have been overlooked: the generation of unique flavor-changing effects in experiments featuring decay-at-rest (DAR) neutrino sources. These flavor changes occur because the decay that drives neutrino and antineutrino production, $\mu^{+} \rightarrow e^+ + \bar{\nu}_{\mu} + \nu_{e}$, is unique in producing two neutrinos in the final state. Any non-flavor-universal force between the emerging neutrinos would thus induce a new oscillation phase as they escape from each other's potential wells, an effect which is largely absent in experiments that primarily rely on meson decay-in-flight and nuclear decay. We calculate the magnitude of the associated observable and compare it to the anomalous neutrino flavor transformation seen by the LSND experiment, finding a wide but constrained allowed parameter space. We also evaluate existing limits from other experiments, and the testability of this new effect at the future DAR programs JSNS$^2$ and OscSNS. | arxiv:1911.06342 |
Several scenarios of interacting neural networks which are trained either in an identical or in a competitive way are solved analytically. In the case of identical training, each perceptron receives the output of its neighbour. The symmetry of the stationary state, as well as the sensitivity to the training algorithm used, are investigated. Two competitive perceptrons trained on mutually exclusive learning aims and a perceptron which is trained on the opposite of its own output are examined analytically. An ensemble of competitive perceptrons is used as a decision-making algorithm in a model of a closed market (El Farol bar problem or minority game); each network is trained on the history of minority decisions. This ensemble of perceptrons relaxes to a stationary state whose performance can be better than random. | arxiv:cond-mat/0003051 |
We review the problem of electron-electron interactions in graphene. Starting from the screening of long-range interactions in these systems, we discuss the existence of an emerging Dirac liquid of Lorentz-invariant quasi-particles in the weak-coupling regime, and strongly correlated electronic states in the strong-coupling regime. We also analyze the analogy and connections between the many-body problem and the Coulomb impurity problem. The problem of the magnetic instability and Kondo effect of impurities and/or adatoms in graphene is also discussed in analogy with classical models of many-body effects in ordinary metals. We show that Lorentz invariance plays a fundamental role and leads to effects that span the whole spectrum, from the ultraviolet to the infrared. The effect of an emerging Lorentz invariance is also discussed in the context of finite-size and edge effects as well as mesoscopic physics. We also briefly discuss the effects of strong magnetic fields in single layers and review some of the main aspects of the many-body problem in graphene bilayers. In addition to reviewing the fully understood aspects of the many-body problem in graphene, we show that a plethora of interesting issues remain open, both theoretically and experimentally, and that the field of graphene research is still exciting and vibrant. | arxiv:1012.3484 |
We study numerically the properties of local low-energy excitations in the two-dimensional Ising spin glass. Given the ground state, we determine the lowest-lying connected cluster of flipped spins containing one given spin, either with a fixed volume or with a volume constrained to lie in a certain range. Our aim is to understand corrections to the scaling predicted by the droplet picture of spin glasses and to resolve contradictory results reported in the literature for the stiffness exponent. We find no clear trace of corrections to scaling, and the obtained stiffness exponent is in relatively good agreement with standard domain-wall calculations. | arxiv:cond-mat/0304576 |
We use rotational gravity darkening in the disk of the \emph{Kepler} star KOI-2138 to show that the orbit of the $2.1\,R_\oplus$ transiting planet candidate KOI-2138.01 has a low projected spin-orbit alignment of $\lambda = 1^\circ \pm 13^\circ$. KOI-2138.01 is just the second super-Earth with a measured spin-orbit alignment after 55 Cancri e, and the first to be aligned. With a 23.55-day orbital period, KOI-2138.01 may represent the tip of a future iceberg of solar-system-like terrestrial planets having intermediate periods and low-inclination circular orbits. | arxiv:1512.03855 |
In this work we investigate whether a small fraction of quarks and gluons, which escaped hadronization and survived as a uniformly spread perfect fluid, can play the role of both dark matter and dark energy. This fluid, as developed in \citep{brilenkov}, is characterized by two main parameters: $\beta$, related to the amount of quarks and gluons which act as dark matter; and $\gamma$, acting as the cosmological constant. We explore the feasibility of this model at cosmological scales using data from Type Ia supernovae (SNeIa), long gamma-ray bursts (LGRB) and direct observational Hubble data. We find that: (i) in general, $\beta$ cannot be constrained by SNeIa data nor by LGRB or $H(z)$ data; (ii) $\gamma$ can be constrained quite well by all three data sets, contributing $\approx 78\%$ of the energy-matter content; (iii) when a strong prior on (only) baryonic matter is assumed, the two parameters of the model are constrained successfully. | arxiv:1404.0388 |
We consider a bilevel program involving a linear lower-level problem with left-hand-side perturbation. We then consider the Karush-Kuhn-Tucker reformulation of the problem and subsequently build a tractable optimization problem with linear constraints by means of a partial exact penalization. A semismooth system of equations is then generated from the latter problem, and a Newton-type method is developed to solve it. Finally, we illustrate the convergence and practical implementation of the algorithm on the optimal toll-setting problem in transportation networks. | arxiv:2010.11662 |
The compelling science case for the observation of B-mode polarization in the cosmic microwave background (CMB) is driving the CMB community to expand the observed sky fraction, either by extending survey sizes or by deploying receivers to potential new northern sites. For ground-based CMB instruments, poorly mixed atmospheric water vapor constitutes the primary source of short-term sky noise. This results in short-timescale brightness fluctuations, which must be rejected by some form of modulation. To maximize the sensitivity of ground-based CMB observations, it is useful to understand the effects of atmospheric water vapor over timescales and angular scales relevant for CMB polarization measurements. To this end, we have undertaken a campaign to perform a coordinated characterization of current and potential future observing sites using scanning 183 GHz water vapor radiometers (WVRs). So far, we have deployed two identical WVR units: one at the South Pole, Antarctica, and the other at Summit Station, Greenland. The former site has a long heritage of ground-based CMB observations and is the current location of the BICEP/Keck Array telescopes as well as the South Pole Telescope. The latter site, though less well characterized, is under consideration as a Northern-Hemisphere location for future CMB receivers. Data collection from this campaign began in January 2016 at the South Pole and July 2016 at Summit Station. Data analysis is ongoing to reduce the data to a single spatial and temporal statistic that can be used for one-to-one site comparison. | arxiv:1808.01349 |
We introduce a framework for designing Hamiltonian engineering pulse sequences that systematically accounts for the effects of higher-order contributions to the Floquet-Magnus expansion. Our techniques result in simple, intuitive decoupling rules, despite the higher-order contributions naively involving complicated, non-local-in-time commutators. We illustrate how these rules can be used to efficiently design improved Hamiltonian engineering pulse sequences for a wide variety of tasks, such as dynamical decoupling, quantum sensing, and quantum simulation. | arxiv:2303.07374 |
We propose a "matter-antimatter coexistence method" for finite-density lattice QCD, aiming at a possible solution of the sign problem. In this method, we consider matter and anti-matter systems on two parallel ${\bf R}^4$-sheets in five-dimensional Euclidean space-time. For the matter system $M$ with a chemical potential $\mu \in {\bf C}$ on a ${\bf R}^4$-sheet, we also prepare the anti-matter system $\bar{M}$ with $-\mu^*$ on the other ${\bf R}^4$-sheet shifted in the fifth direction. In the lattice QCD formalism, we introduce a correlation term between the gauge variables $U_\nu \equiv e^{iagA_\nu}$ in $M$ and $\tilde{U}_\nu \equiv e^{iag\tilde{A}_\nu}$ in $\bar{M}$, such as $S_\lambda \equiv \sum_{x,\nu} 2\lambda \{N_c - {\rm Re~Tr}[U_\nu(x) \tilde{U}_\nu^\dagger(x)]\} \simeq \sum_x \frac{1}{2}\lambda a^2 \{A_\nu^a(x) - \tilde{A}_\nu^a(x)\}^2$ with a real parameter $\lambda$. In the limit of $\lambda \rightarrow \infty$, a strong constraint $\tilde{U}_\nu(x) = U_\nu(x)$ is realized, and the total fermionic determinant is real and non-negative. In the limit of $\lambda \rightarrow 0$, this system goes to two separated ordinary QCD systems with the chemical potentials $\mu$ and $-\mu^*$. On a finite-volume lattice, if one takes a sufficiently large value of $\lambda$, $\tilde{U}_\nu(x) \simeq U_\nu(x)$ is realized and there occurs an approximate phase cancellation between the two fermionic determinants in $M$ and $\bar{M}$, which | arxiv:1707.05996 |
The Kaczmarz and Gauss-Seidel methods both solve a linear system $\bf{X}\bf{\beta} = \bf{y}$ by iteratively refining the solution estimate. Recent interest in these methods has been sparked by a proof of Strohmer and Vershynin which shows that the randomized Kaczmarz method converges linearly in expectation to the solution. Leventhal and Lewis then proved a similar result for the randomized Gauss-Seidel algorithm. However, the behavior of both methods depends heavily on whether the system is underdetermined or overdetermined, and whether it is consistent or not. Here we provide a unified theory of both methods, their variants for these different settings, and draw connections between both approaches. In doing so, we also provide a proof that an extended version of randomized Gauss-Seidel converges linearly to the least-norm solution in the underdetermined case (where the usual randomized Gauss-Seidel fails to converge). We detail analytically and empirically the convergence properties of both methods and their extended variants in all possible system settings. With this result, a complete and rigorous theory of both methods is furnished. | arxiv:1503.08235 |
We establish generalizations of the Nyman-Beurling and Báez-Duarte criteria concerning the lack of zeros of Dirichlet $L$-functions in the half-plane $\Re(s) > 1/p$ for $p \in (1, 2]$. We pose and solve a natural extremal problem for Dirichlet polynomials which take the value one at the zeros of the corresponding $L$-function on the vertical line $\Re(s) = 1/p$. | arxiv:1608.07887 |
We study model-based and model-free policy optimization in a class of nonzero-sum stochastic dynamic games called linear quadratic (LQ) deep structured games. In such games, players interact with each other through a set of weighted averages (linear regressions) of the states and actions. In this paper, we focus our attention on homogeneous weights; however, for the special case of infinite population, the obtained results extend to asymptotically vanishing weights, wherein the players learn the sequential weighted mean-field equilibrium. Despite the non-convexity of the optimization in policy space and the fact that policy optimization does not generally converge in the game setting, we prove that the proposed model-based and model-free policy gradient descent and natural policy gradient descent algorithms globally converge to the sub-game perfect Nash equilibrium. To the best of our knowledge, this is the first result that provides a global convergence proof of policy optimization in a nonzero-sum LQ game. One of the salient features of the proposed algorithms is that their parameter space is independent of the number of players, and when the dimension of the state space is significantly larger than that of the action space, they provide a more efficient way of computation compared to those algorithms that plan and learn in the action space. Finally, some simulations are provided to numerically verify the obtained theoretical results. | arxiv:2011.14391 |
We demonstrate superconducting single-photon detectors that integrate signals locally at each pixel. This capability is realized by the monolithic integration of superconducting-nanowire single-photon detectors with Josephson electronics. The motivation is to realize superconducting sensor elements with integrating capabilities similar to their CMOS-sensor counterparts. The pixels can operate in several modes. First, we demonstrate that photons can be counted individually, with each detection event adding an identical amount of supercurrent to an integrating element. Second, we demonstrate an active gain-control option, in which the signal added per detection event can be dynamically adjusted to account for variable light conditions. Additionally, the pixels can either retain signal indefinitely, recording all counts incurred over an integration period, or record a fading signal of detection events within a decay time constant. We describe additional semiconductor readout circuitry that will be used in future work to realize scalable, large-format sensor arrays of superconducting single-photon detectors compatible with CMOS array readout architectures. | arxiv:2310.13107 |
We study strings on orbifolds of $AdS_{5} \times S^5$ by SU(2) discrete groups in the Penrose limit. We derive the degenerate pp-wave metrics of $AdS_{5} \times S^5/\Gamma$ using ordinary $ADE$ and affine $\widetilde{ADE}$ singularities of complex surfaces and results on ${\cal N} = 4$ CFT$_4$'s. We also give explicit metric moduli dependencies for abelian and non-abelian orbifolds. | arxiv:hep-th/0210168 |
grains will result in some form of grain size distribution, which will have a significant impact on the ultimate physical properties of the material. In particular, abnormal grain growth, in which certain grains grow very large in a matrix of finer grains, will significantly alter the physical and mechanical properties of the obtained ceramic. In the sintered body, grain sizes are a product of the thermal processing parameters as well as the initial particle size, or possibly the sizes of aggregates or particle clusters which arise during the initial stages of processing. The ultimate microstructure (and thus the physical properties) of the final product will be limited by and subject to the form of the structural template or precursor which is created in the initial stages of chemical synthesis and physical forming. Hence the importance of chemical powder and polymer processing as it pertains to the synthesis of industrial ceramics, glasses and glass-ceramics. There are numerous possible refinements of the sintering process. Some of the most common involve pressing the green body to give the densification a head start and reduce the sintering time needed. Sometimes organic binders such as polyvinyl alcohol are added to hold the green body together; these burn out during the firing (at 200–350 °C). Sometimes organic lubricants are added during pressing to increase densification. It is common to combine these, adding binders and lubricants to a powder, then pressing. (The formulation of these organic chemical additives is an art in itself. This is particularly important in the manufacture of high-performance ceramics such as those used by the billions for electronics, in capacitors, inductors, sensors, etc.) A slurry can be used in place of a powder, and then cast into a desired shape, dried and then sintered. Indeed, traditional pottery is done with this type of method, using a plastic mixture worked with the hands. If a mixture of different materials is used together in a ceramic, the sintering temperature is sometimes above the melting point of one minor component, resulting in liquid-phase sintering. This results in shorter sintering times compared to solid-state sintering. Such liquid-phase sintering involves faster diffusion processes and may result in abnormal grain growth. == Strength of ceramics == A material's strength is dependent on its microstructure. The engineering processes to which a material is subjected can alter its microstructure. The variety of strengthening mechanisms that alter the strength of a material include the mechanism of grain-boundary strengthening. Thus, although yield | https://en.wikipedia.org/wiki/Ceramic_engineering |
We present and analyze a series of conservative diagonally implicit Runge-Kutta schemes for the nonlinear Schrödinger equation. With the application of the newly developed invariant energy quadratization approach, these schemes possess not only high accuracy, high-order convergence (up to fifth order) and efficiency due to diagonal implicitness, but also mass- and energy-conservation properties. Both theoretical analysis and numerical experiments on one- and two-dimensional dynamics are carried out to verify the invariant conservation properties, convergence orders and long-time simulation stability. | arxiv:1910.13700 |
An $n \times n$ matrix $C$ is said to be {\it centrosymmetric} if it satisfies the relation $JCJ = C$, where $J$ is the $n \times n$ counteridentity matrix. Centrosymmetric matrices have a rich eigenstructure that has been studied extensively in the literature. Many results for centrosymmetric matrices have been generalized to wider classes of matrices that arise in a wide variety of disciplines. In this paper, we obtain interesting spectral properties of nonnegative centrosymmetric matrices. We show how to change one single eigenvalue, or two or three eigenvalues, of an $n \times n$ nonnegative centrosymmetric matrix without changing any of the remaining eigenvalues, the nonnegativity, or the centrosymmetric structure. Moreover, our results allow us to partially answer some known questions posed by Guo [11] and by Guo and Guo [12]. Our proofs generate algorithmic procedures that allow one to compute a solution matrix. | arxiv:2109.01563 |
Class imbalance is a characteristic known for making learning more challenging for classification models, as they may end up biased towards the majority class. A promising approach among the ensemble-based methods in the context of imbalanced learning is dynamic selection (DS). DS techniques single out a subset of the classifiers in the ensemble to label each given unknown sample according to their estimated competence in the area surrounding the query. Because only a small region is taken into account in the selection scheme, the global class disproportion may have less impact on the system's performance. However, the presence of local class overlap may severely hinder the performance of DS techniques over imbalanced distributions, as it not only exacerbates the effects of the under-representation but also introduces ambiguous and possibly unreliable samples to the competence estimation process. Thus, in this work, we propose a DS technique which attempts to minimize the effects of the local class overlap during the classifier selection procedure. The proposed method iteratively removes from the target region the instance perceived as the hardest to classify, until a classifier is deemed competent to label the query sample. The known samples are characterized using instance hardness measures that quantify the local class overlap. Experimental results show that the proposed technique can significantly outperform the baseline as well as several other DS techniques, suggesting its suitability for dealing with class under-representation and overlap. Furthermore, the proposed technique still yielded competitive results when using an under-sampled, less overlapped version of the labelled sets, especially over the problems with a high proportion of minority class samples in overlap areas. Code available at https://github.com/marianaasouza/lords. | arxiv:2206.08455 |
We demonstrate the accurate calculation of entropies and free energies for a variety of liquid metals using an extension of the two-phase thermodynamic (2PT) model based on a decomposition of the velocity autocorrelation function into gas-like (hard-sphere) and solid-like (harmonic) subsystems. The hard-sphere model for the gas-like component is shown to give systematically high entropies for liquid metals as a direct result of the unphysical Lorentzian high-frequency tail. Using a memory-function framework, we derive a generally applicable velocity autocorrelation and frequency spectrum for the diffusive component which recovers the low-frequency (long-time) behavior of the hard-sphere model while providing realistic short-time coherence and high-frequency tails to the spectrum. This approach provides a significant increase in the accuracy of the calculated entropies for liquid metals and is compared to ambient-pressure data for liquid sodium, aluminum, gallium, tin, and iron. The use of this method for the determination of melt boundaries is demonstrated with a calculation of the high-pressure bcc melt boundary for sodium. With the significantly improved accuracy available with the memory-function treatment for softer interatomic potentials, the 2PT model for entropy calculations should find broader application in high-energy-density science, warm dense matter, planetary science, geophysics, and material science. | arxiv:1311.1840 |
We study a status update system with a source, a sampler, a transmitter, and a monitor. The source governs a stochastic process that the monitor wants to observe in a timely manner. To achieve this, the sampler samples fresh update packets, which the transmitter transmits via an error-prone communication channel to the monitor. The transmitter can transmit without any constraint, i.e., it can transmit whenever an update packet is available to it. However, the sampler is subject to a sampling-rate constraint. The goal of the sampler is to devise an optimal policy that satisfies the resource constraint while minimizing the age of the monitor. We formulate this problem as a constrained Markov decision process (CMDP). We find several structural properties of an optimal policy. We leverage these optimal structures to find a low-complexity optimal policy in an explicit manner, without resorting to complex iterative schemes or techniques that require bounding the age. | arxiv:2502.18404 |
high signal to noise ratio ( snr ) consistency of model selection criteria in linear regression models has attracted a lot of attention recently. however, most of the existing literature on high snr consistency deals with model order selection. further, the limited literature available on the high snr consistency of subset selection procedures ( ssps ) is applicable to linear regression with full rank measurement matrices only. hence, the performance of ssps used in underdetermined linear models ( a. k. a compressive sensing ( cs ) algorithms ) at high snr is largely unknown. this paper fills this gap by deriving necessary and sufficient conditions for the high snr consistency of popular cs algorithms like $ l _ 0 $ - minimization, basis pursuit de - noising or lasso, orthogonal matching pursuit and dantzig selector. necessary conditions analytically establish the high snr inconsistency of cs algorithms when used with the tuning parameters discussed in literature. novel tuning parameters with snr adaptations are developed using the sufficient conditions and the choice of snr adaptations are discussed analytically using convergence rate analysis. cs algorithms with the proposed tuning parameters are numerically shown to be high snr consistent and outperform existing tuning parameters in the moderate to high snr regime. | arxiv:1703.03596 |
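As a purely illustrative aside (not from the paper), orthogonal matching pursuit, one of the CS algorithms whose high-SNR consistency is analyzed above, can be sketched in a few lines; the paper's SNR-adaptive tuning parameters and stopping rules are not implemented here, and the demo uses an orthonormal dictionary so that exact recovery is guaranteed.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select the column of A most
    correlated with the residual, then refit by least squares."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x, sorted(support)
```

With orthonormal columns, the correlations `A.T @ residual` equal the unrecovered coefficients exactly, so the greedy step never errs; for generic compressed-sensing matrices, recovery instead depends on coherence and SNR, which is precisely what the consistency analysis above addresses.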
safety has been a critical issue for the deployment of learning - based approaches in real - world applications. to address this issue, control barrier function ( cbf ) and its variants have attracted extensive attention for safety - critical control. however, due to the myopic one - step nature of cbf and the lack of principled methods to design the class - $ \ mathcal { k } $ functions, there are still fundamental limitations of current cbfs : optimality, stability, and feasibility. in this paper, we proposed a novel and unified approach to address these limitations with adaptive multi - step control barrier function ( am - cbf ), where we parameterize the class - $ \ mathcal { k } $ function by a neural network and train it together with the reinforcement learning policy. moreover, to mitigate the myopic nature, we propose a novel \ textit { multi - step training and single - step execution } paradigm to make cbf farsighted while the execution remains solving a single - step convex quadratic program. our method is evaluated on the first and second - order systems in various scenarios, where our approach outperforms the conventional cbf both qualitatively and quantitatively. | arxiv:2305.03608 |
the liquid scintillator neutrino detector (lsnd) at the los alamos meson physics facility sets bounds on neutrino oscillations in the appearance channel $\bar{\nu}_\mu \to \bar{\nu}_e$ by searching for the signature of the reaction $\bar{\nu}_e p \to e^+ n$: an $e^+$ followed by a 2.2 mev gamma ray from neutron capture. five $e^\pm$-gamma coincidences are observed in time with the lampf beam, with an estimated background of 6.2 events. the 90\% confidence limits obtained are: $\Delta m^2 < 0.07\,\mathrm{ev}^2$ for $\sin^2(2\theta) = 1$, and $\sin^2(2\theta) < 6\times 10^{-3}$ for $\Delta m^2 > 20\,\mathrm{ev}^2$. | arxiv:hep-ex/9504009 |
we study the epstein zeta function $e_n(l, s)$ for $s > \frac{n}{2}$ and determine for fixed $c > \frac{1}{2}$ the value distribution and moments of $e_n(\cdot, cn)$ (suitably normalized) as $n \to \infty$. we further discuss the random function $c \mapsto e_n(\cdot, cn)$ for $c \in [a, b]$ with $\frac{1}{2} < a < b$ and determine its limit distribution as $n \to \infty$. | arxiv:1006.1723 |
the transport of excess electrons in liquid argon driven out of equilibrium by an applied electric field is revisited using a multi - term solution of boltzmann ' s equation together with ab initio liquid phase cross - sections calculated using the dirac - fock scattering equations. the calculation of liquid phase cross - sections extends previous treatments to consider multipole polarisabilities and a non - local treatment of exchange while the accuracy of the electron - argon potential is validated through comparison of the calculated gas phase cross - section with experiment. the results presented highlight the inadequacy of local treatments of exchange that are commonly used in liquid and cluster phase cross - section calculations. the multi - term boltzmann equation framework accounting for coherent scattering enables the inclusion of the full anisotropy in the differential cross - section arising from the interaction and the structure factor, without an a priori assumption of quasi - isotropy in the velocity distribution function. the model, which contains no free parameters and accounts for both coherent scattering and liquid phase screening effects, was found to reproduce well the experimental drift velocities and characteristic energies. | arxiv:1503.00377 |
we propose in - context translation ( ict ), a general learning framework to unify visual recognition ( e. g., semantic segmentation ), low - level image processing ( e. g., denoising ), and conditional image generation ( e. g., edge - to - image synthesis ). thanks to unification, ict significantly reduces the inherent inductive bias that comes with designing models for specific tasks, and it maximizes mutual enhancement across similar tasks. however, the unification across a large number of tasks is non - trivial due to various data formats and training pipelines. to this end, ict introduces two designs. firstly, it standardizes input - output data of different tasks into rgb image pairs, e. g., semantic segmentation data pairs an rgb image with its segmentation mask in the same rgb format. this turns different tasks into a general translation task between two rgb images. secondly, it standardizes the training of different tasks into a general in - context learning, where " in - context " means the input comprises an example input - output pair of the target task and a query image. the learning objective is to generate the " missing " data paired with the query. the implicit translation process is thus between the query and the generated image. in experiments, ict unifies ten vision tasks and showcases impressive performance on their respective benchmarks. notably, ict performs well across three major categories of computer vision tasks, while its two competitors ( painter and promptdiffusion ) are only effective in at most two of these task categories. in addition, compared to its competitors, ict trained on only 4 rtx 3090 gpus is shown to be more efficient and less costly in training. | arxiv:2404.09633 |
we consider the schrödinger operator on the halfline with the potential $(m^2 - \frac14)\frac{1}{x^2}$, often called the bessel operator. we assume that $m$ is complex. we study the domains of various closed homogeneous realizations of the bessel operator. in particular, we prove that the domain of its minimal realization for $|\Re(m)| < 1$ and of its unique closed realization for $\Re(m) > 1$ coincide with the minimal second order sobolev space. on the other hand, if $\Re(m) = 1$ the minimal second order sobolev space is a subspace of infinite codimension of the domain of the unique closed bessel operator. the properties of bessel operators are compared with the properties of the corresponding bilinear forms. | arxiv:2101.01001 |
visual place recognition techniques based on deep learning, which have imposed themselves as the state - of - the - art in recent years, do not generalize well to environments visually different from the training set. thus, to achieve top performance, it is sometimes necessary to fine - tune the networks to the target environment. to this end, we propose a self - supervised domain calibration procedure based on robust pose graph optimization from simultaneous localization and mapping ( slam ) as the supervision signal without requiring gps or manual labeling. moreover, we leverage the procedure to improve uncertainty estimation for place recognition matches which is important in safety critical applications. we show that our approach can improve the performance of a state - of - the - art technique on a target environment dissimilar from its training set and that we can obtain uncertainty estimates. we believe that this approach will help practitioners to deploy robust place recognition solutions in real - world applications. our code is available publicly : https : / / github. com / mistlab / vpr - calibration - and - uncertainty | arxiv:2203.04446 |
the positive mass theorem in higher dimensions is proved using causality arguments inspired by those of penrose, sorkin and woolgar in 3 + 1 dimensions. | arxiv:2010.05086 |
maximum entropy is an image reconstruction method conceived to image a sparsely occupied field of view and therefore particularly appropriate to achieve super - resolution effects. although widely used in image deconvolution, this method has been formulated in radio astronomy for the analysis of observations in the spatial frequency domain, and an interactive data language ( idl ) code has been implemented for image reconstruction from solar x - ray fourier data. however, this code relies on a non - convex formulation of the constrained optimization problem addressed by the maximum entropy approach and this sometimes results in unreliable reconstructions characterized by unphysical shrinking effects. this paper introduces a new approach to maximum entropy based on the constrained minimization of a convex functional. in the case of observations recorded by the reuven ramaty high energy solar spectroscopic imager ( rhessi ), the resulting code provides the same super - resolution effects of the previous algorithm, while working properly also when that code produces unphysical reconstructions. results are also provided of testing the algorithm with synthetic data simulating observations of the spectrometer / telescope for imaging x - rays ( stix ) in solar orbiter. the new code is available in the { \ em { hessi } } folder of the solar software ( ssw ) tree. | arxiv:2002.07921 |
it is easy to see that in a connected graph any 2 longest paths have a vertex in common. for $k \geq 7$, skupien in [7] obtained a connected graph in which some $k$ longest paths have no common vertex, but every $k-1$ longest paths have a common vertex. it is not known whether every 3 longest paths in a connected graph have a common vertex, and similarly for 4, 5, and 6 longest paths. in [5] the authors give an upper bound on the distance among 3 longest paths in a connected graph. in this paper we give a similar upper bound on the distance between 4 longest paths, and also for $k$ longest paths in general. | arxiv:1607.08850 |
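The intersection property is easy to experiment with on small graphs. The following brute-force sketch (an illustration, not the paper's method) enumerates all longest simple paths of a tiny undirected graph and checks whether every pair shares a vertex:

```python
from itertools import combinations

def longest_paths(adj):
    """Enumerate all longest simple paths of a small undirected graph
    {vertex: set(neighbours)} by exhaustive depth-first search."""
    best, found = 0, []
    def dfs(path):
        nonlocal best, found
        if len(path) > best:
            best, found = len(path), []
        if len(path) == best:
            found.append(tuple(path))
        for w in adj[path[-1]]:
            if w not in path:
                path.append(w)
                dfs(path)
                path.pop()
    for v in adj:
        dfs([v])
    # Each path is found once per direction; keep one representative.
    return [list(p) for p in {min(p, p[::-1]) for p in found}]

def every_pair_intersects(paths):
    """Do every two longest paths share at least one vertex?"""
    return all(set(p) & set(q) for p, q in combinations(paths, 2))
```

On the star $K_{1,4}$, for example, every longest path passes through the centre, so any number of them have a common vertex; Skupien-type counterexamples require much more intricate constructions.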
we carry out the first main step towards the construction of new examples of complete embedded self - similar surfaces under mean curvature flow. an approximate solution is obtained by taking two known examples of self - similar surfaces and desingularizing the intersection circle using an appropriately modified singly periodic scherk surface, called the core. using an inverse function theorem, we show that for small boundary conditions on the core, there is an embedded surface close to the core that is a solution of the equation for self - similar surfaces. this provides us with an adequate central piece to substitute for the intersection. | arxiv:math/0610695 |
current implementations of quantum key distribution ( qkd ) typically rely on prepare - and - measure ( p & m ) schemes. unfortunately, these implementations are not completely secure, unless security proofs fully incorporate all imperfections of real devices. so far, existing proofs have primarily focused on imperfections of either the light source or the measurement device. in this paper, we establish a security proof for the loss - tolerant p & m qkd protocol that incorporates imperfections in both the source and the detectors. specifically, we demonstrate the security of this scheme when the emitted states deviate from the ideal ones and bob ' s measurement device does not meet the basis - independent detection efficiency condition. furthermore, we conduct an experiment to characterise the detection efficiency mismatch of commercial single - photon detectors as a function of the polarisation state of the input light, and determine the expected secret key rate in the presence of state preparation flaws when using such detectors. our work provides a way towards guaranteeing the security of actual implementations of widely deployed p & m qkd. | arxiv:2412.09684 |
dom / wdeg is one of the best performing heuristics for dynamic variable ordering in backtrack search [ boussemart et al., 2004 ]. as originally defined, this heuristic increments the weight of the constraint that causes a domain wipeout ( i. e., a dead - end ) when enforcing arc consistency during search. " the process of weighting constraints with dom / wdeg is not defined when more than one constraint lead to a domain wipeout [ vion et al., 2011 ]. " in this paper, we investigate how weights should be updated in the context of two high - level consistencies, namely, singleton ( poac ) and relational consistencies ( rnic ). we propose, analyze, and empirically evaluate several strategies for updating the weights. we statistically compare the proposed strategies and conclude with our recommendations. | arxiv:1711.00909 |
we perform the real - time lattice simulation of an open quantum system, which is based on the schwinger - keldysh path integral representation of the lindblad formalism. although the real - time simulation generally suffers from the sign problem, there exist a few exceptional cases. we focus on a sign - problem - free system of a non - relativistic spinless fermion and analyze time evolution under driving and dissipation. | arxiv:2111.04937 |
the high latitude spectroscopic survey ( hlss ) is the reference baseline spectroscopic survey for nasa ' s nancy grace roman space telescope, measuring redshifts of $ \ sim 10 $ m h $ \ alpha $ emission line galaxies over a $ 2000 $ deg $ ^ 2 $ footprint at $ z = 1 - 2 $. in this work, we use a realistic roman galaxy mock catalogue to explore optimal phenomenological modeling of the measured power spectrum. we consider two methods for modeling the redshift - space distortions ( kaiser squashing and another with a window function on $ \ beta $ that selects out the coherent radial infall pairwise velocities, $ m _ a $ and $ m _ b $, respectively ), two models for the nonlinear impact of baryons that smears the bao signal ( a fixed ratio between the smearing scales in the perpendicular and parallel dimensions and another where these smearing scales are kept as a free parameters, p $ _ { dw } ( k | k _ * ) $ and p $ _ { dw } ( k | \ sigma _ \ perp, \ sigma _ \ parallel ) $, respectively ), and two analytical emulations of nonlinear growth ( one employing the halo model and another formulated from simulated galaxy clustering of a semi - analytical model, $ f _ { hm } $ and $ f _ { sam } $, respectively ). we find that the best model combination employing $ f _ { hm } $ is $ p _ { dw } ( k | k _ * ) * f _ { hm } * m _ b $, while the best combination employing $ f _ { sam } $ is $ p _ { dw } ( k | k _ * ) * f _ { sam } * m _ b $, which leads to unbiased measurements of cosmological parameters. we compare these to the effective field theory of large - scale structure perturbation theory model $ p _ { eft } ( k | \ theta ) $, and find that our simple phenomenological models are comparable across the entire redshift range for $ k _ { max } = 0. 25 $ and $ 0. 3 $ $ h $ / mpc. we expect the tools that we have developed to be useful in probing dark energy and testing gravity using roman in an accurate and robust manner. | arxiv:2212.08699 |
the physics reach of a low threshold ( 100 ev ) scintillating argon bubble chamber sensitive to coherent elastic neutrino - nucleus scattering ( ce $ \ nu $ ns ) from reactor neutrinos is studied. the sensitivity to the weak mixing angle, neutrino magnetic moment, and a light $ z ' $ gauge boson mediator are analyzed. a monte carlo simulation of the backgrounds is performed to assess their contribution to the signal. the analysis shows that world - leading sensitivities are achieved with a one - year exposure for a 10 kg chamber at 3 m from a 1 mw $ _ { th } $ research reactor or a 100 kg chamber at 30 m from a 2000 mw $ _ { th } $ power reactor. such a detector has the potential to become the leading technology to study ce $ \ nu $ ns using nuclear reactors. | arxiv:2101.08785 |
a canonical analysis for general relativity is performed on a null surface without fixing the diffeomorphism gauge, and the canonical pairs of configuration and momentum variables are derived. next to the well - known spin - 2 pair, also spin - 1 and spin - 0 pairs are identified. the boundary action for a null boundary segment of spacetime is obtained, including terms on codimension two corners. | arxiv:1611.03096 |
mems ( micro - electromechanical systems ) is the technology of microscopic devices incorporating both electronic and moving parts. mems are made up of components between 1 and 100 micrometres in size ( i. e., 0. 001 to 0. 1 mm ), and mems devices generally range in size from 20 micrometres to a millimetre ( i. e., 0. 02 to 1. 0 mm ), although components arranged in arrays ( e. g., digital micromirror devices ) can be more than 1000 mm2. they usually consist of a central unit that processes data ( an integrated circuit chip such as microprocessor ) and several components that interact with the surroundings ( such as microsensors ). because of the large surface area to volume ratio of mems, forces produced by ambient electromagnetism ( e. g., electrostatic charges and magnetic moments ), and fluid dynamics ( e. g., surface tension and viscosity ) are more important design considerations than with larger scale mechanical devices. mems technology is distinguished from molecular nanotechnology or molecular electronics in that the latter two must also consider surface chemistry. the potential of very small machines was appreciated before the technology existed that could make them ( see, for example, richard feynman ' s famous 1959 lecture there ' s plenty of room at the bottom ). mems became practical once they could be fabricated using modified semiconductor device fabrication technologies, normally used to make electronics. these include molding and plating, wet etching ( koh, tmah ) and dry etching ( rie and drie ), electrical discharge machining ( edm ), and other technologies capable of manufacturing small devices. they merge at the nanoscale into nanoelectromechanical systems ( nems ) and nanotechnology. = = history = = an early example of a mems device is the resonant - gate transistor, an adaptation of the mosfet, developed by robert a. wickstrom for harvey c. nathanson in 1965. 
another early example is the resonistor, an electromechanical monolithic resonator patented by raymond j. wilfinger between 1966 and 1971. during the 1970s to early 1980s, a number of mosfet microsensors were developed for measuring physical, chemical, biological and environmental parameters. the term " mems " was introduced in 1986. s. c. jacobsen ( pi ) and j. e | https://en.wikipedia.org/wiki/MEMS |
we investigate the effect of the proportional hazards assumption on prognostic and predictive models of the survival time of patients suffering from amyotrophic lateral sclerosis ( als ). we theoretically compare the underlying model formulations of several variants of survival forests and implementations thereof, including random forests for survival, conditional inference forests, ranger, and survival forests with $ l _ 1 $ splitting, with two novel variants, namely distributional and transformation survival forests. theoretical considerations explain the low power of log - rank - based splitting in detecting patterns in non - proportional hazards situations in survival trees and corresponding forests. this limitation can potentially be overcome by the alternative split procedures suggested herein. we empirically investigated this effect using simulation experiments and a re - analysis of the pro - act database of als survival, giving special emphasis to both prognostic and predictive models. | arxiv:1902.01587 |
humans can understand and produce new utterances effortlessly, thanks to their compositional skills. once a person learns the meaning of a new verb " dax, " he or she can immediately understand the meaning of " dax twice " or " sing and dax. " in this paper, we introduce the scan domain, consisting of a set of simple compositional navigation commands paired with the corresponding action sequences. we then test the zero - shot generalization capabilities of a variety of recurrent neural networks ( rnns ) trained on scan with sequence - to - sequence methods. we find that rnns can make successful zero - shot generalizations when the differences between training and test commands are small, so that they can apply " mix - and - match " strategies to solve the task. however, when generalization requires systematic compositional skills ( as in the " dax " example above ), rnns fail spectacularly. we conclude with a proof - of - concept experiment in neural machine translation, suggesting that lack of systematicity might be partially responsible for neural networks ' notorious training data thirst. | arxiv:1711.00350 |
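To make the task concrete, here is a toy interpreter for a small fragment of SCAN-like commands (an illustration only; the actual SCAN grammar also includes modifiers such as "after", "opposite", and directions). The point of the paper is that an RNN must induce this compositional mapping from examples rather than being given it:

```python
def interpret(command):
    """Execute a toy SCAN-style command: primitive actions, the
    modifiers 'twice'/'thrice', and 'X and Y' sequencing."""
    prim = {"walk": "WALK", "run": "RUN", "jump": "JUMP", "look": "LOOK"}
    reps = {"twice": 2, "thrice": 3}
    actions = []
    for part in command.split(" and "):
        tokens = part.split()
        phrase = [prim[tokens[0]]]
        if len(tokens) == 2:
            phrase *= reps[tokens[1]]
        actions += phrase
    return actions
```

A seq2seq model trained on pairs like `("jump twice", ["JUMP", "JUMP"])` generalizes zero-shot only when test commands recombine familiar pieces in familiar ways; systematic recombination (the "dax" case) is where it fails.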
we present results of a theoretical study of structural and superfluid properties of parahydrogen clusters comprising 25, 26 and 27 molecules at low temperature. the microscopic model utilized here is based on the silvera - goldman pair potential. numerical results are obtained by means of quantum monte carlo simulations, making use of the continuous - space worm algorithm. the clusters are superfluid in the low temperature limit, but display markedly different physical behaviours. for n = 25 and 27, superfluidity at low temperature arises as clusters melt, i. e., become progressively liquid - like as a result of quantum effects. on the other hand, for n = 26 the cluster remains rigid and solid - like. we argue that this cluster can be regarded as a mesoscopic " supersolid ". this physical picture is supported by results of simulations in which a single parahydrogen molecule in the cluster is isotopically substituted. | arxiv:1101.5054 |
we construct various systems of coherent states ( scs ) on the $ o ( d ) $ - equivariant fuzzy spheres $ s ^ d _ \ lambda $ ( $ d = 1, 2 $, $ d = d \! + \! 1 $ ) constructed in [ g. fiore, f. pisacane, j. geom. phys. 132 ( 2018 ), 423 - 451 ] and study their localizations in configuration space as well as angular momentum space. these localizations are best expressed through the $ o ( d ) $ - invariant square space and angular momentum uncertainties $ ( \ delta \ boldsymbol { x } ) ^ 2, ( \ delta \ boldsymbol { l } ) ^ 2 $ in the ambient euclidean space $ \ mathbb { r } ^ d $. we also determine general bounds ( e. g. uncertainty relations from commutation relations ) for $ ( \ delta \ boldsymbol { x } ) ^ 2, ( \ delta \ boldsymbol { l } ) ^ 2 $, and partly investigate which scs may saturate these bounds. in particular, we determine $ o ( d ) $ - equivariant systems of optimally localized coherent states, which are the closest quantum states to the classical states ( i. e. points ) of $ s ^ d $. we compare the results with their analogs on commutative $ s ^ d $. we also show that on $ s ^ 2 _ \ lambda $ our optimally localized states are better localized than those on the madore - hoppe fuzzy sphere with the same cutoff $ \ lambda $. | arxiv:1906.01881 |
scientists pursue collective knowledge, but they also seek personal recognition from their peers. when scientists decide whether or not to work on a big new problem, they weigh the potential rewards of a major discovery against the costs of setting aside other projects. these self - interested choices can potentially spread researchers across problems in an efficient manner, but efficiency is not guaranteed. we use simple economic models to understand such decisions and their collective consequences. academic science differs from industrial r & d in that academics often share partial solutions to gain reputation. this convention of open science is thought to accelerate collective discovery, but we find that it need not do so. the ability to share partial results influences which scientists work on a particular problem ; consequently, open science can slow down the solution of a problem if it deters entry by important actors. | arxiv:1605.05822 |
developed to alleviate prohibitive labeling costs, active learning ( al ) methods aim to reduce label complexity in supervised learning. while recent work has demonstrated the benefit of using al in combination with large pre - trained language models ( plms ), it has often overlooked the practical challenges that hinder the effectiveness of al. we address these challenges by leveraging representation smoothness analysis to ensure al is feasible, that is, both effective and practicable. firstly, we propose an early stopping technique that does not require a validation set - - often unavailable in realistic al conditions - - and observe significant improvements over random sampling across multiple datasets and al methods. further, we find that task adaptation improves al, whereas standard short fine - tuning in al does not provide improvements over random sampling. our work demonstrates the usefulness of representation smoothness analysis for al and introduces an al stopping criterion that reduces label complexity. | arxiv:2212.11680 |
data assimilation is an iterative approach to the problem of estimating the state of a dynamical system using both current and past observations of the system together with a model for the system ' s time evolution. rather than solving the problem from scratch each time new observations become available, one uses the model to ` ` forecast ' ' the current state, using a prior state estimate ( which incorporates information from past data ) as the initial condition, then uses current data to correct the prior forecast to a current state estimate. this bayesian approach is most effective when the uncertainty in both the observations and in the state estimate, as it evolves over time, are accurately quantified. in this article, we describe a practical method for data assimilation in large, spatiotemporally chaotic systems. the method is a type of ` ` ensemble kalman filter ' ', in which the state estimate and its approximate uncertainty are represented at any given time by an ensemble of system states. we discuss both the mathematical basis of this approach and its implementation ; our primary emphasis is on ease of use and computational speed rather than improving accuracy over previously published approaches to ensemble kalman filtering. we include some numerical results demonstrating the efficiency and accuracy of our implementation for assimilating real atmospheric data with the global forecast model used by the u. s. national weather service. | arxiv:physics/0511236 |
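For orientation, a single analysis step of a stochastic ensemble Kalman filter can be sketched as below. This is a generic textbook-style illustration, not the article's implementation (which emphasizes localization and efficiency for spatiotemporally chaotic systems); names and shapes are assumptions of this sketch.

```python
import numpy as np

def enkf_update(X, y, H, R, rng):
    """One stochastic ensemble Kalman filter analysis step.
    X: (n_members, n_state) forecast ensemble; y: observation vector;
    H: observation operator; R: observation-error covariance."""
    n = X.shape[0]
    A = X - X.mean(axis=0)                        # ensemble anomalies
    P = A.T @ A / (n - 1)                         # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
    # Perturb the observation for each member so that the analysis
    # ensemble carries the correct posterior spread.
    Y = y + rng.multivariate_normal(np.zeros(len(y)), R, size=n)
    return X + (Y - X @ H.T) @ K.T
```

In a scalar example with equal prior and observation variance, the analysis mean lands halfway between forecast and observation and the variance is halved, which is the Bayesian update the ensemble approximates.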
we study an initial and boundary value problem modelling the motion of a rigid body in a heat conducting gas. the solid is supposed to be a perfect thermal insulator. the gas is described by the compressible navier - stokes - fourier equations, whereas the motion of the solid is governed by newton ' s laws. the main results assert the existence of strong solutions, in an l p - l q setting, both locally in time and globally in time for small data. the proof is essentially using the maximal regularity property of associated linear systems. this property is checked by proving the r - sectoriality of the corresponding operators, which in turn is obtained by a perturbation method. | arxiv:1710.08245 |
equations of hammerstein type cover large variety of areas and are of much interest to a wide audience due to the fact that they have applications in numerous areas. suitable conditions are imposed to obtain a strong convergence result for nonlinear integral equations of hammerstein type with monotone type mappings. a technique which does not involve the assumption of existence of a real constant whose calculation is unclear has been used in this study to obtain the strong convergence result. moreover, our technique is applied to show the forced oscillations of finite amplitude of a pendulum as a specific example of nonlinear integral equations of hammerstein type. numerical example is given for the illustration of the convergence of the sequences of iteration. these are done to demonstrate to our readers that this approach can be applied to problems arising in physical systems. | arxiv:2112.07117 |
a local ring $r$ is called $z$-local if $j(r) = z(r)$ and $j(r)^2 = 0$. in this paper the structures of a class of $z$-local rings are determined. | arxiv:math/0512566 |
deep learning recommendation models ( dlrms ) have gained popularity in recommendation systems due to their effectiveness in handling large - scale recommendation tasks. the embedding layers of dlrms have become the performance bottleneck due to their intensive needs on memory capacity and memory bandwidth. in this paper, we propose updlrm, which utilizes real - world processingin - memory ( pim ) hardware, upmem dpu, to boost the memory bandwidth and reduce recommendation latency. the parallel nature of the dpu memory can provide high aggregated bandwidth for the large number of irregular memory accesses in embedding lookups, thus offering great potential to reduce the inference latency. to fully utilize the dpu memory bandwidth, we further studied the embedding table partitioning problem to achieve good workload - balance and efficient data caching. evaluations using real - world datasets show that, updlrm achieves much lower inference time for dlrm compared to both cpu - only and cpu - gpu hybrid counterparts. | arxiv:2406.13941 |
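The partitioning and load-balancing idea can be illustrated in plain Python (a hypothetical stand-in; the real UPMEM DPU programming interface is not used here, and round-robin is only one of the possible placement policies):

```python
import numpy as np
from collections import Counter

def partition_round_robin(num_rows, num_units):
    """Assign embedding-table rows to memory units round-robin; a simple
    stand-in for the table-partitioning problem described above."""
    return {row: row % num_units for row in range(num_rows)}

def unit_loads(owner, lookup_indices, num_units):
    """Count how many embedding lookups each unit serves for a batch;
    balanced loads mean the aggregated bandwidth is well utilised."""
    counts = Counter(owner[i] for i in lookup_indices)
    return [counts.get(u, 0) for u in range(num_units)]
```

Under roughly uniform access patterns, round-robin placement spreads a batch of irregular lookups evenly across units; skewed real-world access distributions are what make the partitioning and caching problem studied in the paper nontrivial.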
in a graph $ g $, a vertex dominates itself and its neighbors. a subset $ d \ subseteq v ( g ) $ is a double dominating set of $ g $ if $ d $ dominates every vertex of $ g $ at least twice. a signed graph $ \ sigma = ( g, \ sigma ) $ is a graph $ g $ together with an assignment $ \ sigma $ of positive or negative signs to all its edges. a cycle in a signed graph is positive if the product of its edge signs is positive. a signed graph is balanced if all its cycles are positive. a subset $ d \ subseteq v ( \ sigma ) $ is a double dominating set of $ \ sigma $ if it satisfies the following conditions : ( i ) $ d $ is a double dominating set of $ g $, and ( ii ) $ \ sigma [ d : v \ setminus d ] $ is balanced, where $ \ sigma [ d : v \ setminus d ] $ is the subgraph of $ \ sigma $ induced by the edges of $ \ sigma $ with one end point in $ d $ and the other end point in $ v \ setminus d $. the cardinality of a minimum double dominating set of $ \ sigma $ is the double domination number $ \ gamma _ { \ times 2 } ( \ sigma ) $. in this paper, we give bounds for the double domination number of signed cubic graphs. we also obtain some bounds on the double domination number of signed generalized petersen graphs and signed i - graphs. | arxiv:1907.11099 |
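The two defining conditions can be checked directly on small examples. Below is an illustrative sketch (not from the paper) that tests whether a set is a double dominating set of a signed graph, using the standard fact that a signed graph is balanced iff it admits a consistent 2-colouring:

```python
def is_double_dominating(adj, D):
    """In a graph, a vertex dominates itself and its neighbours;
    D must dominate every vertex at least twice."""
    return all(len(({v} | adj[v]) & D) >= 2 for v in adj)

def is_balanced(edges, sign):
    """A signed graph is balanced iff its vertices can be 2-coloured so
    that positive edges join equal colours and negative edges join
    opposite colours; check by depth-first colouring."""
    colour, nbrs = {}, {}
    for u, v in edges:
        nbrs.setdefault(u, []).append(v)
        nbrs.setdefault(v, []).append(u)
    for start in nbrs:
        if start in colour:
            continue
        colour[start] = 1
        stack = [start]
        while stack:
            u = stack.pop()
            for v in nbrs[u]:
                want = colour[u] * sign[frozenset((u, v))]
                if v not in colour:
                    colour[v] = want
                    stack.append(v)
                elif colour[v] != want:
                    return False
    return True

def is_signed_double_dominating(adj, sign, D):
    """Condition (i): D double dominates G; condition (ii): the crossing
    subgraph sigma[D : V \\ D] is balanced."""
    cross = [(u, v) for u in D for v in adj[u] if v not in D]
    return is_double_dominating(adj, D) and is_balanced(cross, sign)
```

On the all-positive complete graph $K_4$, the set $\{0, 2\}$ works; flipping one crossing edge to negative creates a negative 4-cycle in $\sigma[D : V \setminus D]$ and destroys balance.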
probability density functions and conditional averages of velocity gradients derived from upper ocean observations are compared with results from forced simulations of the two - dimensional navier - stokes equations. ocean data are derived from topex satellite altimeter measurements. the simulations use rapid forcing on large scales, characteristic of surface winds. the probability distributions of transverse velocity derivatives from the ocean observations agree with the forced simulations, though they differ from unforced simulations reported elsewhere. the distribution and cross - correlation of velocity derivatives provide clear evidence that large coherent eddies play only a minor role in generating the observed statistics. | arxiv:cond-mat/0007076 |
We study the ring of Weyl invariant $E_8$ weak Jacobi forms. Wang recently proved that the ring is not a polynomial algebra. We consider a polynomial algebra which contains the ring as a subset and clarify the necessary and sufficient condition for an element of the polynomial algebra to be a Weyl invariant $E_8$ weak Jacobi form. This serves as a new algorithm for constructing all the Jacobi forms of given weight and index. The algorithm is purely algebraic and does not require Fourier expansion. Using this algorithm, we determine the generators of the free module of Weyl invariant $E_8$ weak Jacobi forms of given index $m$ for $m \le 20$. We also determine the lowest weight generators of the free module of index $m$ for $m \le 28$. Our results support the lower bound conjecture of Sun and Wang and prove explicitly that there exist generators of the ring of Weyl invariant $E_8$ weak Jacobi forms of weight $-4m$ and index $m$ for all $12 \le m \le 28$. | arxiv:2201.06895 |
Scores of exotic hadrons, particularly tetraquarks and pentaquarks in the heavy-quark sector, have been observed in the past 20 years, and more continue to be discovered to this day. Unlike mesons and baryons, such exotics are not mandated to exist under our current understanding of QCD, meaning that each new discovery presents fresh insights into the expansive possibilities of strong-interaction physics. Even the basic architecture of these multiquark states remains an open question. Here we discuss the merits and deficiencies of their best-known dynamical descriptions, recognizing that the eventual universal model for exotics will almost certainly require the synthesis of more than one fundamental paradigm. | arxiv:2308.00781 |
Symbolic data analysis works with variables for which each unit or class of units takes a finite set of values/categories, an interval, or a distribution (a histogram, for instance). When an empirical distribution corresponds to each observation, we have a histogram-valued variable; it reduces to the case of an interval-valued variable if each unit takes values on only one interval with probability equal to one. Distribution and Symmetric Distribution is a linear regression model proposed for histogram-valued variables that may be particularized to interval-valued variables. This model is defined for $n$ explicative variables and is based on the distributions considered within the intervals. In this paper we study the special case where the uniform distribution is assumed in each observed interval. As in the classical case, a goodness-of-fit measure is deduced from the model. Some illustrative examples are presented. A simulation study allows discussing interpretations of the behavior of the model for this variable type. | arxiv:1304.8089 |
We investigate the impact of dissipation on weak measurements. While weak measurements have been successful in signal amplification, dissipation can compromise their usefulness. More precisely, we show that in systems with non-degenerate eigenstates, weak values always converge to the expectation value of the measured observable as dissipation time tends to infinity, in contrast to systems with degenerate eigenstates, where the weak values can remain anomalous, i.e., outside the range of eigenvalues of the observable, even in the limit of an infinite dissipation time. In addition, we propose a method for extracting information about the dissipative dynamics of a system using weak values at short dissipation times. Specifically, we explore the amplification of the dissipation rate in a two-level system and the use of weak values to differentiate between Markovian and non-Markovian dissipative dynamics. We also find that weak measurements operating around a weak atom-cavity coupling can probe the atom dissipation through the weak value of non-Hermitian operators within the rotating-wave approximation of the weak interaction. | arxiv:2308.00722 |
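The anomalous weak values mentioned above can be illustrated with a short numerical sketch (the states and observable below are our own choices for illustration; the paper's dissipative dynamics are not modelled here). The weak value $A_w = \langle f|A|i\rangle / \langle f|i\rangle$ can lie far outside the eigenvalue range $[-1, 1]$ of $\sigma_z$ when the pre- and post-selected states are nearly orthogonal:

```python
import numpy as np

def weak_value(A, pre, post):
    """Weak value A_w = <f|A|i> / <f|i> for pre-selected state |i>
    and post-selected state |f>."""
    return (post.conj() @ A @ pre) / (post.conj() @ pre)

sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

# Pre- and post-selected qubit states parametrized by angles on the
# real unit circle; with a - b close to pi/2 the overlap <f|i> is tiny.
a, b = 0.8, -0.75
pre  = np.array([np.cos(a), np.sin(a)], dtype=complex)
post = np.array([np.cos(b), np.sin(b)], dtype=complex)

A_w = weak_value(sigma_z, pre, post)   # = cos(a+b)/cos(a-b), far outside [-1, 1]
```

Here the ordinary expectation value $\langle i|\sigma_z|i\rangle$ stays inside $[-1,1]$, while the weak value is amplified by the near-orthogonality of the two states.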
Let $\{X_i(t), t \ge 0\}$, $1 \le i \le n$, be independent copies of a random process $\{X(t), t \ge 0\}$. For a given positive constant $u$, define the set of $r$th conjunctions $C_r(u) := \{t \in [0,1] : X_{r:n}(t) > u\}$, with $X_{r:n}$ the $r$th largest order statistic of $X_i$, $1 \le i \le n$. In numerical applications such as brain mapping and digital communication systems, of interest is the approximation of $P_r(u) = \mathbb{P}\{C_r(u) \neq \phi\}$. Instead of the stationary processes dealt with by Dȩbicki et al. (2014), we consider in this paper $X$ a self-similar $\mathbb{R}$-valued process with $p$-continuous sample paths. By imposing Albin's conditions directly on $X$, we establish an exact asymptotic expansion of $P_r(u)$ as $u$ tends to infinity. As a by-product, we derive the asymptotic tail behaviour of the mean sojourn time of $X_{r:n}$ over an increasing threshold. Finally, our findings are illustrated for the case that $X$ is a bi-fractional Brownian motion, a sub-fractional Brownian motion, and a generalized self-similar skew-Gaussian process. | arxiv:1412.3934 |
The $D$-meson production is investigated by considering the unintegrated gluon distribution within the dipole approach in the momentum representation. We analyze the $D$-meson spectrum accounting for the effects of the nonlinear behavior of the QCD dynamics, which can be accordingly addressed in the dipole framework. The unintegrated gluon distribution is obtained by using the geometric scaling property, and the results are compared to the Glauber-Gribov framework. The absolute transverse momentum spectra and the nuclear modification ratios are investigated. Predictions are compared with the experimental measurements by the ALICE and LHCb collaborations in $pA$ collisions for different rapidity bins. | arxiv:2205.00925 |
Broadcasting quantum and classical information is a basic task in quantum information processing, and is also a useful model in the study of quantum correlations, including quantum discord. We establish a full operational characterization of two-sided quantum discord in terms of bilocal broadcasting of quantum correlations. Moreover, we show that both the optimal fidelity of unilocal broadcasting of the correlations in an arbitrary bipartite quantum state and that of broadcasting an arbitrary set of quantum states can be formulated as semidefinite programs (SDPs), which are efficiently computable. We also analyze some properties of these SDPs and evaluate the broadcasting fidelities for some cases of interest. | arxiv:1705.06071 |
In this paper, we study the multi-robot task allocation problem, where a group of robots needs to be allocated to a set of tasks so that the tasks can be finished optimally. One task may need more than one robot to finish it; therefore, the robots need to form coalitions to complete these tasks. Multi-robot coalition formation for task allocation is a well-known NP-hard problem. To solve this problem, we use a linear-programming-based graph partitioning approach along with a region-growing strategy, which allocates (near-)optimal robot coalitions to tasks in a negligible amount of time. Our proposed algorithm is fast (taking only 230 seconds for 100 robots and 10 tasks) and also finds a near-optimal solution (up to 97.66% of the optimal). We have empirically demonstrated that the proposed approach always finds a solution that is closer (up to 9.1 times) to the optimal solution than a theoretical worst-case bound proved in an earlier work. | arxiv:1805.08629 |
Breast implants are widely used after breast cancer resection and must be changed regularly to avoid a rupture. To date, there are no quantitative criteria to help this decision. The mechanical evolution of the gels and membranes of the implants is still under-investigated, although it can lead to early rupture. In this study, 35 breast explants that had been implanted in patients for up to 17 years were characterized by ex vivo measurements of their mechanical properties. Using acoustic radiation force impulse (ARFI) ultrasound elastography, an imaging method for non-destructive mechanical characterization, an increase in the stiffness of the explants was observed. This increase was correlated with the implantation duration, primarily after 8 years of implantation. With an increase of the shear modulus of up to a factor of nearly 3, the loss of flexibility of the implants is likely to lead to a significant increase in their risk of rupture. A complementary analysis of the gel from the explants by mass spectrometry imaging (MSI) and liquid chromatography coupled to high-resolution mass spectrometry (LC-HRMS) confirms the presence of metabolites of cholesterol originating from the breast tissues, which most likely crossed the membrane of the implants and degrade the gel. By observing the consequences of the physico-chemical mechanisms at work within patients, this study shows that ultrasound elastography could be used in vivo as a quantitative indicator of the risk of breast implant rupture and help inform their replacement. | arxiv:2401.14026 |
Controlling the balanced gain and loss in a PT-symmetric system is a rather challenging task. Utilizing Floquet theory, we explore the constructive role of periodic modulation in controlling the gain and loss of a PT-symmetric optical coupler. It is found that the gain and loss of the system can be manipulated by applying a periodic modulation. Further, such an originally non-Hermitian system can even be modulated into an effective Hermitian system, derived by the high-frequency Floquet method. Therefore, compared with other PT-symmetry control schemes, our protocol can extend the unbroken PT-symmetric range to a wider parameter region. Our results provide a promising approach for controlling the gain and loss of a realistic system. | arxiv:1612.04628 |
In preceding works, we have given a non-Abelian dual superconductivity picture for quark confinement and demonstrated the numerical evidence on the lattice. In this talk, we discuss the confinement and deconfinement phase transition at finite temperature in view of the dual superconductivity. We investigate chromomagnetic monopole currents induced by chromoelectric flux in both the confinement and deconfinement phases by numerical simulations on a lattice at finite temperature, and discuss the role of the chromomagnetic monopole in the confinement/deconfinement phase transition. | arxiv:1511.04155 |
In this paper we analyze the necessary number of samples to estimate the gradient of any multidimensional smooth (possibly non-convex) function in a zero-order stochastic oracle model. In this model, an estimator has access to noisy values of the function in order to produce the estimate of the gradient. We also provide an analysis of the sufficient number of samples for the finite difference method, a classical technique in numerical linear algebra. For $T$ samples and $d$ dimensions, our information-theoretic lower bound is $\Omega(\sqrt{d/T})$. We show that the finite difference method for a bounded-variance oracle has rate $O(d^{4/3}/\sqrt{T})$ for functions with zero third and higher order derivatives. These rates are tight for Gaussian oracles. Thus, the finite difference method is not minimax optimal, and therefore there is space for the development of better gradient estimation methods. | arxiv:2003.13881 |
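A minimal sketch of the finite difference method in this zero-order setting may help fix ideas (the step size, central differencing, and per-coordinate averaging below are generic textbook choices, not the paper's exact estimator):

```python
import numpy as np

def fd_gradient(f, x, h=1e-3, samples_per_coord=1):
    """Central finite-difference gradient estimate from a (possibly noisy)
    zero-order oracle f. Each coordinate uses 2 * samples_per_coord oracle
    calls; averaging repeated samples reduces oracle-noise variance, while
    h trades smoothness bias against noise amplification."""
    d = x.size
    g = np.zeros(d)
    for i in range(d):
        e = np.zeros(d)
        e[i] = h
        vals = [(f(x + e) - f(x - e)) / (2 * h)
                for _ in range(samples_per_coord)]
        g[i] = np.mean(vals)
    return g
```

With a noiseless quadratic oracle the central difference is exact, since the third and higher derivatives vanish, which is precisely the function class for which the paper states its $O(d^{4/3}/\sqrt{T})$ rate.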
We perform a three-dimensional study of steady-state viscous fingers that develop in linear channels. By means of a three-dimensional lattice-Boltzmann scheme that mimics the full macroscopic equations of motion of the fluid momentum and order parameter, we study the effect of the thickness of the channel in two cases. First, for total displacement of the fluids in the channel thickness direction, we find that the steady-state finger is effectively two-dimensional and that previous two-dimensional results can be recovered by taking into account the effect of a curved meniscus across the channel thickness as a contribution to surface stresses. Secondly, when a thin film develops in the channel thickness direction, the finger narrows with increasing channel aspect ratio, in agreement with experimental results. The effect of the thin film renders the problem three-dimensional, and results deviate from the two-dimensional prediction. | arxiv:0709.2785 |
In this paper we analyze a system of $N$ identical quantum particles in a weak-coupling regime. The time evolution of the Wigner transform of the one-particle reduced density matrix is represented by means of a perturbative series. The expansion is obtained upon iterating the Duhamel formula. For short times, we rigorously prove that a subseries of the latter converges to the solution of the Boltzmann equation which is physically relevant in the context. In particular, we recover the transition rate as it is predicted by Fermi's golden rule. However, we are not able to prove that the quantity neglected while retaining a subseries of the complete original perturbative expansion indeed vanishes in the limit: we only give plausibility arguments in this direction. The present study holds in any space dimension greater than 2. | arxiv:math-ph/0301025 |
Using the Boltzmann transport model, we show that, somewhat unintuitively, ballistic transport of electrons in metals is weaker than diffusive transport. This happens because the femtosecond-scale collision rates of the non-thermal electrons make their mean free path negligible. Our predictions are correlated with various photoluminescence and nonlinear-optics experimental examples, both for continuous-wave (CW) and pulsed illumination, and open the way to easy modelling of the non-thermal electron distributions in metal nanostructures of arbitrary complexity. | arxiv:2402.15226 |
Using high-quality data, we report several statistical regularities of equity auctions in the Paris stock exchange. First, the average order book density is linear around the auction price at the time of auction clearing and has a large peak at the auction price. While the peak is due to slow traders, the order density shape is the result of subtle dynamics. The impact of a new market order or cancellation at the auction time can be decomposed into three parts as a function of the size of the additional order: (1) zero impact, caused by the discrete nature of prices, sometimes up to a surprisingly large additional volume relative to the auction volume; (2) linear impact for additional orders up to a large fraction of the auction volume; (3) for even larger orders, price impact is non-linear, frequently super-linear. | arxiv:2301.05677 |
We present the scheme-invariant unpolarized and polarized flavor non-singlet evolution equations to N$^3$LO for the structure functions $F_2(x, Q^2)$ and $g_1(x, Q^2)$, including the charm- and bottom-quark effects in the asymptotic representation. The corresponding evolution is based on the experimental measurement of the non-singlet structure functions at a starting scale $Q_0^2$. In this way the evolution depends only on the strong coupling constant $\alpha_s(M_Z)$ or the QCD scale $\Lambda_{\rm QCD}$ and the charm- and bottom-quark masses $m_c$ and $m_b$, and provides one of the cleanest ways to measure the strong coupling constant in future high-luminosity deep-inelastic scattering experiments. The yet unknown parts of the 4-loop anomalous dimensions introduce only a marginal error in this analysis. | arxiv:2107.01293 |
The semi-classical conductance of a two-dimensional electron gas is calculated in the presence of a one-dimensional modulated magnetic field with zero average. In the limit of small magnetic field amplitudes ($B$), the contribution of the magnetic modulation to the magnetoresistance increases as $B^{3/2}$ in the diffusive limit, while the increase is linear in $B$ in the ballistic regime. Temperature does not influence the power-law behavior, but it decreases the prefactor of this functional behavior. | arxiv:cond-mat/0001359 |
We further examine a theory of phase contrast imaging (PCI) of cold atomic gases, first introduced by us in Phys. Rev. Lett. {\bf 112}, 233602 (2014). We model the PCI measurement by directly calculating the entangled state between the light and the atoms due to the AC Stark shift, which induces a conditional phase shift on the light depending upon the atomic state. By interfering the light that passes through the BEC with the original light, one can obtain information about the atomic state at a single-shot level. We derive an exact expression for a measurement operator that embodies the information obtained from PCI, as well as the back-action on the atomic state. By the use of exact expressions for the measurement process, we go beyond the continuous-variables approximation, such that the non-Gaussian regime can be accessed for both the measured state and the post-measurement state. Features such as the photon probability density, signal, signal variance, Fisher information, error of the measurement, and the back-action are calculated by applying the measurement operator to an atomic two-spin-state system. For an atomic state that is initially in a spin coherent state, we obtain analytical expressions for these quantities. There is an optimal atom-light interaction time, scaling inversely proportional to the number of atoms, which maximizes the information readout. | arxiv:1606.08466 |
High-fidelity quantum gates are an essential prerequisite for large-scale quantum computation. When manipulating practical quantum systems, environmentally and operationally induced errors are inevitable, and thus, in addition to being fast, it is preferable that operations be intrinsically robust against different errors. Here, we propose a general protocol for constructing geometric quantum gates with on-demand trajectories by modulating the applied pulse shapes that define the system's evolution trajectory. Our scheme adopts reverse engineering of the target Hamiltonian using smooth pulses, which also eliminates the difficulty of calculating geometric phases for an arbitrary trajectory. Furthermore, because a particular geometric gate can be induced by various different trajectories, we can further optimize the gate performance under different scenarios; the results of numerical simulations indicate that this optimization can greatly enhance the quality of the gate. In addition, we present an implementation of our proposal using superconducting circuits, showcasing substantial enhancements in gate performance compared with conventional schemes. Our protocol thus presents a promising approach to high-fidelity and strongly robust geometric quantum gates for large-scale quantum computation. | arxiv:2401.11147 |
In this work, we address the problem of sensor selection for state estimation via Kalman filtering. We consider a linear time-invariant (LTI) dynamical system subject to process and measurement noise, where the sensors we use to perform state estimation are randomly drawn according to a sampling-with-replacement policy. Since our selection of sensors is randomly chosen, the estimation error covariance of the Kalman filter is also a stochastic quantity. Fortunately, concentration inequalities (CIs) for the estimation error covariance allow us to gauge the estimation performance we intend to achieve when our sensors are randomly drawn with replacement. To obtain a non-trivial improvement on existing CIs for the estimation error covariance, we first present novel matrix CIs for a sum of independent and identically distributed (i.i.d.) and positive semi-definite (p.s.d.) random matrices whose support is finite. Next, we show that our guarantees generalize an existing matrix CI. Also, we show that our generalized guarantees require a significantly smaller number of sampled sensors to be applicable. Lastly, we show through a numerical study that our guarantees significantly outperform existing ones in terms of their ability to bound (in the semi-definite sense) the steady-state estimation error covariance of the Kalman filter. | arxiv:2403.06032 |
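The sampling-with-replacement measurement model can be sketched as follows (a hypothetical setup of our own for illustration; the paper's contribution is the concentration analysis, not this code): one Kalman predict/update step in which the measurement matrix is assembled from sensor rows drawn i.i.d. with replacement from a pool, making the updated error covariance a random quantity.

```python
import numpy as np

rng = np.random.default_rng(0)

def sampled_riccati_step(P, A, Q, C_pool, R_pool, k):
    """One Kalman predict/update step where k sensor rows are drawn
    i.i.d. with replacement from a pool. P is the current error
    covariance; C_pool[i] is a 1 x n measurement row with scalar
    noise variance R_pool[i]."""
    idx = rng.integers(0, len(C_pool), size=k)      # sampling with replacement
    C = np.vstack([C_pool[i] for i in idx])
    R = np.diag([R_pool[i] for i in idx])
    # predict
    P_pred = A @ P @ A.T + Q
    # update: incorporating the sampled sensors never increases the
    # covariance in the positive semi-definite order
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)
    return (np.eye(P.shape[0]) - K @ C) @ P_pred
```

Repeating this step with fresh draws produces the random covariance sequence whose concentration the paper's matrix CIs are designed to bound.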
Let $\mathfrak{g}$ be the Lie algebra of a connected, simply connected semisimple algebraic group over an algebraically closed field of sufficiently large positive characteristic. We study the compatibility between the Koszul grading on the restricted enveloping algebra $(U\mathfrak{g})_0$ of $\mathfrak{g}$ constructed in a previous paper, and the structure of Frobenius algebra of $(U\mathfrak{g})_0$. This answers a question raised to the author by W. Soergel. | arxiv:1010.0495 |
The plasma produces energetic free radicals, neutrally charged, that react at the surface of the wafer. Since neutral particles attack the wafer from all angles, this process is isotropic. Plasma etching can be isotropic, i.e., exhibiting a lateral undercut rate on a patterned surface approximately the same as its downward etch rate, or can be anisotropic, i.e., exhibiting a smaller lateral undercut rate than its downward etch rate. Such anisotropy is maximized in deep reactive ion etching. The use of the term anisotropy for plasma etching should not be conflated with the use of the same term when referring to orientation-dependent etching. The source gas for the plasma usually contains small molecules rich in chlorine or fluorine. For instance, carbon tetrachloride (CCl4) etches silicon and aluminium, and trifluoromethane etches silicon dioxide and silicon nitride. A plasma containing oxygen is used to oxidize ("ash") photoresist and facilitate its removal. Ion milling, or sputter etching, uses lower pressures, often as low as 10⁻⁴ Torr (10 mPa). It bombards the wafer with energetic ions of noble gases, often Ar⁺, which knock atoms from the substrate by transferring momentum. Because the etching is performed by ions, which approach the wafer approximately from one direction, this process is highly anisotropic. On the other hand, it tends to display poor selectivity. Reactive-ion etching (RIE) operates under conditions intermediate between sputter and plasma etching (between 10⁻³ and 10⁻¹ Torr). Deep reactive-ion etching (DRIE) modifies the RIE technique to produce deep, narrow features. In reactive-ion etching (RIE), the substrate is placed inside a reactor, and several gases are introduced. A plasma is struck in the gas mixture using an RF power source, which breaks the gas molecules into ions. The ions accelerate towards, and react with, the surface of the material being etched, forming another gaseous material.
This is known as the chemical part of reactive ion etching. There is also a physical part, which is similar to the sputtering deposition process. If the ions have high enough energy, they can knock atoms out of the material to be etched without a chemical reaction. It is a | https://en.wikipedia.org/wiki/MEMS |
the initiation of the International Clinical Trials Registry Platform. There has also been action from the pharmaceutical industry, which released plans to make clinical trial data more transparent and publicly available. Released in October 2008, the revised Declaration of Helsinki states that "every clinical trial must be registered in a publicly accessible database before recruitment of the first subject." The World Health Organization maintains an international registry portal at http://apps.who.int/trialsearch/. WHO states that the international registry's mission is "to ensure that a complete view of research is accessible to all those involved in health care decision making. This will improve research transparency and will ultimately strengthen the validity and value of the scientific evidence base." Since 2007, the International Committee of Medical Journal Editors (ICMJE) accepts all primary registries in the WHO network in addition to ClinicalTrials.gov. Clinical trial registration in registries other than ClinicalTrials.gov has increased irrespective of study design since 2014. === Reporting compliance === Various studies have measured the extent to which various trials are in compliance with the reporting standards of their registry. === Overview of clinical trial registries === Worldwide, there is a growing number of registries. A 2013 study identified the following top five registries (numbers updated as of August 2013): === Overview of preclinical study registries === Similar to clinical research, preregistration can help to improve transparency and quality of research data in preclinical research. In contrast to clinical research, where preregistration is mandatory for vast parts, it is still new in preclinical research. A large part of preclinical and basic biomedical research relies on animal experiments.
The non-publication of results gained from animal experiments not only distorts the state of research by reinforcing publication bias, it also represents an ethical issue. Preregistration is discussed as a measure that could counteract this problem. The following registries are suited for the preregistration of preclinical studies. == Journal support == Over 200 journals offer a registered reports option (Centre for Open Science, 2019), and the number of journals adopting registered reports is approximately doubling each year (Chambers et al., 2019). Psychological Science has encouraged the preregistration of studies and the reporting of effect sizes and confidence intervals. The editor-in-chief also noted that the editorial staff will be asking for replication of studies with surprising findings from | https://en.wikipedia.org/wiki/Preregistration_(science) |
We discuss a conjecture which says that the automorphism group of the Weyl algebra in characteristic zero is canonically isomorphic to the automorphism group of the corresponding Poisson algebra of classical polynomial symbols. Several arguments in favor of this conjecture are presented, all based on the consideration of the reduction of the Weyl algebra to positive characteristic. | arxiv:math/0512169 |
Evaluation of social robot navigation inherently requires human input due to its qualitative nature. Motivated by the need to scale human evaluation, we propose a general method for deploying interactive, rich-client robotic simulations on the web. Prior approaches implement specific web-compatible simulators or provide tools to build a simulator for a specific study. Instead, our approach builds on standard Linux tools to share a graphical desktop with remote users. We leverage these tools to deploy simulators on the web that would typically be constrained to desktop computing environments. As an example implementation of our approach, we introduce the SEAN Experimental Platform (SEAN-EP). With SEAN-EP, remote users can virtually interact with a mobile robot in the Social Environment for Autonomous Navigation, without installing any software on their computer or needing specialized hardware. We validated that SEAN-EP can quickly scale the collection of human feedback, and assessed its usability through an online survey. In addition, we compared human feedback from participants who interacted with a robot using SEAN-EP with feedback obtained through a more traditional video survey. Our results suggest that human perceptions of robots may differ based on whether they interact with the robots in simulation or observe them in videos. They also suggest that people perceive surveys with interactive simulations as less mentally demanding than video surveys. | arxiv:2012.12336 |
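The "standard Linux tools" idea can be sketched roughly as follows. All tool choices here (Xvfb, x11vnc, and noVNC's websockify) are our assumptions for illustration; the paper does not commit to this exact stack, and the simulator command is hypothetical. Each argv list would be launched with `subprocess.Popen`, with `DISPLAY` set for the simulator process:

```python
def desktop_sharing_commands(sim_cmd, display=":1", vnc_port=5900, web_port=6080):
    """Return argv lists for a minimal pipeline that shares a graphical
    desktop with remote browser users (assumed tools, illustrative only)."""
    return [
        # 1. a virtual X server so the simulator needs no physical display
        ["Xvfb", display, "-screen", "0", "1280x720x24"],
        # 2. the (hypothetical) simulator, run with DISPLAY=<display>
        list(sim_cmd),
        # 3. export that display over VNC
        ["x11vnc", "-display", display, "-forever", "-shared",
         "-rfbport", str(vnc_port)],
        # 4. bridge VNC to the browser via websockets (noVNC)
        ["websockify", "--web", "/usr/share/novnc",
         str(web_port), f"localhost:{vnc_port}"],
    ]
```

A remote participant then only needs a browser pointed at the websockify port, which matches the paper's no-install requirement.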
We use time- and angle-resolved photoemission to measure the nodal non-equilibrium electronic states in various dopings of Bi$_2$Sr$_2$CaCu$_2$O$_{8+\delta}$. We find that the initial pump-induced transient signal of these ungapped states is strongly affected by the onset of the superconducting gap at $T_c$, superconducting pairing fluctuations at $T_p$, and the pseudogap at $T^*$. Moreover, $T_p$ marks a suggestive threshold in the fluence-dependent transient signal, with the appearance of a critical fluence below $T_p$ that corresponds to the energy required to break apart all Cooper pairs. These results challenge the notion of a nodal-antinodal dichotomy in cuprate superconductors by establishing a new link between nodal quasiparticles and the cuprate phase diagram. | arxiv:1312.1735 |
The extraction of cosmological parameters from microwave background observations relies on specific assumptions about the statistical properties of the data, in particular that the $p$-point distributions of temperature fluctuations are jointly normal. Using a battery of statistical tests, we assess the multivariate Gaussian nature of the Wilkinson Microwave Anisotropy Probe (WMAP) first-year data. The statistics we use fall into three classes which test different aspects of joint normality: the first set assesses the normality of marginal (one-point) distributions using familiar univariate methods; the second involves statistics that directly assess joint normality; and the third explores the evidence of non-linearity in the relationship between variates. We applied these tests to frequency maps, 'foreground-cleaned' assembly maps, and all-sky CMB-only maps. The assembly maps are of particular interest as, when combined with the Kp2 mask, we recreate the region used in the computation of the angular power spectrum. Significant departures from normality were found in all the maps. In particular, the kurtosis coefficient, D'Agostino's statistic, and bivariate kurtosis calculated from temperature pairs extracted from all the assembly maps were found to be non-normal at the 99% confidence level. We found that the results were unaffected by the size of the Galactic cut and were evident on either hemisphere of the CMB sky. The latter suggests that the non-Gaussianity is not simply related to previous claims of north-south asymmetry or localized abnormalities detected through wavelet techniques. | arxiv:astro-ph/0511802 |
Kinetic theory of particles near resonances is a current topic of discussion in plasma physics and astrophysics. We extend this discussion to the kinetic theory of the interaction between alpha particles (energetic particles predicted to exist in large quantities in next-generation fusion experiments) and a neoclassical tearing mode (NTM), a resistively driven perturbation that sometimes exists in a tokamak. We develop a quasilinear treatment of the interaction between alphas and an NTM, showing why an NTM can be a source of significant passing alpha-particle transport in tokamaks. The limitations on quasilinear theory constrain our theory's applicability to small-amplitude NTMs, highlighting the importance of nonlinear studies. | arxiv:2406.13884 |
We introduce FOF-X for real-time reconstruction of detailed human geometry from a single image. Balancing real-time speed against high-quality results is a persistent challenge, mainly due to the high computational demands of existing 3D representations. To address this, we propose the Fourier Occupancy Field (FOF), an efficient 3D representation obtained by learning the Fourier series. The core of FOF is to factorize a 3D occupancy field into a 2D vector field, retaining topology and spatial relationships within the 3D domain while facilitating compatibility with 2D convolutional neural networks. Such a representation bridges the gap between the 3D and 2D domains, enabling the integration of human parametric models as priors and enhancing reconstruction robustness. Based on FOF, we design a new reconstruction framework, FOF-X, to avoid the performance degradation caused by texture and lighting. This enables our real-time reconstruction system to better handle the domain gap between training images and real images. Additionally, in FOF-X, we enhance the inter-conversion algorithms between FOF and mesh representations with a Laplacian constraint and an automaton-based discontinuity matcher, improving both quality and robustness. We validate the strengths of our approach on different datasets and real-captured data, where FOF-X achieves new state-of-the-art results. The code will be released for research purposes. | arxiv:2412.05961 |
Galaxy surveys map the three-dimensional distribution of matter in the universe, encoding information about both the primordial cosmos and its subsequent evolution. By comparing the angular and physical scales of features in the galaxy distribution, we can compute the physical distance to the sample, and thus extract the Hubble parameter, $H_0$. In this chapter, we discuss how this is performed in practice, introducing two key "standard rulers". The first, the sound horizon at recombination, leads to baryon acoustic oscillations and, in combination with external data from the CMB or Big Bang nucleosynthesis, yields a competitive $H_0$ constraint. Information can also be extracted from the physical scale of the horizon at matter-radiation equality; though somewhat less constraining, this depends on very different physics and is an important validation test of the physical model. We discuss how both constraints can be derived (using "template" and "full-shape" methodologies), and present a number of recent constraints from the literature, some of which are comparable in precision to (and independent from) Planck. Finally, we discuss prospects for further improving these constraints. | arxiv:2305.07977 |
We review various cosmological models with a local underdense region (a local void), as well as averaged models with the backreaction of inhomogeneities, which have been proposed to explain (without assuming a positive cosmological constant) the observed accelerating behavior appearing in the magnitude-redshift relation of SNe Ia. To clarify their viability, we consider their consistency with other observational probes such as the CMB temperature anisotropy, baryon acoustic oscillations, and the kinematic Sunyaev-Zel'dovich effect. We find as a result that many inhomogeneous models appear to be ruled out and only models with parameters in a narrow range remain to be examined, and that, unless we assume very large amplitudes of perturbations or gravitational energies, the averaged models cannot exhibit accelerated expansion, and the fitted effective $\Lambda$ does not have the value necessary for the observed acceleration. | arxiv:0906.1325 |
Run-and-tumble is a common but vital strategy that bacteria employ to explore environments suffused with boundaries, as well as to escape from entrapment. In this study we reveal how this strategy and the resulting dynamical behavior can be sensitively regulated by bacterial dimensions. Our results demonstrate that the logarithm of the surface residence time for bacteria with constant tumble bias is linearly related to a dimensionless parameter characterizing bacterial intrinsic size, such that a small variation in bacterial dimensions, which is natural in a suspension, reproduces well the experimentally observed large variation in bacterial residence time. Furthermore, our results predict that the optimal tumble bias corresponding to the maximum surface diffusivity depends strongly on bacterial dimensions, where the same small variation in bacterial dimensions gives rise to a strongly diversified optimal tumble bias and an order-of-magnitude change in surface diffusivity. | arxiv:2501.17477 |
Typically, binary classification lens-finding schemes are used to discriminate between lens candidates and non-lenses. However, these models often suffer from substantial false-positive classifications. Such false positives frequently occur for images containing objects such as crowded sources, galaxies with arms, and images with a central source surrounded by smaller sources, since a model may confuse these configurations with an Einstein ring. It has been proposed that by allowing such commonly misclassified image types to constitute their own classes, machine learning models will more easily learn the difference between images that contain real lenses and images that contain lens impostors. Using Hubble Space Telescope (HST) images in the F814W filter, we compare the use of binary and multi-class classification models for the lens-finding task. From our findings, we conclude that there is no significant benefit to using the multi-class model over a binary model. We also present the results of a simple lens search using a multi-class machine learning model, along with potential new lens candidates. | arxiv:2002.11849 |
A semicoherent system can be described by its structure function or, equivalently, by a lattice polynomial function expressing the system lifetime in terms of the component lifetimes. In this paper we point out the parallelism between the two descriptions and use the natural connection between lattice polynomial functions and relevant random events to collect exact formulas for the system reliability. We also discuss the equivalence between calculating the reliability of semicoherent systems and calculating the distribution function of a lattice polynomial function of random variables. | arxiv:0809.1332 |
Deep convolutional neural networks (CNNs) are widely used in modern artificial intelligence (AI) and smart vision systems but are limited by computation latency, throughput, and energy efficiency in resource-limited scenarios such as mobile devices, the Internet of Things (IoT), and unmanned aerial vehicles (UAVs). A hardware streaming architecture is proposed to accelerate convolution and pooling computations for state-of-the-art deep CNNs. It is optimized for energy efficiency by maximizing local data reuse to reduce off-chip DRAM data access. In addition, image and feature decomposition techniques are introduced to optimize the memory access pattern for an arbitrary image size and number of features within the limited on-chip SRAM capacity. A prototype accelerator was implemented in TSMC 65 nm CMOS technology with a 2.3 mm × 0.8 mm core area, achieving 144 GOPS peak throughput and 0.8 TOPS/W peak energy efficiency. | arxiv:1709.05116 |
This report summarizes the IROS 2019 Lifelong Robotic Vision Competition (Lifelong Object Recognition Challenge) with methods and results from the top 8 finalists (out of over 150 teams). The competition dataset, (L)ifel(O)ng (R)obotic V(is)ion Object Recognition (OpenLORIS-Object), is designed to drive lifelong/continual learning research and applications in the robotic vision domain, with everyday objects in home, office, campus, and mall scenarios. The dataset explicitly quantifies the variants of illumination, object occlusion, object size, camera-object distance/angles, and clutter information. Rules are designed to quantify the learning capability of a robotic vision system when faced with objects appearing in the dynamic environments of the contest. Individual reports, dataset information, rules, and released source code can be found at the project homepage: https://lifelong-robotic-vision.github.io/competition/. | arxiv:2004.14774 |
The ability to control the polarization state of light is critically important for diverse applications in information processing, telecommunications, and spectroscopy. Here, we propose that a stack of anisotropic van der Waals materials can facilitate the building of optical elements with Jones matrices of unitary, Hermitian, non-normal, singular, degenerate, and defective classes. We show that a twisted stack with electrostatic control can function as an arbitrary birefringent wave plate or an arbitrary polarizer with a tunable degree of non-normality, which in turn gives access to a plethora of polarization transformers, including rotators, pseudorotators, and symmetric and ambidextrous polarizers. Moreover, we discuss an electrostatically reconfigurable stack that can be tuned to operate as four different polarizers and be used for Stokes polarimetry. | arxiv:2110.02299 |
The design of amorphous entangled systems, specifically from soft and active materials, has the potential to open exciting new classes of active, shape-shifting, and task-capable "smart" materials. However, the global emergent mechanics that arise from the local interactions of individual particles are not well understood. In this study, we examine the emergent properties of amorphous entangled systems in three different examples: an in silico "smarticle" collection, its robophysical chain, and a living entangled aggregate of worm blobs (L. variegatus). In simulations, we examine how material properties change for a collective composed of dynamic three-link robots. We compare three methods of controlling entanglement in a collective: external oscillations, shape changes, and internal oscillations. We find that large-amplitude changes of the particles' shape using the shape-change procedure produce the highest average number of entanglements with respect to the aspect ratio (l/w), improving the tensile strength of the collective. We demonstrate the application of these simulations in two experimental systems: robotic chains and entangled worm blobs. In the robophysical models, we find emergent auxetic behavior upon straining the confined collective. Finally, we show how the individual worm activity in a blob can be controlled through the ambient dissolved oxygen in the water, leading to complex emergent properties of the living entangled collective, such as solid-like entanglement and tumbling. Taken together, our work reveals principles by which future shape-modulating, potentially soft robotic systems may dynamically alter their material properties, advancing our understanding of living entangled materials while inspiring new classes of synthetic emergent super-materials. | arxiv:2207.03665 |
We propose an explanation of several experimental features related to the "pseudogap" in HTS cuprates in terms of a spin-charge gauge theory approach to the t-J model. The metal-insulator crossover as temperature decreases is explained by the competition between antiferromagnetism and dissipative charge dynamics. We show that gauge interactions bind spinon and holon into an electron resonance, whose recombination time shows up in the out-of-plane resistivity. The theoretical results are systematically compared with experimental data, finding very good agreement. | arxiv:cond-mat/0304408 |