We continue our efforts to understand, within the framework of the quantum mechanics of the universe as a whole, the quasiclassical realm of familiar experience as a feature emergent from the Hamiltonian of the elementary particles and the initial condition of the universe. Quantum mechanics assigns probabilities to exhaustive sets of alternative decoherent histories of the universe. We introduce and define the notion of strong decoherence. We replace the notion of maximal sets of alternative decohering histories by defining the more useful concept of "full" sets of alternative strongly decohering histories. These full sets fall into equivalence classes, each of which is characterized by a basis in Hilbert space. Finally, we describe our continuing efforts to find measures of classicality: measures that could be applied to such full sets of alternative strongly decohering histories so as to characterize a quasiclassical realm.
arxiv:1905.05859
Probabilistic forecasting in combination with stochastic programming is a key tool for handling the growing uncertainties in future energy systems. Derived from a general stochastic programming formulation for optimal scheduling and bidding in energy markets, we examine several common special instances containing uncertain loads, energy prices, and variable renewable energies. We analyze for each setup whether only an expected value forecast, marginal or bivariate predictive distributions, or the full joint predictive distribution is required. For market schedule optimization, we find that expected price forecasts are sufficient in almost all cases, while the marginal distributions of renewable energy production and demand are often required. For bidding curve optimization, pairwise or full joint distributions are necessary except for specific cases. This work helps practitioners choose the simplest type of forecast that can still achieve the best theoretically possible result for their problem, and researchers to focus on the most relevant instances.
arxiv:2203.13159
on average. We conclude that redshift weighting can bring us closer to the cosmological goal of the final quasar sample.
arxiv:1801.03038
Neutrino-nucleus cross section uncertainties are expected to be a dominant systematic in future accelerator neutrino experiments. The cross sections are determined by the linear response of the nucleus to the weak interactions of the neutrino, and are dominated by energy and distance scales of the order of the separation between nucleons in the nucleus. These response functions are potentially an important early physics application of quantum computers. Here we present an analysis of the resources required and their expected scaling for scattering cross section calculations. We also examine simple small-scale neutrino-nucleus models on modern quantum hardware. In this paper, we use variational methods to obtain the ground state of a three-nucleon system (the triton) and then implement the relevant time evolution. In order to tame the errors in present-day NISQ devices, we explore the use of different error-mitigation techniques to increase the fidelity of the calculations.
arxiv:1911.06368
Thermoelectric effects in magnetic nanostructures and the so-called spin caloritronics are attracting much interest. Indeed, they provide a new way to control and manipulate spin currents, which are key elements of spin-based electronics. Here we report on a giant magnetothermoelectric effect in Al2O3 magnetic tunnel junctions. The thermovoltage in this geometry can reach 1 mV. Moreover, a magneto-thermovoltage effect could be measured with a ratio similar to the tunnel magnetoresistance ratio. The Seebeck coefficient can then be tuned by changing the relative magnetization orientation of the two magnetic layers in the tunnel junction. Therefore, our experiments extend the range of spintronic device applications to thermoelectricity and provide a crucial piece of information for understanding the physics of thermal spin transport.
arxiv:1109.3421
The balance of the linear photon momentum in multiphoton ionization is studied experimentally. In the experiment, argon and neon atoms are singly ionized by circularly polarized laser pulses with wavelengths of 800 nm and 1400 nm in the intensity range of $10^{14}$-$10^{15}$ W/cm$^2$. The photoelectrons are measured using velocity map imaging. We find that the photoelectrons carry linear momentum corresponding to the photons absorbed above the field-free ionization threshold. Our finding has implications for concurrent models of the generation of terahertz radiation in filaments.
arxiv:1102.1881
The success of VLMs often relies on the dynamic high-resolution scheme that adaptively augments the input images into multiple crops, so that the details of the images can be retained. However, such approaches result in a large number of redundant visual tokens, thus significantly reducing the efficiency of the VLMs. To improve the VLMs' efficiency without introducing extra training costs, many research works have proposed to reduce the visual tokens by filtering out the uninformative visual tokens or aggregating their information. Some approaches reduce the visual tokens according to the self-attention of VLMs, which is biased and results in inaccurate responses. Token reduction approaches that rely solely on visual cues are text-agnostic and fail to focus on the areas that are most relevant to the question, especially when the queried objects are non-salient in the image. In this work, we first conduct experiments showing that the original text embeddings are aligned with the visual tokens, without bias toward the tailed visual tokens. We then propose a self-adaptive cross-modality attention mixture mechanism that dynamically leverages the effectiveness of visual saliency and text-to-image similarity in the pre-LLM layers to select the visual tokens that are informative. Extensive experiments demonstrate that the proposed approach achieves state-of-the-art training-free VLM acceleration performance, especially when the reduction rate is sufficiently large.
arxiv:2501.09532
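The abstract above does not specify the scoring details, but the general idea of text-aware visual token reduction can be sketched as follows. This is a minimal illustration, not the paper's method: the saliency proxy (token norm), the cosine-similarity term, the mixing weight `alpha`, and all shapes are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_visual_tokens(visual_tokens, text_embedding, keep_ratio=0.25, alpha=0.5):
    """Score each visual token by a mixture of a visual-saliency proxy
    (token norm) and cosine similarity to the text embedding, then keep
    the top `keep_ratio` fraction.  Both terms are normalized to [0, 1]."""
    norms = np.linalg.norm(visual_tokens, axis=1)
    saliency = norms / norms.max()
    sim = visual_tokens @ text_embedding
    sim = sim / (norms * np.linalg.norm(text_embedding) + 1e-8)
    sim = (sim - sim.min()) / (sim.max() - sim.min() + 1e-8)
    score = alpha * saliency + (1 - alpha) * sim
    k = max(1, int(keep_ratio * len(visual_tokens)))
    return np.sort(np.argsort(score)[-k:])     # kept indices, in image order

tokens = rng.normal(size=(576, 64))   # e.g. a 24x24 grid of visual tokens
query = rng.normal(size=64)           # pooled text embedding of the question
kept = select_visual_tokens(tokens, query, keep_ratio=0.25)
```

A real system would compute these scores in the pre-LLM layers and adapt `alpha` per input; here it is a fixed constant for clarity.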
In this talk I describe how to discover or rule out the existence of $W'$ bosons at the CERN Large Hadron Collider as a function of arbitrary couplings and $W'$ masses. If $W'$ bosons are not found, I demonstrate the 95% confidence-level exclusions that can be reached for several classes of models. In particular, $W'$ bosons in the entire reasonable parameter space of little Higgs models can be discovered or excluded in 1 year at the LHC.
arxiv:hep-ph/0306266
Cyclopentadiene (CPD) is an important intermediate in the combustion of fuel and the formation of aromatics. In the present study, the kinetics and thermodynamic properties for hydrogen abstraction from and addition to CPD, and the related reactions including isomerization and decomposition on the C5H7 potential energy surface, were systematically investigated by theoretical calculations. High-level ab initio calculations were adopted to obtain the stationary points on the potential energy surfaces of CPD + H. Phenomenological rate coefficients for temperature- and pressure-dependent reactions on the full potential energy surface were calculated by solving the time-dependent multiple-well RRKM/master equation, while the hydrogen abstraction reactions were treated with conventional transition state theory. For the hydrogen abstraction reactions, abstraction from the saturated carbon atom in CPD is found to be the dominant channel. For hydrogen addition and the associated reactions on the C5H7 PES, the allylic and vinylic cyclopentenyl radicals and C2H2 + C3H5 were found to be the most important channels and reactivity-promoting products, respectively. The previously neglected role of open-chain intermediates in the evaluation of the reaction kinetics has been highlighted, and the corresponding rate constants have been recommended for inclusion in the modeling of the H + c-C5H6 reaction. Results indicate that the transformation from cyclic to straight-chain C5H7 is kinetically unfavorable due to the high strain energy of the 3-membered ring structure of the isomerization transition state. Moreover, the thermodynamic data and the calculated rate coefficients for both H atom abstraction and addition were incorporated into the kinetics model to examine the impact of the computed pressure-dependent kinetics of C5H6 + H reactions on model predictions.
arxiv:1910.10970
We have conducted a deep multi-color imaging survey of 0.2 deg$^2$ centered on the Hubble Deep Field North (HDF-N). We shall refer to this region as the Hawaii-HDF-N. Deep data were collected in U, B, V, R, I, and z' bands over the central 0.2 deg$^2$, and in HK' over a smaller region covering the Chandra Deep Field North (CDF-N). The data were reduced to have accurate relative photometry and astrometry across the entire field to facilitate photometric redshifts and spectroscopic followup. We have compiled a catalog of 48,858 objects in the central 0.2 deg$^2$ detected at 5 sigma significance in a 3" aperture in either the R or z' band. Number counts and color-magnitude diagrams are presented and shown to be consistent with previous observations. Using color selection we have measured the density of objects at 3 < z < 7. Our multi-color data indicate that samples selected at z > 5.5 using the Lyman break technique suffer from more contamination by low redshift objects than suggested by previous studies.
arxiv:astro-ph/0312635
In this article, we analyze how changing the underlying 3D shape of the base identity in face images can distort their overall appearance, especially from the perspective of deep face recognition. As done in popular training data augmentation schemes, we graphically render real and synthetic face images with randomly chosen or best-fitting 3D face models to generate novel views of the base identity. We compare deep features generated from these images to assess the perturbation these renderings introduce into the original identity. We perform this analysis at various degrees of facial yaw, with the base identities varying in gender and ethnicity. Additionally, we investigate whether adding some form of context and background pixels in these rendered images, when used as training data, further improves the downstream performance of a face recognition model. Our experiments demonstrate the significance of facial shape in accurate face matching and underpin the importance of contextual data for network training.
arxiv:2208.02991
The decay of the Standard Model Higgs boson into four leptons via a virtual W-boson or Z-boson pair is one of the most important decay modes in the Higgs-boson search at the LHC. We present the complete electroweak radiative corrections of $O(\alpha)$ to these processes, including improvements beyond $O(\alpha)$ originating from heavy-Higgs effects and final-state radiation. The intermediate W- and Z-boson resonances are described (without any expansion or on-shell approximation) by consistently employing complex mass parameters for the gauge bosons (complex-mass scheme). The corrections to partial decay widths typically amount to some per cent and increase with growing Higgs mass $M_H$, reaching about 8% at $M_H \sim 500$ GeV. For not too large Higgs masses ($M_H \lesssim 400$ GeV) the corrections to the partial decay widths can be reproduced within $\lesssim 2$% by simple approximations. For angular distributions the corrections are somewhat larger and distort the shapes. For invariant-mass distributions of fermion pairs they can reach several tens of per cent, depending on the treatment of photon radiation. The discussed corrections have been implemented in a Monte Carlo event generator called Prophecy4f.
arxiv:hep-ph/0604011
We develop a method for calculating the norm and the spectrum of the modulus of a Foguel operator. In many cases, the norm can be computed exactly. In others, sharp upper bounds are obtained. In particular, we observe several connections between Foguel operators and the golden ratio.
arxiv:0908.0479
Noise can induce time order in the dynamics of nonlinear dynamical systems. For example, coherence resonance occurs in various neuron models driven by noise. In studies of coherence resonance, ensemble-averaged measures of the coherence are often used. In the present study, we examine coherence resonance for time-averaged measures. For the examination, we use a Hodgkin-Huxley neuron model driven by a constant current and a noise. We first show that for large times, the neuron is in a stationary state irrespective of the initial conditions of the neuron. We then show numerical evidence that in the stationary state, a given noise sample path uniquely determines the dynamics of the neuron. We then present numerical evidence suggesting that time-averaged coherence measures of the dynamics are independent of noise sample paths and are equal to ensemble-averaged coherence measures. On the basis of this property, we show that coherence resonance is not only a phenomenon related to ensemble-averaged measures but also a phenomenon that holds for time-averaged measures.
arxiv:2308.09342
In this study, we investigate prediction methods for an early warning system for a large STEM undergraduate course. Recent studies have provided evidence in favour of adopting early warning systems as a means of identifying at-risk students. Many of these early warning systems rely on data from students' engagement with learning management systems (LMSs). Our study examines eight prediction methods and investigates the optimal time in a course to apply an early warning system. We present findings from a university statistics course which has a large proportion of resources on the LMS Blackboard and weekly continuous assessment. We identify weeks 5-6 of our course (halfway through the semester) as an optimal time to implement an early warning system, as this allows time for the students to make changes to their study patterns whilst retaining reasonable prediction accuracy. Using detailed (fine-grained) variables, clustering, and our final prediction method of BART (Bayesian additive regression trees), we are able to predict students' final grade by week 6 with a mean absolute error (MAE) of 6.5 percentage points. We provide our R code for implementation of the prediction methods used in a GitHub repository.
arxiv:1612.05735
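The trade-off described above, where prediction error falls as more weeks of engagement data accumulate, can be illustrated with a toy experiment. This is a hedged sketch on fully synthetic data: the latent-ability model, the noise levels, and the use of ordinary least squares in place of BART are all assumptions made for the example, not the study's actual data or method.

```python
import numpy as np

rng = np.random.default_rng(1)
n_students, n_weeks = 200, 12

# Synthetic weekly continuous-assessment scores, noisy around a latent ability
ability = rng.uniform(40, 95, n_students)
weekly = np.clip(ability[:, None] + rng.normal(0, 12, (n_students, n_weeks)), 0, 100)
final = np.clip(ability + rng.normal(0, 5, n_students), 0, 100)

train = rng.random(n_students) < 0.7          # 70/30 train/test split
mae_by_week = []
for w in range(1, n_weeks + 1):
    # predictor: mean assessment score over the first w weeks, plus intercept
    feat = np.column_stack([np.ones(n_students), weekly[:, :w].mean(axis=1)])
    coef, *_ = np.linalg.lstsq(feat[train], final[train], rcond=None)
    pred = feat[~train] @ coef
    mae_by_week.append(np.abs(pred - final[~train]).mean())
# MAE should shrink as more weeks of data accumulate
```

In the study's setting one would plot `mae_by_week` to locate the week where accuracy becomes acceptable while intervention is still useful.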
This paper studies physical layer security in a wireless ad hoc network with numerous legitimate transmitter-receiver pairs and eavesdroppers. A hybrid full-/half-duplex receiver deployment strategy is proposed to secure legitimate transmissions, by letting a fraction of legitimate receivers work in the full-duplex (FD) mode, sending jamming signals to confuse eavesdroppers while receiving their information, and letting the other receivers work in the half-duplex mode, just receiving their desired signals. The objective of this paper is to choose the fraction of FD receivers properly so as to achieve the optimal network security performance. Both accurate expressions and tractable approximations for the connection outage probability and the secrecy outage probability of an arbitrary legitimate link are derived, based on which the area secure link number, network-wide secrecy throughput, and network-wide secrecy energy efficiency are optimized respectively. Various insights into the optimal fraction are further developed, and its closed-form expressions are also derived under perfect self-interference cancellation or in a dense network. It is concluded that the fraction of FD receivers triggers a non-trivial trade-off between reliability and secrecy, and the proposed strategy can significantly enhance the network security performance.
arxiv:1703.08941
Performance during perceptual decision-making exhibits an inverted-U relationship with arousal, but the underlying network mechanisms remain unclear. Here, we recorded from auditory cortex (A1) of behaving mice during passive tone presentation, while tracking arousal via pupillometry. We found that tone discriminability in A1 ensembles was optimal at intermediate arousal, revealing a population-level neural correlate of the inverted-U relationship. We explained this arousal-dependent coding using a spiking network model with a clustered architecture. Specifically, we show that optimal stimulus discriminability is achieved near a transition between a multi-attractor phase with metastable cluster dynamics (low arousal) and a single-attractor phase (high arousal). Additional signatures of this transition include arousal-induced reductions of overall neural variability and of the extent of stimulus-induced variability quenching, which we observed in the empirical data. Our results elucidate computational principles underlying interactions between pupil-linked arousal, sensory processing, and neural variability, and suggest a role for phase transitions in explaining nonlinear modulations of cortical computations.
arxiv:2404.03902
In arXiv:1604.00338 [math.QA] we gave a complete combinatorial characterization of homogeneous quadratic identities for minors of quantum matrices. It was obtained as a consequence of results on minors of matrices of a special sort, the so-called path matrices $Path_G$ generated by paths in special planar directed graphs $G$. In this paper we prove two assertions that were stated but left unproved in arXiv:1604.00338 [math.QA]. The first one says that any minor of $Path_G$ is determined by a system of disjoint paths, called a flow, in $G$ (generalizing a similar result of Lindström's type for the path matrices of Cauchon graphs due to Casteels). The second, more sophisticated, assertion concerns certain transformations of pairs of flows in $G$.
arxiv:1611.00302
Recently we have evaluated the matrix elements $\langle O r^{p} \rangle$, where $O = \{1, \beta, i\boldsymbol{\alpha}\mathbf{n}\beta\}$ are the standard Dirac matrix operators and the angular brackets denote the quantum-mechanical average for the relativistic Coulomb problem, in terms of generalized hypergeometric functions $_{3}F_{2}(1)$ for all suitable powers, and established two sets of Pasternack-type matrix identities for these integrals. The corresponding Kramers-Pasternack three-term vector recurrence relations are derived here.
arxiv:0908.3021
We present a unifying framework in which both the $\nu$-Tamari lattice, introduced by Préville-Ratelle and Viennot, and principal order ideals in Young's lattice indexed by lattice paths $\nu$, are realized as the dual graphs of two combinatorially striking triangulations of a family of flow polytopes which we call the $\nu$-caracol flow polytopes. The first triangulation gives a new geometric realization of the $\nu$-Tamari complex introduced by Ceballos, Padrol and Sarmiento. We use the second triangulation to show that the $h^*$-vector of the $\nu$-caracol flow polytope is given by the $\nu$-Narayana numbers, extending a result of Mészáros when $\nu$ is a staircase lattice path. Our work generalizes and unifies results on the dual structure of two subdivisions of a polytope studied by Pitman and Stanley.
arxiv:2101.10425
Apparently, some form of local superconducting pairing persists to temperatures well above the maximum observed $T_c$ in underdoped cuprates, i.e. $T_c$ is suppressed due to the small phase stiffness. With this in mind, we consider the following question: given a system with a high pairing scale $\Delta_0$ but with $T_c$ reduced by phase fluctuations, can one design a composite system in which $T_c$ approaches its mean-field value, $T_c \to T_{MF} \approx \Delta_0/2$? Here, we study a simple two-component model in which a "metallic layer" with $\Delta_0 = 0$ is coupled by single-particle tunneling to a "pairing layer" with $\Delta_0 > 0$ but zero phase stiffness. We show that in the limit that the bandwidth of the metal is much larger than $\Delta_0$, $T_c$ of the composite system can reach the upper limit $T_c \approx \Delta_0/2$.
arxiv:0805.3737
We present our observational results for the AM CVn star CR Boo in the UBVR bands. Our observational campaign includes data obtained over 5 nights with the National Astronomical Observatory Rozhen, Belogradchik, and the AS Vidojevica telescopes. During the whole time of our observations, the brightness of the system varied between $13.95-17.23$ mag in the B band. We report the appearance of humps during the period of quiescence and superhumps during the active state of the object, where the latter are detected on two nights. We obtain the superhump periodicity for two nights, $P_{sh} \approx 24.76-24.92$ min. The color during maximum brightness is estimated as $-0.107 < (B-V)_0 < 0.257$, and the corresponding temperature is in the range $7700\,\mathrm{K} < T(B-V)_0 < 11700\,\mathrm{K}$. We find that CR Boo varies from bluer to redder on the nights with outburst activity. The star becomes bluer during the times of superhumps.
arxiv:2212.07189
BGP communities are a popular mechanism used by network operators for traffic engineering, blackholing, and to realize network policies and business strategies. In recent years, many research works have contributed to our understanding of how BGP communities are utilized, as well as how they can reveal secondary insights into real-world events such as outages and security attacks. However, one fundamental question remains unanswered: "Which ASes tag announcements with BGP communities, and which remove communities in the announcements they receive?" A grounded understanding of where BGP communities are added or removed can help better model and predict BGP-based actions in the Internet and characterize the strategies of network operators. In this paper we develop, validate, and share data from the first algorithm that can infer BGP community tagging and cleaning behavior at the AS level. The algorithm is entirely passive and uses BGP update messages and snapshots, e.g. from public route collectors, as input. First, we quantify the correctness and accuracy of the algorithm in controlled experiments with simulated topologies. To validate in the wild, we announce prefixes with communities and confirm that more than 90% of the ASes that we classify behave as our algorithm predicts. Finally, we apply the algorithm to data from four sets of BGP collectors: RIPE, RouteViews, Isolario, and PCH. Tuned conservatively, our algorithm ascribes community tagging and cleaning behaviors to more than 13k ASes, the majority of which are large networks and providers. We make our algorithm and inferences available as a public resource to the BGP research community.
arxiv:2110.03816
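The core passive-inference idea can be sketched in a few lines. This is a toy illustration, not the paper's algorithm: the topology (private AS numbers 65001-65030, a single tagger and a single cleaner) and the intersection-based inference rules are hypothetical simplifications.

```python
# Each observed route: (AS path, set of communities seen at the collector).
# Hypothetical toy topology: AS 65010 tags 65010:100 on routes it propagates,
# while AS 65030 strips every community it receives.
routes = [
    ((65001, 65010, 65020), {"65010:100"}),
    ((65002, 65010, 65020), {"65010:100"}),
    ((65001, 65010, 65021), {"65010:100"}),
    ((65001, 65010, 65030), set()),          # crossed the tagger, yet no tag
]

def infer_behavior(routes, community):
    """Infer which AS tags `community` and which ASes strip it, from
    passively observed (AS path, community set) pairs."""
    positive = [set(p) for p, c in routes if community in c]
    # candidate tagger: on every path where the community survives
    tagger = set.intersection(*positive) if positive else set()
    seen_with_tag = set().union(*positive) if positive else set()
    cleaners = set()
    for path, comms in routes:
        if community not in comms and tagger and tagger <= set(path):
            # the tag was added, then removed downstream: suspects are the
            # ASes never observed on a still-tagged path
            cleaners |= set(path) - seen_with_tag
    return tagger, cleaners

tagger, cleaners = infer_behavior(routes, "65010:100")
```

With more vantage points the intersections shrink and the inference becomes unambiguous, which is roughly why collector diversity matters in the real algorithm.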
The elapsed time equation is an age-structured model that describes the dynamics of interconnected spiking neurons through the elapsed time since the last discharge, leading to many interesting questions on the evolution of the system from a mathematical and biological point of view. In this work, we first deal with the case when transmission after a spike is instantaneous and the case when there exists a distributed delay that depends on the previous history of the system, which is a more realistic assumption. We then revisit the well-posedness in order to carry out a numerical analysis by adapting the classical upwind scheme through a fixed-point approach. We improve the previous results on well-posedness by relaxing some hypotheses on the non-linearity for instantaneous transmission, including the strongly excitatory case, while for the numerical analysis we prove that the approximation given by the explicit upwind scheme converges to the solution of the non-linear problem through BV-estimates. We also show some numerical simulations to compare the behavior of the system in the case of instantaneous transmission with the case of distributed delay under different parameters, leading to solutions with different asymptotic profiles.
arxiv:2310.02068
On the basis of the Lagrangian formalism of relativistic field theory, post-Newtonian equations of motion for a rotating body are derived in the frame of Feynman's quantum field gravity theory (FGT) and compared with the corresponding geodesic equations in general relativity (GR). It is shown that in FGT the trajectory of a rotating test body does not depend on the choice of a coordinate system. The equation of translational motion of a gyroscope is applied to the description of laboratory experiments with freely falling rotating bodies and rotating bodies on a balance scale. The post-Newtonian relativistic effect of periodic modulation of the orbital motion of a rotating body is discussed for the case of planets of the solar system and for the binary pulsars PSR B1913+16 and PSR B1259-63. In the case of binary pulsars with known spin orientations, this effect gives a possibility to measure the radii of neutron stars.
arxiv:gr-qc/0010056
We study the overparametrization bounds required for the global convergence of the stochastic gradient descent algorithm for a class of one-hidden-layer feed-forward neural networks, considering most of the activation functions used in practice, including ReLU. We improve the existing state-of-the-art results in terms of the required hidden layer width. We introduce a new proof technique combining nonlinear analysis with properties of random initializations of the network. First, we establish the global convergence of continuous solutions of the differential inclusion being a nonsmooth analogue of the gradient flow for the MSE loss. Second, we provide a technical result (working also for general approximators) relating solutions of the aforementioned differential inclusion to the (discrete) stochastic gradient descent sequences, hence establishing linear convergence towards zero loss for the stochastic gradient descent iterations.
arxiv:2201.12052
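The phenomenon studied above, SGD driving the training loss of an overparametrized one-hidden-layer ReLU network toward zero from a random initialization, is easy to reproduce numerically. This is a minimal sketch with assumed sizes (20 samples, width 512) and an assumed learning rate; it illustrates the setting, not the paper's proof technique.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, m = 20, 5, 512          # samples, input dim, hidden width (m >> n)

X = rng.normal(size=(n, d)) / np.sqrt(d)
y = rng.normal(size=n)

W = rng.normal(size=(m, d))                        # random hidden-layer init
a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)   # fixed output layer

def forward(X, W):
    return np.maximum(X @ W.T, 0.0) @ a            # one hidden layer, ReLU

init_mse = np.mean((forward(X, W) - y) ** 2)

lr = 0.3
for step in range(5000):
    i = rng.integers(n)                            # SGD: one sample per step
    h = X[i] @ W.T
    err = np.maximum(h, 0.0) @ a - y[i]
    # subgradient of 0.5 * err**2 with respect to the hidden weights W
    W -= lr * err * ((h > 0) * a)[:, None] * X[i][None, :]

final_mse = np.mean((forward(X, W) - y) ** 2)
```

With the hidden layer this wide, the loss drops by a large factor; shrinking `m` toward `n` makes convergence noticeably less reliable, which is the width dependence the paper quantifies.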
The hard X-ray emission from Cygnus X-1 has been monitored continually by BATSE since the launch of CGRO in April 1991. We present the hard X-ray intensity and spectral history of the source covering a period of more than five years. Power spectral analysis shows a significant peak at the binary orbital period. The 20-100 keV orbital light curve is roughly sinusoidal with a minimum near superior conjunction of the X-ray source and an rms modulation fraction of approximately 1.7%. No longer-term periodicities are evident in the power spectrum. We compare our results with other observations and discuss the implications for models of the source geometry.
arxiv:astro-ph/9712072
Millimeter wave (mmWave) communication systems can provide high data rates, but the system performance may degrade significantly due to mobile blockers and the user's own body. A high frequency of interruptions and long duration of blockage may degrade the quality of experience. For example, delays of more than about 10 ms cause nausea to VR viewers. Macro-diversity of base stations (BSs) has been considered a promising solution, where the user equipment (UE) can hand over to other available BSs if the current serving BS gets blocked. However, an analytical model for the frequency and duration of dynamic blockage events in this setting is largely unknown. In this paper, we consider an open park-like scenario and obtain closed-form expressions for the blockage probability, expected frequency, and duration of blockage events using stochastic geometry. Our results indicate that the minimum density of BSs required to satisfy the quality of service (QoS) requirements of AR/VR and other low latency applications is largely driven by blockage events rather than capacity requirements. Placing the BSs at a greater height reduces the likelihood of blockage. We present a closed-form expression for the BS density-height trade-off that can be used for network planning.
arxiv:1808.01228
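The qualitative dependence of blockage on BS density can be checked with a small Monte Carlo experiment in the spirit of the stochastic-geometry model above. This is a hedged sketch: the disc radius, blocker width, blocker model (a LoS link of length r is blocked with probability 1 - exp(-lambda_B * w * r)), and all densities are illustrative assumptions, not the paper's closed-form expressions.

```python
import numpy as np

rng = np.random.default_rng(3)

def blockage_prob(bs_density, blocker_density, radius=100.0, width=0.5, trials=2000):
    """Monte Carlo estimate of the probability that every BS within
    `radius` of a UE at the origin is blocked simultaneously."""
    area = np.pi * radius ** 2
    blocked = 0
    for _ in range(trials):
        n_bs = rng.poisson(bs_density * area)       # PPP of base stations
        if n_bs == 0:                               # no BS in range at all
            blocked += 1
            continue
        r = radius * np.sqrt(rng.random(n_bs))      # uniform distances in disc
        # longer LoS links are more likely to cross a blocker
        p_blk = 1.0 - np.exp(-blocker_density * width * r)
        blocked += int(np.all(rng.random(n_bs) < p_blk))
    return blocked / trials

lo_density = blockage_prob(bs_density=1e-4, blocker_density=0.01)
hi_density = blockage_prob(bs_density=5e-4, blocker_density=0.01)
```

Densifying the BS deployment should drive the all-blocked probability toward zero, mirroring the paper's conclusion that blockage, not capacity, sets the minimum BS density.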
Recent findings show that deep convolutional neural networks (DCNNs) do not generalize well under partial occlusion. Inspired by the success of compositional models at classifying partially occluded objects, we propose to integrate compositional models and DCNNs into a unified deep model with innate robustness to partial occlusion. We term this architecture the Compositional Convolutional Neural Network. In particular, we propose to replace the fully connected classification head of a DCNN with a differentiable compositional model. The generative nature of the compositional model enables it to localize occluders and subsequently focus on the non-occluded parts of the object. We conduct classification experiments on artificially occluded images as well as real images of partially occluded objects from the MS-COCO dataset. The results show that DCNNs do not classify occluded objects robustly, even when trained with data that is strongly augmented with partial occlusions. Our proposed model outperforms standard DCNNs by a large margin at classifying partially occluded objects, even when it has not been exposed to occluded objects during training. Additional experiments demonstrate that CompositionalNets can also localize the occluders accurately, despite being trained with class labels only. The code used in this work is publicly available.
arxiv:2003.04490
Current, realistic numerical simulations of the solar atmosphere reproduce observations in a statistical sense; they do not replicate observations such as a movie of solar granulation. Inversions, on the other hand, reproduce observations by design, but the resulting models are often not physically self-consistent. Physics-informed neural networks (PINNs) offer a new approach to solving the time-dependent radiative hydrodynamics equations and matching observations as boundary conditions. PINNs approximate the solution of the integro-differential equations with a deep neural network. The parameters of this network are determined by minimizing the residuals with respect to the physics equations and the observations. The resulting models are continuous in all dimensions, can zoom into local areas of interest in space and time, and provide information on physical parameters that are not necessarily directly observed, such as horizontal velocities. Here we present the first proof of concept of this novel approach, explain the underlying methodology in detail, and provide an outlook to the many applications that PINNs enable.
arxiv:2505.04865
The successful application of titanium oxide-graphene hybrids in the fields of photocatalysis, photovoltaics, and photodetection strongly depends on the interfacial contact between both materials. The need to provide good coupling between the enabling conductor and the photoactive phase prompted us to directly grow conducting graphenic structures on TiO2 crystals. We here report on the direct synthesis of tailored graphenic structures by plasma-assisted chemical vapour deposition that present a clean junction with the prototypical titanium oxide (110) surface. Chemical analysis of the interface indicates chemical bonding between both materials. Photocurrent measurements under UV light illumination show that the charge transfer across the interface is efficient. Moreover, the influence of the synthesis atmosphere, gas precursor (C2H2) and diluents (Ar, O2), on the interface and on the structure of the as-grown graphenic material is assessed. The inclusion of O2 promotes vertical growth of partially oxidized carbon nanodots/rods with controllable height and density. The deposition with Ar results in continuous graphenic films with low resistivity ($6.8 \times 10^{-6}$ Ohm m). The synthesis protocols developed here are suitable to produce tailored carbon-semiconductor structures on a variety of practical substrates such as thin films, pillars, or nanoparticles.
arxiv:1910.12667
purpose : we propose a fully unsupervised method to learn latent disease networks directly from unstructured biomedical text corpora. this method addresses current challenges in unsupervised knowledge extraction, such as the detection of long - range dependencies and requirements for large training corpora. methods : let c be a corpus of n text chunks. let v be a set of p disease terms occurring in the corpus. let x indicate the occurrence of v in c. gextext identifies disease similarities by positively correlated occurrence patterns. this information is combined to generate a graph on which geodesic distance describes dissimilarity. diseasomes were learned by gextext and glove on corpora of 100 - 1000 pubmed abstracts. similarity matrix estimates were validated against biomedical semantic similarity metrics and gene profile similarity. results : geodesic distance on gextext - inferred diseasomes correlated inversely with external measures of semantic similarity. gene profile similarity also correlated significantly with proximity on the inferred graph. gextext outperformed glove in our experiments. the information contained in the gextext graph exceeded the explicit information content within the text. conclusions : gextext extracts latent relationships from unstructured text, enabling fully unsupervised modelling of diseasome graphs from pubmed abstracts.
arxiv:1911.02562
the object of the present article is a 1d lattice - gas system composed of soft particles, wherein particles interact only if they occupy the same or a neighboring site, as a simple representation of penetrable particles of soft condensed matter. to represent different scenarios, two different realizations of the lattice model are considered, a one - component and a two - component system, where in the two - component case particles of the same species repel and those of opposite species attract each other. the systems are analyzed entirely within the transfer matrix framework. special attention is paid to the criterion devised in ref. [ phys. rev. e 63, 031206 ( 2001 ) ], which serves to separate two classes of behavior encountered in one - component penetrable particle systems. in addition to confirming the existence of a similar criterion for the one - component lattice - gas model, we find that the same criterion can be applied to the two - component system to provide conditions for thermodynamic catastrophe.
arxiv:1812.01165
livestock in the pursuit for engagement and profit, and the industry has a dark side characterized by capital - driven practices that alienate players and degrade their experiences. in a 2024 interview with china central television, feng ji discussed this perspective, explaining that game developers such as himself and his colleagues should focus on gameplay and storytelling to captivate players but must remain cautious not to fall into capital - driven practices, emphasizing that a reasonable question to ask yourself — his standard for their products — is whether you would recommend your children, friends, and relatives to play your games with confidence. = = = black myth : wukong ( 2018 – present ) = = = after the mobile games 100 heroes and art of war : red tides, game science started the development of black myth : wukong in 2018. the decision to develop an aaa game, according to operations director lan weiyi, came after the realization that there were more steam users from china than the united states. before the development on the game began, game science conducted a company - wide survey that revealed that action role - playing games were the games with the longest playtimes on steam among the staff, which led to a focus on action role - playing games for both the studio and the black myth project. feng ji said that this approach would allow them to better understand and empathize with players, because they themselves would be players of the types of games they were creating. game science decided to have a team focused on mobile games and a team focused on single - player games. considering the differences in development cycles between these two kinds of games, feng ji and yang ji sought to find a new environment appropriate for a team working on single - player games. ultimately, the black myth development team moved from shenzhen to hangzhou due to its " slower pace and lower living costs ". 
in august 2020, game science released the first trailer of black myth : wukong as a way to recruit more talent for the company. at the time, the game ' s development team had 30 members. due to the trailer going viral, game science received over 10, 000 resumes. some were from aaa gaming companies with candidates even from outside of china who were willing to apply for a chinese working visa at their own cost. a day after the trailer ' s release, there were people showing up at the door of the company asking for a job. the development team expanded to 140 employees according to the game ' s credit list. the south china morning post reports that hero games acquired a 19 % stake in game science through
https://en.wikipedia.org/wiki/Game_Science
inscriptis provides a library, command line client and web service for converting html to plain text. its development has been triggered by the need to obtain accurate text representations for knowledge extraction tasks that preserve the spatial alignment of text without drawing upon heavyweight, browser - based solutions such as selenium. in contrast to related software packages, inscriptis ( i ) provides a layout - aware conversion of html that more closely resembles the rendering obtained from standard web browsers ; and ( ii ) supports annotation rules, i. e., user - provided mappings that allow for annotating the extracted text based on structural and semantic information encoded in html tags and attributes. these unique features ensure that downstream knowledge extraction components can operate on accurate text representations, and may even use information on the semantics and structure of the original html document.
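the layout-aware idea can be sketched with the standard library alone: block-level tags open a new line while inline tags do not. this toy parser illustrates the principle only; it is not inscriptis' actual implementation.

```python
from html.parser import HTMLParser

class LayoutAwareText(HTMLParser):
    """Tiny illustration of layout-aware HTML-to-text conversion:
    block-level tags start a new line, inline tags (like <b>) do not.
    (A toy sketch of the idea behind inscriptis, not its real code.)"""
    BLOCK = {"p", "div", "br", "li", "h1", "h2", "h3", "tr"}

    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_starttag(self, tag, attrs):
        if tag in self.BLOCK:
            self.parts.append("\n")

    def handle_data(self, data):
        self.parts.append(data)

    def text(self):
        lines = "".join(self.parts).split("\n")
        return "\n".join(l.strip() for l in lines if l.strip())

parser = LayoutAwareText()
parser.feed("<h1>Title</h1><p>First <b>bold</b> line.</p><p>Second line.</p>")
print(parser.text())  # three lines, with inline markup flattened
```

inscriptis itself additionally handles css-like display rules, tables with column alignment, and the annotation mappings described above.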
arxiv:2108.01454
the ongoing progress in networking security, together with the growing range of robot applications in many fields of everyday life, makes robotics a tangible reality in our near future. accordingly, new advanced services, which depend on the interplay between robotics and cyber security, are playing an important role in the robotics world. this paper addresses technological implications of security enhancement in the internet of things ( iot ) - aided robotics domain, where networked robots are expected to work in complex environments. the security enhancements suggested by nist ( national institute of standards and technology ), which create a security template for secure communications over the network, are also discussed.
arxiv:1505.07593
we show that by annealing ga1 - xmnxas thin films at temperatures significantly lower than in previous studies, and monitoring the resistivity during growth, an unprecedented high curie temperature tc and conductivity can be obtained. tc is unambiguously determined to be 118 k for mn concentration x = 0. 05, 140 k for x = 0. 06, and 120 k for x = 0. 08. we also identify a clear correlation between tc and the room temperature conductivity. the results indicate that curie temperatures significantly in excess of the current values are achievable with improvements in growth and post - growth annealing conditions.
arxiv:cond-mat/0209554
ground - based telescopes equipped with state - of - the - art spectrographs are able to obtain high - resolution transmission and emission spectra of exoplanets that probe the structure and composition of their atmospheres. various atomic and molecular species, such as na, co, h2o have been already detected. molecular species have been observed only in the near - infrared while atomic species have been observed in the visible. in particular, the detection and abundance determination of water vapor bring important constraints to the planet formation process. we search for water vapor in the atmosphere of the exoplanet hd189733b using a high - resolution transmission spectrum in the visible obtained with harps. we use molecfit to correct for telluric absorption features. then we compute the high - resolution transmission spectrum of the planet using 3 transit datasets. we finally search for water vapor absorption using a cross - correlation technique that combines the signal of 800 individual lines. telluric features are corrected to the noise level. we place a 5 - sigma upper limit of 100 ppm on the strength of the 6500 a water vapor band. the 1 - sigma precision of 20 ppm on the transmission spectrum demonstrates that space - like sensitivity can be achieved from the ground. this approach opens new perspectives to detect various atomic and molecular species with future instruments such as espresso at the vlt. extrapolating from our results, we show that only 1 transit with espresso would be sufficient to detect water vapor on hd189733b - like hot jupiter with a cloud - free atmosphere. upcoming near - ir spectrographs will be even more efficient and sensitive to a wider range of molecular species. moreover, the detection of the same molecular species in different bands ( e. g. visible and ir ) is key to constrain the structure and composition of the atmosphere, such as the presence of rayleigh scattering or aerosols.
arxiv:1706.00027
cash payment is still king in several markets, accounting for more than 90 % of the payments in almost all the developing countries. mobile phone usage is commonplace in the present era ; mobile phones have become an inseparable companion for many users, serving as much more than just communication tools. nearly everyone relies heavily on them due to their multifaceted usage and affordability, and wants to manage daily transactions and related issues with a mobile phone. with the rise and advancement of mobile - specific security, threats are evolving as well. in this paper, we provide a survey of various security models for mobile phones. we explore multiple proposed models of the mobile payment system ( mps ), their technologies and comparisons, payment methods, different security mechanisms involved in mps, and provide an analysis of the encryption technologies, authentication methods, and firewalls in mps. we also present current challenges and future directions of mobile phone security.
arxiv:2105.12097
[ the 7th ] generation ", saying : " rage ' s strengths are many. its ability to handle large streaming worlds, complex a. i. arrangements, weather effects, fast network code and a multitude of gameplay styles will be obvious to anyone who has played gta iv. " since the release of max payne 3, the engine supports directx 11 and stereoscopic 3d rendering for personal computers. max payne 3 also marked the first time in which rage was capable of rendering the same 720p resolution on a game, both on playstation 3 and xbox 360. this benefit has been achieved also in grand theft auto v, which renders at a 720p resolution on both consoles. for the remastered versions of grand theft auto v, rage was reworked for the eighth generation of video game consoles, with 1080p resolution support for both the playstation 4 and xbox one. the pc version of the game, released in 2015, showed rage supporting 4k resolution and frame rates at 60 frames per second, as well as more powerful draw distances, texture filtering, and improved shadow mapping and tessellation quality. rage would later be further refined with the release of red dead redemption 2 in 2018, supporting physically based rendering, volumetric clouds and fog values, pre - calculated global illumination as well as a vulkan renderer in the windows version in addition to directx 12. the euphoria engine was overhauled to create advanced ai as well as enhanced physics and animations for the game. hdr support was added in may 2019. support for nvidia ' s deep learning super sampling ( dlss ) and amd ' s fidelityfx super resolution ( fsr ) were added in july 2021 and september 2022 respectively. the 2022 release of grand theft auto v for the ninth generation of video game consoles introduced several enhancements, incorporating features from later rage titles. raytraced reflections, native 4k resolution on the playstation 5 and xbox series x, upscaled 4k on the xbox series s, as well as hdr support were added. 
https://en.wikipedia.org/wiki/Rockstar_Advanced_Game_Engine
the acoustic wave equation is solved in time domain with a boundary element formulation. the time discretisation is performed with the generalised convolution quadrature method and for the spatial approximation standard lowest order elements are used. collocation and galerkin methods are applied. in the interest of increasing the efficiency of the boundary element method, a low - rank approximation such as the adaptive cross approximation ( aca ) is carried out. we discuss a generalisation of the aca to approximate a three - dimensional array of data, i. e., usual boundary element matrices at several complex frequencies. this method is used within the generalised convolution quadrature ( gcq ) method to obtain a real time domain formulation. the behaviour of the proposed method is studied with three examples, a unit cube, a unit cube with a reentrant corner, and a unit ball. the properties of the method are preserved in the data sparse representation and a significant reduction in storage is obtained.
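a minimal sketch of partially pivoted aca, assuming only that individual rows and columns of the matrix can be evaluated on demand; the smooth test kernel below is a hypothetical stand-in for a boundary element matrix at one fixed frequency, not the paper's three-dimensional array of matrices.

```python
import numpy as np

def aca(get_row, get_col, m, tol=1e-10, max_rank=40):
    """Adaptive cross approximation with partial pivoting: builds a
    low-rank factorisation A ~ U @ V while touching only individual rows
    and columns of A (simplified sketch of BEM matrix compression)."""
    U, V, used = [], [], set()
    i = 0
    for _ in range(max_rank):
        used.add(i)
        # residual of row i w.r.t. the current rank-k approximation
        row = get_row(i) - sum(u[i] * v for u, v in zip(U, V))
        j = int(np.argmax(np.abs(row)))
        if abs(row[j]) < tol:          # pivot too small: converged
            break
        col = get_col(j) - sum(u * v[j] for u, v in zip(U, V))
        U.append(col / row[j])
        V.append(row)
        free = [r for r in range(m) if r not in used]
        if not free:
            break
        i = max(free, key=lambda r: abs(col[r]))   # next pivot row
    return np.array(U).T, np.array(V)

# a smooth kernel on well-separated point sets is numerically low rank
x = np.linspace(0.0, 1.0, 60)
y = np.linspace(2.0, 3.0, 60)
A = 1.0 / (1.0 + np.abs(x[:, None] - y[None, :]))
U, V = aca(lambda i: A[i].copy(), lambda j: A[:, j].copy(), 60)
err = np.linalg.norm(A - U @ V) / np.linalg.norm(A)
print(U.shape[1], err)  # rank well below 60, tiny relative error
```

the storage drops from m*n entries to k*(m+n) for rank k, which is the "significant reduction in storage" exploited above.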
arxiv:2312.11219
for any kumjian - pask algebra $ kp _ r ( \ lambda ) $ defined over a $ k $ - graph $ \ lambda $ of a special kind ( a " standard $ k $ - graph " ), we obtain an $ r $ - basis.
arxiv:1801.00722
let $ a $ be a $ d \ times d $ matrix with rational entries which has no eigenvalue $ \ lambda \ in \ mathbb { c } $ of absolute value $ | \ lambda | < 1 $ and let $ \ mathbb { z } ^ d [ a ] $ be the smallest nontrivial $ a $ - invariant $ \ mathbb { z } $ - module. we lay down a theoretical framework for the construction of digit systems $ ( a, \ mathcal { d } ) $, where $ \ mathcal { d } \ subset \ mathbb { z } ^ d [ a ] $ is finite, that admit finite expansions of the form \ [ \ mathbf { x } = \ mathbf { d } _ 0 + a \ mathbf { d } _ 1 + \ cdots + a ^ { \ ell - 1 } \ mathbf { d } _ { \ ell - 1 } \ qquad ( \ ell \ in \ mathbb { n }, \ ; \ mathbf { d } _ 0, \ ldots, \ mathbf { d } _ { \ ell - 1 } \ in \ mathcal { d } ) \ ] for every element $ \ mathbf { x } \ in \ mathbb { z } ^ d [ a ] $. we put special emphasis on the explicit computation of small digit sets $ \ mathcal { d } $ that admit this property for a given matrix $ a $, using techniques from matrix theory, convex geometry, and the smith normal form. moreover, we provide a new proof of general results on this finiteness property and recover analogous finiteness results for digit systems in number fields in a unified way.
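a concrete classical instance of such a digit system, used here purely as an illustration and not taken from the paper: the gaussian integers with base -1+i (equivalently, the 2x2 integer matrix of multiplication by -1+i) and digit set {0, 1}, for which every element is known to have a finite expansion.

```python
def to_digits(a, b):
    """Greedy digit expansion of the Gaussian integer a+bi in base -1+i
    with digit set {0, 1}; multiplication by -1+i plays the role of the
    matrix A. Every Gaussian integer has a finite expansion in this base."""
    ds = []
    while (a, b) != (0, 0):
        d = (a + b) % 2                      # x - d must be divisible by -1+i
        a -= d
        a, b = (b - a) // 2, (-a - b) // 2   # divide a+bi by -1+i (exact)
        ds.append(d)
    return ds or [0]

def from_digits(ds):
    """Horner evaluation x = d0 + (-1+i) d1 + (-1+i)^2 d2 + ..."""
    a = b = 0
    for d in reversed(ds):
        a, b = -a - b + d, a - b             # multiply by -1+i, add digit
    return a, b

print(from_digits(to_digits(7, 3)))  # round-trips to (7, 3)
```

the greedy step mirrors the general construction: pick the unique digit congruent to x modulo a, subtract it, and apply a^{-1}; the eigenvalue condition guarantees the quotient does not grow.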
arxiv:2107.14168
in this work we propose a novel data - driven, real - time power system voltage control method based on the physics - informed guided meta evolutionary strategy ( es ). the main objective is to quickly provide an adaptive control strategy to mitigate the fault - induced delayed voltage recovery ( fidvr ) problem. reinforcement learning methods have been developed for the same or similar challenging control problems, but they suffer from training inefficiency and lack of robustness for " corner or unseen " scenarios. on the other hand, extensive physical knowledge has been developed in power systems but little has been leveraged in learning - based approaches. to address these challenges, we introduce the trainable action mask technique for flexibly embedding physical knowledge into rl models to rule out unnecessary or unfavorable actions, and achieve notable improvements in sample efficiency, control performance and robustness. furthermore, our method leverages past learning experience to derive surrogate gradient to guide and accelerate the exploration process in training. case studies on the ieee 300 - bus system and comparisons with other state - of - the - art benchmark methods demonstrate effectiveness and advantages of our method.
arxiv:2111.14352
in this talk we discuss rare b decays ( b - > s gamma, b - > s g, b - > s l ^ + l ^ - ), b - \ bar { b } oscillations and cp violation in b physics in the context of low - energy susy. we outline the variety of predictions that arise according to the choice of the susy extension ranging from what we call the " minimal " version of the mssm to models without flavour universality or with broken r - parity. in particular, we provide a model - independent parameterization of the susy fcnc and cp - violating effects which is useful in tackling the problem in generic low - energy susy. we show how rare b decays and cp violation in b - decay amplitudes may be complementary to direct susy searches at colliders, in particular for what concerns extensions of the most restrictive version of the mssm.
arxiv:hep-ph/9709244
a $ t $ - intersecting constant dimension subspace code $ c $ is a set of $ k $ - dimensional subspaces in a projective space pg ( n, q ), where distinct subspaces intersect in a $ t $ - dimensional subspace. a classical example of such a code is the sunflower, where all subspaces pass through the same $ t $ - space. the sunflower bound states that such a code is a sunflower if $ | c | > \ left ( \ frac { q ^ { k + 1 } - q ^ { t + 1 } } { q - 1 } \ right ) ^ 2 + \ left ( \ frac { q ^ { k + 1 } - q ^ { t + 1 } } { q - 1 } \ right ) + 1 $. in this article we will look at the case $ t = 0 $ and we will improve this bound for $ q \ geq 9 $ : a set $ \ mathcal { s } $ of $ k $ - spaces in pg ( n, q ), $ q \ geq 9 $, pairwise intersecting in a point is a sunflower if $ | \ mathcal { s } | > \ left ( \ frac { 2 } { \ sqrt [ 6 ] { q } } + \ frac { 4 } { \ sqrt [ 3 ] { q } } - \ frac { 5 } { \ sqrt { q } } \ right ) \ left ( \ frac { q ^ { k + 1 } - 1 } { q - 1 } \ right ) ^ 2 $.
arxiv:2008.06372
we investigate the dynamics of two bosons trapped in an infinite one - dimensional optical lattice potential within the framework of the bose - hubbard model and derive an exact expression for the wavefunction at finite time. as initial condition we chose localized atoms that are separated by a distance of $ d $ lattice sites and carry a center of mass quasi - momentum. an initially localized pair ( $ d = 0 $ ) is found to be more stable as quantified by the pair probability ( probability to find two atoms at the same lattice site ) when the interaction and / or the center of mass quasi - momentum is increased. for initially separated atoms ( $ d \ neq 0 $ ) there exists an optimal interaction strength for pair formation. simple expressions for the wavefunction, the pair probability and the optimal interaction strength for pair formation are computed in the limit of infinite time. whereas the time - dependent wavefunction differs for values of the interaction strength that differ only by the sign, important observables like the density and the pair probability do not. with a symmetry analysis this behavior is shown to extend to the $ n $ - particle level and to fermionic systems. our results provide a complementary understanding of the recently observed [ winkler \ textit { et al. }, nature ( london ) \ textbf { 441 }, 853 ( 2006 ) ] dynamical stability of atom pairs in a repulsively interacting lattice gas.
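the two-boson dynamics can be reproduced by brute-force exact diagonalization on a small ring. this is a hypothetical illustration standing in for the paper's analytic expressions: a finite lattice rather than the infinite one, and initial states without the center-of-mass quasi-momentum phase.

```python
import numpy as np

L, J, U = 8, 1.0, 10.0
# two-boson Hamiltonian in the first-quantized basis |i, j>, periodic ring:
# H psi(i,j) = -J [psi(i+-1, j) + psi(i, j+-1)] + U delta_ij psi(i,j)
dim = L * L
idx = lambda i, j: i * L + j
H = np.zeros((dim, dim))
for i in range(L):
    for j in range(L):
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            H[idx(i, j), idx((i + di) % L, (j + dj) % L)] -= J
        if i == j:
            H[idx(i, j), idx(i, j)] += U
w, V = np.linalg.eigh(H)

def pair_probability(t, d):
    """Probability to find both bosons on the same site at time t,
    starting from atoms localised d lattice sites apart (symmetrised)."""
    psi0 = np.zeros(dim)
    if d == 0:
        psi0[idx(0, 0)] = 1.0
    else:
        psi0[idx(0, d)] = psi0[idx(d, 0)] = 1.0 / np.sqrt(2.0)
    psi_t = V @ (np.exp(-1j * w * t) * (V.T @ psi0))
    return float(sum(abs(psi_t[idx(i, i)]) ** 2 for i in range(L)))

print(pair_probability(2.0, 0))  # strong U: the on-site pair stays bound
print(pair_probability(2.0, 2))  # separated atoms: little pair formation
```

for u much larger than j the initially localized pair remains bound, in line with the stability discussed above.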
arxiv:1202.4111
we propose a method for computing n - time correlation functions of arbitrary spinorial, fermionic, and bosonic operators, consisting of an efficient quantum algorithm that encodes these correlations in an initially added ancillary qubit for probe and control tasks. for spinorial and fermionic systems, the reconstruction of arbitrary n - time correlation functions requires the measurement of two ancilla observables, while for bosonic variables time derivatives of the same observables are needed. finally, we provide examples applicable to different quantum platforms in the frame of the linear response theory.
arxiv:1401.2430
in recent years, neural network approaches have been widely adopted for machine learning tasks, with applications in computer vision. more recently, unsupervised generative models based on neural networks have been successfully applied to model data distributions via low - dimensional latent spaces. in this paper, we use generative adversarial networks ( gans ) to impose structure in compressed sensing problems, replacing the usual sparsity constraint. we propose to train the gans in a task - aware fashion, specifically for reconstruction tasks. we also show that it is possible to train our model without using any ( or much ) non - compressed data. finally, we show that the latent space of the gan carries discriminative information and can further be regularized to generate input features for general inference tasks. we demonstrate the effectiveness of our method on a variety of reconstruction and classification problems.
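the reconstruction step can be sketched with a linear stand-in for the generator: recovery amounts to gradient descent on ||A G(z) - y||^2 over the latent code z. the linear G and all dimensions below are hypothetical choices for illustration; with a trained gan one would backpropagate through the network instead.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 100, 30, 5               # signal dim, measurements, latent dim
W = rng.normal(size=(n, k))        # hypothetical linear "generator" z -> W z
A = rng.normal(size=(m, n)) / np.sqrt(m)   # random sensing matrix, m << n

x_true = W @ rng.normal(size=k)    # signal lies in the generator's range
y = A @ x_true                     # noiseless compressed measurements

# recover by minimising ||A G(z) - y||^2 over the latent code z,
# the same optimisation one runs with a trained GAN generator G
B = A @ W
z = np.zeros(k)
lr = 1.0 / np.linalg.norm(B, 2) ** 2       # safe step for gradient descent
for _ in range(2000):
    z -= lr * (B.T @ (B @ z - y))
x_hat = W @ z
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(rel_err)  # essentially zero: 30 measurements recover a 100-dim signal
```

the generator's low-dimensional range replaces the usual sparsity prior, which is why far fewer measurements than signal dimensions suffice.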
arxiv:1802.01284
in view of the lhc upgrade for the high luminosity phase ( hl - lhc ), the atlas experiment is planning to replace the inner detector with an all - silicon system. the n - in - p bulk technology represents a valid solution for the modules of most of the layers, given the significant radiation hardness of this option and the reduced cost. the large area necessary to instrument the outer layers will demand tiling the sensors, a solution for which the inefficient region at the border of each sensor needs to be reduced to the minimum size. this paper reports on a joint r & d project by the atlas lpnhe paris group and fbk trento on a novel n - in - p edgeless planar pixel design, based on the deep - trench process available at fbk.
arxiv:1310.5752
based on the density functional theory of fermion condensation, we analyze the non - fermi liquid behavior of strongly correlated fermi - systems such as heavy - fermion metals. when deriving equations for the effective mass of quasiparticles, we consider both solids with a lattice and homogeneous systems. we show that the low - temperature thermodynamic and transport properties are formed by quasiparticles, while the dependence of the effective mass on temperature, number density, magnetic fields, etc. gives rise to the non - fermi liquid behavior. our theoretical study of the heat capacity, magnetization, energy scales, longitudinal magnetoresistance and magnetic entropy is in good agreement with remarkable recent facts collected on the heavy - fermion metal ybrh2si2.
arxiv:0904.1799
comparing two images in terms of commonalities and differences ( cad ) is a fundamental human capability that forms the basis of advanced visual reasoning and interpretation. it is essential for the generation of detailed and contextually relevant descriptions, performing comparative analysis, novelty detection, and making informed decisions based on visual data. however, surprisingly, little attention has been given to these fundamental concepts in the best current mimic of human visual intelligence - large multimodal models ( lmms ). we develop and contribute a new two - phase approach cad - vi for collecting synthetic visual instructions, together with an instruction - following dataset cad - inst containing 349k image pairs with cad instructions collected using cad - vi. our approach significantly improves the cad spotting capabilities in lmms, advancing the sota on a diverse set of related tasks by up to 17. 5 %. it is also complementary to existing difference - only instruction datasets, allowing automatic targeted refinement of those resources increasing their effectiveness for cad tuning by up to 10 %. additionally, we propose an evaluation benchmark with 7. 5k open - ended qas to assess the cad understanding abilities of lmms.
arxiv:2406.09240
we report on the demonstration of broadband squeezed laser beams that show a frequency dependent orientation of the squeezing ellipse. carrier frequency as well as quadrature angle were stably locked to a reference laser beam at 1064 nm. this frequency dependent squeezing was characterized in terms of noise power spectra and contour plots of wigner functions. the latter were measured by quantum state tomography. our tomograph allowed a stable lock to a local oscillator beam at arbitrary quadrature angles with one degree precision. frequency dependent orientations of the squeezing ellipse are necessary for squeezed states of light to provide a broadband sensitivity improvement in third generation gravitational wave interferometers. we consider the application of our system to long baseline interferometers such as a future squeezed - light - upgraded geo600 detector.
arxiv:0706.4479
observations of outflows associated with pre - main - sequence stars reveal details about morphology, binarity and evolutionary states of young stellar objects. we present molecular line data from the berkeley - illinois - maryland association array and five colleges radio astronomical observatory toward the regions containing the herbig ae / be stars lkha 198 and lkha 225s. single dish observations of 12co 1 - 0, 13co 1 - 0, n2h + 1 - 0 and cs 2 - 1 were made over a field of 4. 3 ' x 4. 3 ' for each species. 12co data from fcrao were combined with high resolution bima array data to achieve a naturally - weighted synthesized beam of 6. 75 ' ' x 5. 5 ' ' toward lkha 198 and 5. 7 ' ' x 3. 95 ' ' toward lkha 225s, representing resolution improvements of factors of approximately 10 and 5 over existing data. by using uniform weighting, we achieved another factor of two improvement. the outflow around lkha 198 resolves into at least four outflows, none of which are centered on lkha 198 - ir, but even at our resolution, we cannot exclude the possibility of an outflow associated with this source. in the lkha 225s region, we find evidence for two outflows associated with lkha 225s itself and a third outflow is likely driven by this source. identification of the driving sources is still resolution - limited and is also complicated by the presence of three clouds along the line of sight toward the cygnus molecular cloud. 13co is present in the environments of both stars along with cold, dense gas as traced by cs and ( in lkha 225s ) n2h +. no 2. 6 mm continuum is detected in either region in relatively shallow maps compared to existing continuum observations.
arxiv:0708.1775
automated guided vehicles ( agvs ) are essential in various industries for their efficiency and adaptability. however, planning trajectories for agvs in obstacle - dense, unstructured environments presents significant challenges due to the nonholonomic kinematics, abundant obstacles, and the scenario ' s nonconvex and constrained nature. to address this, we propose an efficient trajectory planning framework for agvs by formulating the problem as an optimal control problem. our framework utilizes the fast safe rectangular corridor ( fsrc ) algorithm to construct rectangular convex corridors, representing avoidance constraints as box constraints. this eliminates redundant obstacle influences and accelerates the solution speed. additionally, we employ the modified visibility graph algorithm to speed up path planning and a boundary discretization strategy to expedite fsrc construction. experimental results demonstrate the effectiveness and superiority of our framework, particularly in computational efficiency. compared to advanced frameworks, our framework achieves computational efficiency gains of 1 to 2 orders of magnitude. notably, fsrc significantly outperforms other safe convex corridor - based methods regarding computational efficiency.
arxiv:2309.07979
a neural network is locally specialized to the extent that parts of its computational graph ( i. e. structure ) can be abstractly represented as performing some comprehensible sub - task relevant to the overall task ( i. e. functionality ). are modern deep neural networks locally specialized? how can this be quantified? in this paper, we consider the problem of taking a neural network whose neurons are partitioned into clusters, and quantifying how functionally specialized the clusters are. we propose two proxies for this : importance, which reflects how crucial sets of neurons are to network performance ; and coherence, which reflects how consistently their neurons associate with features of the inputs. to measure these proxies, we develop a set of statistical methods based on techniques conventionally used to interpret individual neurons. we apply the proxies to partitionings generated by spectrally clustering a graph representation of the network ' s neurons with edges determined either by network weights or correlations of activations. we show that these partitionings, even ones based only on weights ( i. e. strictly from non - runtime analysis ), reveal groups of neurons that are important and coherent. these results suggest that graph - based partitioning can reveal local specialization and that statistical methods can be used to automatedly screen for sets of neurons that can be understood abstractly.
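the weight-based partitioning step can be sketched with spectral bisection on a toy network: a hypothetical two-module weight matrix, an affinity graph built from weight magnitudes, and the sign of the fiedler vector of the normalized laplacian. the paper's pipeline uses full spectral clustering; this is only its simplest two-cluster instance.

```python
import numpy as np

rng = np.random.default_rng(1)
# toy "network weights": two groups of 10 neurons, strong in-group weights
n = 20
Wmat = 0.05 * rng.random((n, n))
Wmat[:10, :10] += rng.random((10, 10))
Wmat[10:, 10:] += rng.random((10, 10))
Adj = np.abs(Wmat) + np.abs(Wmat).T   # symmetric affinity from |weights|
np.fill_diagonal(Adj, 0.0)

# spectral bisection: sign of the Fiedler vector of the normalized Laplacian
deg = Adj.sum(axis=1)
Lsym = np.eye(n) - Adj / np.sqrt(np.outer(deg, deg))
w, V = np.linalg.eigh(Lsym)
labels = (V[:, 1] > 0).astype(int)    # second-smallest eigenvector
print(labels)  # the two planted modules get distinct labels
```

for k > 2 clusters one would keep the first k eigenvectors and run k-means on the rows, which is the standard spectral clustering recipe the abstract refers to.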
arxiv:2110.08058
it is significant to employ multiple autonomous underwater vehicles ( auvs ) to execute the underwater target tracking task collaboratively. however, it ' s pretty challenging to meet various prerequisites utilizing traditional control methods. therefore, we propose an effective two - stage learning from demonstrations training framework, fisher, to highlight the adaptability of reinforcement learning ( rl ) methods in the multi - auv underwater target tracking task, while addressing its limitations such as extensive requirements for environmental interactions and the challenges in designing reward functions. the first stage utilizes imitation learning ( il ) to realize policy improvement and generate offline datasets. to be specific, we introduce multi - agent discriminator - actor - critic based on improvements of the generative adversarial il algorithm and multi - agent il optimization objective derived from the nash equilibrium condition. then in the second stage, we develop multi - agent independent generalized decision transformer, which analyzes the latent representation to match the future states of high - quality samples rather than reward function, attaining further enhanced policies capable of handling various scenarios. besides, we propose a simulation to simulation demonstration generation procedure to facilitate the generation of expert demonstrations in underwater environments, which capitalizes on traditional control methods and can easily accomplish the domain transfer to obtain demonstrations. extensive simulation experiments from multiple scenarios showcase that fisher possesses strong stability, multi - task performance and capability of generalization.
arxiv:2412.03959
a simple model which can explain the observed vertical distribution and size spectrum of atmospheric aerosol has been proposed. the model is based on a new physical hypothesis for the vertical mass exchange between the troposphere and the stratosphere. the vertical mass exchange takes place through a gravity wave feedback mechanism. there is close agreement between the model - predicted aerosol distribution and size spectrum and the observed distributions.
arxiv:physics/9912014
this paper advocates for guiding an effective system implementation approach at the business process level. it details a case study of a food product manufacturer that transitioned to a new local information system. data from 41 units ( 10160 cases ) over the pre - maturity phase of the system were structured into event logs and analyzed. this analysis identified deviant process paths, questioning whether the new system efficiently supports procurement operations immediately post - implementation. the findings, obtained by conformance - checking the as - is process against the to - be process model, reveal critical implementation risks : incomplete cases, unauthorized activities, and irregular payment practices stemming from organizational bottlenecks or violations of internal control regulations. these challenges are attributed to technical shortcomings in system design and cultural misalignments, necessitating immediate interventions or longer - term cultural and training solutions. this study ' s contribution is its demonstration of a transparent, process - driven approach to system governance, highlighting the strategic benefits of this integration for organizational management.
arxiv:2407.20088
we have developed an efficient tensor network algorithm for spin ladders, which generates ground-state wave functions for infinite-size quantum spin ladders. the algorithm efficiently computes the ground-state fidelity per lattice site, a universal phase transition marker, thus offering a powerful tool to unveil the quantum many-body physics underlying spin ladders. to illustrate our scheme, we consider two-leg and three-leg heisenberg spin ladders with staggered dimerization. the resulting ground-state phase diagram is reliable when compared with previous studies based on the density matrix renormalization group. our results indicate that the ground-state fidelity per lattice site successfully captures quantum criticalities in spin ladders.
arxiv:1105.3016
we use seven machine learning algorithms for one task: identifying base noun phrases. the results were processed by different system combination methods, all of which outperformed the best individual result. we applied the seven learners, with the best combinator, a majority vote of the top five systems, to a standard data set and improved on the best published result for this data set.
arxiv:cs/0008012
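the winning combinator above is a per-token majority vote over the outputs of several chunkers; a minimal sketch follows, where the iob labels and the five toy "systems" are illustrative placeholders, not the paper's actual learners:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-token predictions from several chunkers.

    predictions: list of label sequences, one per system, all the same length.
    Returns the label sequence chosen by per-position majority vote
    (ties broken by first-seen label via Counter.most_common).
    """
    combined = []
    for labels in zip(*predictions):
        combined.append(Counter(labels).most_common(1)[0][0])
    return combined

# Five hypothetical chunkers tagging four tokens with IOB labels.
systems = [
    ["B", "I", "O", "B"],
    ["B", "I", "O", "O"],
    ["B", "O", "O", "B"],
    ["O", "I", "O", "B"],
    ["B", "I", "I", "B"],
]
print(majority_vote(systems))  # -> ['B', 'I', 'O', 'B']
```

with an odd number of systems per position, a strict majority exists whenever one label gets at least three of the five votes, which is why a top-five vote is a common choice.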
demonstrations provide insight into relevant state or action space regions, bearing great potential to boost the efficiency and practicality of reinforcement learning agents. in this work, we propose to leverage demonstration datasets by combining skill learning and sequence modeling. starting with a learned joint latent space, we separately train a generative model of demonstration sequences and an accompanying low-level policy. the sequence model forms a latent space prior over plausible demonstration behaviors to accelerate learning of high-level policies. we show how to acquire such priors from state-only motion capture demonstrations and explore several methods for integrating them into policy learning on transfer tasks. our experimental results confirm that latent space priors provide significant gains in learning speed and final performance. we benchmark our approach on a set of challenging sparse-reward environments with a complex, simulated humanoid, and on offline rl benchmarks for navigation and object manipulation. videos, source code and pre-trained models are available at the project website: https://facebookresearch.github.io/latent-space-priors.
arxiv:2210.14685
galaxy clusters are assembled via the merging of smaller structures, a process that generates shocks and turbulence in the intracluster medium and produces diffuse radio emission in the form of halos and relics. the cluster pair a399-a401 represents a special case: both clusters host a radio halo. recent low frequency array (lofar) observations at 140 mhz revealed a radio bridge connecting the two clusters, along with two candidate relics: one south of a399 and one between the two clusters, close to a shock front detected in x-ray observations. in this paper we present observations of the a399-a401 cluster pair at 1.7, 1.4, and 1.2 ghz and at 346 mhz from the westerbork synthesis radio telescope (wsrt). we detect the radio halo in the a399 cluster at 346 mhz, extending up to $\sim 650$ kpc with a flux density of $125 \pm 6$ mjy. its spectral index between 140 mhz and 346 mhz is $\alpha = 1.75 \pm 0.14$. the two candidate relics are also seen at 346 mhz, and we determine their spectral indices to be $\alpha = 1.10 \pm 0.14$ and $\alpha = 1.46 \pm 0.14$. the low surface brightness bridge connecting the two clusters is below the noise level at 346 mhz; we therefore constrain the average spectral index of the bridge to be steep, i.e. $\alpha > 1.5$ at the 95% confidence level. this result favours the scenario in which dynamically induced turbulence is a viable mechanism to reaccelerate a population of mildly relativistic particles and amplify magnetic fields on scales of a few mpc. key words: galaxies: clusters: general - galaxies: clusters: individual: abell 399 - radio continuum: general
arxiv:2102.02900
if e is a non-isotrivial elliptic curve over a global function field f of odd characteristic, we show that certain mordell-weil groups of e have a 1-dimensional eigenspace relative to a fixed complex ring class character, provided that the projection onto this eigenspace of a suitable drinfeld-heegner point is nonzero. this is the analogue in the function field setting of a theorem for rational elliptic curves due to bertolini and darmon, and at the same time a generalization of the main result proved by brown in his monograph on heegner modules. as in the number field case, our proof employs kolyvagin-type arguments, and the cohomological machinery is started up by the control on the galois structure of the torsion of e provided by classical results of igusa in positive characteristic.
arxiv:0804.1658
a classical decision tree is built entirely from splitting measures, which use the occurrence of random events in correspondence with the class labels to optimally segregate datasets. however, these splitting measures follow a greedy strategy, which leads to the construction of an imbalanced tree and hence decreases the prediction accuracy of the classical decision tree algorithm. an intriguing approach is to utilize foundational aspects of quantum computing to enhance the decision tree algorithm. in this work, we therefore propose using fidelity as a quantum splitting criterion to construct an efficient and balanced quantum decision tree. to this end, we construct a quantum state from the occurrence of random events in a feature and its corresponding class, and use this state to compute the fidelity that determines the splitting attribute among all features. our numerical analysis clearly demonstrates that the proposed algorithm ensures the construction of a balanced tree. we further compare the efficiency of the proposed quantum splitting criterion with different classical splitting criteria on balanced and imbalanced datasets. our simulation results show that the proposed splitting criterion outperforms all classical splitting criteria on every evaluation metric considered.
arxiv:2310.18243
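the paper's criterion is computed on quantum states, but its classical counterpart, the fidelity of discrete distributions f(p, q) = (sum_i sqrt(p_i q_i))^2, conveys the idea: a split is good when the class distributions in its branches are as distinguishable (low-fidelity) as possible. the attribute names and distributions below are hypothetical:

```python
import math

def fidelity(p, q):
    """Classical fidelity between two discrete distributions:
    F(p, q) = (sum_i sqrt(p_i * q_i))**2, in [0, 1]."""
    return sum(math.sqrt(a * b) for a, b in zip(p, q)) ** 2

def best_split(attr_class_dists):
    """Pick the attribute whose two branch class distributions are most
    distinguishable, i.e. have the lowest fidelity."""
    return min(attr_class_dists, key=lambda a: fidelity(*attr_class_dists[a]))

# Hypothetical binary attributes; each maps to the class distributions
# (over two classes) observed in its left and right branch.
dists = {
    "attr_a": ([0.9, 0.1], [0.1, 0.9]),   # nearly separates the classes
    "attr_b": ([0.6, 0.4], [0.5, 0.5]),   # barely informative
}
print(best_split(dists))  # -> attr_a
```

identical distributions give fidelity 1, disjoint ones give 0, so minimizing fidelity plays the role that maximizing information gain plays for entropy-based criteria.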
in this work a nonlinear duffing oscillator under impulse excitation is considered, with two ways of introducing a random additive term simulating noise: via amplitude modulation and via modulation of the period of the impulse sequence. the scaling properties are demonstrated both in the feigenbaum scenario and in the tricritical case.
arxiv:nlin/0611040
we present a technique to directly excite luttinger liquid collective modes in carbon nanotubes at ghz frequencies. by modeling the nanotube as a nano - transmission line with distributed kinetic and magnetic inductance as well as distributed quantum and electrostatic capacitance, we calculate the complex, frequency dependent impedance for a variety of measurement geometries. exciting voltage waves on the nano - transmission line is equivalent to directly exciting the yet - to - be observed one dimensional plasmons, the low energy excitation of a luttinger liquid. our technique has already been applied to 2d plasmons and should work well for 1d plasmons. tubes of length 100 microns must be grown for ghz resonance frequencies. ohmic contact is not necessary with our technique ; capacitive contacts can work.
arxiv:cond-mat/0204262
we prove the convergence of the law of grid-valued random walks, which can be seen as time-space markov chains, to the law of a general diffusion process. this includes processes with sticky features, reflecting or absorbing boundaries and skew behavior. we prove that the convergence occurs at any rate strictly inferior to $(1/4) \wedge (1/p)$ in terms of the maximum cell size of the grid, for any $p$-wasserstein distance. we also show that it is possible to achieve any rate strictly inferior to $(1/2) \wedge (2/p)$ if the grid is adapted to the speed measure of the diffusion, which is optimal for $p \le 4$. this result allows us to set up asymptotically optimal approximation schemes for general diffusion processes. last, we experiment numerically on diffusions that exhibit various features.
arxiv:2206.03713
users giving relevance feedback in exploratory search are often uncertain about the correctness of their feedback, which may result in noisy or even erroneous feedback. additionally, the search intent of the user may be volatile as the user is constantly learning and reformulating her search hypotheses during the search. this may lead to a noticeable concept drift in the feedback. we formulate a bayesian regression model for predicting the accuracy of each individual user feedback and thus find outliers in the feedback data set. additionally, we introduce a timeline interface that visualizes the feedback history to the user and gives her suggestions on which past feedback is likely in need of adjustment. this interface also allows the user to adjust the feedback accuracy inferences made by the model. simulation experiments demonstrate that the performance of the new user model outperforms a simpler baseline and that the performance approaches that of an oracle, given a small amount of additional user interaction. a user study shows that the proposed modelling technique, combined with the timeline interface, makes it easier for the users to notice and correct mistakes in their feedback, and to discover new items.
arxiv:1603.02609
we performed an x-ray diffraction experiment while bulk palladium absorbed and desorbed hydrogen, in order to investigate the behavior of the crystalline lattice during the phase transition between the $\alpha$ phase and the $\beta$ phase. fast growth of the $\beta$ phase was observed around x = 0.1 and x = 0.45 of pdh$_x$. in addition, we observed slight compression of the lattice at high hydrogen concentration, as well as an increase in the lattice constant and in the line width of the $\alpha$ phase after a cycle of hydrogen absorption and desorption. this behavior correlated with the change in the sample length, which may indicate that the change in shape was related to the phase transition.
arxiv:1505.00441
recently, spatial pyramid matching (spm) with the scale invariant feature transform (sift) descriptor has been successfully used in image classification. unfortunately, the codebook generation and feature quantization procedures using the sift feature have high complexity in both time and space. to address this problem, we propose an approach that combines local binary patterns (lbp) and three-patch local binary patterns (tplbp) in the spatial pyramid domain. the proposed method does not need to learn a codebook or perform feature quantization, and is therefore very efficient. experiments on two popular benchmark datasets demonstrate that the proposed method significantly outperforms the popular spm-based sift descriptor method in both running time and classification accuracy.
arxiv:1210.0386
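the lbp descriptor at the core of the proposed method thresholds each pixel's 3x3 neighbourhood against its centre and packs the results into an 8-bit code; a minimal sketch follows, where the clockwise-from-top-left bit ordering is one common convention, not necessarily the paper's:

```python
def lbp_code(patch):
    """Basic 8-bit local binary pattern code of a 3x3 patch.

    Each of the 8 neighbours is thresholded against the centre pixel
    (bit = 1 if neighbour >= centre) and the bits are packed clockwise
    starting from the top-left neighbour."""
    c = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2],
                  patch[1][2], patch[2][2], patch[2][1],
                  patch[2][0], patch[1][0]]
    code = 0
    for bit, n in enumerate(neighbours):
        if n >= c:
            code |= 1 << bit
    return code

patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
print(lbp_code(patch))  # -> 241
```

because the code depends only on comparisons with the centre, it is invariant to monotonic grey-level changes, which is a key reason lbp histograms are robust image features.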
supervised learning is often computationally easy in practice. but to what extent does this mean that other modes of learning, such as reinforcement learning ( rl ), ought to be computationally easy by extension? in this work we show the first cryptographic separation between rl and supervised learning, by exhibiting a class of block mdps and associated decoding functions where reward - free exploration is provably computationally harder than the associated regression problem. we also show that there is no computationally efficient algorithm for reward - directed rl in block mdps, even when given access to an oracle for this regression problem. it is known that being able to perform regression in block mdps is necessary for finding a good policy ; our results suggest that it is not sufficient. our separation lower bound uses a new robustness property of the learning parities with noise ( lpn ) hardness assumption, which is crucial in handling the dependent nature of rl data. we argue that separations and oracle lower bounds, such as ours, are a more meaningful way to prove hardness of learning because the constructions better reflect the practical reality that supervised learning by itself is often not the computational bottleneck.
arxiv:2404.03774
we provide a new algorithm for the treatment of inverse problems which combines the traditional svd inversion with an appropriate thresholding technique in a well-chosen new basis. our goal is to devise an inversion procedure that has the advantages of the localization and multiscale analysis of wavelet representations without losing the stability and computability of svd decompositions. to this end we utilize the construction of localized frames (termed "needlets") built upon the svd bases. we consider two different situations: the "wavelet" scenario, where the needlets are assumed to behave similarly to true wavelets, and the "jacobi-type" scenario, where we assume that the properties of the frame truly depend on the svd basis at hand (hence on the operator). to illustrate each situation, we apply the estimation algorithm to the deconvolution problem and to the wicksell problem, respectively. in the latter case, where the svd basis is a jacobi polynomial basis, we show that our scheme achieves rates of convergence which are optimal in the $l_2$ case, we obtain rates of convergence for other $l_p$ norms which are, to the best of our knowledge, new in the literature, and we give a simulation study showing that the need-d estimator outperforms other standard algorithms in almost all situations.
arxiv:0705.0274
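the paper's needlet-domain thresholding is more refined than a plain spectral cut-off, but the flavour of combining svd inversion with thresholding can be sketched as follows; the operator and cut-off level are toy choices, not from the paper:

```python
import numpy as np

def svd_threshold_inverse(K, y, tau):
    """Invert y = K @ x by SVD, keeping only components whose singular
    value exceeds tau -- a crude spectral-cutoff stand-in for the
    needlet-domain thresholding of the paper."""
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    coeffs = U.T @ y            # data expressed in the left singular basis
    keep = s > tau              # drop directions that amplify noise
    return Vt.T[:, keep] @ (coeffs[keep] / s[keep])

# Toy ill-posed operator: one singular value is nearly zero.
K = np.diag([1.0, 0.5, 1e-6])
y = K @ np.array([1.0, 2.0, 3.0])
print(svd_threshold_inverse(K, y, tau=1e-3))  # unstable direction is zeroed out
```

the stability comes from never dividing by a small singular value; the price, visible above, is that any signal living in the discarded directions is lost, which is exactly what localized needlet frames are designed to mitigate.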
in the domain of autonomic and organic computing, the entities of a distributed system vary, as do the efficiency and intention of their work. a scalable mechanism is therefore needed to incentivise entities that contribute towards the system goal and to sanction those that work against it. trust is a well-suited metric for finding benevolent entities. in this paper we focus on two models. the first is the simtrust model, which establishes trust between entities that share interests and opinions through tagging information. the second is the weighted simple exponential smoothing trust metric (wses), which operates on explicitly rated items and follows two basic rules that ensure a consistent rating mechanism. comparing the two, simtrust has advantages for items that have not yet been rated or cannot easily be rated, while wses returns good results on explicit rating values. we propose concepts for combining both approaches and state the cases in which they are incompatible.
arxiv:2101.09715
we performed numerical simulations of stellar occultations by extrasolar cometary tails. we find that extrasolar comets can be detected through the apparent photometric variations of their central stars. in most cases, the light curve shows a very peculiar "rounded triangular" shape; in some other cases, however, the curve can mimic a planetary occultation. photometric variations due to comet occultations are mainly achromatic. nevertheless, if comets with small periastrons have smaller particles, these occultations could be chromatic, with an extinction larger by a few percent in the blue. we also estimate the number of detections expected in a large, high-accuracy photometric survey: by observing several tens of thousands of stars, it should be possible to detect several hundred occultations per year. we thus conclude that a space-based photometric survey would detect a large number of extrasolar comets. this would allow us to explore the time evolution of cometary activity and consequently to probe the structure and evolution of extrasolar planetary systems.
arxiv:astro-ph/9812381
in the aiops (artificial intelligence for it operations) era, accurately forecasting system states is crucial. in microservices systems, this task encounters the challenge of dynamic and complex spatio-temporal relationships among microservice instances, primarily due to dynamic deployments, diverse call paths, and cascading effects among instances. current time-series forecasting methods, which focus mainly on intrinsic patterns, are insufficient in environments where spatial relationships are critical. similarly, spatio-temporal graph approaches often neglect the nature of temporal trends, concentrating mostly on message passing between nodes. moreover, current research in the microservices domain frequently underestimates the importance of network metrics and topological structures in capturing the evolving dynamics of systems. this paper introduces stmformer, a model tailored for forecasting system states in microservices environments, capable of handling multi-node and multivariate time series. our method leverages dynamic network connection data and topological information to help model the intricate spatio-temporal relationships within the system. additionally, we integrate the patchcrossattention module to compute the impact of cascading effects globally. we have developed a dataset based on a microservices system and conducted comprehensive experiments comparing stmformer against leading methods. in both short-term and long-term forecasting tasks, our model consistently achieved an 8.6% reduction in mae (mean absolute error) and a 2.2% reduction in mse (mean squared error). the source code is available at https://github.com/xuyifeiiie/stmformer.
arxiv:2408.07894
biological rhythms are generated by pacemaker organs, such as the heart's pacemaker organ (the sinoatrial node) and the master clock of the circadian rhythms (the suprachiasmatic nucleus), which are composed of a network of autonomously oscillatory cells. such biological rhythms show notable periodicity despite the internal and external noise present in each cell. previous experimental studies indicate that the regularity of oscillatory dynamics is enhanced when noisy oscillators interact and become synchronized. this effect, called the collective enhancement of temporal precision, has been studied theoretically under particular assumptions. in this study, we propose a general theoretical framework that enables us to understand the dependence of temporal precision on network parameters including size, connectivity, and coupling intensity, a dependence that has been poorly understood to date. our framework is based on a phase oscillator model that is applicable to general oscillator networks with any coupling mechanism, provided that coupling and noise are sufficiently weak. in particular, we can handle general directed and weighted networks. we quantify the precision of the activity of a single cell and of the mean activity of an arbitrary subset of cells. we find that, in general undirected networks, the standard deviation of cycle-to-cycle periods scales with the system size $n$ as $1/\sqrt{n}$, but only up to a certain system size $n^*$ that depends on the network parameters; enhancement of temporal precision is ineffective when $n > n^*$. we also reveal the advantage of long-range interactions among cells for temporal precision.
arxiv:1108.4790
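the $1/\sqrt{n}$ scaling can be illustrated with a deliberately crude toy model in which the population period of each cycle is simply the average of $n$ independent noisy cell periods; the paper's framework handles genuinely coupled, directed, weighted networks, which this sketch does not:

```python
import random
import statistics

def period_sd(n_cells, n_cycles=2000, sigma=0.1, seed=0):
    """SD of cycle-to-cycle periods of the mean activity of n_cells
    independent noisy oscillators. Toy stand-in for weak coupling:
    the population period is the average of the cells' noisy periods."""
    rng = random.Random(seed)
    periods = [statistics.fmean(rng.gauss(1.0, sigma) for _ in range(n_cells))
               for _ in range(n_cycles)]
    return statistics.stdev(periods)

# SD shrinks roughly as 1/sqrt(N): quadrupling N roughly halves it.
print(period_sd(1), period_sd(4), period_sd(16))
```

the paper's point is that this averaging benefit saturates in real networks: beyond a size $n^*$ set by connectivity and coupling strength, adding cells no longer improves precision, a ceiling the independent-cell toy model cannot show.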
event - driven architecture has been widely adopted in the software industry, emerging as an alternative to the development of enterprise applications based on the rest architectural style. however, little is known about the effects of event - driven architecture on modularity while enterprise applications evolve. consequently, practitioners end up adopting it without any empirical evidence about its impacts on essential indicators, including separation of concerns, coupling, cohesion, complexity and size. this article, therefore, reports an exploratory study comparing event - driven architecture and rest style in terms of modularity. a real - world application was developed using an event - driven architecture and rest through five evolution scenarios. in each scenario, a feature was added. the generated versions were compared using ten metrics. the initial results suggest that the event - driven architecture improved the separation of concerns, but was outperformed considering the metrics of coupling, cohesion, complexity and size. the findings are encouraging and can be seen as a first step in a more ambitious agenda to empirically evaluate the benefits of event - driven architecture against the rest style.
arxiv:2110.14699
the switching probability of a single-domain ferromagnet under spin-current excitation is evaluated using the fokker-planck equation (fpe). in the case of uniaxial anisotropy, the fpe reduces to an ordinary differential equation in which the lowest eigenvalue $\lambda_1$ determines the slowest switching events. we have calculated $\lambda_1$ using both analytical and numerical methods. it is found that the previous model based on thermally distributed initial magnetization states \cite{sun1} can be accurately justified in some useful limiting conditions.
arxiv:cond-mat/0612334
the following notes are intended to provide a brief primer in plasma physics, introducing common definitions, basic properties and processes typically found in plasmas. these concepts are inherent in contemporary plasma - based accelerator schemes, and thus build foundation for the more advanced lectures which follow in this volume. no prior knowledge of plasma physics is required, but the reader is assumed to be familiar with basic electrodynamics and fluid mechanics.
arxiv:2007.04783
we present the irreversibility generated by a stationary cavity magnomechanical system composed of a yttrium iron garnet (yig) sphere, a few hundred micrometers in diameter, inside a microwave cavity. in this system, the magnons, i.e., collective spin excitations in the sphere, are coupled to the cavity photon mode via the magnetic dipole interaction and to the phonon mode via the magnetostrictive force (an optomechanical-like coupling). we employ the quantum phase space formulation of the entropy change to evaluate the steady-state entropy production rate and the associated quantum correlations in the system. we find that the behavior of the entropy flow between the cavity photon mode and the phonon mode is determined by the magnon-photon coupling and the cavity photon dissipation rate. interestingly, the entropy production rate can increase or decrease depending on the strength of the magnon-photon coupling and the detuning parameters. we further show that, for small magnon-photon coupling, the amount of correlation between the magnon and phonon modes is linked to the irreversibility generated in the system. our results demonstrate the possibility of exploring irreversibility in driven magnon-based hybrid quantum systems and open a promising route for quantum thermal applications.
arxiv:2401.16857
we clarify that metamagnetic transitions in three dimensions show unusual properties as quantum phase transitions if they are accompanied by changes in fermi-surface topology. an unconventional universality, deeply affected by the topological nature of lifshitz-type transitions, emerges around the marginal quantum critical point (mqcp). here the mqcp is defined as the meeting point of the finite-temperature critical line and a quantum critical line running in the zero-temperature plane. the mqcp offers a marked contrast with the ising universality and the gas-liquid-type criticality satisfied by conventional metamagnetic transitions. at the mqcp, the inverse magnetic susceptibility $\chi^{-1}$ has a diverging slope as a function of the magnetization $m$ (namely, $|d\chi^{-1}/dm| \to \infty$) on one side of the transition, which should not occur in any conventional quantum critical phenomenon. the exponent of the divergence can be estimated even at finite temperatures. we propose that such an unconventional universality indeed accounts for the metamagnetic transition in zrzn$_2$.
arxiv:cond-mat/0703441
considerable work has focused on the use of epitaxial strain to engineer domain structures in ferroic materials. here, we revisit the observed reduction of domain variants in rhombohedral bifeo3 films on rare-earth scandate substrates. prior work has attributed the reduction of domain variants to anisotropic in-plane strain, but our findings suggest that the monoclinic distortion of the substrate, resulting from oxygen octahedral rotation, is the driving force for variant selection. we study epitaxial bifeo3 / dysco3 (110)o heterostructures with and without ultrathin, cubic srtio3 buffer layers as a means to isolate the effect of symmetry mismatch on domain formation. two-variant stripe domains are observed in films grown directly on dysco3, while four-variant domains are observed in films grown on srtio3-buffered dysco3 when the buffer layer is > 2 nm thick. this work provides insight into the role of the substrate, beyond simple lattice mismatch, in manipulating and controlling domain structure evolution in materials.
arxiv:1509.03709
we consider effects of the coulomb interaction in a granular normal metal at temperatures that are not very low, such that weak localization effects are suppressed. in this limit, calculations with the initial electron hamiltonian are reduced to integrations over a phase variable with an effective action, which can be considered a bosonization of the granular metal. the conditions of applicability of the effective action are considered in detail, and the importance of winding numbers for the phase variables is emphasized. explicit calculations are carried out for the conductivity and the tunneling density of states in the limits of large ($g \gg 1$) and small ($g \ll 1$) tunneling conductance. it is demonstrated, for any dimension of the array of grains, that at small $g$ the conductivity and the tunneling density of states decay exponentially with temperature. at large $g$ the conductivity also decays with decreasing temperature, and its temperature dependence is logarithmic, independent of dimensionality and of the presence of a magnetic field. the tunneling density of states for $g \gg 1$ is anomalous in any dimension, but the anomaly is stronger than logarithmic in low dimensions and is similar to that for disordered systems. the formulae derived are compared with existing experiments. the logarithmic behavior of the conductivity at large $g$ obtained in our model can explain numerous experiments on systems with a granular structure, including some high-$t_c$ materials.
arxiv:cond-mat/0302257
a cs fountain electron electric dipole moment (edm) experiment using electric-field quantization is demonstrated. with magnetic fields reduced to 200 pt or less, the electric field lifts the degeneracy between hyperfine levels of different |mf| and, together with the slow beam and fountain geometry, suppresses systematics from motional magnetic fields. transitions are induced and the atoms polarized and analyzed in field-free regions. the feasibility of reaching a sensitivity to an electron edm of 2 x 10^-50 c m [1.3 x 10^-29 e cm] in a cesium fountain experiment is discussed.
arxiv:physics/0602011
modeling real-world distributions can often be challenging due to sample data that are subject to perturbations, e.g., instrumentation errors or added random noise. since flow models are typically nonlinear algorithms, they amplify these initial errors, leading to poor generalization. this paper proposes a framework for constructing normalizing flows (nf) that demonstrates higher robustness against such initial errors. to this end, we utilize bernstein-type polynomials, inspired by the optimal stability of the bernstein basis. compared to existing nf frameworks, our method provides compelling advantages such as theoretical upper bounds on the approximation error, higher interpretability, suitability for compactly supported densities, and the ability to employ higher-degree polynomials without training instability. we conduct a thorough theoretical analysis and empirically demonstrate the efficacy of the proposed technique in experiments on both real-world and synthetic datasets.
arxiv:2102.03509
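the bernstein basis underlying such flows can be sketched as follows; monotonically increasing coefficients yield a monotone, hence invertible, map on [0, 1], which is the property a flow transform needs. this is an illustration of the basis, not the paper's actual parameterization:

```python
from math import comb

def bernstein_basis(k, n, t):
    """k-th Bernstein basis polynomial of degree n at t in [0, 1]:
    B_{k,n}(t) = C(n, k) * t**k * (1 - t)**(n - k)."""
    return comb(n, k) * t**k * (1 - t) ** (n - k)

def bernstein_poly(coeffs, t):
    """Evaluate sum_k c_k * B_{k,n}(t). With increasing coefficients
    in [0, 1] this is a monotone map of [0, 1] onto itself."""
    n = len(coeffs) - 1
    return sum(c * bernstein_basis(k, n, t) for k, c in enumerate(coeffs))

# The basis is a partition of unity: the functions sum to 1 for any t,
# one reason the basis is numerically well-conditioned.
print(sum(bernstein_basis(k, 5, 0.3) for k in range(6)))
```

the stability argument in the text rests on this basis being optimally conditioned among polynomial bases on [0, 1], so small coefficient perturbations produce small function perturbations.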
information-centric networking (icn) is a recent paradigm that claims to mitigate some limitations of the current ip-based internet architecture. the centerpiece of icn is named and addressable content, rather than hosts or interfaces. content-centric networking (ccn) is a prominent icn instance that shares its fundamental architectural design with its equally popular academic sibling, named-data networking (ndn). ccn eschews source addresses and creates one-time virtual circuits for every content request (called an interest). as an interest is forwarded it creates state in intervening routers, and the requested content is delivered back over the reverse path using that state. although a stateful forwarding plane might be beneficial in terms of efficiency and resilience to certain types of attacks, this has not been decisively proven via realistic experiments. since keeping per-interest state complicates router operations and makes the infrastructure susceptible to router state exhaustion attacks (e.g., there is currently no effective defense against interest flooding attacks), the value of the stateful forwarding plane in ccn should be re-examined. in this paper, we explore the supposed benefits and various problems of the stateful forwarding plane. we then argue that its benefits are uncertain at best and that it should not be a mandatory ccn feature. to this end, we propose a new stateless architecture for ccn that provides nearly all the functionality of the stateful design without its headaches. we analyze the performance and resource requirements of the proposed architecture via experiments.
arxiv:1512.07755
competition is a major force in structuring ecological communities. the strength of competition can be measured using the concept of a niche. a niche comprises the set of requirements of an organism in terms of habitat, environment and functional role. the more niches overlap, the stronger competition is. the niche breadth is a measure of specialization : the smaller the niche space of an organism, the more specialized the organism is. it follows that, everything else being equal, generalists tend to be more competitive than specialists. in this paper, we compare the outcome of competition among generalists and specialists in a spatial versus a nonspatial habitat in a heterogeneous environment. generalists can utilize the entire habitat, whereas specialists are restricted to their preferred habitat type. we find that although competitiveness decreases with specialization, specialists are more competitive in a spatial than in a nonspatial habitat as patchiness increases.
arxiv:math/0610227
this correspondence investigates a reconfigurable intelligent surface (ris)-assisted wireless communication system under security threats. the ris is deployed to improve the secrecy outage probability (sop) of the data sent to a legitimate user. by deriving the distributions of the received signal-to-noise ratios (snrs) at the legitimate user and the eavesdropper, we formulate, in a closed-form expression, a tight bound for the sop under the constraint of discrete phase control at the ris. the sop is characterized as a function of the number of antenna elements, $n$, and the number of discrete phase choices, $2^b$. it is revealed that the sop loss due to discrete phase control is negligible for large $n$ when $b \geq 3$. in addition, we explicitly quantify this sop loss when binary phase shifts with $b = 1$ are utilized. it is identified that increasing the number of ris antenna elements by a factor of $1.6$ achieves the same sop with binary phase shifts as an ris with ideally continuous phase shifts. numerical simulations are conducted to verify the accuracy of these theoretical observations.
arxiv:2210.17084
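The closed-form sop bound is derived in the paper itself; as a rough, hedged illustration of the discrete-phase effect described above, the following Monte Carlo sketch (the function names and the simplified phase-alignment channel model are assumptions, not the paper's derivation) compares the average coherent-combining gain under ideal continuous phases versus $b$-bit quantized phases:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_snr_gain(n, b=None, trials=2000):
    """Average normalized combining gain |sum_k exp(j(theta_k + phi_k))|^2 / n^2,
    where theta_k are the channel phases to be cancelled and phi_k is the RIS
    phase shift: ideal (-theta_k) when b is None, else quantized to 2^b levels."""
    gains = []
    for _ in range(trials):
        theta = rng.uniform(-np.pi, np.pi, n)  # random channel phases
        phi = -theta                           # ideal continuous control
        if b is not None:
            step = 2 * np.pi / 2 ** b
            phi = np.round(phi / step) * step  # nearest discrete phase level
        gains.append(np.abs(np.exp(1j * (theta + phi)).sum()) ** 2)
    return float(np.mean(gains)) / n ** 2

g_cont = mean_snr_gain(64)        # ~1.0: perfect coherent combining
g_b1 = mean_snr_gain(64, b=1)     # binary phase shifts, roughly (2/pi)^2
g_b3 = mean_snr_gain(64, b=3)     # 3-bit phases, close to 1
```

In this toy model, with $b = 1$ each element's phasor averages $2/\pi$ of its aligned value, so matching the continuous-phase gain requires roughly $\pi/2 \approx 1.6$ times as many elements, which is consistent with the $1.6\times$ figure quoted in the abstract.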
deep networks achieve outstanding results in semantic segmentation; however, they need to be trained in a single shot with a large amount of data. continual learning settings, where new classes are learned in incremental steps and previous training data is no longer available, are challenging due to the catastrophic forgetting phenomenon. existing approaches typically fail when several incremental steps are performed or in the presence of a distribution shift of the background class. we tackle these issues by recreating no-longer-available data for the old classes and outlining a content-inpainting scheme on the background class. we propose two sources for replay data. the first resorts to a generative adversarial network to sample from the class space of past learning steps. the second relies on web-crawled data to retrieve images containing examples of the old classes from online databases. in both scenarios, no samples of past steps are stored, thus avoiding privacy concerns. replay data are then blended with new samples during the incremental steps. our approach, recall, outperforms state-of-the-art methods.
arxiv:2108.03673
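The abstract above describes blending replay data with new samples during the incremental steps; a minimal, hedged sketch of that blending (the 50/50 ratio, function names, and uniform sampling are illustrative assumptions, not recall's actual training loop):

```python
import random

def blended_batch(new_data, replay_data, batch_size=8, replay_frac=0.5, seed=None):
    """Draw one training batch mixing current-step samples with replay samples
    (stand-ins for GAN-generated or web-crawled images of old classes)."""
    rng = random.Random(seed)
    n_replay = int(round(batch_size * replay_frac))
    batch = (rng.sample(replay_data, n_replay)
             + rng.sample(new_data, batch_size - n_replay))
    rng.shuffle(batch)  # avoid ordering bias within the batch
    return batch
```

in an actual incremental step, `replay_data` would be regenerated or re-crawled on the fly rather than stored, matching the privacy-preserving setup described above.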
although atrial fibrillation (af), a common arrhythmia, frequently presents in patients with underlying valvular disease, its hemodynamic contributions are not fully understood. the present work aimed to computationally study how the physical conditions imposed by pathologic valvular anatomy act on af hemodynamics. we simulated af with different severity grades of left-sided valvular diseases and compared the cardiovascular effects they exert during af relative to lone af. the fluid dynamics model used here has been recently validated for lone af and relies on a lumped parameterization of the four heart chambers, together with the systemic and pulmonary circulation. three grades of severity (mild, moderate, severe) were analyzed for each of the four valvulopathies (aortic stenosis, mitral stenosis, aortic regurgitation, mitral regurgitation). regurgitation was hemodynamically more relevant than stenosis: the latter led to inefficient cardiac flow, while the former introduced more drastic fluid dynamics variations. moreover, mitral valvulopathies were more significant than aortic ones. in the case of aortic valve diseases, proper mitral functioning damps out changes at the atrial and pulmonary levels. in the case of mitral valvulopathy, the mitral valve loses its regulating capability, so hemodynamic variations almost equally affect regions upstream and downstream of the valve. the present study revealed that both mitral and aortic regurgitation strongly affect hemodynamics, followed by mitral stenosis, while aortic stenosis has the least impact among the analyzed valvular diseases. the proposed approach can provide new mechanistic insights as to which valvular pathologies merit more aggressive treatment of af. the present findings, if clinically confirmed, hold the potential to impact af management (e.g., adoption of a rhythm control strategy) in specific valvular diseases.
arxiv:1607.07608
we construct a cohomology theory controlling the deformations of a general drinfel'd algebra. the picture presented here has two sides: the combinatorial one, related to the existence of a graded lie algebra structure on the simplicial cochain complex of the associahedra, and the algebraic one, related to the algebra of derivations on the bar construction.
arxiv:hep-th/9312196
we consider generalizations of rational convexity to stein manifolds and prove related results.
arxiv:2310.07066
a kahler-type form is a symplectic form compatible with an integrable complex structure. let m be either a torus or a k3 surface equipped with a kahler-type form. we show that the homology class of any maslov-zero lagrangian torus in m has to be non-zero and primitive. this extends previous results of abouzaid-smith (for tori) and sheridan-smith (for k3 surfaces), who proved it for particular kahler-type forms on m. in the k3 case our proof uses dynamical properties of the action of the diffeomorphism group of m on the space of kahler-type forms. these properties are obtained using shah's arithmetic version of ratner's orbit closure theorem.
arxiv:2105.05971
we introduce a new sequence space $l(r,s,t,p;\delta^{(m)})$ defined by combining generalized means and the difference operator of order $m$. we show that the space $l(r,s,t,p;\delta^{(m)})$ is complete under a suitable paranorm and that it has a schauder basis. furthermore, the $\alpha$-, $\beta$-, and $\gamma$-duals of this space are computed, and we obtain necessary and sufficient conditions for some matrix transformations from $l(r,s,t,p;\delta^{(m)})$ to $l_{\infty}$ and $l_1$. finally, we obtain some identities and estimates for the operator norms and the hausdorff measure of noncompactness of some matrix operators on the bk space $l_{p}(r,s,t;\delta^{(m)})$ by applying the hausdorff measure of noncompactness.
arxiv:1308.2667
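The abstract does not state how $\delta^{(m)}$ acts; as a hedged illustration only, one common convention for the $m$-th order difference operator in the difference-sequence-space literature is:

```latex
% one common convention (an assumption here, not taken from the abstract):
\[
  \Delta^{(m)} x_k \;=\; \sum_{i=0}^{m} (-1)^{i} \binom{m}{i}\, x_{k+i},
  \qquad m \in \mathbb{N},
\]
% so that \Delta^{(1)} x_k = x_k - x_{k+1} and
% \Delta^{(m)} = \Delta^{(1)} \circ \Delta^{(m-1)}.
```

the paper may use a different sign or index convention; the recursion $\Delta^{(m)} = \Delta^{(1)} \circ \Delta^{(m-1)}$ is the structural point.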
we present a new approach for the neural optimal transport (not) training procedure, capable of accurately and efficiently estimating the optimal transport plan via a specific regularization on the dual kantorovich potentials. the main bottleneck of existing not solvers is the procedure of finding a near-exact approximation of the conjugate operator (i.e., the c-transform), which is done either by optimizing over non-convex max-min objectives or by computationally intensive fine-tuning of the initial approximated prediction. we resolve both issues by proposing a new, theoretically justified loss in the form of an expectile regularization which enforces binding conditions on the learning process of the dual potentials. such a regularization provides an upper-bound estimate over the distribution of possible conjugate potentials and makes the learning stable, completely eliminating the need for additional extensive fine-tuning. the proposed method, called expectile-regularized neural optimal transport (enot), outperforms previous state-of-the-art approaches on the established wasserstein-2 benchmark tasks by a large margin (up to a 3-fold improvement in quality and up to a 10-fold improvement in runtime). moreover, we showcase the performance of enot for varying cost functions on different tasks such as image generation, showing the robustness of the proposed algorithm. the ott-jax library includes our implementation of the enot algorithm: https://ott-jax.readthedocs.io/en/latest/tutorials/enot.html
arxiv:2403.03777
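The abstract does not spell out the regularizer itself; as a hedged sketch of the building block it names, here is the standard expectile loss (the asymmetric squared loss underlying expectile regression; the function names and the fixed-point solver are illustrative, not enot's training code):

```python
import numpy as np

def expectile_loss(residuals, tau=0.9):
    """Asymmetric squared loss of expectile regression:
    rho_tau(u) = |tau - 1[u < 0]| * u^2.
    tau > 0.5 penalizes under-predictions more, pushing the fit toward
    an upper envelope of the targets (the 'upper bound' flavor above)."""
    u = np.asarray(residuals, dtype=float)
    weight = np.where(u < 0, 1.0 - tau, tau)
    return float(np.mean(weight * u ** 2))

def expectile(x, tau=0.9, iters=100):
    """The tau-expectile of a sample: the constant c minimizing
    expectile_loss(x - c, tau), found by weighted-mean fixed-point iteration."""
    x = np.asarray(x, dtype=float)
    c = x.mean()  # tau = 0.5 recovers the mean
    for _ in range(iters):
        w = np.where(x < c, 1.0 - tau, tau)
        c = np.sum(w * x) / np.sum(w)
    return float(c)
```

choosing $\tau$ close to 1 makes the regularized potential track an upper estimate of the conjugate values rather than their average, which is the mechanism gestured at in the abstract.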
cluster-weighted modeling (cwm) is a flexible mixture approach for modeling the joint probability of data coming from a heterogeneous population as a weighted sum of products of marginal distributions and conditional distributions. in this paper, we introduce a wide family of cluster-weighted models in which the conditional distributions are assumed to belong to the exponential family with canonical links; these will be referred to as generalized linear gaussian cluster-weighted models. moreover, we show that, in a suitable sense, mixtures of generalized linear models can be considered as nested in generalized linear gaussian cluster-weighted models. the proposal is illustrated through numerical studies based on both simulated and real data sets.
arxiv:1211.1171
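The "weighted sum of products of marginal and conditional distributions" above can be made concrete with a minimal numpy sketch of the linear-gaussian special case (the parameter layout and the univariate linear-regression component are illustrative assumptions, not the paper's general exponential-family formulation):

```python
import numpy as np

def gauss_pdf(x, mu, sigma):
    """Univariate normal density N(x | mu, sigma)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def cwm_joint(x, y, weights, x_params, reg_params):
    """Joint density of a linear-Gaussian cluster-weighted model:
    p(x, y) = sum_g pi_g * N(y | a_g + b_g x, s_g) * N(x | mu_g, sigma_g),
    i.e. mixture weight * conditional(y|x) * marginal(x) per cluster g."""
    total = 0.0
    for pi_g, (mu, sigma), (a, b, s) in zip(weights, x_params, reg_params):
        total = total + pi_g * gauss_pdf(y, a + b * x, s) * gauss_pdf(x, mu, sigma)
    return total
```

the nesting claim in the abstract corresponds to the limit where the marginals $N(x \mid \mu_g, \sigma_g)$ no longer depend on $g$, so only the conditional regressions distinguish the clusters, recovering a mixture of (generalized) linear models.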
for the first time, a group of cab6-type cubic rare-earth high-entropy hexaborides has been successfully fabricated into dense bulk pellets (>98.5% relative density). the specimens are prepared from elemental precursors via in-situ metal-boron reactive spark plasma sintering. the sintered bulk pellets are determined to be single-phase without any detectable oxides or other secondary phases. homogeneous elemental distributions have been confirmed at both the microscale and nanoscale. the vickers microhardness is measured to be 16-18 gpa at a standard indentation load of 9.8 n. the nanoindentation hardness and young's moduli are measured to be 19-22 gpa and 190-250 gpa, respectively, by nanoindentation tests using a maximum load of 500 mn. the material work functions are determined to be 3.7-4.0 ev by ultraviolet photoelectron spectroscopy, values significantly higher than that of lab6.
arxiv:2104.12549
a generic feature of string-derived models is the appearance of an anomalous abelian u(1)_a symmetry which, among other properties, constrains the yukawa couplings and distinguishes the three families from each other. in this paper, we discuss in a model-independent way the general constraints imposed by such a u(1)_a symmetry on fermion masses, r-violating couplings and proton-decay operators in a generic flipped su(5) x u(1)' model. we construct all possible viable fermion mass textures and give various examples of effective low-energy models which are distinguished from each other by their different predictions for b-, l- and r-violating effects. we pay particular attention to predictions for neutrino masses, in the light of the recent super-kamiokande data.
arxiv:hep-ph/0002263
during the run in the year 2000, with data collected at collision energies up to 209 gev, the lep experiments have possibly unearthed the first evidence of a higgs boson signal at mh = 115 gev/c^2. the preliminary combined results, prepared immediately after the end of data-taking in november 2000, are presented here. overall, a 2.9 sigma excess over the background is found, consistent with a standard model higgs boson signal with mh = 115.0 gev/c^2.
arxiv:hep-ex/0108002