text | source |
|---|---|
owing to the recent, rapid development of computer technology, the resolution of atmospheric numerical models has increased substantially. with the use of next - generation supercomputers, atmospheric simulations using horizontal grid intervals of o ( 100 ) m or less will gain popularity. at such high resolution more of the steep gradients in mountainous terrain will be resolved, which may result in large truncation errors in those models using terrain - following coordinates. in this study, a new 3d cartesian coordinate non - hydrostatic atmospheric model is developed. a cut - cell representation of topography based on finite - volume discretization is combined with a cell - merging approach, in which small cut - cells are merged with neighboring cells either vertically or horizontally. in addition, a block - structured mesh - refinement technique is introduced to achieve a variable resolution on the model grid with the finest resolution occurring close to the terrain surface. the model successfully reproduces a flow over a 3d bell - shaped hill that shows a good agreement with the flow predicted by the linear theory. the ability of the model to simulate flows over steep terrain is demonstrated using a hemisphere - shaped hill where the maximum slope angle is resolved at 71 degrees. the advantage of a locally refined grid around a 3d hill, with cut - cells at the terrain surface, is also demonstrated using the hemisphere - shaped hill. the model reproduces smooth mountain waves propagating over varying grid resolution without introducing large errors associated with the change of mesh resolution. at the same time, the model shows a good scalability on a locally refined grid with the use of openmp. | arxiv:1509.02572 |
we study the process of primordial black hole ( pbh ) formation at the beginning of radiation era for the cosmological scenario in which the inflaton is a pseudo - nambu - goldstone boson ( axion ) and there is a coupling of the inflaton with some gauge field. in this model inflation is accompanied by the gauge quanta production and a strong rise of the curvature power spectrum amplitude at small scales ( along with non - gaussianity ) is predicted. we show that data on pbh searches can be used for a derivation of essential constraints on the model parameters in such an axion inflation scenario. we compare our numerical results with the similar results published earlier, in the work by linde et al. | arxiv:1312.7435 |
data tampering is often considered a severe problem in industrial applications as it can lead to inaccurate financial reports or even a corporate security crisis. a correct representation of data is essential for companies ' core business processes and is demanded by investors and customers. traditional data audits are performed through third - party auditing services ; however, these services are expensive and can be untrustworthy in some cases. blockchain and smart contracts provide a decentralized mechanism to achieve secure and trustworthy data integrity verification ; however, existing solutions present challenges in terms of scalability, privacy protection, and compliance with data regulations. in this paper, we propose the automated and decentralized integrity verification model ( auditem ) to assist business stakeholders in verifying data integrity in a trustworthy and automated manner. to address the challenges in existing integrity verification processes, our model uses carefully designed smart contracts and a distributed file system to store integrity verification attributes and uses blockchain to enhance the authenticity of data certificates. a sub - module called data integrity verification tool ( divt ) is also developed to support easy - to - use interfaces and customizable verification operations. this paper presents a detailed implementation and designs experiments to verify the proposed model. the experimental and analytical results demonstrate that our model is feasible and efficient to meet various business requirements for data integrity verification. | arxiv:2207.00370 |
the early detection of drowsiness has become vital to ensure the correct and safe development of several industries' tasks. due to the transient mental state of a human subject between alertness and drowsiness, automated drowsiness detection is a complex problem to tackle. electroencephalography signals allow us to record variations in an individual's brain's electrical potential, where each of them gives specific information about a subject's mental state. however, due to this type of signal's nature, its acquisition is in general complex, so it is hard to have a large volume of data with which to apply deep learning techniques optimally for processing and classification. nevertheless, capsule neural networks are a brand-new deep learning algorithm proposed to work with reduced amounts of data, and they are robust in handling the data's hierarchical relationships, an essential characteristic for work with biomedical signals. therefore, this paper presents a deep learning-based method for drowsiness detection with capsnet, using a concatenation of spectrogram images of the electroencephalography signal channels. the proposed capsnet model is compared with a convolutional neural network and outperforms it, obtaining an average accuracy of 86.44% and a sensitivity of 87.57%, against an average accuracy of 75.86% and a sensitivity of 79.47% for the cnn, showing that capsnet is more suitable for this kind of dataset and task. | arxiv:2204.01666 |
segmentation of retinal vessel images is critical to the diagnosis of retinopathy. recently, convolutional neural networks have shown significant ability to extract the blood vessel structure. however, it remains challenging to obtain refined segmentation of the capillaries and the edges of retinal vessels due to thickness inconsistencies and blurry boundaries. in this paper, we propose a novel deep neural network for retinal vessel segmentation based on a shared decoder and pyramid-like loss (spnet) to address the above problems. specifically, we introduce a decoder-sharing mechanism to capture multi-scale semantic information, where feature maps at diverse scales are decoded through a sequence of weight-sharing decoder modules. also, to strengthen the characterization of the capillaries and the edges of blood vessels, we define a residual pyramid architecture which decomposes the spatial information in the decoding phase. a pyramid-like loss function is designed to compensate for possible segmentation errors progressively. experimental results on public benchmarks show that the proposed method outperforms the backbone network and the state-of-the-art methods, especially in the regions of the capillaries and the vessel contours. in addition, performance on cross-dataset evaluations verifies that spnet shows stronger generalization ability. | arxiv:2202.09515 |
in the rapidly advancing field of neuromorphic computing, integrating biologically-inspired models like the leaky integrate-and-fire astrocyte (lifa) into spiking neural networks (snns) enhances system robustness and performance. this paper introduces the lifa model in snns, addressing energy efficiency, memory management, routing mechanisms, and fault tolerance. our core architecture consists of neurons, synapses, and astrocyte circuits, with each astrocyte supporting multiple neurons for self-repair. this clustered model improves fault tolerance and operational efficiency, especially under adverse conditions. we developed a routing methodology to map the lifa model onto a fault-tolerant, many-core design, optimizing network functionality and efficiency. our model features a fault tolerance rate of 81.10% and a resilience improvement rate of 18.90%, significantly surpassing other implementations. the results validate our approach in memory management, highlighting its potential as a robust solution for advanced neuromorphic computing applications. the integration of astrocytes represents a significant advancement, setting the stage for more resilient and adaptable neuromorphic systems. | arxiv:2502.20492 |
in this letter we show how the topological number of a static hamiltonian can be measured from a dynamical quench process. we focus on a two-band chern insulator in two dimensions, for instance, the haldane model, whose dynamical process can be described by a mapping from the $[k_x, k_y, t]$ space to the bloch sphere, characterized by the hopf invariant. such a mapping has been constructed experimentally by measurements in cold atom systems. we show that, taking any two constant vectors on the bloch sphere, their inverse images under this mapping are two trajectories in the $[k_x, k_y, t]$ space, and the linking number of these two trajectories exactly equals the chern number of the static hamiltonian. applying this result to a recent experiment from the hamburg group, we show that the linking number of the trajectories of the phase vortices determines the phase boundary of the static hamiltonian. | arxiv:1611.03304 |
learning representations which remain invariant to a nuisance factor is of great interest in domain adaptation, transfer learning, and fair machine learning. finding such representations becomes highly challenging in nlp tasks since the nuisance factor is entangled in the raw text. to our knowledge, a major issue is also that only a few nlp datasets allow assessing the impact of such a factor. in this paper, we introduce two generalization metrics to assess model robustness to a nuisance factor: \textit{generalization under target bias} and \textit{generalization onto unknown}. we combine those metrics with a simple data filtering approach to control the impact of the nuisance factor on the data and thus to build experimentally biased datasets. we apply our method to standard datasets of the literature (\textit{amazon} and \textit{yelp}). our work shows that a simple text classification baseline (i.e., sentiment analysis on reviews) may be badly affected by the \textit{product id} (considered as a nuisance factor) when learning the polarity of a review. the proposed method is generic and applicable as soon as the nuisance variable is annotated in the dataset. | arxiv:1907.12305 |
we present the clustering properties of a complete sample of 968 radio sources detected at 1.4 ghz by the vla-cosmos survey with radio fluxes brighter than 0.15 mjy. 92% have redshift determinations from the laigle et al. (2016) catalogue. based on their radio luminosity, these objects have been divided into two populations of 644 agn and 247 star-forming galaxies. by fixing the slope of the auto-correlation function to gamma = 2, we find r_0 = 11.7^{+1.0}_{-1.1} mpc for the clustering length of the whole sample, while r_0 = 11.2^{+2.5}_{-3.3} mpc and r_0 = 7.8^{+1.6}_{-2.1} mpc (r_0 = 6.8^{+1.4}_{-1.8} mpc if we restrict our analysis to z < 0.9) are respectively obtained for agn and star-forming galaxies. these values correspond to minimum masses for dark matter haloes of m_min = 10^[13.6^{+0.3}_{-0.6}] m_sun for radio-selected agn and m_min = 10^[13.1^{+0.4}_{-1.6}] m_sun for radio-emitting star-forming galaxies (m_min = 10^[12.7^{+0.7}_{-2.2}] m_sun for z < 0.9). comparisons with previous works imply an independence of the clustering properties of the agn population with respect to both radio luminosity and redshift. we also investigate the relationship between dark and luminous matter in both populations. we obtain <m*>/m_halo <~ 10^{-2.7} for agn, and <m*>/m_halo <~ 10^{-2.4} in the case of star-forming galaxies. furthermore, if we restrict to z <~ 0.9 star-forming galaxies, we derive <m*>/m_halo <~ 10^{-2.1}, a result which clearly indicates the cosmic process of stellar build-up as one moves towards the more local universe. | arxiv:1606.08286 |
an electron-phonon system at commensurate filling often displays charge order (co) in the ground state. such a system subject to a laser pulse shows a wide variety of behaviour. a weak pulse sets up low-amplitude oscillations in the order parameter, with slow decay to a slightly suppressed value. a strong pulse leads to the destruction of the charge order, with the order parameter showing rapid, oscillatory decay to zero. the regime in between, separating the weak-pulse co-sustained state from the strong-pulse co-destroyed state, shows complex dynamics characterised by multiple, pulse-strength-dependent time scales. it involves an initial rapid decay of the order parameter, followed by a low-amplitude quiescent state, and the power-law rise to a steady state over a timescale $\tau_{cr}$. we provide a complete characterisation of the dynamics in this nonequilibrium problem for varying electron-phonon coupling and pulse strength, examine the possibility of an effective "thermal" description of the long-time state, and present results on the multiple insulator-metal transitions that show up. | arxiv:2205.14710 |
in the sub-tev regime, the most widely used hadronic interaction models disagree significantly in their predictions for post-first-interaction and ground-level particle spectra from cosmic ray induced air showers. these differences generate an important source of systematic uncertainty in their experimental use. we investigate the nature and impact of model uncertainties through a simultaneous analysis of ground-level particles and first-interaction scenarios. we focus on air shower primaries with energies close to the transition between high- and low-energy hadronic interaction models, where the dissimilarities have been shown to be the largest and well within the range of accelerator measurements. interaction models are shown to diverge as several shower scenarios are compared, reflecting intrinsic differences in the model theoretical frameworks. finally, we discuss the importance of interactions in the energy regime where the switching between models occurs (< 1 tev) and the effect of the choice of model on the number of hadronic interactions within cosmic ray induced air showers of higher energies. | arxiv:2104.11034 |
radar and spacecraft observations show the permanently shadowed regions around mercury's north pole to contain water ice and complex organic material. one possible source of this material is impacts by interplanetary dust particles (idps), asteroids, and comets. we have performed numerical simulations of the dynamical evolution of asteroids and comets over a few myr and checked for their impacts with mercury. we use the n-body integrator rmvs/swifter to propagate the sun and the eight planets from their current positions. we add comets and asteroids to the simulations as massless test particles, based on their current orbital distributions. asteroid impactors are assigned a probability of being water-rich (c-class) based on the measured distribution of taxonomic types. for comets, we assume a constant water fraction. for idps, we use a dynamical meteoroid model to compute the dust flux on mercury. relative to previous work on asteroid and comet impacts (moses et al. 1999), we leverage 20 years of progress in minor body surveys. immediate post-impact ejection of impactor material into outer space is taken into account, as is the migration efficiency of water across mercury's surface to the polar cold traps. we find that asteroids deliver $\sim 1 \times 10^{3}$ kg/yr of water to mercury, comets deliver $\sim 1 \times 10^{3}$ kg/yr and idps deliver $\sim 16 \times 10^{3}$ kg/yr, within a factor of several. over a timescale of $\sim 1$ gyr, this is enough to deliver the minimum amount of water required by the radar and messenger observations. while other sources of water on mercury are not ruled out by our analysis, we show that they are not required to explain the currently available observational lower limits. | arxiv:2204.11825 |
we present three different functional interpretations of intuitionistic linear logic ill and show how these correspond to well-known functional interpretations of intuitionistic logic il via embeddings of il into ill. the main difference from previous work of the second author is that in intuitionistic linear logic (as opposed to classical linear logic) the interpretations of !a are simpler and simultaneous quantifiers are no longer needed for the characterisation of the interpretations. we then compare our approach in developing these three proof interpretations with that of de paiva around the dialectica category model of linear logic. | arxiv:1012.1174 |
we study the most famous example of a large financial market : the arbitrage pricing model, where investors can trade in a one - period setting with countably many assets admitting a factor structure. we consider the problem of maximising expected utility in this setting. besides establishing the existence of optimizers under weaker assumptions than previous papers, we go on studying the relationship between optimal investments in finite market segments and those in the whole market. we show that certain natural ( but nontrivial ) continuity rules hold : maximal satisfaction, reservation prices and ( convex combinations of ) optimizers computed in small markets converge to their respective counterparts in the big market. | arxiv:1907.05593 |
foot temperature profiling is of utmost importance in mitigating the adverse effects of foot complications, especially those due to diabetes. contactless temperature monitoring methods could be used effectively at large scale for patient screening. near-infrared thermography has proven to be convenient and accurate for temperature profiling. the objective of this study is to develop a diagnostic device using the said imaging technology for the detection as well as progress monitoring of foot complications. the device we have developed is capable of scanning the foot plantar and the periphery, and it is also accompanied by a semi-supervised thermal image analysis algorithm which is convenient for the clinician. preliminary clinical testing was conducted using 6 diabetic subjects, of whom 2 had ulcers in either foot, and 9 non-diabetic subjects, 2 of whom had wounds on the plantar surface. the system was able to detect the ulcerated areas and wounds with the algorithm developed specifically for thermal image analysis. | arxiv:1901.05302 |
this paper presents multi - view labelling object detector ( mlod ). the detector takes an rgb image and a lidar point cloud as input and follows the two - stage object detection framework. a region proposal network ( rpn ) generates 3d proposals in a bird ' s eye view ( bev ) projection of the point cloud. the second stage projects the 3d proposal bounding boxes to the image and bev feature maps and sends the corresponding map crops to a detection header for classification and bounding - box regression. unlike other multi - view based methods, the cropped image features are not directly fed to the detection header, but masked by the depth information to filter out parts outside 3d bounding boxes. the fusion of image and bev features is challenging, as they are derived from different perspectives. we introduce a novel detection header, which provides detection results not just from fusion layer, but also from each sensor channel. hence the object detector can be trained on data labelled in different views to avoid the degeneration of feature extractors. mlod achieves state - of - the - art performance on the kitti 3d object detection benchmark. most importantly, the evaluation shows that the new header architecture is effective in preventing image feature extractor degeneration. | arxiv:1909.04163 |
we will extend a recent result of b. choi, p. daskalopoulos and j. king. for any $n \ge 3$, $0 < m < \frac{n-2}{n+2}$ and $\gamma > 0$, we will construct subsolutions and supersolutions of the fast diffusion equation $u_t = \frac{n-1}{m} \Delta u^m$ in $\mathbb{R}^n \times (t_0, T)$, $t_0 < T$, which decay at the rate $(T-t)^{\frac{1+\gamma}{1-m}}$ as $t \nearrow T$. as a consequence we obtain the existence of a unique solution of the cauchy problem $u_t = \frac{n-1}{m} \Delta u^m$ in $\mathbb{R}^n \times (t_0, T)$, $u(x, t_0) = u_0(x)$ in $\mathbb{R}^n$, which decays at the rate $(T-t)^{\frac{1+\gamma}{1-m}}$ as $t \nearrow T$ when $u_0$ satisfies an appropriate decay condition. | arxiv:1902.09165 |
deep neural networks, in particular convolutional neural networks, have become highly effective tools for compressing images and solving inverse problems including denoising, inpainting, and reconstruction from few and noisy measurements. this success can be attributed in part to their ability to represent and generate natural images well. contrary to classical tools such as wavelets, image - generating deep neural networks have a large number of parameters - - - typically a multiple of their output dimension - - - and need to be trained on large datasets. in this paper, we propose an untrained simple image model, called the deep decoder, which is a deep neural network that can generate natural images from very few weight parameters. the deep decoder has a simple architecture with no convolutions and fewer weight parameters than the output dimensionality. this underparameterization enables the deep decoder to compress images into a concise set of network weights, which we show is on par with wavelet - based thresholding. further, underparameterization provides a barrier to overfitting, allowing the deep decoder to have state - of - the - art performance for denoising. the deep decoder is simple in the sense that each layer has an identical structure that consists of only one upsampling unit, pixel - wise linear combination of channels, relu activation, and channelwise normalization. this simplicity makes the network amenable to theoretical analysis, and it sheds light on the aspects of neural networks that enable them to form effective signal representations. | arxiv:1810.03982 |
we demonstrate a novel strong law of large numbers for branching processes, with a simple proof via measure - theoretic manipulations and spine theory. roughly speaking, any sequence of events that eventually occurs almost surely for the spine entails the almost sure convergence of a certain sum over particles in the population. | arxiv:1302.7199 |
b \ " ohm and \ c { s } tefan have expressed cyclic homology as an invariant that assigns homology groups $ \ mathrm { hc } ^ \ chi _ i ( \ mathrm n, \ mathrm m ) $ to right and left coalgebras $ \ mathrm n $ respectively $ \ mathrm m $ over a distributive law $ \ chi $ between two comonads. for the key example associated to a bialgebra $ h $, right $ \ chi $ - coalgebras have a description in terms of modules and comodules over $ h $. the present article formulates conditions under which such a description is simultaneously possible for the left $ \ chi $ - coalgebras. in the above example, this is the case when the bialgebra $ h $ is a hopf algebra with bijective antipode. we also discuss how the generalized hopf module theorem by mesablishvili and wisbauer features both in theory and examples. | arxiv:2501.14561 |
much effort has been put into developing theories for dense fluids, as a result of these efforts many theories work for a certain type of particle or in a certain concentration regime. rosenfeld proposed a dependence of the self - diffusion coefficient on the excess entropy. our proposal is similar to rosenfeld ' s in that it also attempts to describe diffusion in terms of a thermodynamic function but, instead of the excess entropy, we use the thermodynamic factor, or the excess chemical potential. simulations were taken for hard spheres and our model was fitted with two free parameters. simulations were then carried out for a lennard jones gas and our model correctly described the new data with the value of the free parameters that we had obtained for hard spheres. this is a feature of our model that we wish to emphasize, since the usual situation is that parameters have to be re - adjusted for different interaction potentials. an experimental xenon self - diffusion data set was used as an example where the model can be applied, especially in the high - density regime. | arxiv:2312.13843 |
career opportunities for phds in the mathematical sciences have never been better. traditional faculty positions in mathematics departments in colleges and universities range from all teaching to combined teaching and research responsibilities. beyond those, a wide array of careers has now opened up to freshly minted graduates, in academics, industry, business, and government. it is well - understood that these all require somewhat different preparations for phds to be competitive. this commentary compares and contrasts mathematics graduate programs with ph. d. programs in the life and biomedical sciences, which are structured in a way that allows considerable customization around each student ' s career goals. while these programs may not be appropriate templates for the mathematical sciences, they have some features that might be informative. this commentary is intended to add perspective to the ongoing discussion around phd training in the mathematical sciences. it also provides some concrete proposals for changes. | arxiv:2109.07661 |
in this work, we study the formation and evolution of dark matter halos by means of the spherical infall model with shell - crossing. we present a framework to tackle this effect properly based on the numerical follow - up, with time, of that individual shell of matter that contains always the same fraction of mass with respect to the total mass. in this first step, we do not include angular momentum, velocity dispersion or triaxiality. within this framework - named as the spherical shell tracker ( sst ) - we investigate the dependence of the evolution of the halo with virial mass, with the adopted mass fraction of the shell, and for different cosmologies. we find that our results are very sensitive to a variation of the halo virial mass or the mass fraction of the shell that we consider. however, we obtain a negligible dependence on cosmology. furthermore, we show that the effect of shell - crossing plays a crucial role in the way that the halo reaches the stabilization in radius and the virial equilibrium. we find that the values currently adopted in the literature for the actual density contrast at the moment of virialization, delta _ vir, may not be accurate enough. in this context, we stress the problems related to the definition of a virial mass and a virial radius for the halo. the question of whether the results found here may be obtained by tracking the shells with an analytic approximation remains to be explored. | arxiv:astro-ph/0609479 |
generative models for structure-based drug design (sbdd) have shown promising results in recent years. existing works mainly focus on how to generate molecules with higher binding affinity, ignoring the feasibility prerequisites for generated 3d poses and resulting in false positives. we conduct thorough studies on key factors of ill-conformational problems when applying autoregressive methods and diffusion to sbdd, including mode collapse and hybrid continuous-discrete space. in this paper, we introduce molcraft, the first sbdd model that operates in the continuous parameter space, together with a novel noise-reduced sampling strategy. empirical results show that our model consistently achieves superior performance in binding affinity with more stable 3d structure, demonstrating our ability to accurately model interatomic interactions. to our best knowledge, molcraft is the first to achieve reference-level vina scores (-6.59 kcal/mol) with comparable molecular size, outperforming other strong baselines by a wide margin (-0.84 kcal/mol). code is available at https://github.com/algomole/molcraft. | arxiv:2404.12141 |
we report precision measurements of the casimir interaction at larger separation distances between the au - coated surfaces of a sphere and a plate in ultrahigh vacuum using a much softer cantilever of the dynamic atomic force microscope - based setup and two - step cleaning procedure of the vacuum chamber and test body surfaces by means of uv light and ar - ion bombardment. compared to the previously performed experiment, two more measurement sets for the gradient of the casimir force are provided which confirmed and slightly improved the results. next, additional measurements have been performed with a factor of two larger oscillation amplitude of the cantilever. this allowed obtaining meaningful results at much larger separation distances. the comparison of the measurement data with theoretical predictions of the lifshitz theory using the dissipative drude model to describe the response of au to the low - frequency electromagnetic field fluctuations shows that this theoretical approach is experimentally excluded over the distances from 250 to 1100nm ( i. e., a major step forward has been made as compared to the previous work where it was excluded up to only 820nm ). the theoretical approach using the dissipationless plasma model at low frequencies is shown to be consistent with the data over the entire measurement range from 250 to 1300nm. the possibilities to explain these puzzling results are discussed. | arxiv:1911.00703 |
the performance of a face recognition system degrades when the variability of the acquired faces increases. prior work alleviates this issue by either monitoring the face quality in pre-processing or predicting the data uncertainty along with the face feature. this paper proposes magface, a category of losses that learn a universal feature embedding whose magnitude can measure the quality of the given face. under the new loss, it can be proven that the magnitude of the feature embedding monotonically increases if the subject is more likely to be recognized. in addition, magface introduces an adaptive mechanism to learn well-structured within-class feature distributions by pulling easy samples to class centers while pushing hard samples away. this prevents models from overfitting on noisy low-quality samples and improves face recognition in the wild. extensive experiments conducted on face recognition, quality assessment as well as clustering demonstrate its superiority over state-of-the-art methods. the code is available at https://github.com/irvingmeng/magface. | arxiv:2103.06627 |
hadrons formed in heavy - ion collisions are not point - like objects, they cannot occupy too close space - time points. when the two bosons are too close to each other, their constituents start to mix and they cannot be considered as bosons subjected to bose - einstein statistics, this effect is called excluded volume effect. we study the volume effect on hbt for the sources with various sizes. the effect on hbt was shown in out, side and long directions, and it is more obvious for the source with a narrow space - time distribution. the correlation functions for high transverse momenta are more suppressed by the volume effect. hence the incoherence parameter may be more suppressed by the volume effect for high transverse momenta in small collision systems. | arxiv:1906.09754 |
small-$x$ resummation has been proven recently to be a crucial ingredient for describing small-$x$ hera data, and the inclusion of small-$x$ resummation in parton distribution function (pdf) determination has a sizeable effect on the pdfs even at the electroweak scale. in this work we explore the implications of small-$x$ resummation at the large hadron collider (lhc) and at a future circular collider (fcc). we construct the theoretical machinery for resumming physical inclusive observables at hadron colliders, and describe its implementation in the public code hell 3.0. we focus on higgs production in gluon fusion as a prototypical example, both because it is sensitive to small-$x$ gluons and because of its importance for the lhc physics programme. we find that adding small-$x$ resummation to the n$^3$lo higgs production cross section can lead to an increase of up to 10% at fcc, while the effect is smaller (+1%) at lhc but still important to achieve a high level of precision. | arxiv:1805.08785 |
ryabinkin - kohut - staroverov ( rks ) theory builds a bridge between wave function theory and density functional theory by using quantities from the former to produce accurate exchange - correlation potentials needed by the latter. in this work, the rks method is developed and tested alongside slater atomic orbital basis functions for the first time. to evaluate this approach, full configuration interaction computations in the slater orbitals are employed to give quality input to rks method, allowing full correlation to be present along with correct nuclei cusps and asymptotic decay of the wavefunction. the rks method will be shown to be an efficient algorithm to arrive at exchange correlation potentials without unphysical artifacts in moderately - sized basis sets. furthermore, enforcement of the nuclear cusp conditions will be shown to be vital for the success of the slater - basis rks method. examples of weakly and strongly correlated molecular systems will demonstrate the main features of slater rks. | arxiv:2302.11999 |
the success of contrastive learning is well known to be dependent on data augmentation. although the degree of data augmentation has been well controlled by utilizing pre-defined techniques in some domains like vision, time-series data augmentation is less explored and remains a challenging problem due to the complexity of the data generation mechanism, such as the intricate mechanism involved in the cardiovascular system. moreover, there is no widely recognized and general time-series augmentation method that can be applied across different tasks. in this paper, we propose a novel data augmentation method for quasi-periodic time-series tasks that aims to connect intra-class samples together, and thereby find order in the latent space. our method builds upon the well-known mixup technique by incorporating a novel approach that accounts for the periodic nature of non-stationary time-series. also, by controlling the degree of chaos created by data augmentation, our method leads to improved feature representations and performance on downstream tasks. we evaluate our proposed method on three time-series tasks, including heart rate estimation, human activity recognition, and cardiovascular disease detection. extensive experiments against state-of-the-art methods show that the proposed approach outperforms prior works on optimal data generation and known data augmentation techniques in the three tasks, reflecting the effectiveness of the presented method. source code: https://github.com/eth-siplab/finding_order_in_chaos | arxiv:2309.13439 |
in the present paper a review and numerical comparison of a special class of multi - phase traffic theories based on microscopic, kinetic and macroscopic traffic models is given. macroscopic traffic equations with multi - valued fundamental diagrams are derived from different microscopic and kinetic models. numerical experiments show similarities and differences of the models, in particular, for the appearance and structure of stop and go waves for highway traffic in dense situations. for all models, but one, phase transitions can appear near bottlenecks depending on the local density and velocity of the flow. | arxiv:1208.4546 |
stack overflow is often viewed as the most influential software question answer (sqa) website with millions of programming-related questions and answers. tags play a critical role in efficiently structuring the contents in stack overflow and are vital to support a range of site operations, e.g., querying relevant contents. poorly selected tags often introduce extra noise and redundancy, which leads to tag synonym and tag explosion problems. thus, an automated tag recommendation technique that can accurately recommend high-quality tags is desired to alleviate the problems mentioned above. inspired by the recent success of pre-trained language models (ptms) in natural language processing (nlp), we present ptm4tag, a tag recommendation framework for stack overflow posts that utilizes ptms with a triplet architecture, which models the components of a post, i.e., title, description, and code, with independent language models. to the best of our knowledge, this is the first work that leverages ptms in the tag recommendation task of sqa sites. we comparatively evaluate the performance of ptm4tag based on five popular pre-trained models: bert, roberta, albert, codebert, and bertoverflow. our results show that leveraging the software engineering (se) domain-specific ptm codebert in ptm4tag achieves the best performance among the five considered ptms and outperforms the state-of-the-art deep learning (convolutional neural network-based) approach by a large margin in terms of average $precision@k$, $recall@k$, and $f1$-$score@k$. we conduct an ablation study to quantify the contribution of a post's constituent components (title, description, and code snippets) to the performance of ptm4tag. our results show that title is the most important in predicting the most relevant tags, and utilizing all the components achieves the best performance. | arxiv:2203.10965 |
this article addresses the problem of reconstructing the topology of a network of agents interacting via linear dynamics, while being excited by exogenous stochastic sources that are possibly correlated across the agents, from time - series measurements alone. it is shown, under the assumption that the correlations are affine in nature, such network of nodal interactions is equivalent to a network with added agents, represented by nodes that are latent, where no corresponding time - series measurements are available ; however, here all exogenous excitements are spatially ( that is, across agents ) uncorrelated. generalizing affine correlations, it is shown that, under polynomial correlations, the latent nodes in the expanded networks can be excited by clusters of noise sources, where the clusters are uncorrelated with each other. the clusters can be replaced with a single noise source if the latent nodes are allowed to have non - linear interactions. finally, using the sparse plus low - rank matrix decomposition of the imaginary part of the inverse power spectral density matrix ( ipsdm ) of the time - series data, the topology of the network is reconstructed. under non conservative assumptions, the correlation graph is retrieved. | arxiv:2012.04175 |
the two main functions of the nlc extraction line include : 1 ) transmission of the outgoing disrupted beam and secondary particles to the dump with minimal losses ; and 2 ) beam diagnostics and control. in this report, we describe the extraction line optics, present the results of tracking studies, and discuss the extraction line instrumentation. | arxiv:physics/0106062 |
engineer. " only a licensed engineer, for instance, may prepare, sign, seal and submit engineering plans and drawings to a public authority for approval, or to seal engineering work for public and private clients. " this requirement can be written into state and provincial legislation, such as in the canadian provinces, for example the ontario or quebec ' s engineer act. in other countries, such as the uk, no such legislation exists ; however, practically all certifying bodies maintain a code of ethics independent of legislation, that they expect all members to abide by or risk expulsion. = = = salaries and workforce statistics = = = the total number of engineers employed in the u. s. in 2015 was roughly 1. 6 million. of these, 278, 340 were mechanical engineers ( 17. 28 % ), the largest discipline by size. in 2012, the median annual income of mechanical engineers in the u. s. workforce was $ 80, 580. the median income was highest when working for the government ( $ 92, 030 ), and lowest in education ( $ 57, 090 ). in 2014, the total number of mechanical engineering jobs was projected to grow 5 % over the next decade. as of 2009, the average starting salary was $ 58, 800 with a bachelor ' s degree. = = subdisciplines = = the field of mechanical engineering can be thought of as a collection of many mechanical engineering science disciplines. several of these subdisciplines which are typically taught at the undergraduate level are listed below, with a brief explanation and the most common application of each. some of these subdisciplines are unique to mechanical engineering, while others are a combination of mechanical engineering and one or more other disciplines. most work that a mechanical engineer does uses skills and techniques from several of these subdisciplines, as well as specialized subdisciplines. specialized subdisciplines, as used in this article, are more likely to be the subject of graduate studies or on - the - job training than undergraduate research. several specialized subdisciplines are discussed in this section. = = = mechanics = = = mechanics is, in the most general sense, the study of forces and their effect upon matter. typically, engineering mechanics is used to analyze and predict the acceleration and deformation ( both elastic and plastic ) of objects under known forces ( also called loads ) or stresses. subdisciplines of mechanics include statics, the study of non - moving bodies under known loads, how forces | https://en.wikipedia.org/wiki/Mechanical_engineering |
we show that the fifth and the eighth busemann - petty problems have positive solutions for bodies that are sufficiently close to the euclidean ball in the banach - mazur distance. | arxiv:2101.08384 |
besides the magnetic lorentz force familiar from the hall effect in metals and semiconductors, there exists a mechanism for charging peculiar to superconductors that is caused by the pair - potential gradient ( ppg ). we incorporate it in the augmented quasiclassical equations of superconductivity with the lorentz force to study charging of an isolated vortex in an equilibrium s - wave type - ii superconductor. it is found that the ppg mechanism gives rise to charging concentrated within the core whose magnitude at the core center can be 10 to 100 times larger than that caused by the lorentz force. our detailed calculations on the spatial, temperature, and magnetic - penetration - depth dependences of the vortex - core charge reveal that the ppg mechanism contributes dominantly to the core charging of the isolated vortex over a wide parameter range. the two mechanisms are also found to work additively at the core center for the present model with an isotropic fermi surface. | arxiv:1706.02449 |
we obtain a variety of series and integral representations of the digamma function $\psi(a)$. these in turn provide representations of the evaluations $\psi(p/q)$ at rational argument and for the polygamma function $\psi^{(j)}$. the approach is through a limit definition of the zeroth stieltjes constant $\gamma_0(a) = -\psi(a)$. several other results are obtained, including product representations for $\exp[\gamma_0(a)]$ and for the gamma function $\Gamma(a)$. in addition, we present series representations in terms of trigonometric integrals ci and si for $\psi(a)$ and the euler constant $\gamma = -\psi(1)$. | arxiv:1008.0040 |
we investigate atmospheric properties of 35 stable rrab stars that possess the full ranges of period, light amplitude, and metal abundance found in galactic rr lyrae stars. our results are derived from several thousand echelle spectra obtained over several years with the du pont telescope of las campanas observatory. radial velocities of metal lines and the halpha line were used to construct curves of radial velocity versus pulsation phase. from these we estimated radial velocity amplitudes for metal lines ( formed near the photosphere ) and halpha doppler cores ( formed at small optical depths ). we also measured halpha emission fluxes when they appear during primary light rises. spectra shifted to rest wavelengths, binned into small phase intervals, and coadded were used to perform model atmospheric and abundance analyses. the derived metallicities and those of some previous spectroscopic surveys were combined to produce a new calibration of the layden abundance scale. we then divided our rrab sample into metal - rich ( disk ) and metal - poor ( halo ) groups at [ fe / h ] = - 1. 0. the atmospheres of rrab families, so defined, differ with respect to ( a ) peak strength of halpha emission flux, ( b ) halpha radial velocity amplitude, ( c ) dynamical gravity, ( d ) stellar radius variation, ( e ) secondary acceleration during the photometric " bump " that precedes minimum light, and ( g ) duration of halpha line - doubling. we also detected halpha line - doubling during the bump in the metal - poor family, but not in the metal - rich one. though all rrab probably are core helium - burning horizontal branch stars, the metal - rich group appear to be a species sui generis. | arxiv:1611.02368 |
we examine a chain of periodic arrays of 4 quantum spins with magnitudes of 1/2, 1, 3/2 and 1. there are four kinds of nearest-neighbour exchange parameters among them. we choose two independent parameters for concreteness: one represents the ratio of typical exchange parameters, and the other represents a distortion. we determine the phase diagram of the ground state in the parameter space. the phase boundaries appear as gapless lines which separate gapful disordered phases. they are determined by the gapless equation which was previously derived by mapping a general periodic spin chain to the nonlinear $\sigma$ model. | arxiv:cond-mat/0003274 |
artificial intelligence explanations make complex predictive models more comprehensible. effective explanations, however, should also anticipate and mitigate possible misinterpretations, e.g., arising when users infer incorrect information that is not explicitly conveyed. to this end, we propose complementary explanations, a novel method that pairs explanations to compensate for their respective limitations. a complementary explanation adds insights that clarify potential misconceptions stemming from the primary explanation while ensuring their coherence and avoiding redundancy. we also introduce a framework for designing and evaluating complementary explanation pairs based on pertinent qualitative properties and quantitative metrics. applying our approach allows us to construct complementary explanations that minimise the chance of their misinterpretation. | arxiv:2503.00303 |
node embedding learns a low-dimensional representation for each node in the graph. recent progress on node embedding shows that proximity matrix factorization methods gain superb performance and scale to large graphs with millions of nodes. existing approaches first define a proximity matrix and then learn the embeddings that fit the proximity by matrix factorization. most existing matrix factorization methods adopt the same proximity for different tasks, while it is observed that different tasks and datasets may require different proximity, limiting their representation power. motivated by this, we propose {\em lemane}, a framework with trainable proximity measures, which can be learned to best suit the datasets and tasks at hand automatically. our method is end-to-end, which incorporates differentiable svd in the pipeline so that the parameters can be trained via backpropagation. however, this learning process is still expensive on large graphs. to improve the scalability, we train proximity measures only on carefully subsampled graphs, and then apply standard proximity matrix factorization on the original graph using the learned proximity. note that computing the learned proximities for each pair is still expensive for large graphs, and existing techniques for computing proximities are not applicable to the learned proximities. thus, we present generalized push techniques to make our solution scalable to large graphs with millions of nodes. extensive experiments show that our proposed solution outperforms existing solutions on both link prediction and node classification tasks on almost all datasets. | arxiv:2106.05476 |
entanglement entropy is a statistical entropy measuring information loss due to coarse-graining corresponding to a spatial division of a system. in this paper we construct a thermodynamics (entanglement thermodynamics) which includes the entanglement entropy as the entropy variable, for a massless scalar field in minkowski, schwarzschild and reissner-nordström spacetimes to understand the statistical origin of black-hole thermodynamics. it is shown that the entanglement thermodynamics in minkowski spacetime differs significantly from black-hole thermodynamics. on the contrary, the entanglement thermodynamics in schwarzschild and reissner-nordström spacetimes has close relevance to black-hole thermodynamics. | arxiv:gr-qc/9802028 |
this work is motivated by the problem of finding locally compact group topologies for piecewise full groups (a.k.a. topological full groups). we determine that any piecewise full group that is locally compact in the compact-open topology on the group of self-homeomorphisms of the cantor set must be uniformly discrete, in a precise sense that we introduce here. uniformly discrete groups of self-homeomorphisms of the cantor set are in particular countable, locally finite, residually finite and discrete in the compact-open topology. the resulting piecewise full groups form a subclass of the ample groups introduced by krieger. we determine the structure of these groups by means of their bratteli diagrams and associated dimension ranges ($k_0$ groups). we show through an example that not all uniformly discrete piecewise full groups are subgroups of the "obvious" ones, namely, piecewise full groups of finite groups. | arxiv:2005.08167 |
motivated by gray's work on tube formulae for complex submanifolds of complex projective space equipped with the fubini-study metric, riemannian foliations of projective space are studied. we prove that there are no complex riemannian foliations of any open subset of $\mathbb{P}^n$ of codimension one. as a consequence there is no riemannian foliation of the projective plane by riemann surfaces, even locally. we determine how a complex submanifold may arise as an exceptional leaf of a non-trivial singular riemannian foliation of maximal dimension. gray's tube formula is applied to obtain a volume bound for certain holomorphic curves of complex quadrics. | arxiv:1202.5989 |
we introduce a logic for reasoning about evidence, that essentially views evidence as a function from prior beliefs ( before making an observation ) to posterior beliefs ( after making the observation ). we provide a sound and complete axiomatization for the logic, and consider the complexity of the decision problem. although the reasoning in the logic is mainly propositional, we allow variables representing numbers and quantification over them. this expressive power seems necessary to capture important properties of evidence | arxiv:1407.7185 |
accurate and efficient object detection is crucial for the safe and efficient operation of earth-moving equipment in mining. traditional 2d image-based methods face limitations in dynamic and complex mine environments. to overcome these challenges, 3d object detection using point cloud data has emerged as a comprehensive approach. however, training models for mining scenarios is challenging due to sensor height variations, viewpoint changes, and the need for diverse annotated datasets. this paper presents novel contributions to address these challenges. we introduce a synthetic dataset, simmining 3d [1], specifically designed for 3d object detection in mining environments. the dataset captures objects and sensors positioned at various heights within mine benches, accurately reflecting authentic mining scenarios. an automatic annotation pipeline through a ros interface reduces manual labor and accelerates dataset creation. we propose evaluation metrics accounting for sensor-to-object height variations and point cloud density, enabling accurate model assessment in mining scenarios. real data tests validate our model's effectiveness in object prediction. our ablation study emphasizes the importance of altitude and height variation augmentations in improving accuracy and reliability. the publicly accessible synthetic dataset [1] serves as a benchmark for supervised learning and advances object detection techniques in mining, with complimentary pointwise annotations for each scene. in conclusion, our work bridges the gap between synthetic and real data, addressing the domain shift challenge in 3d object detection for mining. we envision robust object detection systems enhancing safety and efficiency in mining and related domains. | arxiv:2312.06113 |
this article has been withdrawn | arxiv:cond-mat/0203122 |
with the ever increasing demand for screening millions of prospective "novel coronavirus" or covid-19 cases, and due to the emergence of high false negatives in the commonly used pcr tests, the necessity for probing an alternative simple screening mechanism of covid-19 using radiological images (like chest x-rays) assumes importance. in this scenario, machine learning (ml) and deep learning (dl) offer fast, automated, effective strategies to detect abnormalities and extract key features of the altered lung parenchyma, which may be related to specific signatures of the covid-19 virus. however, the available covid-19 datasets are inadequate to train deep neural networks. therefore, we propose a new concept called domain extension transfer learning (detl). we employ detl, with pre-trained deep convolutional neural network, on a related large chest x-ray dataset that is tuned for classifying between four classes \textit{viz.} $normal$, $pneumonia$, $other\_disease$, and $covid-19$. a 5-fold cross validation is performed to estimate the feasibility of using chest x-rays to diagnose covid-19. the initial results show promise, with the possibility of replication on bigger and more diverse data sets. the overall accuracy was measured as $90.13\% \pm 0.14$. in order to get an idea about the covid-19 detection transparency, we employed the concept of gradient class activation map (grad-cam) for detecting the regions where the model paid more attention during the classification. this was found to strongly correlate with clinical findings, as validated by experts. | arxiv:2004.10507 |
the massive deployment of machine learning ( ml ) models raises serious concerns about data protection. privacy - enhancing technologies ( pets ) offer a promising first step, but hard challenges persist in achieving confidentiality and differential privacy in distributed learning. in this paper, we describe a novel, regulation - compliant data protection technique for the distributed training of ml models, applicable throughout the ml life cycle regardless of the underlying ml architecture. designed from the data owner ' s perspective, our method protects both training data and ml model parameters by employing a protocol based on a quantized multi - hash data representation hash - comb combined with randomization. the hyper - parameters of our scheme can be shared using standard secure multi - party computation protocols. our experimental results demonstrate the robustness and accuracy - preserving properties of our approach. | arxiv:2406.19418 |
this paper proposes deep hyperalignment (dha) as a regularized, deep extension, scalable hyperalignment (ha) method, which is well-suited for applying functional alignment to fmri datasets with nonlinearity, high-dimensionality (broad roi), and a large number of subjects. unlike previous methods, dha is not limited by a restricted fixed kernel function. further, it uses a parametric approach, rank-$m$ singular value decomposition (svd), and stochastic gradient descent for optimization. therefore, dha has a suitable time complexity for large datasets, and dha does not require the training data when it computes the functional alignment for a new subject. experimental studies on multi-subject fmri analysis confirm that the dha method achieves superior performance to other state-of-the-art ha algorithms. | arxiv:1710.03923 |
we continue the study of the computational complexity of differentially private pac learning and how it is situated within the foundations of machine learning. a recent line of work uncovered a qualitative equivalence between the private pac model and littlestone ' s mistake - bounded model of online learning, in particular, showing that any concept class of littlestone dimension $ d $ can be privately pac learned using $ \ mathrm { poly } ( d ) $ samples. this raises the natural question of whether there might be a generic conversion from online learners to private pac learners that also preserves computational efficiency. we give a negative answer to this question under reasonable cryptographic assumptions ( roughly, those from which it is possible to build indistinguishability obfuscation for all circuits ). we exhibit a concept class that admits an online learner running in polynomial time with a polynomial mistake bound, but for which there is no computationally - efficient differentially private pac learner. our construction and analysis strengthen and generalize those of bun and zhandry ( tcc 2016 - a ), who established such a separation between private and non - private pac learning. | arxiv:2402.11119 |
we study the unknown coupling constants that appear at order $ p ^ 4 $ in the chiral perturbation theory analysis of $ k \ to \ pi \ gamma ^ * \ to \ pi l ^ + l ^ - $, $ k ^ { + - } \ to \ pi ^ { + - } \ gamma \ gamma $ and $ k \ to \ pi \ pi \ gamma $ decays. to that end, we compute the chiral realization of the $ \ delta s \, = \, 1 $ hamiltonian in the framework of the $ 1 / n _ c $ - expansion of the low - energy action. the phenomenological implications are also discussed. | arxiv:hep-ph/9209231 |
in this paper, we describe all finite wajsberg algebras of order n < = 9. | arxiv:1905.05755 |
text - to - sql models can generate a list of candidate sql queries, and the best query is often in the candidate list, but not at the top of the list. an effective re - rank method can select the right sql query from the candidate list and improve the model ' s performance. previous studies on code generation automatically generate test cases and use them to re - rank candidate codes. however, automatic test case generation for text - to - sql is an understudied field. we propose an automatic test case generation method that first generates a database and then uses llms to predict the ground truth, which is the expected execution results of the ground truth sql query on this database. to reduce the difficulty for llms to predict, we conduct experiments to search for ways to generate easy databases for llms and design easy - to - understand prompts. based on our test case generation method, we propose a re - rank method to select the right sql query from the candidate list. given a candidate list, our method can generate test cases and re - rank the candidate list according to their pass numbers on these test cases and their generation probabilities. the experiment results on the validation dataset of spider show that the performance of some state - of - the - art models can get a 3. 6 \ % improvement after applying our re - rank method. | arxiv:2401.02115 |
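A minimal sketch of the re-ranking recipe outlined in the abstract above, assuming the synthetic databases are materialized as SQLite files and the LLM-predicted expected results are available as lists of rows; the exact scoring and tie-breaking used by the authors may differ.

```python
import sqlite3

def execute(sql: str, db_path: str):
    """Run a candidate query against a synthetic SQLite database."""
    try:
        with sqlite3.connect(db_path) as conn:
            return sorted(conn.execute(sql).fetchall())
    except sqlite3.Error:
        return None  # invalid candidates simply fail the test case

def rerank(candidates, test_cases):
    """
    candidates: list of (sql, generation_probability)
    test_cases: list of (db_path, expected_rows), where expected_rows is the
                LLM-predicted ground-truth result for that database.
    Rank by number of passed test cases, breaking ties by probability.
    """
    def score(item):
        sql, prob = item
        passed = sum(
            execute(sql, db) == sorted(expected)
            for db, expected in test_cases
        )
        return (passed, prob)

    return sorted(candidates, key=score, reverse=True)
```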
the desert of the real. external links : full text, translated to english by andrew hurley ; spanish audio of " on rigor in science ", read by j. l. borges. | https://en.wikipedia.org/wiki/On_Exactitude_in_Science |
franz lemmermeyer ' s previous work laid the framework for a description of the arithmetic of pell conics, which is analogous to that of elliptic curves. he describes a group law on conics and conjectures the existence of an analogous tate - - shafarevich group with order the squared ideals of the narrow class group. in this article, we provide a cohomological definition of the tate - - shafarevich group and show that its order is as lemmermeyer conjectured. | arxiv:1712.08251 |
we establish krylov - safonov type h \ " older regularity theory for solutions to quite general discrete dynamic programming equations or, equivalently, discrete stochastic processes on random geometric graphs. such graphs arise for example from data clouds in graph - based machine learning. the results actually hold for functions satisfying pucci - type extremal inequalities, and thus we cover many examples including tug - of - war games on random geometric graphs. as an application we show that, under suitable assumptions, when the number of data points increases, the graph functions converge to a solution of a partial differential equation. | arxiv:2410.01642 |
we study quantum protocols among two distrustful parties. under the sole assumption of correctness - guaranteeing that honest players obtain their correct outcomes - we show that every protocol implementing a non - trivial primitive necessarily leaks information to a dishonest player. this extends known impossibility results to all non - trivial primitives. we provide a framework for quantifying this leakage and argue that leakage is a good measure for the privacy provided to the players by a given protocol. our framework also covers the case where the two players are helped by a trusted third party. we show that despite the help of a trusted third party, the players cannot amplify the cryptographic power of any primitive. all our results hold even against quantum honest - but - curious adversaries who honestly follow the protocol but purify their actions and apply a different measurement at the end of the protocol. as concrete examples, we establish lower bounds on the leakage of standard universal two - party primitives such as oblivious transfer. | arxiv:0902.4036 |
we study the relation between instantons and monopoles in the abelian gauge. first, we investigate the monopole in the multi - instanton solution in the continuum yang - mills theory using the polyakov gauge. at a large instanton density, the monopole trajectory becomes highly complicated, which can be regarded as a signal of monopole condensation. second, we study instantons and monopoles in the su ( 2 ) lattice gauge theory both in the maximally abelian ( ma ) gauge and in the polyakov gauge. using the $ 16 ^ 3 \ times 4 $ lattice, we find monopole dominance for instantons in the confinement phase even at finite temperatures. a linear - type correlation is found between the total monopole - loop length and the integral of the absolute value of the topological density ( the total number of instantons and anti - instantons ) in the ma gauge. we conjecture that instantons enhance the monopole - loop length and promote monopole condensation. | arxiv:hep-lat/9609033 |
as the development and use of artificial intelligence ( ai ) continues to grow, policymakers are increasingly grappling with the question of how to regulate this technology. the most far - reaching international initiative is the european union ( eu ) ai act, which aims to establish the first comprehensive, binding framework for regulating ai. in this article, we offer the first systematic analysis of non - state actor preferences toward international regulation of ai, focusing on the case of the eu ai act. theoretically, we develop an argument about the regulatory preferences of business actors and other non - state actors under varying conditions of ai sector competitiveness. empirically, we test these expectations using data from public consultations on european ai regulation. our findings are threefold. first, all types of non - state actors express concerns about ai and support regulation in some form. second, there are nonetheless significant differences across actor types, with business actors being less concerned about the downsides of ai and more in favor of lax regulation than other non - state actors. third, these differences are more pronounced in countries with stronger commercial ai sectors. our findings shed new light on non - state actor preferences toward ai regulation and point to challenges for policymakers balancing competing interests in society. | arxiv:2305.11523 |
recent studies to finalize the systematic error estimates on the measurement of the mass of the w boson at lep2 are reviewed. results including a new preliminary value from aleph are updated together with the world average, which is now 80. 426 + - 0. 034 gev / c2. the updated electroweak fit gives a 95 % c. l. upper limit on the mass of the higgs boson of 211 gev / c2. | arxiv:hep-ex/0305061 |
the x - ray emission from clusters of galaxies is one of the best observational probes to investigate the distribution of dark matter at intermediate and high redshifts. since the disposition of the intracluster plasma ( icp ) responsible for the emission is crucial to link x - ray properties to the global properties of the dark matter halos, we propose a semi - - analytical approach for the diffuse baryons. this comprises the following blocks : monte carlo merging histories to describe the dynamics of dark matter halos ; the central hydrostatic disposition for the icp ; conditions of shock, or of closely adiabatic compression at the boundary with the external gas, preheated by stellar energy feedbacks. from our model we predict the $ l - t $ correlation, consistent with the data in terms of shape and scatter. | arxiv:astro-ph/9804026 |
most algorithms for representation learning and link prediction in relational data have been designed for static data. however, the data they are applied to usually evolves with time, such as friend graphs in social networks or user interactions with items in recommender systems. this is also the case for knowledge bases, which contain facts such as ( us, has president, b. obama, [ 2009 - 2017 ] ) that are valid only at certain points in time. for the problem of link prediction under temporal constraints, i. e., answering queries such as ( us, has president,?, 2012 ), we propose a solution inspired by the canonical decomposition of tensors of order 4. we introduce new regularization schemes and present an extension of complex ( trouillon et al., 2016 ) that achieves state - of - the - art performance. additionally, we propose a new dataset for knowledge base completion constructed from wikidata, larger than previous benchmarks by an order of magnitude, as a new reference for evaluating temporal and non - temporal link prediction methods. | arxiv:2004.04926 |
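As a rough illustration of the approach in the abstract above, the sketch below scores a (subject, relation, object, timestamp) fact with a ComplEx-style product extended by a timestamp embedding, in the spirit of the order-4 decomposition it describes; the dimensions, initialization, and exact factorization used in the paper are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # embedding dimension (illustrative)

def embed(n):
    # Complex-valued embeddings stored as complex arrays.
    return rng.normal(size=(n, d)) + 1j * rng.normal(size=(n, d))

E = embed(1000)   # entities
R = embed(50)     # relations
T = embed(200)    # timestamps

def score(s, r, o, t):
    """Temporal ComplEx-style score: Re(<e_s, w_r * w_t, conj(e_o)>)."""
    return float(np.real(np.sum(E[s] * R[r] * T[t] * np.conj(E[o]))))
```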
we describe a method, based on jennifer nado ' s proposal for classification procedures as targets of conceptual engineering, that implements such procedures by prompting a large language model. we apply this method, using data from the wikidata knowledge graph, to evaluate stipulative definitions related to two paradigmatic conceptual engineering projects : the international astronomical union ' s redefinition of planet and haslanger ' s ameliorative analysis of woman. our results show that classification procedures built using our approach can exhibit good classification performance and, through the generation of rationales for their classifications, can contribute to the identification of issues in either the definitions or the data against which they are being evaluated. we consider objections to this method, and discuss implications of this work for three aspects of theory and practice of conceptual engineering : the definition of its targets, empirical methods for their investigation, and their practical roles. the data and code used for our experiments, together with the experimental results, are available in a github repository. | arxiv:2312.03749 |
x ) \ vert _ b \ leq c _ 1 ( m ) { \ rm hc } _ m ^ { \ frac { 1 } { m } } ( x ) $ ; ( 2 ) $ { \ rm hc } _ m ( h ( x \ times [ 0, 1 ] ) ) \ leq c _ 2 ( m ) { \ rm hc } _ m ( x ) $. a similar theorem can also be proven in the case when $ b $ is a metric space with a linear contractibility function and applies to all compact sets $ x $ with a controllably small $ { \ rm hc } _ m $ in riemannian manifolds $ m ^ n $ with the sectional curvature bounded below, the volume bounded below by a positive number, and the diameter bounded above. | arxiv:2304.02709 |
understanding urban form is crucial for sustainable urban planning and enhancing quality of life. this study presents a data - driven framework to systematically identify and compare urban typologies across geographically and culturally distinct cities. using open - source geospatial data from openstreetmap, we extracted multidimensional features related to topography, multimodality, green spaces, and points of interest for the cities of lausanne, switzerland, and philadelphia, usa. a grid - based approach was used to divide each city into basic spatial units ( bsu ), and gaussian mixture models ( gmm ) were applied to cluster bsus based on their urban characteristics. the results reveal coherent and interpretable urban typologies within each city, with some cluster types emerging across both cities despite their differences in scale, density, and cultural context. comparative analysis showed that adapting the grid size to each city ' s morphology improves the detection of shared typologies. simplified clustering based solely on network degree centrality further demonstrated that meaningful structural patterns can be captured even with minimal feature sets. our findings suggest the presence of functionally convergent urban forms across continents and highlight the importance of spatial scale in cross - city comparisons. the framework offers a scalable and transferable approach for urban analysis, providing valuable insights for planners and policymakers aiming to enhance walkability, accessibility, and well - being. limitations related to data completeness and feature selection are discussed, and directions for future work - - including the integration of additional data sources and human - centered validation - - are proposed. | arxiv:2505.02938 |
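A minimal sketch of the clustering stage described in the abstract above: standardized per-BSU features are fit with Gaussian mixtures and the number of typologies is selected by BIC. The feature set, grid size, and model-selection criterion are illustrative assumptions rather than the paper's exact configuration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

# Illustrative feature matrix: one row per basic spatial unit (BSU), columns
# standing in for e.g. elevation, green-space share, POI density, centrality.
X = np.random.default_rng(0).random((500, 4))

X_std = StandardScaler().fit_transform(X)

# Fit Gaussian mixtures for several component counts and keep the best by BIC.
best = min(
    (GaussianMixture(n_components=k, random_state=0).fit(X_std) for k in range(2, 9)),
    key=lambda g: g.bic(X_std),
)
labels = best.predict(X_std)  # cluster label = urban typology per BSU
```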
in the past few years, " metaverse " and " non - fungible tokens ( nft ) " have become buzzwords, and the prices of related assets have exhibited large fluctuations. are those characteristic of a speculative bubble? in this paper, we attempt to answer this question, and better understand the underlying economic dynamics. we look at decentraland, a virtual world platform where land parcels are sold as nft collections. we find that initially, land prices followed traditional real estate pricing models - in particular, value decreased with distance from the most desirable areas - suggesting decentraland behaved much like a virtual city. however, these real estate pricing models stopped applying when both the metaverse and nfts gained increased popular attention and enthusiasm in 2021, suggesting a new driving force for the underlying asset prices. at that time, following a substantial rise in nft market values, short - term holders of multiple parcels began to take major selling positions in the decentraland market, which hints that, rather than building a metaverse community, early decentraland investors preferred to cash out when land valuations became inflated. our analysis also shows that while the majority of buyers are new entrants to the market ( many of whom joined during the bubble ), liquidity ( i. e., parcels ) was mostly provided by early adopters selling, which caused stark differences in monetary gains. early adopters made money - more than 10, 000 usd on average per parcel sold - but users who joined later typically made no profit or even incurred losses in the order of 1, 000 usd per parcel. unlike established markets such as financial and real estate markets, newly emergent digital marketplaces are mostly self - regulated. as a result, the significant financial risks we identify indicate a strong need for establishing appropriate standards of business conduct and improving user awareness. | arxiv:2501.09601 |
considering the importance of building a good visual dialog ( vd ) questioner, many researchers study the topic under a q - bot - a - bot image - guessing game setting, where the questioner needs to raise a series of questions to collect information of an undisclosed image. although progress has been made in supervised learning ( sl ) and reinforcement learning ( rl ), issues still exist. firstly, previous methods do not provide explicit and effective guidance for the questioner to generate visually related and informative questions. secondly, the effect of rl is hampered by an incompetent component, i. e., the guesser, who makes image predictions based on the generated dialogs and assigns rewards accordingly. to enhance the vd questioner : 1 ) we propose a related entity enhanced questioner ( reeq ) that generates questions under the guidance of related entities and learns entity - based questioning strategy from human dialogs ; 2 ) we propose an augmented guesser ( augg ) that is strong and is optimized for the vd setting especially. experimental results on the visdial v1. 0 dataset show that our approach achieves state - of - the - art performance on both the image - guessing task and question diversity. a human study further proves that our model generates more visually related, informative and coherent questions. | arxiv:2109.02297 |
in this paper, we propose a new neural network architecture based on the h2 matrix. even though networks with h2 - inspired architectures already exist, our approach is designed to reduce memory costs and improve performance by taking into account the sparsity template of the h2 matrix. in numerical comparison with alternative neural networks, including the known h2 - based ones, our architecture proved beneficial in terms of performance, memory, and scalability. | arxiv:2212.12899 |
in this work, we study a natural nonparametric estimator of the transition probability matrices of a finite controlled markov chain. we consider an offline setting with a fixed dataset, collected using a so - called logging policy. we develop sample complexity bounds for the estimator and establish conditions for minimaxity. our statistical bounds depend on the logging policy through its mixing properties. we show that achieving a particular statistical risk bound involves a subtle and interesting trade - off between the strength of the mixing properties and the number of samples. we demonstrate the validity of our results under various examples, such as ergodic markov chains, weakly ergodic inhomogeneous markov chains, and controlled markov chains with non - stationary markov, episodic, and greedy controls. lastly, we use these sample complexity bounds to establish concomitant ones for offline evaluation of stationary markov control policies. | arxiv:2211.07092 |
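For concreteness, a plug-in version of the natural nonparametric estimator mentioned in the abstract above: empirical transition frequencies computed from offline (state, action, next state) triples collected under the logging policy. The fallback for unvisited state-action pairs is an assumption made only so the sketch returns valid distributions.

```python
import numpy as np

def estimate_transition_matrices(trajectory, n_states, n_actions):
    """
    Empirical (maximum-likelihood) estimate of P(s' | s, a) from an offline
    dataset of (state, action, next_state) triples. This is the plug-in
    estimator; the paper's contribution concerns its risk, not its definition.
    """
    counts = np.zeros((n_actions, n_states, n_states))
    for s, a, s_next in trajectory:
        counts[a, s, s_next] += 1
    totals = counts.sum(axis=2, keepdims=True)
    with np.errstate(invalid="ignore", divide="ignore"):
        # Unvisited (s, a) pairs get a uniform placeholder distribution.
        P_hat = np.where(totals > 0, counts / totals, 1.0 / n_states)
    return P_hat  # shape (n_actions, n_states, n_states)
```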
in 1952, j. h. braun claimed to have established a formula giving a lower bound for certain partitions of sets of integers into weakly sum - free classes. however, no proof or supporting construction was published at that time. in today ' s terminology, that claim was equivalent to giving a formulaic lower bound for the weak schur number $ ws ( s ) $. $ ws ( s ) $ is the maximum number such that there exists a weak schur partition of the integers from 1 to $ ws ( s ) $, into $ s $ subsets. in a weak schur partition of a set of integers, there can be no three distinct members $ a $, $ b $ and $ c $ in any subset, such that $ a + b = c $. an iterative construction described in this paper results in a similar formulaic lower bound. although different from that given by braun, it reproduces the result $ ws ( 6 ) \ ge 554 $ implied by his formula, and exceeds it for all larger values of $ s $. various starting points can be used as a basis for the iterations. this result itself is no longer remarkable : it has been proven elsewhere that $ ws ( 6 ) \ ge 642 $. even so, it is hoped that the formula and its underlying construction may nevertheless be of interest to those interested in weak schur partitions and / or the closely - related linear ramsey graphs. | arxiv:2005.11707 |
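A small utility makes the definition in the abstract above concrete: a partition is a weak Schur partition if every class is weakly sum-free, i.e. no class contains distinct a, b and c with a + b = c. The example partition of 1..8 into two classes (witnessing ws(2) >= 8) is a classical one, not taken from the paper.

```python
from itertools import combinations

def is_weak_schur_partition(parts):
    """Check that no class contains distinct a, b, c with a + b = c."""
    for part in parts:
        s = set(part)
        for a, b in combinations(sorted(s), 2):  # a < b, so a != b
            if a + b in s:
                return False
    return True

# ws(2) >= 8: partition of {1, ..., 8} into two weakly sum-free classes.
print(is_weak_schur_partition([{1, 2, 4, 8}, {3, 5, 6, 7}]))  # True
```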
for the d = 5 majorana neutrino mass operator to have a see - saw ultraviolet completion that is viable up to the planck scale, the see - saw scale is bounded above due to triviality limits on the see - saw couplings. for supersymmetric see - saw models, with realistic neutrino mass textures, we compare constraints on the see - saw scale from triviality bounds, with those arising from experimental limits on induced charged - lepton flavour violation, for both the cmssm and for models with split supersymmetry. | arxiv:hep-ph/0605144 |
accreting black holes commonly exhibit hard x - ray emission, originating from a region of hot plasma near the central engine referred to as the corona. the origin and geometry of the corona are poorly understood, and models invoking either inflowing or outflowing material ( or both ) can successfully explain only parts of the observed phenomenology. in particular, recent works indicate that the time - averaged and variability properties might originate in different regions of the corona. in this paper we present a model designed to move beyond the lamp post paradigm, with the goal of accounting for the vertical extent of the corona. in particular, we highlight the impact of including self - consistently a second lamp post, mimicking for example an extended jet base. we fully include the effect that the second source has on the time - dependent disk ionization, reflection spectrum, and reverberation lags. we also present an application of this new model to nicer observations of the x - ray binary maxi j1820 + 070 near its hard - to - soft state transition. we demonstrate that in these observations, a vertically extended corona can capture both spectral and timing properties, while a single lamp post model cannot. in this scenario, the illumination responsible for the time - averaged spectrum originates close to the black hole, while the variability is likely associated with the ballistic jet. | arxiv:2305.05039 |
the spatio - temporal aspects of the transition to turbulence are considered in the case of a boundary layer flow developing above a flat plate exposed to free - stream turbulence. combining results on the receptivity to free - stream turbulence with the nonlinear concept of a transition threshold, a physically motivated model suggests a spatial distribution of spot nucleation events. to describe the evolution of turbulent spots a probabilistic cellular automaton is introduced, with all parameters directly fitted from numerical simulations of the boundary layer. the nucleation rates are then combined with the cellular automaton model, yielding excellent quantitative agreement with the statistical characteristics for different free - stream turbulence levels. we thus show how the recent theoretical progress on transitional wall - bounded flows can be extended to the much wider class of spatially developing boundary - layer flows. | arxiv:1604.07235 |
we study a schwarz - pick type inequality for the schur - agler class $ sa ( b _ { \ delta } ) $. in our operator theoretical approach, von neumann ' s inequality for a class of generic tuples of $ 2 \ times 2 $ matrices plays an important role rather than holomorphy. in fact, the class $ s _ { 2, gen } ( b _ { \ delta } ) $ consisting of functions that satisfy the inequality for those matrices enjoys \ begin { equation * } d _ { \ mathbb { d } } ( f ( z ), f ( w ) ) \ le d _ { \ delta } ( z, w ) \ ; \ ; ( z, w \ in b _ { \ delta }, f \ in s _ { 2, gen } ( b _ { \ delta } ) ). \ end { equation * } here, $ d _ { \ delta } $ is a function defined by a matrix $ \ delta $ of abstract functions. later, we focus on the case when $ \ delta $ is a matrix of holomorphic functions. we use the pseudo - distance $ d _ { \ delta } $ to give a sufficient condition on a diagonalizable commuting tuple $ t $ acting on $ \ mathbb { c } ^ 2 $ for $ b _ { \ delta } $ to be a complete spectral domain for $ t $. we apply this sufficient condition to generalizing von neumann ' s inequalities studied by drury and by hartz - richter - shalit. | arxiv:2306.08694 |
we investigate the dynamics of bipartite entanglement after the sudden junction of two leads in interacting integrable models. by combining the quasiparticle picture for the entanglement spreading with generalised hydrodynamics we derive an analytical prediction for the dynamics of the entanglement entropy between a finite subsystem and the rest. we find that the entanglement rate between the two leads depends only on the physics at the interface and differs from the rate of exchange of thermodynamic entropy. this contrasts with the behaviour in free or homogeneous interacting integrable systems, where the two rates coincide. | arxiv:1903.00467 |
we investigate the classification of quasihomogeneous polynomials in two variables with real coefficients under semialgebraic bi - lipschitz equivalence in a neighborhood of the origin in $ { \ mathbb r } ^ 2 $. building on the work of birbrair, fernandes, and panazzolo, our approach is based on reducing the problem to the lipschitz classification of associated single - variable polynomial functions, called height functions. we establish conditions under which semialgebraic bi - lipschitz equivalence of quasihomogeneous polynomials corresponds to the lipschitz equivalence of their height functions. to achieve this, we develop the theory of $ \ beta $ - transforms and inverse $ \ beta $ - transforms. as an application, we examine a family of quasihomogeneous polynomials previously used by henry and parusi \ ' nski to show that the bi - lipschitz equivalence of analytic function germs $ ( { \ mathbb r } ^ 2, 0 ) \ rightarrow ( { \ mathbb r }, 0 ) $ admits continuous moduli. our results show that semialgebraic bi - lipschitz equivalence of real quasihomogeneous polynomials in two variables also admits continuous moduli. | arxiv:2503.06022 |
we analyze the theory and phenomenology of anomalous global chiral symmetries in the presence of an extra dimension. we propose a simple extension of the standard model in 5d whose signatures closely resemble those of supersymmetry with gauge mediation, and we suggest a novel scalar dark matter candidate. | arxiv:0901.2933 |
we prove theoretically that certain strongly correlated kondo insulators are topological crystalline insulators with nontrivial topology protected by crystal symmetries. in particular, we find that smb $ _ 6 $ is such a material. in addition to a nontrivial z $ _ 2 $ topological index protected by time reversal symmetry, smb $ _ 6 $ also has nontrival mirror chern numbers protected by mirror symmetries. on the $ ( 100 ) $ surface of smb $ _ 6 $, the nontrivial mirror chern numbers do not generate additional surface states beyond those predicted by the z $ _ 2 $ topological index. however, on the $ ( 110 ) $ surface, two more surface dirac points are predicted. remarkably, we find that for smb $ _ 6 $ both the z $ _ 2 $ topological index and the mirror chern numbers are independent of microscopic details, which enables us to obtain surface state properties that are universal. | arxiv:1307.7191 |
the ads ( 4 ) / cft ( 3 ) duality is a new example of an integrable and exactly solvable ads / cft system. there is, however, a puzzling mismatch between the number of degrees of freedom used in the exact solution ( 4b + 4f scattering states ) and 8b + 8f transverse oscillation modes of critical superstring theory. we offer a resolution of this puzzle by arguing that half of the string modes dissolve in the continuum of two - particle states once alpha ' corrections are taken into account. we also check that the conjectured exact s - matrix of ads ( 4 ) / cft ( 3 ) agrees with the tree - level worldsheet calculation. | arxiv:0903.1747 |
with the widespread use of communication technologies, cryptosystems are critical to guarantee security over open networks such as the internet. pseudo - random number generators ( prngs ) are fundamental in cryptosystems and information hiding schemes. one class of existing chaos - based prngs uses chaotic iteration schemes. in prior literature, the iterate function is just the vectorial boolean negation. in this paper, we propose a method that uses graphs with strongly connected components as a selection criterion for the chaotic iterate function. in order to face the challenge of using the proposed chaotic iterate functions in a prng, these prngs are subjected to the nist statistical battery of tests, which is well known in the area of cryptography. | arxiv:1112.0950 |
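A rough sketch of the selection criterion in the abstract above, under the usual chaotic-iterations convention that one component of the Boolean state is updated per step: build the directed iteration graph of a candidate map and test it for strong connectivity. The graph construction and the use of networkx are illustrative assumptions, not the paper's implementation.

```python
from itertools import product
import networkx as nx

def iteration_graph(f, n):
    """
    Directed graph on the 2^n Boolean states with edges x -> x', where x'
    replaces a single component i of x by f(x)[i]. Strong connectivity of
    this graph is used here as the selection criterion for the iterate map.
    """
    g = nx.DiGraph()
    for bits in product((0, 1), repeat=n):
        fx = f(bits)
        for i in range(n):
            nxt = list(bits)
            nxt[i] = fx[i]
            g.add_edge(bits, tuple(nxt))
    return g

# Vectorial boolean negation on 3 bits, the iterate used in prior literature.
negation = lambda bits: tuple(1 - b for b in bits)
print(nx.is_strongly_connected(iteration_graph(negation, 3)))  # True
```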
we present a framework for generating natural language description from structured data such as tables ; the problem comes under the category of data - to - text natural language generation ( nlg ). modern data - to - text nlg systems typically employ end - to - end statistical and neural architectures that learn from a limited amount of task - specific labeled data, and therefore, exhibit limited scalability, domain - adaptability, and interpretability. unlike these systems, ours is a modular, pipeline - based approach, and does not require task - specific parallel data. it rather relies on monolingual corpora and basic off - the - shelf nlp tools. this makes our system more scalable and easily adaptable to newer domains. our system employs a 3 - staged pipeline that : ( i ) converts entries in the structured data to canonical form, ( ii ) generates simple sentences for each atomic entry in the canonicalized representation, and ( iii ) combines the sentences to produce a coherent, fluent and adequate paragraph description through sentence compounding and co - reference replacement modules. experiments on a benchmark mixed - domain dataset curated for paragraph description from tables reveals the superiority of our system over existing data - to - text approaches. we also demonstrate the robustness of our system in accepting other popular datasets covering diverse data types such as knowledge graphs and key - value maps. | arxiv:1810.02889 |
in this paper, we address an instance of a uniquely solvable mean - field game with a common noise whose corresponding counterpart without common noise has several equilibria. we study the selection problem for this mean - field game without common noise via three approaches. a common approach is to select, amongst all the equilibria, those yielding the minimal cost for the representative player. another one is to select equilibria that are included in the support of the zero noise limit of the mean - field game with common noise. a last one is to select equilibria supported by the limit of the mean - field component of the corresponding $ n $ - player game as the number of players goes to infinity. the contribution of this paper is to show that, for the class under study, the last two approaches select the same equilibria, but the first approach selects another one. | arxiv:1808.09137 |
this paper focuses on the problem of minimizing a locally lipschitz continuous function. motivated by the effectiveness of bregman gradient methods in training nonsmooth deep neural networks and the recent progress in stochastic subgradient methods for nonsmooth nonconvex optimization problems \ cite { bolte2021conservative, bolte2022subgradient, xiao2023adam }, we investigate the long - term behavior of stochastic bregman subgradient methods in such context, especially when the objective function lacks clarke regularity. we begin by exploring a general framework for bregman - type methods, establishing their convergence by a differential inclusion approach. for practical applications, we develop a stochastic bregman subgradient method that allows the subproblems to be solved inexactly. furthermore, we demonstrate how a single timescale momentum can be integrated into the bregman subgradient method with slight modifications to the momentum update. additionally, we introduce a bregman proximal subgradient method for solving composite optimization problems possibly with constraints, whose convergence can be guaranteed based on the general framework. numerical experiments on training nonsmooth neural networks are conducted to validate the effectiveness of our proposed methods. | arxiv:2404.17386 |
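As a minimal illustration of a Bregman subgradient step (not the paper's algorithm or its kernel choices), the sketch below uses the Shannon-entropy kernel on the probability simplex, for which the update x_{k+1} = argmin_y <g_k, y> + (1/eta) D_phi(y, x_k) has a closed multiplicative form.

```python
import numpy as np

def bregman_subgradient_step(x, g, eta):
    """
    One Bregman (mirror-descent-style) subgradient step on the probability
    simplex with the negative-entropy kernel: x+ is proportional to
    x * exp(-eta * g). Illustrative only; stochastic subgradients, momentum,
    and inexact subproblem solves from the paper are not modeled here.
    """
    y = x * np.exp(-eta * g)
    return y / y.sum()

# Tiny usage example on a 3-point simplex.
x = np.array([1 / 3, 1 / 3, 1 / 3])
g = np.array([1.0, 0.0, -1.0])  # a (sub)gradient estimate
print(bregman_subgradient_step(x, g, eta=0.5))
```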
segregation encodes information about society, such as social cohesion, mixing, and inequality. however, most past and current studies tackled socioeconomic ( se ) segregation by analyzing static aggregated mobility networks, often without considering further individual features beyond income and, most importantly, without distinguishing individual - level from location - based income. accessing individual - level income may help mapping macroscopic behavior into more granular mobility patterns, hence impacting segregation estimates. here we combine a mobile phone dataset of daily mobility flows across spanish districts stratified and adjusted by age, gender and income with census data of districts median income. we build mobility - based se assortativity matrices for multiple demographics and observe mobility patterns of three income groups with respect to location - based se classes. we find that se assortativity differs when isolating the mobility of specific income groups : we observe that groups prefer to visit areas with higher average income than their own, which we call preferential mobility. our analysis suggests substantial differences between weekdays and weekends se assortativity by age class, with weekends characterized by higher se assortativity. our modeling approach shows that the radiation model, which typically performs best at reproducing inter - municipal population mobility, best fits middle income and middle - aged flows, while performing worse on young and low income groups. our double - sided approach, focusing on assortativity patterns and mobility modeling, suggests that state of the art mobility models fail at capturing preferential mobility behavior. overall, our work indicates that mobility models considering the interplay of se preferential behavior, age and gender gaps may sensibly improve the state of the art models performance. | arxiv:2407.01799 |
new advancements in radio data post - processing are underway within the ska precursor community, aiming to facilitate the extraction of scientific results from survey images through a semi - automated approach. several of these developments leverage deep learning ( dl ) methodologies for diverse tasks, including source detection, object or morphology classification, and anomaly detection. despite substantial progress, the full potential of these methods often remains untapped due to challenges associated with training large supervised models, particularly in the presence of small and class - unbalanced labelled datasets. self - supervised learning has recently established itself as a powerful methodology to deal with some of the aforementioned challenges, by directly learning a lower - dimensional representation from large samples of unlabelled data. the resulting model and data representation can then be used for data inspection and various downstream tasks if a small subset of labelled data is available. in this work, we explored contrastive learning methods to learn suitable radio data representation from unlabelled images taken from the askap emu and sarao meerkat gps surveys. we evaluated trained models and the obtained data representation over smaller labelled datasets, also taken from different radio surveys, in selected analysis tasks : source detection and classification, and search for objects with peculiar morphology. for all explored downstream tasks, we reported and discussed the benefits brought by self - supervised foundational models built on radio data. | arxiv:2404.18462 |
we find a late times approximation for the syk spectral form factor from a large $ n $ steepest descent version of the path integral over two replica collective fields. main ingredients are a suitable uv regularization of the two replica kinetic operator, the property of its fourier transform and some spectral analysis of the four point function two replica ladder kernel. | arxiv:2102.01653 |
epidemics of influenza are major public health concerns. since influenza prediction always relies on weekly clinical or laboratory surveillance data, typically the weekly influenza - like illness ( ili ) rate series, accurate multi - step - ahead influenza prediction using ili series is of great importance, especially for potential coming influenza outbreaks. this study proposes a comprehensive learning particle swarm optimization based machine learning ( clpso - ml ) framework incorporating support vector regression ( svr ) and multilayer perceptron ( mlp ) for multi - step - ahead influenza prediction. a comprehensive examination and comparison of the performance and potential of three commonly used multi - step - ahead prediction modeling strategies, including the iterated strategy, direct strategy and multiple - input multiple - output ( mimo ) strategy, was conducted using the weekly ili rate series from both southern and northern china. the results show that : ( 1 ) the mimo strategy achieves the best multi - step - ahead prediction, and is potentially more adaptive for longer horizons ; ( 2 ) the iterated strategy demonstrates special potential for deriving the least time difference between the occurrence of the predicted peak value and the true peak value of an influenza outbreak ; ( 3 ) for ili in northern china, the svr model implemented with the mimo strategy performs best, and svr with the iterated strategy also shows remarkable performance, especially during outbreak periods ; while for ili in southern china, both svr and mlp models with the mimo strategy have competitive prediction performance. | arxiv:2110.14343 |
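A minimal sketch of the MIMO strategy discussed in the abstract above: a single multi-output regressor maps a window of past values to all future steps at once, avoiding the error accumulation of the iterated strategy. The lag length, horizon, network size, and the synthetic stand-in series are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_supervised(series, n_lags, horizon):
    """Turn a univariate series into (lag-window, next-h-values) pairs."""
    X, Y = [], []
    for t in range(n_lags, len(series) - horizon + 1):
        X.append(series[t - n_lags:t])
        Y.append(series[t:t + horizon])
    return np.array(X), np.array(Y)

series = np.sin(np.linspace(0, 20, 300))  # stand-in for a weekly ILI rate series
X, Y = make_supervised(series, n_lags=12, horizon=4)

# MIMO strategy: one model emits all 4 future steps in a single prediction.
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X, Y)
forecast = model.predict(series[-12:].reshape(1, -1))  # 4-step-ahead forecast
```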
we present a new family of asymptotically locally ads $ _ 5 $ squashed supersymmetric black hole solutions of fayet - iliopoulos gauged $ { \ cal n } = 2 $, $ d = 5 $ supergravity with two vector multiplets that have a natural uplift to type iib supergravity. our new family of black holes is characterized by three parameters, of which two control the horizon geometry while the third regulates the squashing at the boundary. we evaluate the main physical properties of the family of solutions using holographic renormalization and find that the entropy is independent of the squashing and it is reproduced by using the angular momentum and the page charges. in previously known solutions, page and holographic charges are equal, due to the vanishing of the chern - simons term that here, instead, is relevant. this result suggests that for asymptotically locally ads $ _ 5 $ solutions we should refer to the page charges to describe the thermodynamics of the system. | arxiv:1903.00021 |
we formulate feynman path integral on a non commutative plane using coherent states. the propagator for a free particle exhibits uv cut - off induced by the parameter of non commutativity. | arxiv:hep-th/0307217 |
we study the decay scenario of a codimension - 2 nhim in a three degrees of freedom hamiltonian system under increasing perturbation when the nhim loses its normal hyperbolicity. on one hand, we follow this decay in the poincar \ ' e map for the internal dynamics of the nhim. on the other hand, we also follow the decay in a time delay function calculated on a 2 - dimensional plane in the phase space of the system. in addition, we observe the role of tangential transient effects on the decaying nhim and their manifestation in the delay time indicator function. thereby we obtain ideas on how the decay of nhims and the tangential transient effects are encoded in indicator functions. as an example of demonstration, we use the motion of an electron in a perturbed magnetic dipole field. | arxiv:2501.02102 |
kagome lattices have emerged as an ideal platform for exploring various exotic quantum phenomena such as correlated topological phases, frustrated lattice geometry, unconventional charge density wave orders, chern quantum phases, superconductivity, etc. in particular, the vanadium based nonmagnetic kagome metals av3sb5 ( a = k, rb, and cs ) have seen a flurry of research interest due to the discovery of multiple competing orders. here, we report the discovery of a new ti based kagome metal ybti3bi4 and employ angle - resolved photoemission spectroscopy ( arpes ), magnetotransport in combination with density functional theory calculations to investigate its electronic structure. we reveal spectroscopic evidence of multiple flat bands arising from the kagome lattice of ti with predominant ti 3d character. through our calculations of the z2 indices, we have identified that the system exhibits topological nontriviality with surface dirac cones at the gamma point and a quasi two - dimensional dirac state at the k point which is further confirmed by our arpes measured band dispersion. these results establish ybti3bi4 as a novel platform for exploring the intersection of nontrivial topology, and electron correlation effects in this newly discovered ti based kagome lattice. | arxiv:2309.01176 |
large language models ( llms ) are increasingly deployed in large - scale online services, enabling sophisticated applications. however, the computational overhead of generating key - value ( kv ) caches in the prefill stage presents a major bottleneck, particularly for long - context inputs. prefix caching mitigates this issue by storing kv caches for reuse, reducing redundant computation. despite its advantages, prefix caching suffers from high latency due to the limited i / o bandwidth of storage devices, constraining inference efficiency. to address this challenge, we introduce cake, a novel kv cache loading system that optimally utilizes both computational and i / o resources in parallel. cake employs a bidirectional scheduling strategy that dynamically balances kv cache computation and loading, ensuring efficient resource utilization. additionally, cake incorporates an adaptive scheduling mechanism that seamlessly integrates with non - prefix caching requests, improving system throughput and adapting to fluctuating resource availability. through extensive evaluations across various hardware configurations, datasets, and storage conditions, cake achieves on average a 2. 6x reduction in time to first token ( ttft ) compared to compute - only and i / o - only methods. our findings highlight cake as an effective and practical solution for optimizing long - context llm inference, bridging the gap between computation and i / o efficiency in large - scale ai deployments. | arxiv:2410.03065 |
the true level crossing in the asymmetric quantum rabi model without any obvious symmetry can be exhibited in the energy spectrum if the qubit bias is a multiple of the cavity frequency, which should imply the existence of the hidden symmetry. in this work, within a bogoliubov operator approach, we can readily derive the symmetry operators associated with the hidden symmetry hierarchically for arbitrary multiples. the symmetry operators for small multiples in the literature can be extremely easily reproduced in our general scheme. in addition, a general parity operator is defined through the symmetry operator, which naturally includes the well - known parity operator of the symmetric model. we believe that the present approach can be straightforwardly extended to other asymmetric rabi models to find the relevant symmetry operators. | arxiv:2107.08937 |
the linearized - laplace approximation ( lla ) has been shown to be effective and efficient in constructing bayesian neural networks. it is theoretically compelling since it can be seen as a gaussian process posterior with the mean function given by the neural network ' s maximum - a - posteriori predictive function and the covariance function induced by the empirical neural tangent kernel. however, while its efficacy has been studied in large - scale tasks like image classification, it has not been studied in sequential decision - making problems like bayesian optimization where gaussian processes - - with simple mean functions and kernels such as the radial basis function - - are the de - facto surrogate models. in this work, we study the usefulness of the lla in bayesian optimization and highlight its strong performance and flexibility. however, we also present some pitfalls that might arise and a potential problem with the lla when the search space is unbounded. | arxiv:2304.08309 |
the 3 - term recurrence relation for hermite polynomials was recently generalized to a recurrence relation for wronskians of hermite polynomials. in this note, a similar generalization for laguerre polynomials is obtained. | arxiv:1905.12312 |
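For reference, the classical three-term recurrences being generalized in the note above (standard identities, not results of the paper):

```latex
% Physicists' Hermite polynomials
H_{n+1}(x) = 2x\,H_n(x) - 2n\,H_{n-1}(x), \qquad H_0(x)=1,\ H_1(x)=2x.
% Laguerre polynomials
(n+1)\,L_{n+1}(x) = (2n+1-x)\,L_n(x) - n\,L_{n-1}(x), \qquad L_0(x)=1,\ L_1(x)=1-x.
```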
abundances for fe, o, and the alpha - elements ( mg, si, ca, and ti ) have been derived from high resolution spectra of a sample of about one hundred dwarfs with high precision parallaxes measured by hipparcos. the stars have metal abundances in the range - 2. 5 < [ fe / h ] < 0. 2. the observational data set consists of high dispersion ( 20, 000 < r < 70, 000 ), high s / n ( > 200 ) spectra collected at the asiago and mcdonald observatories. the abundance analysis followed the same precepts used by gratton et al. ( 1997a ) for ~ 300 field stars and for giants in 24 globular clusters ( carretta and gratton 1997 ), and includes corrections for departures from lte in the formation of o lines. our main results are : 1. the equilibrium of ionization of fe is well satisfied in late f - - early k dwarfs ; 2. o and alpha - elements are overabundant by ~ 0. 3 dex. this large homogeneous data set was used in the derivation of accurate ages for globular clusters. | arxiv:astro-ph/9707060 |
machine learning ( ml ) has gained popularity in actuarial research and insurance industrial applications. however, the performance of most ml tasks heavily depends on data preprocessing, model selection, and hyperparameter optimization, which are considered to be intensive in terms of domain knowledge, experience, and manual labor. automated machine learning ( automl ) aims to automatically complete the full life - cycle of ml tasks and provides state - of - the - art ml models without human intervention or supervision. this paper introduces an automl workflow that allows users without domain knowledge or prior experience to achieve robust and effortless ml deployment by writing only a few lines of code. this proposed automl is specifically tailored for the insurance application, with features like the balancing step in data preprocessing, ensemble pipelines, and customized loss functions. these features are designed to address the unique challenges of the insurance domain, including the imbalanced nature of common insurance datasets. the full code and documentation are available on the github repository. ( https : / / github. com / panyidong / insurautoml ) | arxiv:2408.14331 |
the velocity of dislocations is derived analytically to incorporate and predict the intriguing effects induced by the preferential solute segregation and cottrell atmospheres in both two - dimensional and three - dimensional binary systems of various crystalline symmetries. the corresponding mesoscopic description of defect dynamics is constructed through the amplitude formulation of the phase - field crystal model which has been shown to accurately capture elasticity and plasticity in a wide variety of systems. modifications of the peach - koehler force as a result of solute concentration variations and compositional stresses are presented, leading to interesting new predictions of defect motion due to effects of cottrell atmospheres. these include the deflection of dislocation glide paths, the variation of climb speed and direction, and the change or prevention of defect annihilation, all of which play an important role in determining the fundamental behaviors of complex defect network and dynamics. the analytic results are verified by numerical simulations. | arxiv:2101.06128 |