Evaluating the quality of academic journals is becoming increasingly important within the context of research performance evaluation. Traditionally, journals have been ranked by peer-review lists, such as that of the Association of Business Schools (UK), or through their journal impact factor (JIF). However, several new indicators have been developed, such as the h-index, SJR, SNIP, and the Eigenfactor, which take into account different factors and therefore have their own particular biases. In this paper we evaluate these metrics both theoretically and through an empirical study of a large set of business and management journals. We show that even though the indicators appear highly correlated, they in fact lead to large differences in journal rankings. We contextualize our results in terms of the UK's large-scale research assessment exercise (the RAE/REF) and particularly the ABS journal ranking list. We conclude that no one indicator is superior, but that the h-index (which includes the productivity of a journal) and SNIP (which aims to normalize for field effects) may be the most effective at the moment.
arxiv:1604.06685
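As a concrete illustration of one of the indicators compared above, a journal's h-index can be computed directly from the citation counts of its articles; the counts below are hypothetical, not taken from the study.

```python
def h_index(citations):
    # Largest h such that at least h articles have h or more citations each.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for one journal's articles.
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```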
We investigate the Minkowski ground state associated with a real massless scalar field as seen by an accelerated observer from the perspective of the de Broglie-Bohm quantum theory. We use the Schr\"odinger picture to obtain the wave functional associated with the Minkowski vacuum in Rindler coordinates, and we calculate the field trajectories through the Bohmian guidance equations. The Unruh temperature naturally emerges from the calculation of the average energy, but the Bohmian approach precisely distinguishes between its quantum and classical components, showing that they periodically interchange their roles as the dominant cause of the temperature effects, with abrupt jumps in the infrared regime. We also compute the power spectra, and we exhibit a very special Bohmian field configuration with remarkable physical properties.
arxiv:2304.10997
With advances in vision and perception architectures, we have realized that working with data is equally crucial, if not more so, than the algorithms. To date, we have trained machines based on our knowledge and perspective of the world. The entire concept of the Dataset Structural Index (DSI) revolves around understanding a machine's perspective of the dataset. With DSI, I show two meta values with which we can get more information about a visual dataset and use it to optimize data, create better architectures, and gain the ability to guess which model would work best. These two values are the variety contribution ratio and the similarity matrix. In the paper, I show many applications of DSI, one of which is how the same level of accuracy can be achieved with the same model architectures trained on less data.
arxiv:2110.04070
Deception is helpful for agents masking their intentions from an observer. We consider a team of agents deceiving their supervisor. The supervisor defines nominal behavior for the agents via reference policies, but the agents share an alternate task that they can only achieve by deviating from these references. As such, the agents use deceptive policies to complete the task while ensuring that their behaviors remain plausible to the supervisor. We propose a setting with centralized deceptive policy synthesis and decentralized execution. We model each agent with a Markov decision process and constrain the agents' deceptive policies so that, with high probability, at least one agent achieves the task. We then provide an algorithm to synthesize deceptive policies that ensure the deviations of all agents are small by minimizing the worst Kullback-Leibler divergence between any agent's deceptive and reference policies. Thanks to decentralization, this algorithm scales linearly with the number of agents and also facilitates the efficient synthesis of reference policies. We then explore a more general version of the deceptive policy synthesis problem. In particular, we consider a supervisor who selects a subset of agents to eliminate based on the agents' behaviors. We give algorithms to synthesize deceptive policies so that, after the supervisor eliminates some agents, the remaining agents complete the task with high probability. We demonstrate the developed methods in a package delivery example.
arxiv:2406.17160
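The objective above, minimizing the worst Kullback-Leibler divergence between any agent's deceptive policy and its reference policy, can be sketched numerically. The action distributions below are hypothetical stand-ins for two agents' per-state policies, not data from the paper.

```python
import math

def kl_divergence(p, q):
    # D(p || q) for discrete distributions over the same action set.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical (deceptive, reference) action distributions for two agents.
agents = [
    ([0.7, 0.2, 0.1], [0.6, 0.3, 0.1]),  # small deviation from the reference
    ([0.5, 0.4, 0.1], [0.1, 0.8, 0.1]),  # large deviation from the reference
]

# The worst (largest) deviation is the quantity the synthesis algorithm minimizes.
worst = max(kl_divergence(d, r) for d, r in agents)
```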
The main objective of this study is to investigate the bouncing scenario of the universe. The most widely recognized cosmological framework is the standard cosmological model, sometimes referred to as the Big Bang model, mainly because of its inherent properties and its consistent alignment with recent observational studies. However, the standard cosmological model faces some challenges concerning the physical conditions at the initial epochs, including the initial singularity problem, the flatness problem, and the horizon problem. Some of these challenges could potentially be addressed by incorporating the inflationary scenario into the cosmological framework of the universe; however, the inflationary mechanism is not able to resolve the occurrence of the initial singularity. Bouncing cosmology offers a probable solution to this initial singularity issue and, in addition, is capable of addressing some other issues that may arise during the early stages. Hence, bounce cosmology is discussed here within modified gravity theory.
arxiv:2402.06895
We estimate dissipative properties, viz. the shear and bulk viscosities of hadronic matter, using the relativistic Boltzmann equation in the relaxation time approximation within the ambit of the excluded volume hadron resonance gas (EHRG) model. We find that at zero baryon chemical potential the shear viscosity to entropy density ratio ($\eta/s$) decreases with temperature, while at finite baryon chemical potential this ratio shows the same behavior as a function of temperature but comes close to the Kovtun-Son-Starinets (KSS) bound. Further, along the chemical freeze-out curve, the ratio $\eta/s$ is almost constant apart from a small initial monotonic rise. This observation may have some relevance to the experimental finding that the differential elliptic flow of charged hadrons does not change considerably at lower center-of-mass energies. We further find that the bulk viscosity to entropy density ratio ($\zeta/s$) decreases with temperature, while this ratio has a higher value at finite baryon chemical potential at higher temperature. Along the freeze-out curve, $\zeta/s$ decreases monotonically at lower center-of-mass energies and then saturates.
arxiv:1506.04613
Deep learning (DL) has become one of the mainstream and effective methods for point cloud analysis tasks such as detection, segmentation, and classification. To reduce overfitting when training DL models and to improve model performance, especially when the amount and/or diversity of training data are limited, augmentation is often crucial. Although various point cloud data augmentation methods have been widely used in different point cloud processing tasks, there are currently no published systematic surveys or reviews of these methods. Therefore, this article surveys these methods, categorizing them into a taxonomy framework that comprises basic and specialized point cloud data augmentation methods. Through a comprehensive evaluation of these augmentation methods, this article identifies their potentials and limitations, serving as a useful reference for choosing appropriate augmentation methods. In addition, potential directions for future research are recommended. This survey contributes a holistic overview of the current state of point cloud data augmentation, promoting its wider application and development.
arxiv:2308.12113
A computational fluid dynamics (CFD) model is developed to simulate the dynamics of meniscus formation and capillary flow between vertical parallel plates. The arbitrary Lagrangian-Eulerian (ALE) approach was employed to predict and reconstruct the exact shape of the meniscus. The model was used to simulate the rise of water and the evolution of the meniscus in vertical channels with spacing values of 0.5 mm, 0.7 mm, and 1 mm. The validity of the model was established by comparing the steady-state capillary height and the meniscus profile with analytical solutions. The developed model presents a novel approach to the simulation of capillary flow, accounting for the detailed hydrodynamic phenomena that cannot be captured by analytical models.
arxiv:2003.05036
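For context, the analytical steady-state capillary rise between vertical parallel plates that such a model is typically validated against follows Jurin's law. The sketch below assumes water at room temperature; the surface tension, density, and contact angle values are illustrative assumptions, not values from the paper.

```python
import math

def capillary_height(gamma, rho, g, spacing, theta=0.0):
    # Jurin's law for vertical parallel plates: h = 2*gamma*cos(theta) / (rho*g*d).
    return 2.0 * gamma * math.cos(theta) / (rho * g * spacing)

# Assumed water properties: gamma = 0.072 N/m, rho = 1000 kg/m^3, theta = 0.
for d in (0.5e-3, 0.7e-3, 1.0e-3):  # plate spacings from the paper, in meters
    h = capillary_height(0.072, 1000.0, 9.81, d)
    print(f"spacing {d * 1e3:.1f} mm -> steady-state rise {h * 1e3:.1f} mm")
```

Narrower channels rise higher, since the height scales inversely with the plate spacing.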
Modular, Jacobi, and mock-modular forms serve as generating functions for BPS black hole degeneracies. By training feed-forward neural networks on Fourier coefficients of automorphic forms derived from the Dedekind eta function, Eisenstein series, and Jacobi theta functions, we demonstrate that machine learning techniques can accurately predict modular weights from truncated expansions. Our results reveal strong performance for negative-weight modular and quasi-modular forms, particularly those arising in exact black hole counting formulae, with lower accuracy for positive weights and more complicated combinations of Jacobi theta functions. This study establishes a proof of concept for using machine learning to identify how data is organized in terms of modular symmetries in gravitational systems and suggests a pathway toward automated detection and verification of symmetries in quantum gravity.
arxiv:2505.05549
In this work, we obtain the leading corrections to the jet momentum broadening distribution in a QCD medium arising from the transverse flow of the matter. We first derive the single-particle propagator of a highly energetic parton, resumming its multiple interactions with the homogeneous flowing matter and explicitly keeping the leading subeikonal flow terms. Then, we use this propagator to obtain the jet broadening distribution and its leading moments. We show that this distribution becomes anisotropic in the presence of transverse flow, since its odd moments are generally non-zero and proportional to the transverse velocity of the medium. Finally, we evaluate several odd moments, which we compare to the corresponding results at first order in opacity, showing that accounting for multiple in-medium scatterings is essential to describe some observables in dense nuclear matter.
arxiv:2207.07141
Selection systems and the corresponding replicator equations model the evolution of replicators with a high level of abstraction. In this paper we apply novel methods of analysis of selection systems to the replicator equations. To be suitable for the suggested algorithm, the interaction matrix of the replicator equation should be transformed; in particular, the standard singular value decomposition allows us to rewrite the replicator equation in a convenient form. The original $n$-dimensional problem is reduced to the analysis of the asymptotic behavior of the solutions to the so-called escort system, which in some important cases can be of significantly smaller dimension than the original system. Newton diagram methods are applied to study the asymptotic behavior of the solutions to the escort system when the interaction matrix has rank 1 or 2. A general replicator equation with an interaction matrix of rank 1 is fully analyzed; conditions are provided for when the asymptotic state is a polymorphic equilibrium. As an example of a system with an interaction matrix of rank 2, we consider the problem from [Adams, M. R. and Sornborger, A. T., J. Math. Biol., 54:357-384, 2007], for which we show, for arbitrary dimension of the system and under some suitable conditions, that generically one globally stable equilibrium exists on the 1-skeleton of the simplex.
arxiv:0906.4986
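The rank-1 case analyzed above can be probed directly by integrating the replicator equation $\dot{x}_i = x_i\big((Ax)_i - x^{\top}Ax\big)$. The interaction matrix below is a hypothetical rank-1 example chosen for illustration, not one from the paper; for it, the dynamics select the vertex of the simplex with the largest fitness coefficient rather than a polymorphic equilibrium.

```python
def replicator_step(x, A, dt=0.05):
    # One Euler step of dx_i/dt = x_i * ((A x)_i - x.A.x).
    Ax = [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(x))]
    phi = sum(xi * axi for xi, axi in zip(x, Ax))  # mean fitness
    return [xi + dt * xi * (axi - phi) for xi, axi in zip(x, Ax)]

# Hypothetical rank-1 interaction matrix A = a b^T with a = (3, 2, 1), b = (1, 1, 1).
a, b = [3.0, 2.0, 1.0], [1.0, 1.0, 1.0]
A = [[ai * bj for bj in b] for ai in a]

x = [1 / 3, 1 / 3, 1 / 3]  # start at the barycenter of the simplex
for _ in range(4000):
    x = replicator_step(x, A)
# The state converges toward the vertex corresponding to the largest a_i.
```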
We report the use of bilayer graphene as an atomically smooth contact for nanoscale devices. A two-terminal buckyball (C60) based molecular memory is fabricated with bilayer graphene as a contact on the polycrystalline nickel electrode. Graphene provides an atomically smooth covering over an otherwise rough metal surface. The use of graphene additionally prohibits the electromigration of nickel atoms into the C60 layer. The devices exhibit a low-resistance state in the first sweep cycle and irreversibly switch to a high-resistance state at 0.8-1.2 V bias. The reverse sweep exhibits hysteresis as well. In subsequent cycles, the devices retain the high-resistance state, thus making this a write-once read-many (WORM) memory. The ratio of current in the low-resistance to the high-resistance state lies in the 20-40 range for various devices, with excellent retention characteristics. A control sample without the bilayer graphene shows random hysteresis and switching.
arxiv:1303.6603
This paper addresses the task of automatically detecting narrative structures in raw texts. Previous works have utilized the oral narrative theory of Labov and Waletzky to identify various narrative elements in personal story texts. Instead, we direct our focus to news articles, motivated by their growing social impact as well as their role in creating and shaping public opinion. We introduce CompRes, the first dataset for narrative structure in news media. We describe the process by which the dataset was constructed: first, we designed a new narrative annotation scheme, better suited for news media, by adapting elements from the narrative theory of Labov and Waletzky (complication and resolution) and adding a new narrative element of our own (success); then, we used that scheme to annotate a set of 29 English news articles (containing 1,099 sentences) collected from news and partisan websites. We use the annotated dataset to train several supervised models to identify the different narrative elements, achieving an $F_1$ score of up to 0.7. We conclude by suggesting several promising directions for future work.
arxiv:2007.04874
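The $F_1$ figure reported above is the harmonic mean of precision and recall. As a reminder of how such a per-element score is computed, here is a minimal sketch; the counts are made up for illustration, not results from the paper.

```python
def f1_score(tp, fp, fn):
    # Harmonic mean of precision and recall; 0.0 when there are no true positives.
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Made-up counts for one narrative element, e.g. "complication".
print(round(f1_score(tp=70, fp=30, fn=30), 2))  # -> 0.7
```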
Incomplete particle identification distorts the observed event-by-event fluctuations of the hadron chemical composition in nucleus-nucleus collisions. A new experimental technique called the identity method was recently proposed. It eliminates the misidentification problem for one specific combination of the second moments in a system of two hadron species. In the present paper this method is extended to calculate all the second moments in a system with an arbitrary number of hadron species. Special linear combinations of the second moments are introduced. These combinations are expressed in terms of single-particle variables and can be found experimentally from event-by-event averaging. The mathematical problem is then reduced to solving a system of linear equations. The effect of incomplete particle identification is fully eliminated from the final results.
arxiv:1106.4473
The purpose of this paper is to study ergodic averages with deterministic weights. More precisely, we study the convergence of ergodic averages of the type $\frac{1}{N}\sum_{k=0}^{N-1}\theta(k)\, f\circ T^{u_k}$, where $\theta = (\theta(k);\, k\in\mathbb{N})$ is a bounded sequence and $u = (u_k;\, k\in\mathbb{N})$ is a strictly increasing sequence of integers such that for some $\delta < 1$
$$S_N(\theta, u) := \sup_{\alpha\in\mathbb{R}} \Big|\sum_{k=0}^{N-1}\theta(k)\exp(2i\pi\alpha u_k)\Big| = O(N^{\delta}), \qquad (\mathcal{H}_1)$$
i.e., there exists a constant $C$ such that $S_N(\theta, u)\leq C N^{\delta}$. We define $\delta(\theta, u)$ to be the infimum of the $\delta$ satisfying $(\mathcal{H}_1)$ for $\theta$ and $u$.
arxiv:0808.0142
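Condition $(\mathcal{H}_1)$ can be probed numerically. For a bounded sequence of random signs and $u_k = k$, the supremum of the exponential sum typically grows like $\sqrt{N\log N}$, well below $N$. The grid approximation of the supremum below is an illustration under these assumptions, not a construction from the paper.

```python
import cmath
import random

def s_n(theta, u, grid=2000):
    # Grid approximation of sup over alpha in [0,1) of |sum theta(k) e^{2 i pi alpha u_k}|.
    best = 0.0
    for g in range(grid):
        alpha = g / grid
        total = sum(t * cmath.exp(2j * cmath.pi * alpha * uk) for t, uk in zip(theta, u))
        best = max(best, abs(total))
    return best

random.seed(0)
n = 256
theta = [random.choice((-1.0, 1.0)) for _ in range(n)]  # bounded weights
u = list(range(n))  # strictly increasing sequence of integers
s = s_n(theta, u)  # much smaller than the trivial bound n
```

By Parseval, the average of $|S(\alpha)|^2$ over $\alpha$ equals $N$, so the supremum is at least $\sqrt{N}$; random signs typically stay close to that lower bound.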
Database search algorithms that deduce peptides from mass spectrometry (MS) data have tried to improve computational efficiency to accomplish larger and more complex systems biology studies. Existing serial and high-performance computing (HPC) search engines, otherwise highly successful, are known to exhibit poor scalability with the increasing size of the theoretical search space needed for the increased complexity of modern non-model, multi-species MS-based omics analysis. Consequently, the bottleneck for these computational techniques is the communication cost of moving data between the memory hierarchy and the processing units, not the arithmetic operations. This post-Moore change in architecture, together with the demands of modern systems biology experiments, has dampened the overall effectiveness of existing HPC workflows. We present a novel, efficient parallel computational method, and its implementation on memory-distributed architectures, for a peptide identification tool called HiCOPS, which enables more than a 100-fold improvement in speed over most existing HPC proteome database search tools. HiCOPS enables supercomputing-scale database search for comprehensive identification of peptides and all their modified forms within a reasonable time frame. We demonstrate this by searching gigabytes of experimental MS data against terabytes of databases, where HiCOPS completes peptide identification in a few minutes using 72 parallel nodes (1728 cores), compared to several weeks required by existing state-of-the-art tools using 1 node (24 cores): 100 minutes versus 5 weeks, a 500x speedup. Finally, we formulate a theoretical framework for our overhead-avoiding strategy and report superior performance evaluation results for key metrics including execution time, CPU utilization, speedup, and I/O efficiency. The software will be made available at hicops.github.io.
arxiv:2102.02286
(2n-2)$, where $k\geq 1$, and for $m = 2n(n-1)$. In addition, we show that for $m\geq 3$, $B_m(\mathbb{R}P^2\setminus\{x_1,\dots,x_n\})$ is not residually nilpotent, and for $m\geq 5$, it is not residually solvable.
arxiv:2111.07838
We have used the Spitzer Infrared Spectrograph to observe seven members of the TW Hya association, the nearest stellar association whose age ($\sim$10 Myr) is similar to the timescales thought to apply to planet formation and disk dissipation. Only two of the seven targets display infrared excess emission, indicating that substantial amounts of dust still exist closer to the stars than is characteristic of debris disks; however, in both objects we confirm an abrupt short-wavelength edge to the excess, as is seen in disks with cleared-out central regions. The mid-infrared excesses in the spectra of Hen 3-600 and TW Hya include crystalline silicate emission features, indicating that the grains have undergone significant thermal processing. We offer a detailed comparison between the spectra of TW Hya and Hen 3-600, and a model that corroborates the spectral shape and our previous understanding of the radial structure of these protoplanetary disks.
arxiv:astro-ph/0406138
The time evolution of an initially uncorrelated system is governed by a completely positive (CP) map. More generally, the system may contain initial (quantum) correlations with an environment, in which case the system evolves according to a not-completely-positive (NCP) map. It is an interesting question what the relative measure is for these two types of maps within the set of positive maps. After indicating the scope of the full problem of computing the true volume for generic maps acting on a qubit, we study the case of Pauli channels in an abstract space whose elements represent an equivalence class of maps that are identical up to a non-Pauli unitary. In this space, we show that the volume of NCP maps is twice that of CP maps.
arxiv:1902.00906
Many social and biological networks consist of communities: groups of nodes within which connections are dense but between which connections are sparser. Recently, there has been considerable interest in designing algorithms for detecting community structure in real-world complex networks. In this paper, we propose an evolving network model which exhibits community structure. The network model is based on inner-community and inter-community preferential attachment mechanisms. The degree distributions of this network model are analyzed with a mean-field method. Theoretical results and numerical simulations indicate that this network model has community structure and scale-free properties.
arxiv:physics/0510239
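A minimal growing-network simulation in the spirit of the model above: new nodes join a community and attach preferentially within it, and occasionally across communities. All parameter choices here (community count, seed size, attachment probability) are illustrative assumptions, not the paper's.

```python
import random

def grow_network(n_communities=3, seed_size=3, new_nodes=200, seed=1):
    random.seed(seed)
    community, edges, members = {}, set(), [[] for _ in range(n_communities)]
    nid = 0
    # Seed each community with a small clique.
    for c in range(n_communities):
        for _ in range(seed_size):
            community[nid] = c
            members[c].append(nid)
            nid += 1
        for i in members[c]:
            for j in members[c]:
                if i < j:
                    edges.add((i, j))
    degree = {v: 0 for v in community}
    for i, j in edges:
        degree[i] += 1
        degree[j] += 1

    def preferential_pick(candidates):
        # Pick a node with probability proportional to its degree.
        return random.choices(candidates, weights=[degree[v] for v in candidates])[0]

    for _ in range(new_nodes):
        c = random.randrange(n_communities)
        v, nid = nid, nid + 1
        community[v], degree[v] = c, 0
        # Inner-community preferential attachment.
        t = preferential_pick(members[c])
        edges.add((min(v, t), max(v, t)))
        degree[v] += 1
        degree[t] += 1
        members[c].append(v)
        # Inter-community preferential attachment with small probability.
        if random.random() < 0.1:
            others = [w for w in degree if community[w] != c and w != v]
            t = preferential_pick(others)
            edges.add((min(v, t), max(v, t)))
            degree[v] += 1
            degree[t] += 1
    return degree, edges
```

Early, well-connected nodes accumulate degree fastest, which is the mechanism behind the scale-free tail.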
In this paper, a three-dimensional dynamic model of a perforated rectangular metal hydride tank is presented. The metal hydride tank contains LaNi5 powder, and the tubes are arranged in a rectangular geometry in order to simulate the flow of ambient air through the reactor, which strongly affects both the hydriding and the dehydriding reactions. A simulation study is carried out by solving simultaneously the energy, mass, and momentum conservation equations using COMSOL Multiphysics (version 4.2). The simulation results show good agreement with the experimental data.
arxiv:1303.4512
Digitalization has led to radical changes in the distribution of goods across various sectors. The tendency is to move from traditional buyer-seller markets to subscription-based, on-demand "smart" matching platforms enabled by pervasive ICTs. The driving force behind this lies in the fact that assets which were scarce in the past are now readily abundant, approaching a regime of zero marginal cost. This is also becoming a reality in electrified energy systems due to the substantial growth of distributed renewable energy sources such as solar and wind; the increasing number of small-scale storage units such as batteries and heat pumps; and the availability of flexible loads that enable demand-side management. In this context, this article proposes a system architecture based on a logical (cyber) association of spatially distributed (physical) elements as an approach to building a virtual microgrid operated as a software-defined energy network (SDEN) enabled by packetized energy management. The proposed cyber-physical system presumes that electrical energy is shared among its members and that the energy sharing is enabled in the cyber domain by handshakes inspired by resource allocation methods utilized in computer networks, wireless communications, and peer-to-peer Internet applications (e.g., BitTorrent). The proposal has twofold benefits: (i) reducing the complexity of current market-based solutions by removing unnecessary and costly mediations, and (ii) guaranteeing energy access to all virtual microgrid members according to their individual needs. This article concludes that the proposed solution generally complies with existing regulations but has highly disruptive potential to organize a dominantly electrified energy system in the mid to long term, being a technical counterpart to the recently developed socially oriented microgrid proposals.
arxiv:2102.00656
Surface science is the study of physical and chemical phenomena that occur at the interface of two phases, including solid-liquid interfaces, solid-gas interfaces, solid-vacuum interfaces, and liquid-gas interfaces. It includes the fields of surface chemistry and surface physics. Some related practical applications are classed as surface engineering. The science encompasses concepts such as heterogeneous catalysis, semiconductor device fabrication, fuel cells, self-assembled monolayers, and adhesives. Surface science is closely related to interface and colloid science. Interfacial chemistry and physics are common subjects for both, but the methods are different. In addition, interface and colloid science studies macroscopic phenomena that occur in heterogeneous systems due to peculiarities of interfaces.

== History ==

The field of surface chemistry started with heterogeneous catalysis pioneered by Paul Sabatier on hydrogenation and Fritz Haber on the Haber process. Irving Langmuir was also one of the founders of this field, and the scientific journal on surface science, Langmuir, bears his name. The Langmuir adsorption equation is used to model monolayer adsorption where all surface adsorption sites have the same affinity for the adsorbing species and do not interact with each other. In 1974, Gerhard Ertl described for the first time the adsorption of hydrogen on a palladium surface using a novel technique called LEED. Similar studies with platinum, nickel, and iron followed. The most recent developments in surface science include the advancements in surface chemistry by Gerhard Ertl, winner of the 2007 Nobel Prize in Chemistry, specifically his investigation of the interaction between carbon monoxide molecules and platinum surfaces.

== Chemistry ==

Surface chemistry can be roughly defined as the study of chemical reactions at interfaces. It is closely related to surface engineering, which aims at modifying the chemical composition of a surface by incorporation of selected elements or functional groups that produce various desired effects or improvements in the properties of the surface or interface. Surface science is of particular importance to the fields of heterogeneous catalysis, electrochemistry, and geochemistry.

=== Catalysis ===

The adhesion of gas or liquid molecules to a surface is known as adsorption. This can be due to either chemisorption or physisorption, and the strength of molecular adsorption to a catalyst surface is critically important to the catalyst's performance (see Sabatier principle). However, it is difficult to study these phenomena
https://en.wikipedia.org/wiki/Surface_science
This paper introduces a new addition to the SPINEX (Similarity-based Predictions with Explainable Neighbors Exploration) family, tailored specifically for time series and forecasting analysis. This new algorithm leverages the concept of similarity and higher-order temporal interactions across multiple time scales to enhance predictive accuracy and interpretability in forecasting. To evaluate the effectiveness of SPINEX, we present comprehensive benchmarking experiments comparing it against 18 algorithms across 49 synthetic and real datasets characterized by varying trends, seasonality, and noise levels. Our performance assessment focused on forecasting accuracy and computational efficiency. Our findings reveal that SPINEX consistently ranks among the top five performers in forecasting precision and has a superior ability to handle complex temporal dynamics compared to commonly adopted algorithms. Moreover, the algorithm's explainability features, Pareto efficiency, and medium complexity (on the order of O(log n)) are demonstrated through detailed visualizations to enhance the prediction and decision-making process. We note that integrating similarity-based concepts opens new avenues for research in predictive analytics, promising more accurate and transparent decision making.
arxiv:2408.02159
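The core similarity idea can be illustrated with a bare-bones nearest-neighbor forecaster. This sketch is not the SPINEX algorithm itself, only the underlying concept: find the historical windows most similar to the latest one and predict from their successors.

```python
def similarity_forecast(series, window=3, k=1):
    # Predict the next value by matching the most recent window against history.
    recent = series[-window:]
    candidates = []
    for start in range(len(series) - window):
        past = series[start:start + window]
        dist = sum((a - b) ** 2 for a, b in zip(past, recent))  # squared distance
        candidates.append((dist, series[start + window]))
    candidates.sort(key=lambda t: t[0])
    nearest = candidates[:k]  # k most similar windows
    return sum(succ for _, succ in nearest) / len(nearest)

# On a perfectly periodic series the nearest window recurs exactly,
# so the forecast reproduces the pattern.
series = [1.0, 2.0, 3.0, 4.0, 5.0] * 20
print(similarity_forecast(series))  # -> 1.0 (the value that follows 3, 4, 5)
```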
We determine all holomorphically separable complex manifolds of dimension $p+q$ which admit a smooth envelope of holomorphy and on which the general indefinite unitary group of size $p+q$ acts effectively by holomorphic transformations. We also give an exact description of the automorphism groups of those complex manifolds. As an application, we consider a characterization of those complex manifolds by their automorphism groups.
arxiv:1506.03940
Rolling tachyon field models are among the candidates suggested as explanations for the recent acceleration of the universe. In these models the field is expected to interact with gauge fields and lead to variations of the fine-structure constant $\alpha$. Here we take advantage of recent observational progress and use a combination of background cosmological observations of Type Ia supernovae and astrophysical and local measurements of $\alpha$ to improve constraints on this class of models. We show that the constraints on $\alpha$ imply that the field dynamics must be extremely slow, leading to a constraint on the present-day dark energy equation of state of $(1+w_0) < 2.4\times10^{-7}$ at the $99.7\%$ confidence level. Therefore, current and forthcoming standard background cosmology observational probes cannot distinguish this class of models from a cosmological constant, while detections of $\alpha$ variations could possibly do so, since they would have a characteristic redshift dependence.
arxiv:1606.08380
Networked programmable logic controllers (PLCs) are proprietary industrial devices utilized in critical infrastructure that execute control logic applications in complex proprietary runtime environments providing standardized access to the hardware resources of the PLC. These control applications are programmed in domain-specific IEC 61131-3 languages, compiled into a proprietary binary format, and process data provided via industrial protocols. Control applications present an attack surface threatened by manipulated traffic. For example, remote code injection in a control application would directly allow an attacker to take over the PLC, threatening physical process damage and the safety of human operators. However, assessing the security of control applications is challenging due to domain-specific constraints and the limited availability of suitable methods. Network-based fuzzing is often the only way to test such devices but is inefficient without guidance from execution tracing. This work presents the FieldFuzz framework, which analyzes the security risks posed by the Codesys runtime (used by over 400 devices from 80 industrial PLC vendors). FieldFuzz leverages efficient network-based fuzzing based on three main contributions: (i) reverse engineering that enables remote control of control applications and runtime components, (ii) automated command discovery and status code extraction via network traffic, and (iii) a monitoring setup that allows on-system tracing and coverage computation. We use FieldFuzz to run fuzzing campaigns that uncover multiple vulnerabilities, leading to three reported CVE IDs. To study the cross-platform applicability of FieldFuzz, we reproduce the findings on a diverse set of industrial control system (ICS) devices, showing a significant improvement over the state of the art.
arxiv:2204.13499
Given a finite set $X$, a collection $\mathcal{T}$ of rooted phylogenetic trees on $X$, and an integer $k$, the Hybridization Number problem asks if there exists a phylogenetic network on $X$ that displays all trees from $\mathcal{T}$ and has reticulation number at most $k$. We show two kernelization algorithms for Hybridization Number, with kernel sizes $4k(5k)^t$ and $20k^2(\Delta^+ - 1)$ respectively, with $t$ the number of input trees and $\Delta^+$ their maximum outdegree. Experiments on simulated data demonstrate the practical relevance of these kernelization algorithms. In addition, we present an $n^{f(k)}t$-time algorithm, with $n = |X|$ and $f$ some computable function of $k$.
arxiv:1311.4045
The earlier-developed master equation approach and kinetic cluster methods are applied to study the kinetics of L1_0-type orderings in alloys, including the formation of twinned structures characteristic of cubic-tetragonal phase transitions. A microscopic model of interatomic deformational interactions is suggested which generalizes a similar model of Khachaturyan for dilute alloys to the physically interesting case of concentrated alloys. The model is used to simulate A1 -> L1_0 transformations after a quench of an alloy from the disordered A1 phase to the single-phase L1_0 state for a number of alloy models with different chemical interactions, temperatures, concentrations, and tetragonal distortions. We find a number of peculiar features in both transient microstructures and transformation kinetics, many of them agreeing well with experimental data. The simulations also demonstrate a phenomenon of an interaction-dependent alignment of antiphase boundaries in nearly equilibrium twinned bands which seems to be observed in some experiments.
arxiv:cond-mat/0108422
in [ j. wen, y. shi, stat. probab. lett. 156 ( 2020 ) 108599 ] the authors first introduced a kind of anticipated backward stochastic volterra integral equations ( anticipated bsvies, for short ). by virtue of the duality principle, it is found that the anticipated bsvies can be applied to the study of stochastic differential games. to this end, in this paper we investigate a more general class of anticipated bsvies whose generator includes both pointwise time - advanced functions and average time - advanced functions. in theory, the well - posedness and the comparison theorem of anticipated bsvies are established, and some regularity results of adapted m - solutions are proved by applying malliavin calculus, which cover the previous results for bsvies. further, using linear absvies as the adjoint equation, we present the maximum principle for the nonzero - sum differential game system of stochastic delay volterra integral equations ( sdvies, for short ) for the first time. as one of the applications of the theorem, a nash equilibrium point of the linear - quadratic differential game problem of sdvies is obtained.
arxiv:2501.14263
in this talk a number of broad issues are raised about the origins of cp violation and how to test the ideas.
arxiv:hep-ph/0201045
on the assumption that two electrons with the same group velocity effectively attract each other a simple model hamiltonian is proposed to question the existence of unconventional electron pairs formed by electrons in a strong periodic potential.
arxiv:1001.0795
weather forecasting is essential for facilitating diverse socio - economic activity and environmental conservation initiatives. deep learning techniques are increasingly being explored as complementary approaches to numerical weather prediction ( nwp ) models, offering potential benefits such as reduced complexity and enhanced adaptability in specific applications. this work presents a novel design, small shuffled attention unet ( ssa - unet ), which enhances smaat - unet ' s architecture by including a shuffle channeling mechanism to optimize performance and diminish complexity. to assess its efficacy, this architecture and its reduced variant are examined and trained on two datasets : a dutch precipitation dataset from 2016 to 2019, and a french cloud cover dataset containing radar images from 2017 to 2018. three output configurations of the proposed architecture are evaluated, yielding outputs of 1, 6, and 12 precipitation maps, respectively. to better understand how this model operates and produces its predictions, a gradient - based approach called grad - cam is used to analyze the outputs generated. the analysis of heatmaps generated by grad - cam facilitated the identification of regions within the input maps that the model considers most informative for generating its predictions. the implementation of ssa - unet can be found on our github \ footnote { \ href { https : / / github. com / marcoturzi / ssa - unet } { https : / / github. com / marcoturzi / ssa - unet } }
arxiv:2504.18309
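the grad - cam analysis mentioned in the abstract above follows a standard recipe : global - average - pool the gradients of the target score over each activation channel to get channel weights, form a weighted sum of the activation maps, and apply a relu. a minimal numpy sketch ( the array shapes and names here are illustrative assumptions, not the paper's model ) :

```python
import numpy as np

def grad_cam(activations, gradients):
    """Compute a Grad-CAM heatmap from a conv layer's activations and the
    gradients of the target score w.r.t. those activations.

    activations, gradients: arrays of shape (channels, H, W).
    Returns an (H, W) heatmap normalized to [0, 1].
    """
    # channel weights: global-average-pool the gradients over space
    weights = gradients.mean(axis=(1, 2))                       # (channels,)
    # weighted combination of activation maps, then ReLU
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    if cam.max() > 0:
        cam = cam / cam.max()                                   # rescale to [0, 1]
    return cam
```

regions where the heatmap is large are the parts of the input map the model relied on most for its prediction, which is how the abstract's heatmap analysis identifies informative regions.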
within the context of massive n - component $ \ phi ^ 4 $ scalar field theory, we use asymptotic pade - approximant methods to estimate from prior orders of perturbation theory the five - loop contributions to the coupling - constant beta - function $ \ beta _ g $, the anomalous mass dimension $ \ gamma _ m $, the vacuum - energy beta - function $ \ beta _ v $, and the anomalous dimension $ \ gamma _ 2 $ of the scalar field propagator. these estimates are then compared with explicit calculations of the five - loop contributions to $ \ beta _ g $, $ \ gamma _ m $, $ \ beta _ v $, and are seen to be respectively within 5 %, 18 %, and 27 % of their true values for $ n $ between 1 and 5. we then extend asymptotic pade - approximant methods to predict the presently unknown six - loop contributions to $ \ beta _ g $, $ \ gamma _ m $, and $ \ beta _ v $. these predictions, as well as the six - loop prediction for $ \ gamma _ 2 $, provide a test of asymptotic pade - approximant methods against future calculations.
arxiv:hep-ph/9809538
we present a higher order space - time unfitted finite element method for convection - diffusion problems on coupled ( surface and bulk ) domains. in that way, we combine a method suggested by heimann, lehrenfeld, preu { \ ss } ( siam j. sci. comput. 45 ( 2 ), 2023, b139 - b165 ) for the bulk case with a method suggested by sass, reusken ( comput. math. appl. 146 ( 15 ), 2023, 253 - 270 ) for the surface case. the geometry is allowed to change with time, and the higher order discrete approximation of this geometry is ensured by a time - dependent isoparametric mapping. the space - time discretisation approach allows for straightforward handling of arbitrary high orders. in that way, we also generalise results of hansbo, larson, zahedi ( comput. methods appl. mech. engrg. 307, 2016, 96 - 116 ) to higher orders. the convergence of the proposed higher order discretisations is confirmed numerically.
arxiv:2401.07807
the accurate diagnosis and molecular profiling of colorectal cancers are critical for planning the best treatment options for patients. microsatellite instability ( msi ) or mismatch repair ( mmr ) status plays a vital role in appropriate treatment selection, has prognostic implications and is used to investigate the possibility of patients having underlying genetic disorders ( lynch syndrome ). nice recommends that all crc patients should be offered mmr / msi testing. immunohistochemistry is commonly used to assess mmr status with subsequent molecular testing performed as required. this incurs significant extra costs and requires additional resources. the introduction of automated methods that can predict msi or mmr status from a target image could substantially reduce the cost associated with mmr testing. unlike previous studies on msi prediction involving training a cnn using coarse labels ( msi vs microsatellite stable ( mss ) ), we have utilised fine - grain mmr labels for training purposes. in this paper, we present our work on predicting msi status in a two - stage process using a single target slide either stained with ck8 / 18 or h & e. first, we trained a multi - headed convolutional neural network model where each head was responsible for predicting one of the mmr protein expressions. to this end, we performed the registration of mmr stained slides to the target slide as a pre - processing step. in the second stage, statistical features computed from the mmr prediction maps were used for the final msi prediction. our results demonstrated that msi classification can be improved by incorporating fine - grained mmr labels in comparison to the previous approaches in which only coarse labels were utilised.
arxiv:2203.00449
a database - assisted tv white space network can achieve the goal of green cognitive communication by effectively reducing the energy consumption in cognitive communications. the success of such a novel network relies on a proper business model that provides incentives for all parties involved. in this paper, we propose an integrated spectrum and information market for a database - assisted tv white space network, where the geo - location database serves as both the spectrum market platform and the information market platform. we study the interactions among the database, the spectrum licensee, and unlicensed users by modelling the system as a three - stage sequential decision process. in stage i, the database and the licensee negotiate regarding the commission for the licensee to use the spectrum market platform. in stage ii, the database and the licensee compete for selling information or channels to unlicensed users. in stage iii, unlicensed users determine whether they should buy exclusive usage right of licensed channels from the licensee or information regarding unlicensed channels from the database. analyzing such a three - stage model is challenging due to the co - existence of both positive and negative network externalities in the information market. despite this, we are able to characterize how the network externalities affect the equilibrium behaviors of all parties involved. we analytically show that in this integrated market, the licensee can never get a market share more than half. our numerical results further show that the proposed integrated market can improve the network profit up to 87 %, compared with a pure information market.
arxiv:1603.04982
we report an experimental study of the elastic properties of a two - dimensional ( 2d ) colloidal crystal subjected to light - induced substrate potentials. in agreement with recent theoretical predictions [ h. h. von gruenberg and j. baumgartl, phys. rev. e 75, 051406 ( 2007 ) ] the phonon band structure of such systems can be tuned depending on the symmetry and depth of the substrate potential. calculations with binary crystals suggest that phononic band engineering can be also performed by variations of the pair potential and thus opens novel perspectives for the fabrication of phononic crystals with band gaps tunable by external fields.
arxiv:0710.0861
integrating deep learning ( dl ) techniques in the internet of vehicles ( iov ) introduces many security challenges and issues that require thorough examination. this literature review delves into the inherent vulnerabilities and risks associated with dl in iov systems, shedding light on the multifaceted nature of security threats. through an extensive analysis of existing research, we explore potential threats posed by dl algorithms, including adversarial attacks, data privacy breaches, and model poisoning. additionally, we investigate the impact of dl on critical aspects of iov security, such as intrusion detection, anomaly detection, and secure communication protocols. our review emphasizes the complexities of ensuring the robustness, reliability, and trustworthiness of dl - based iov systems, given the dynamic and interconnected nature of vehicular networks. furthermore, we discuss the need for novel security solutions tailored to address these challenges effectively and enhance the security posture of dl - enabled iov environments. by offering insights into these critical issues, this chapter aims to stimulate further research, innovation, and collaboration in securing dl techniques within the context of the iov, thereby fostering a safer and more resilient future for vehicular communication and connectivity.
arxiv:2407.16410
in this paper, the weighted estimates for multilinear pseudo - differential operators are systematically studied in rearrangement invariant banach and quasi - banach spaces. these spaces contain the lebesgue space, the classical lorentz space and marcinkiewicz space as typical examples. more precisely, the weighted boundedness and weighted modular estimates, including the weak endpoint case, are established for multilinear pseudo - differential operators and their commutators. as applications, we show that the above results also hold for the multilinear fourier multipliers, multilinear square functions, and a class of multilinear calder \ ' { o } n - zygmund operators.
arxiv:2312.08938
very recently, the authors of [ prl { \ bf 118 } ( 2017 ) 021102 ] have shown that violation of energy - momentum tensor ( emt ) conservation could result in an accelerated expansion state via the appearance of an effective cosmological constant, in the context of unimodular gravity. inspired by this outcome, in this paper we investigate cosmological consequences of violation of the emt conservation in a particular class of $ f ( r, t ) $ gravity when only the pressure - less fluid is present. in this respect, we focus on the late time solutions of models of the type $ f ( r, t ) = r + \ beta \ lambda ( - t ) $. as the first task, we study the solutions when the conservation of emt is respected and then we proceed with those in which violation occurs. we find that, provided the emt conservation is violated, there generally exist two accelerated expansion solutions whose stability properties depend on the underlying model. more exactly, we obtain a dark energy solution for which the effective equation of state ( eos ) depends on the model parameters, and a de sitter solution. we present a method to parametrize the $ \ lambda ( - t ) $ function which is useful in the dynamical system approach and has been employed in the herein model. also, we discuss the cosmological solutions for models with $ \ lambda ( - t ) = 8 \ pi g ( - t ) ^ { \ alpha } $ in the presence of the ultra relativistic matter.
arxiv:1702.07380
very often the skyrmions form a triangular crystal in chiral magnets. here we study the effect of itinerant electrons on the structure of skyrmion crystal ( skx ) on triangular lattice using kondo lattice model in the large coupling limit and treating the localized spins as classical vectors. to simulate the system, we employ hybrid markov chain monte carlo method ( hmcmc ) which includes electron diagonalization in each mcmc update for classical spins. we present the low temperature results for $ 12 \ times 12 $ system at electron density $ n = 1 / 3 $ which show a sudden jump in skyrmion number when we increase the hopping strength of the itinerant electrons. we find that this high skyrmion number skx phase is stabilized by combined effects : lowering of density of states at electron filling $ n = 1 / 3 $ and also pushing the bottom energy states further down. we show that these results hold for larger system using travelling cluster variation of hmcmc. we expect that itinerant triangular magnets might exhibit the possible transition between low density to high density skx phases by applying external pressure.
arxiv:2005.02724
trigonometric rosen - morse potential is employed as a mesonic potential interaction. the extended nikiforov - uvarov method is used to solve the n - radial fractional schrodinger equation analytically. using the generalized fractional derivative, the energy eigenvalues are obtained in the fractional form. the current findings are used to calculate the masses of mesons such as charmonium, bottomonium, and heavy - light mesons. the current findings are superior to those of other recent studies and show good agreement with experimental data. as a result, the fractional parameter is crucial in optimizing meson masses.
arxiv:2209.00566
we propose a mechanism of electroweak baryogenesis based on the standard model and explaining the coincidence between the baryon and dark matter ( dm ) densities. large curvature fluctuations slightly below the threshold for primordial black hole ( pbh ) formation locally reheat the plasma above the sphaleron barrier when they collapse gravitationally, leading to regions with a maximal baryogenesis at the quantum chromodynamics epoch. using numerical relativity simulations, we calculate the overdensity threshold for baryogenesis. if pbh significantly contribute to the dm, aborted pbhs can generate a baryon density and an averaged baryon - to - photon ratio consistent with observations.
arxiv:2401.09408
cdr ( cross - domain recommendation ), i. e., leveraging information from multiple domains, is a critical solution to the data sparsity problem in recommendation systems. the majority of previous research either focused on single - target cdr ( stcdr ) by utilizing data from the source domains to improve the model ' s performance on the target domain, or applied dual - target cdr ( dtcdr ) by integrating data from the source and target domains. in addition, multi - target cdr ( mtcdr ) is a generalization of dtcdr, which is able to capture the link among different domains. in this paper we present hgdr ( heterogeneous graph - based framework with disentangled representations learning ), an end - to - end heterogeneous network architecture where graph convolutional layers are applied to model relations among different domains, while utilizing the idea of disentangled representations for domain - shared and domain - specific information. first, a shared heterogeneous graph is generated by gathering users and items from several domains without any further side information. second, we use hgdr to compute disentangled representations for users and items in all domains. experiments on real - world datasets and online a / b tests prove that our proposed model can transmit information among domains effectively and reach the sota performance. the code can be found here : https : / / github. com / netease - media / hgcdr.
arxiv:2407.00909
the separation and reconstruction of charged and neutral hadrons from their overlapped showers in the electromagnetic calorimeter are very important for the reconstruction of some particles with hadronic decays, for example the tau reconstruction in the searches for the standard model and supersymmetric higgs bosons at the lhc. in this paper, a method combining the shower cluster in the electromagnetic calorimeter and the parametric formula for hadron showers was developed to separate the overlapped showers between a charged hadron and a neutral hadron. taking the hadronic decay containing one charged pion and one neutral pion in the final state of the tau as an example, satisfactory results for the separation of the overlapped showers and the reconstruction of the energy and positions of the hadrons were obtained. an improved result for the tau reconstruction with this decay mode can also be achieved after the application of the proposed method.
arxiv:1305.2035
we propose a hybrid quantum system where the strong coupling regime can be achieved between a rydberg atomic ensemble and propagating surface phonon polaritons on a piezoelectric superlattice. by exploiting the large electric dipole moment and long lifetime of rydberg atoms as well as tightly confined surface phonon polariton modes, it is possible to achieve a coupling constant far exceeding the relevant decay rates. the frequency of the surface mode can be selected so it is resonant with a rydberg transition by engineering the piezoelectric superlattice. we describe a way to observe the rabi splitting associated with the strong coupling regime under realistic experimental conditions. the system can be viewed as a new type of optomechanical system.
arxiv:1606.02364
electrical power system calculations rely heavily on the $ y _ { bus } $ matrix, which is the laplacian matrix of the network under study, weighted by the complex - valued admittance of each branch. it is often useful to partition the $ y _ { bus } $ into four submatrices, to separately quantify the connectivity between and among the load and generation nodes in the network. simple manipulation of these submatrices gives the $ f _ { lg } $ matrix, which offers useful insights on how voltage deviations propagate through a power system and on how energy losses may be minimized. various authors have observed that in practice the elements of $ f _ { lg } $ are real - valued and its rows sum close to one : the present paper explains and proves these properties.
arxiv:1503.08652
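the row - sum property discussed in the abstract above follows directly from the laplacian structure of $ y _ { bus } $ : with the buses partitioned into generator ( g ) and load ( l ) sets, $ f _ { lg } = - y _ { ll } ^ { - 1 } y _ { lg } $, and since every row of a shunt - free $ y _ { bus } $ sums to zero, each row of $ f _ { lg } $ sums to exactly one. a small numpy sketch on a toy 4 - bus network ( branch values are illustrative per - unit admittances, not from the paper ) :

```python
import numpy as np

# Toy 4-bus network: buses 0-1 are generators (G), buses 2-3 are loads (L).
# All branches share the same X/R ratio here, which also makes F_LG real-valued.
branches = {(0, 2): 5 - 15j, (1, 3): 4 - 12j, (2, 3): 3 - 9j, (0, 3): 2 - 6j}
n = 4
Y = np.zeros((n, n), dtype=complex)
for (i, j), y in branches.items():
    # Ybus is the admittance-weighted graph Laplacian: -y off-diagonal,
    # sum of incident branch admittances on the diagonal (rows sum to zero).
    Y[i, j] -= y
    Y[j, i] -= y
    Y[i, i] += y
    Y[j, j] += y

G, L = [0, 1], [2, 3]
Y_LL = Y[np.ix_(L, L)]
Y_LG = Y[np.ix_(L, G)]

# F_LG relates generator voltages to load voltages: V_L = F_LG @ V_G
F_LG = -np.linalg.solve(Y_LL, Y_LG)
```

because `Y_LL @ 1 + Y_LG @ 1 = 0` here, `F_LG.sum(axis=1)` is a vector of ones ; shunt admittances would perturb this only slightly, matching the "rows sum close to one" observation.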
i discuss different theories of leptonic flavor and their capability of describing the features of the lepton sector, namely charged lepton masses, neutrino masses, lepton mixing angles and leptonic ( low and high energy ) cp phases. in particular, i show examples of theories with an abelian flavor symmetry g _ f, with a non - abelian g _ f as well as theories with non - abelian g _ f and cp.
arxiv:1705.00684
we study a continuous - time markowitz mean - variance portfolio selection model in which a naive agent, unaware of the underlying time - inconsistency, continuously reoptimizes over time. we define the resulting naive policies through the limit of discretely naive policies that are committed only in very small time intervals, and derive them analytically and explicitly. we compare naive policies with pre - committed optimal policies and with consistent planners ' equilibrium policies in a black - scholes market, and find that the former are mean - variance inefficient starting from any given time and wealth, and always take riskier exposure than equilibrium policies.
arxiv:2212.07516
i examine the structure of the deformed lorentz transformations in one of the recently - proposed schemes with two observer - independent scales. i develop a technique for the analysis of general combinations of rotations and deformed boosts. in particular, i verify explicitly that the transformations form a group.
arxiv:gr-qc/0207076
within the central 10pc of our galaxy lies a dense nuclear star cluster ( nsc ), and similar nscs are found in most nearby galaxies. studying the structure and kinematics of nscs reveals the history of mass accretion of galaxy nuclei. because the milky way ( mw ) nsc is at a distance of only 8kpc, we can spatially resolve the mwnsc on sub - pc scales. this makes the mwnsc a reference object for understanding the formation of all nscs. we have used the nir long - slit spectrograph isaac ( vlt ) in a drift - scan to construct an integral - field spectroscopic map of the central 9. 5 x 8pc of our galaxy. we use this data set to extract stellar kinematics both of individual stars and from the unresolved integrated light spectrum. we present a velocity and dispersion map from the integrated light and model these kinematics using kinemetry and axisymmetric jeans models. we also measure co bandhead strengths of 1, 375 spectra from individual stars. we find kinematic complexity in the nsc ' s radial velocity map including a misalignment of the kinematic position angle by 9 degree counterclockwise relative to the galactic plane, and indications for a rotating substructure perpendicular to the galactic plane at a radius of 20 " or 0. 8pc. we determine the mass of the nsc within r = 4. 2pc to be 1. 4 x 10 ^ 7 msun. we also show that our kinematic data results in a significant underestimation of the supermassive black hole ( smbh ) mass. the kinematic substructure and position angle misalignment may hint at distinct accretion events. this indicates that the mwnsc grew at least partly by the mergers of massive star clusters. compared to other nscs, the mwnsc is on the compact side of the r _ eff - m _ nsc relation. the underestimation of the smbh mass might be caused by the kinematic misalignment and a stellar population gradient. but it is also possible that there is a bias in smbh mass measurements obtained with integrated light.
arxiv:1406.2849
we extended mcmillan ' s green ' s function method to study the equilibrium spin current ( esc ) in a ferromagnet / ferromagnet ( fm / fm ) tunnelling junction, in which the magnetic moments in both fm electrodes are not collinear. the single - electron green ' s function of the junction system is directly constructed from the elements of the scattering matrix which can be obtained by matching wavefunctions at boundaries. the esc is found to be determined only by the andreev - type reflection amplitudes as in the josephson effect. the obtained expression of esc is an exact result and at the strong barrier limit gives the same explanation for the origin of esc as the linear response theory, that is, esc comes from the exchange coupling between the magnetic moments of the two fm electrodes, $ { \ mathbf { j } } \ sim { \ mathbf { h } } _ { l } \ times { \ mathbf { h } } _ { r } $. in the weak barrier region, esc cannot form spontaneously in a noncollinear fm / fm junction when there is no tunneling barrier between the two fm electrodes.
arxiv:cond-mat/0609407
multi - pitch estimation is a decades - long research problem involving the detection of pitch activity associated with concurrent musical events within multi - instrument mixtures. supervised learning techniques have demonstrated solid performance on more narrow characterizations of the task, but suffer from limitations concerning the shortage of large - scale and diverse polyphonic music datasets with multi - pitch annotations. we present a suite of self - supervised learning objectives for multi - pitch estimation, which encourage the concentration of support around harmonics, invariance to timbral transformations, and equivariance to geometric transformations. these objectives are sufficient to train an entirely convolutional autoencoder to produce multi - pitch salience - grams directly, without any fine - tuning. despite training exclusively on a collection of synthetic single - note audio samples, our fully self - supervised framework generalizes to polyphonic music mixtures, and achieves performance comparable to supervised models trained on conventional multi - pitch datasets.
arxiv:2402.15569
bayesian optimization is a sample - efficient approach to solving global optimization problems. along with a surrogate model, this approach relies on theoretically motivated value heuristics ( acquisition functions ) to guide the search process. maximizing acquisition functions yields the best performance ; unfortunately, this ideal is difficult to achieve since optimizing acquisition functions per se is frequently non - trivial. this statement is especially true in the parallel setting, where acquisition functions are routinely non - convex, high - dimensional, and intractable. here, we demonstrate how many popular acquisition functions can be formulated as gaussian integrals amenable to the reparameterization trick and, ensuingly, gradient - based optimization. further, we use this reparameterized representation to derive an efficient monte carlo estimator for the upper confidence bound acquisition function in the context of parallel selection.
arxiv:1712.00424
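for the upper confidence bound case discussed in the abstract above, the reparameterized monte carlo estimator takes a simple form : draw standard normal samples, map them through the posterior via a cholesky factor, and average the per - sample maximum over the q candidate points. a minimal numpy sketch ( the scaling constant follows the q - ucb form described in this line of work ; treat the exact form as an assumption of this sketch ) :

```python
import numpy as np

def q_ucb(mu, cov, beta=4.0, n_samples=4096, rng=None):
    """Monte Carlo estimate of a parallel (q-point) UCB acquisition via the
    reparameterization trick: z ~ N(0, I) is mapped through the GP posterior
    (mean mu, covariance cov) and the per-sample maximum is averaged."""
    rng = np.random.default_rng(rng)
    # Cholesky factor of the posterior covariance (jitter for stability)
    L = np.linalg.cholesky(cov + 1e-9 * np.eye(len(mu)))
    z = rng.standard_normal((n_samples, len(mu)))
    # q-UCB reparameterization: mean plus a scaled absolute deviation
    samples = mu + np.sqrt(beta * np.pi / 2.0) * np.abs(z @ L.T)
    return samples.max(axis=1).mean()
```

because the estimator is a differentiable function of `mu` and `L`, the same construction supports the gradient - based optimization of acquisition functions that the abstract describes ( e. g. via an autodiff framework in place of numpy ).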
we consider hierarchical structures such as fibonacci sequences and penrose tilings, and examine the consequences of different choices for the definition of isomorphism. in particular we discuss the role such a choice plays with regard to matching rules for such structures.
arxiv:math-ph/9812016
cylindrical algebraic decomposition ( cad ) is a key tool for solving problems in real algebraic geometry and beyond. in recent years a new approach has been developed, where regular chains technology is used to first build a decomposition in complex space. we consider the latest variant of this which builds the complex decomposition incrementally by polynomial and produces cads on whose cells a sequence of formulae are truth - invariant. like all cad algorithms the user must provide a variable ordering which can have a profound impact on the tractability of a problem. we evaluate existing heuristics to help with the choice for this algorithm, suggest improvements and then derive a new heuristic more closely aligned with the mechanics of the new algorithm.
arxiv:1405.6094
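the variable - ordering heuristics evaluated in work like the abstract above are typically cheap greedy rules over the input polynomials. as a rough illustration of the flavour of such rules, here is a brown - style greedy key ( maximum degree, then maximum total degree of terms containing the variable, then number of such terms ) ; the exact tie - breaks and sort direction used by any particular cad implementation are an assumption of this sketch :

```python
# A polynomial is represented as a list of monomials; each monomial maps a
# variable name to its exponent, e.g. x^2*y + y -> [{"x": 2, "y": 1}, {"y": 1}]

def brown_key(var, polys):
    """Greedy ordering key for `var` over a set of input polynomials."""
    monos = [m for p in polys for m in p]
    deg = max((m.get(var, 0) for m in monos), default=0)          # (1) max degree
    tdeg = max((sum(m.values()) for m in monos if m.get(var, 0)), default=0)  # (2)
    nterms = sum(1 for m in monos if m.get(var, 0))               # (3) occurrences
    return (deg, tdeg, nterms)

def brown_order(variables, polys):
    """Order variables by decreasing key, so the 'hardest' variable is
    projected first and the simplest is eliminated last."""
    return sorted(variables, key=lambda v: brown_key(v, polys), reverse=True)
```

heuristics of this shape are what the paper compares against the mechanics of the incremental regular - chains construction when suggesting improvements.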
motivated by a desire to find a useful 2d lorentz - invariant reformulation of the ads _ 5 x s ^ 5 superstring world - sheet theory in terms of physical degrees of freedom we construct the pohlmeyer - reduced version of the corresponding sigma model. the pohlmeyer reduction procedure involves several steps. starting with a coset space string sigma model in the conformal gauge and writing the classical equations in terms of currents one can fix the residual conformal diffeomorphism symmetry and kappa - symmetry and introduce a new set of variables ( related locally to currents but non - locally to the original string coordinate fields ) so that the virasoro constraints are automatically satisfied. the resulting gauge - fixed equations can be obtained from a lagrangian of a non - abelian toda type : a gauged wzw model with an integrable potential coupled also to a set of 2d fermionic fields. a gauge - fixed form of the pohlmeyer - reduced theory can be found by integrating out the 2d gauge field of the gauged wzw model. its small - fluctuation spectrum contains 8 bosonic and 8 fermionic degrees of freedom with equal masses. we conjecture that the reduced model has world - sheet supersymmetry and is ultraviolet - finite. we show that in the special case of the ads _ 2 x s ^ 2 superstring model the reduced theory is indeed supersymmetric : it is equivalent to the n = 2 supersymmetric extension of the sine - gordon model.
arxiv:0711.0155
this paper proposes a novel max - pressure ( mp ) algorithm that incorporates pedestrian traffic into the mp control architecture. pedestrians are modeled as being included in one of two groups : those walking on sidewalks and those queued at intersections waiting to cross. traffic dynamics models for both groups are developed. under the proposed control policy, the signal timings are determined based on the queue length of both vehicles and pedestrians waiting to cross the intersection. the proposed algorithm maintains the decentralized control structure, and the paper proves that it also exhibits the maximum stability property for both vehicles and pedestrians. microscopic traffic simulation results demonstrate that the proposed model can improve the overall operational efficiency - - i. e., reduce person travel delays - - under various vehicle demand levels compared to the original queue - based mp ( q - mp ) algorithm and a recently developed rule - based mp algorithm considering pedestrians. the q - mp ignores the yielding behavior of right - turn vehicles to conflicting pedestrian movements, which leads to high delay for vehicles. on the other hand, the delay incurred by pedestrians is high under the rule - based model since it imposes a large waiting - time tolerance to guarantee the operational efficiency of vehicles. the proposed algorithm outperforms both models since the states of both vehicles and pedestrians are taken into consideration to determine signal timings.
arxiv:2406.19305
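the core of a max - pressure controller like the one sketched in the abstract above is a per - phase pressure score computed from local queues, with the highest - pressure phase activated at each decision step. a heavily simplified sketch ( the data structures, the pedestrian weighting, and the additive combination are illustrative assumptions, not the paper's exact formulation ) :

```python
def phase_pressure(phase, veh_queue, ped_queue, ped_weight=1.0):
    """Pressure of one signal phase: vehicle upstream-minus-downstream queue
    differences over the movements it serves, plus a weighted count of
    pedestrians queued at the crossings it serves."""
    veh = sum(veh_queue[u] - veh_queue[d] for (u, d) in phase["movements"])
    ped = sum(ped_queue[c] for c in phase["crossings"])
    return veh + ped_weight * ped

def choose_phase(phases, veh_queue, ped_queue):
    """Decentralized max-pressure rule: activate the phase with max pressure,
    using only queues local to this intersection."""
    return max(phases, key=lambda p: phase_pressure(p, veh_queue, ped_queue))
```

folding the pedestrian queues into the same pressure score is what lets the controller stay decentralized while serving both user groups, which is the property the stability proof in the paper concerns.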
multiplayer online games are ideal settings for studying the effects of technological disruptions on social behavior. software patches to online games cause significant changes to the game ' s rules and require players to develop new strategies to cope with these disruptions. we surveyed players, analyzed the content of software patch notes, and analyzed changes to the character selection behaviors in more than 53 million matches of dota 2 in the days before and after software patches over a 30 - month period. we found that the severity of patches is correlated with the magnitude of behavioral changes following a patch. we discuss the opportunities of leveraging software patches to online games as a valuable but overlooked empirical instrument for measuring behavioral dynamics.
arxiv:2207.02736
almost all existing deep learning approaches for semantic segmentation tackle this task as a pixel - wise classification problem. yet humans understand a scene not in terms of pixels, but by decomposing it into perceptual groups and structures that are the basic building blocks of recognition. this motivates us to propose an end - to - end pixel - wise metric learning approach that mimics this process. in our approach, the optimal visual representation determines the right segmentation within individual images and associates segments with the same semantic classes across images. the core visual learning problem is therefore to maximize the similarity within segments and minimize the similarity between segments. given a model trained this way, inference is performed consistently by extracting pixel - wise embeddings and clustering, with the semantic label determined by the majority vote of its nearest neighbors from an annotated set. as a result, we present the segsort, as a first attempt using deep learning for unsupervised semantic segmentation, achieving $ 76 \ % $ performance of its supervised counterpart. when supervision is available, segsort shows consistent improvements over conventional approaches based on pixel - wise softmax training. additionally, our approach produces more precise boundaries and consistent region predictions. the proposed segsort further produces an interpretable result, as each choice of label can be easily understood from the retrieved nearest segments.
arxiv:1910.06962
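The inference step described in the abstract above — assign each segment the majority label among its nearest neighbours in an annotated embedding bank — can be sketched in a few lines. Cosine similarity and the choice of k are assumptions here; the paper's exact metric may differ:

```python
import numpy as np
from collections import Counter

def knn_segment_label(seg_embedding, bank_embeddings, bank_labels, k=5):
    """label a segment by majority vote among its k nearest neighbours
    (cosine similarity) in an annotated embedding bank."""
    a = seg_embedding / np.linalg.norm(seg_embedding)
    b = bank_embeddings / np.linalg.norm(bank_embeddings, axis=1, keepdims=True)
    sims = b @ a                      # cosine similarity to every bank entry
    nearest = np.argsort(-sims)[:k]   # indices of the k most similar segments
    return Counter(bank_labels[i] for i in nearest).most_common(1)[0][0]
```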
case of differentiable manifold ) \ begin { equation * } \ int v ( \ gamma ( t ) ) \ cdot \ dot \ gamma ( t ) dt \ in 2 \ pi \ z \ end { equation * } for $ \ pi $ - a. e. $ \ gamma $, where $ \ pi $ is a test plan supported on closed curves. this condition generalizes the conditions that the vorticity is quantized. we also give a representation of every possible solution. in particular, we deduce that the wave function $ \ psi = \ sqrt { \ rho } w $ is in $ w ^ { 1, 2 } ( x ) $ whenever $ \ sqrt { \ rho } \ in w ^ { 1, 2 } ( x ) $.
arxiv:2110.04628
we calculate the intensities and angular distributions of positive and negative muons produced by atmospheric neutrinos. we comment on some sources of uncertainty in the charge ratio. we also draw attention to a potentially interesting signature of neutrino oscillations in the muon charge ratio, and we discuss the prospects for its observation ( which are not quite within the reach of currently planned magnetized detectors ).
arxiv:astro-ph/0210512
locard ' s exchange principle. in 1910, he founded what may have been the first criminal laboratory in the world, after persuading the police department of lyon ( france ) to give him two attic rooms and two assistants. symbolic of the newfound prestige of forensics and the use of reasoning in detective work was the popularity of the fictional character sherlock holmes, written by arthur conan doyle in the late 19th century. he remains a great inspiration for forensic science, especially for the way his acute study of a crime scene yielded small clues as to the precise sequence of events. he made great use of trace evidence such as shoe and tire impressions, as well as fingerprints, ballistics and handwriting analysis, now known as questioned document examination. such evidence is used to test theories conceived by the police, for example, or by the investigator himself. all of the techniques advocated by holmes later became reality, but were generally in their infancy at the time conan doyle was writing. in many of his reported cases, holmes frequently complains of the way the crime scene has been contaminated by others, especially by the police, emphasising the critical importance of maintaining its integrity, a now well - known feature of crime scene examination. he used analytical chemistry for blood residue analysis as well as toxicology examination and determination for poisons. he used ballistics by measuring bullet calibres and matching them with a suspected murder weapon. = = = late 19th – early 20th century figures = = = hans gross applied scientific methods to crime scenes and was responsible for the birth of criminalistics. edmond locard expanded on gross ' work with locard ' s exchange principle which stated " whenever two objects come into contact with one another, materials are exchanged between them ". this means that every contact by a criminal leaves a trace.
alexandre lacassagne, who taught locard, produced autopsy standards on actual forensic cases. alphonse bertillon was a french criminologist and founder of anthropometry ( scientific study of measurements and proportions of the human body ). he used anthropometry for identification, stating that, since each individual is unique, by measuring aspects of physical difference there could be a personal identification system. he created the bertillon system around 1879, a way of identifying criminals and citizens by measuring 20 parts of the body. in 1884, over 240 repeat offenders were caught using the bertillon system, but the system was largely superseded by fingerprinting. joseph thomas walker, known for his work at massachusetts state police chemical laboratory, for developing many modern forensic
https://en.wikipedia.org/wiki/Forensic_science
the phases of the amplitude terms that arise from the $ \ pi \ pi $ interaction are obtained by using a simple realistic model of $ \ pi \ pi $ interaction via virtual $ \ rho $ - meson, instead of chpt. it is shown that the standard chpt approach cannot reproduce the contribution of the $ \ rho $ - meson to the $ \ pi \ pi $ interaction. it is shown that the interference between the terms of amplitude with different cp - parity appears only when the photon is polarized ( linearly or circularly ). instead of measuring the linear polarization, the angular correlation between the $ \ pi ^ { + } \ pi ^ { - } $ and $ e ^ { + } e ^ { - } $ planes in $ k _ { s, l } \ to \ pi ^ { + } \ pi ^ { - } e ^ { + } e ^ { - } $ decay can be studied.
arxiv:hep-ph/0205164
let $ v $ and $ w $ be quiver representations over $ \ mathbb { f } _ 1 $ and let $ k $ be a field. the scalar extensions $ v ^ k $ and $ w ^ k $ are quiver representations over $ k $ with a distinguished, very well - behaved basis. we construct a basis of $ \ mathrm { hom } _ { kq } ( v ^ k, w ^ k ) $ generalising the well - known basis of the morphism spaces between string and tree modules. we use this basis to give a combinatorial characterisation of absolutely indecomposable representations. furthermore, we show that indecomposable representations with finite nice length are absolutely indecomposable. this answers a question of jun and sistko.
arxiv:2403.04597
the friedmann - - lema \ ^ { \ i } tre - - robertson - - walker ( flrw ) solution to the einstein - scalar field system with spatial topology $ \ mathbb { s } ^ 3 $ models a universe that emanates from a singular spacelike hypersurface ( the big bang ), along which various spacetime curvature invariants blow up, only to re - collapse in a symmetric fashion in the future ( the big crunch ). in this article, we give a complete description of the maximal developments of perturbations of the flrw data at the chronological midpoint of its evolution. we show that the perturbed solutions also exhibit curvature blowup along a pair of spacelike hypersurfaces, signifying the stability of the big bang and the big crunch. moreover, we provide a sharp description of the asymptotic behavior of the solution up to the singularities, showing in particular that various time - rescaled solution variables converge to regular tensorfields on the singular hypersurfaces that are close to the corresponding flrw tensorfields. our proof crucially relies on $ l ^ 2 $ - type approximate monotonicity identities in the spirit of the ones we used in our joint works with rodnianski, in which we proved similar results for nearly spatially flat solutions with spatial topology $ \ mathbb { t } ^ 3 $. in the present article, we rely on new ingredients to handle nearly round spatial metrics on $ \ mathbb { s } ^ 3 $, whose curvatures are order - unity near the initial data hypersurface. in particular, our proof relies on i ) the construction of a globally defined spatial vectorfield frame adapted to the symmetries of a round metric on $ \ mathbb { s } ^ 3 $ ; ii ) estimates for the lie derivatives of various geometric quantities with respect to the elements of the frame ; and iii ) sharp estimates for the asymptotic behavior of the flrw solution ' s scale factor near the singular hypersurfaces.
arxiv:1709.06477
we consider wave equations in three space dimensions, and obtain new weighted $ l ^ \ infty $ - $ l ^ \ infty $ estimates for a tangential derivative to the light cone. as an application, we give a new proof of the global existence theorem, which was originally proved by klainerman and christodoulou, for systems of nonlinear wave equations under the null condition. our new proof has the advantage of using neither the scaling nor the pseudo - rotation operators.
arxiv:0706.4158
we propose a new mechanism to generate the electroweak scale within the framework of qcd, which is extended to include conformally invariant scalar degrees of freedom belonging to a larger irreducible representation of $ su ( 3 ) _ c $. the electroweak symmetry breaking is triggered dynamically via the higgs portal by the condensation of the colored scalar field around 1 tev. the mass of the colored boson is restricted to be 350 gev $ \ lesssim m _ s \ lesssim $ 3 tev, with the upper bound obtained from perturbative renormalization group evolution. this implies that the colored boson can be produced at lhc. if the colored boson is electrically charged, the branching fraction of the higgs decaying into two photons can slightly increase, and moreover, it can be produced at future linear colliders. our idea of non - perturbative ew scale generation can serve as a new starting point for more realistic model building in solving the hierarchy problem.
arxiv:1403.4262
hydrogen is playing a crucial role in the green energy transition. yet, its tendency to react with and diffuse into surrounding materials poses a challenge. therefore, it is critical to develop coatings that protect hydrogen - sensitive system components in reactive - hydrogen environments. in this work, we report group iv - v transition metal carbide ( tmc ) thin films as potential candidates for hydrogen - protective coatings in hydrogen radical ( h * ) environments at elevated temperatures. we identify three classes of tmcs based on the reduction of carbides and surface oxides ( tmox ). hfc, zrc, tic, tac, nbc, and vc ( class a ) are found to have a stable carbidic - c ( tm - c ) content, with a further sub - division into partial ( class a1 : hfc, zrc, and tic ) and strong ( class a2 : tac, nbc, and vc ) surface tmox reduction. in contrast to class a, a strong carbide reduction is observed in co2c ( class b ), along with a strong surface tmox reduction. the h * - tmc / tmox interaction is hypothesized to entail three processes : ( i ) hydrogenation of surface c / o - atoms, ( ii ) formation of chx / ohx species, and ( iii ) subsurface c / o - atoms diffusion to the surface vacancies. the number of adsorbed h - atoms required to form chx / ohx species ( i ), and the corresponding energy barriers ( ii ) are estimated based on the change in the gibbs free energy ( deltag ) for the reduction reactions of tmcs and tmox. hydrogenation of surface carbidic - c - atoms is proposed to limit the reduction of tmcs, whereas the reduction of surface tmox is governed by the thermodynamic barrier for forming h2o.
arxiv:2404.14108
distinguishability plays a major role in quantum and statistical physics. when particles are identical their wave function must be either symmetric or antisymmetric under permutations and the number of microscopic states, which determines entropy, is counted up to permutations. when the particles are distinguishable, wavefunctions have no symmetry and each permutation is a different microstate. this binary and discontinuous classification raises a few questions : one may wonder what happens if particles are almost identical, or when the property that distinguishes between them is irrelevant to the physical interactions in a given system. here i sketch a general answer to these questions. for any pair of non - identical particles there is a timescale, $ \ tau _ d $, required for a measurement to resolve the differences between them. below $ \ tau _ d $, particles seem identical, above it - different, and the uncertainty principle provides a lower bound for $ \ tau _ d $. thermal systems admit a conjugate temperature scale, $ t _ d $. above this temperature the system appears to equilibrate before it resolves the differences between particles, below this temperature the system identifies these differences before equilibration. as the physical differences between particles decline towards zero, $ \ tau _ d \ to \ infty $ and $ t _ d \ to 0 $.
arxiv:2109.11657
recent work suggests that heisenberg spin glasses may belong to the same universality class as structural glasses. indeed, finding a lattice equivalent for supercooled liquids would probably allow easier numerical and analytical studies, that may help to answer long - standing questions on the glass transition. supercooled liquids have many peculiar behaviors that should be found in the paramagnetic phase of heisenberg spin glasses if the analogy between the two systems holds. it is with this motivation that we undertake a study of the paramagnetic phase of heisenberg spin glasses. we shall emphasize the role of the energy landscape, with a detailed study of the properties of the inherent structures ( by analogy with supercooled liquids, we call the local minimum of the energy function closest to the current spin configuration the inherent structure ). finding inherent structures will require the development of a new search algorithm. we shall investigate as well the existence of a dynamic transition in the paramagnetic phase. as a matter of fact, both the existence of a complex energy landscape, as well as the existence of a dynamic transition, are distinguished features of the physics of supercooled liquids.
arxiv:1503.08409
the results of ac and dc magnetic susceptibility, isothermal magnetization and heat - capacity measurements as a function of temperature ( t ) are reported for sr3nirho6 and sr3nipto6 containing magnetic chains arranged in a triangular fashion in the basal plane and crystallizing in k4cdcl6 - derived rhombohedral structure. the results establish that both the compounds are magnetically frustrated, however in different ways. in the case of the rh compound, the susceptibility data reveal that there are two magnetic transitions, one in the range 10 - 15 k and the other appearing as a smooth crossover near 45 k, with a large frequency dependence of ac susceptibility in the range 10 to 40 k ; in addition, the features in c ( t ) are smeared out at these temperatures. the magnetic properties are comparable to those of previously known few compounds with partially disordered antiferromagnetic structure. on the other hand, for sr3nipto6, there is no evidence for long - range magnetic ordering down to 1. 8 k despite a large value of the paramagnetic curie temperature.
arxiv:0706.1308
we investigate the orientation of the magnetic field deflections in switchbacks ( sb ) to determine if they are characterised by a possible preferential orientation. we compute the deflection angles of the magnetic field relative to the parker spiral direction for encounters 1 to 9 of the psp mission. we first characterize the distribution of these deflection angles for calm solar wind intervals, and assess the precision of the parker model as a function of distance to the sun. we then assume that the solar wind is composed of two populations, the background calm solar wind and the population of sb, characterized by larger fluctuations. we model the total distribution of deflection angles we observe in the solar wind as a weighted sum of two distinct normal distributions, each corresponding to one of the populations. we fit the observed data with our model using a mcmc algorithm and retrieve the most probable mean vector and covariance matrix coefficients of the two gaussian functions, as well as the population proportion. we first observe that the accuracy of the spiral direction in the ecliptic is a function of radial distance, in a manner that is consistent with psp being near the solar wind acceleration region. we then find that the fitted switchback population presents a systematic bias in its deflections compared to the calm solar wind population. this result holds for all encounters but e6, and regardless of the magnetic field main polarity. this implies a marked preferential orientation of sb in the clockwise direction in the ecliptic plane, and we discuss this result and its implications in the context of the existing switchback formation theories. finally, we report the observation of a 12 - hour patch of sb that systematically deflect in the same direction, so that the magnetic field vector tip within the patch deflects and returns to the parker spiral within a given plane.
arxiv:2203.14591
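The model in the abstract above — deflection angles distributed as a weighted sum of two normal distributions, one for the calm wind and one for switchbacks — can be fitted in several ways. The paper uses an MCMC algorithm on 2-d data; a minimal 1-d expectation-maximization sketch illustrates the same two-population decomposition (the initialization and all parameter choices here are assumptions):

```python
import numpy as np

def fit_two_gaussians(x, iters=300):
    """fit a two-component 1-d gaussian mixture by expectation-maximization.
    returns (weights, means, stds). illustrative initialization: the two
    means start at the 25th and 75th percentiles of the data."""
    w = np.array([0.5, 0.5])
    mu = np.percentile(x, [25.0, 75.0])
    sig = np.array([x.std(), x.std()])
    for _ in range(iters):
        # e-step: responsibility of each component for each sample
        pdf = (w / (sig * np.sqrt(2.0 * np.pi))
               * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2))
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # m-step: re-estimate proportions, means and widths
        n = r.sum(axis=0)
        w, mu = n / len(x), (r * x[:, None]).sum(axis=0) / n
        sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n)
    return w, mu, sig
```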
the transfer matrix method is usually employed to study problems described by $ n $ equations of matrix sturm - liouville ( msl ) kind. in some cases a numerical degradation ( the so called $ \ omega d $ problem ) appears thus impairing the performance of the method. we present here a procedure that can overcome this problem in the case of multilayer systems having piecewise constant coefficients. this is performed by studying the relations between the associated transfer matrix and other transfer matrix variants. in this way it was possible to obtain the matrices which can overcome the $ \ omega d $ problem in the general case and then in problems which are particular cases of the general one. in this framework different strategies are put forward to solve different boundary condition problems by means of these numerically stable matrices. numerical and analytic examples are presented to show that these stable variants are more adequate than other matrix methods to overcome the $ \ omega d $ problem. due to the ubiquity of the msl system, these results can be applied to the study of many elementary excitations in multilayer structures.
arxiv:1503.09038
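For readers unfamiliar with the method discussed above, a minimal transfer-matrix calculation for a 1-d piecewise-constant potential (units hbar = 2m = 1) shows the basic machinery — and also hints at where the instability comes from: for a thick classically forbidden layer the cosh/sinh entries grow exponentially, which is the root of the numerical degradation the paper addresses. This is the naive, unstabilized product, not the paper's stable variants:

```python
import numpy as np

def layer_matrix(E, V, d):
    """(psi, psi') transfer matrix across one layer of constant potential V
    and width d, with k^2 = E - V. for E < V the cosh/sinh entries grow
    exponentially with d -- the source of the numerical degradation."""
    if E > V:
        k = np.sqrt(E - V)
        return np.array([[np.cos(k * d), np.sin(k * d) / k],
                         [-k * np.sin(k * d), np.cos(k * d)]])
    kap = np.sqrt(V - E)
    return np.array([[np.cosh(kap * d), np.sinh(kap * d) / kap],
                     [kap * np.sinh(kap * d), np.cosh(kap * d)]])

def transmission(E, layers):
    """transmission coefficient through a stack of (V, d) layers embedded
    in free (V = 0) leads, from the accumulated transfer matrix."""
    M = np.eye(2)
    for V, d in layers:
        M = layer_matrix(E, V, d) @ M   # matrices compose right-to-left
    k = np.sqrt(E)
    (m11, m12), (m21, m22) = M
    t = 2j * k / (1j * k * (m11 + m22) + k * k * m12 - m21)
    return abs(t) ** 2
```

For a single rectangular barrier this reproduces the textbook tunneling formula T = 1 / (1 + (k^2 + kappa^2)^2 sinh^2(kappa a) / (4 k^2 kappa^2)), and slicing the barrier into thin slabs leaves the answer unchanged.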
loopy and generalized belief propagation are popular algorithms for approximate inference in markov random fields and bayesian networks. fixed points of these algorithms correspond to extrema of the bethe and kikuchi free energy. however, belief propagation does not always converge, which explains the need for approaches that explicitly minimize the kikuchi / bethe free energy, such as cccp and ups. here we describe a class of algorithms that solves this typically nonconvex constrained minimization of the kikuchi free energy through a sequence of convex constrained minimizations of upper bounds on the kikuchi free energy. intuitively one would expect tighter bounds to lead to faster algorithms, which is indeed convincingly demonstrated in our simulations. several ideas are applied to obtain tight convex bounds that yield dramatic speed - ups over cccp.
arxiv:1212.2480
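The algorithmic pattern in the abstract above — replace a nonconvex minimization by a sequence of convex minimizations of upper bounds that touch the objective at the current iterate — is the majorize-minimize idea. A toy 1-d instance with a quadratic bound (not the Kikuchi free energy itself, where the convex bounds are constructed differently):

```python
import math

def minimize_by_convex_bounds(f, df, x0, lipschitz, iters=100):
    """majorize-minimize: at each step build a convex quadratic upper bound
    that touches f at the current point, then minimize the bound exactly.
    if df is L-lipschitz then f(x) <= f(xk) + df(xk)(x - xk) + L/2 (x - xk)^2,
    and the bound's exact minimizer is xk - df(xk)/L, so f never increases."""
    x = x0
    for _ in range(iters):
        x = x - df(x) / lipschitz   # exact minimizer of the quadratic bound
    return x

# toy nonconvex objective; df has lipschitz constant at most 1 + 0.2 = 1.2
f = lambda x: math.cos(x) + 0.1 * x * x
df = lambda x: -math.sin(x) + 0.2 * x
xmin = minimize_by_convex_bounds(f, df, x0=1.0, lipschitz=1.2)
```

A tighter bound (smaller valid `lipschitz`) gives larger steps and faster convergence, which is the intuition behind the paper's observed speed-ups over CCCP.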
the lhc data on jet fragmentation function and jet shapes in pbpb collisions at center - of - mass energy 2. 76 tev per nucleon pair are analyzed and interpreted in the framework of the pyquen jet quenching model. a specific modification of longitudinal and radial jet profiles in most central pbpb collisions as compared with pp data is close to that obtained with pyquen simulations taking into account wide - angle radiative and collisional partonic energy loss. the contribution of radiative and collisional loss to the medium - modified intra - jet structure is estimated.
arxiv:1410.0147
this paper is about simulating the spread of opinions in a society and about finding ways to counteract that spread. to abstract away from potentially emotionally laden opinions, we instead simulate the spread of a zombie outbreak in a society. the virus causing this outbreak is different from traditional approaches : it not only causes a binary outcome ( healthy vs infected ) but rather a continuous outcome. to counteract the outbreak, a discrete number of infection - level specific treatments is available. this corresponds to acts of mild persuasion or the threats of legal action in the opinion spreading use case. this paper offers a genetic and a cultural algorithm that find the optimal mixture of treatments during the run of the simulation. they are assessed in a number of different scenarios. it is shown that, albeit far from being perfect, the cultural algorithm delivers superior performance at lower computational expense.
arxiv:1401.6420
the observations of carbon isotopic ratios in evolved stars suggest that non standard mixing is acting in low mass stars as they are ascending the red giant branch. we propose a simple consistent mechanism, based on the most recent developments in the description of rotation - induced mixing by zahn ( 1992 ), which simultaneously accounts for the low $ ^ { 12 } $ c / $ ^ { 13 } $ c ratios in globular cluster and field pop ii giants and for the lithium abundances in metal - poor giant stars. it also leads to the destruction of $ ^ 3 $ he produced on the main sequence in low mass stars. this should both naturally account for the recent measurements of $ ^ 3 $ he / h in galactic hii regions and allow for high values of $ ^ 3 $ he observed in some planetary nebulae.
arxiv:astro-ph/9511080
the three - dimensional structure of proteins plays a crucial role in determining their function. protein structure prediction methods, like alphafold, offer rapid access to a protein structure. however, large protein complexes cannot be reliably predicted, and proteins are dynamic, making it important to resolve their full conformational distribution. single - particle cryo - electron microscopy ( cryo - em ) is a powerful tool for determining the structures of large protein complexes. importantly, the numerous images of a given protein contain underutilized information about conformational heterogeneity. these images are very noisy projections of the protein, and traditional methods for cryo - em reconstruction are limited to recovering only one or a few consensus conformations. in this paper, we introduce cryosphere, which is a deep learning method that uses a nominal protein structure ( e. g., from alphafold ) as input, learns how to divide it into segments, and moves these segments as approximately rigid bodies to fit the different conformations present in the cryo - em dataset. this approach provides enough constraints to enable meaningful reconstructions of single protein structural ensembles. we demonstrate this with two synthetic datasets featuring varying levels of noise, as well as two real datasets. we show that cryosphere is very resilient to the high levels of noise typically encountered in experiments, where we see consistent improvements over the current state - of - the - art for heterogeneous reconstruction.
arxiv:2407.01574
we consider integrable matrix product states ( mps ) in integrable spin chains and show that they correspond to " operator valued " solutions of the so - called twisted boundary yang - baxter ( or reflection ) equation. we argue that the integrability condition is equivalent to a new linear intertwiner relation, which we call the " square root relation ", because it involves half of the steps of the reflection equation. it is then shown that the square root relation leads to the full boundary yang - baxter equations. we provide explicit solutions in a number of cases characterized by special symmetries. these correspond to the " symmetric pairs " $ ( su ( n ), so ( n ) ) $ and $ ( so ( n ), so ( d ) \ otimes so ( n - d ) ) $, where in each pair the first and second elements are the symmetry groups of the spin chain and the integrable state, respectively. these solutions can be considered as explicit representations of the corresponding twisted yangians, that are new in a number of cases. examples include certain concrete mps relevant for the computation of one - point functions in defect ads / cft.
arxiv:1812.11094
we studied the eclipsing ultraluminous x - ray source cg x - 1 in the circinus galaxy, re - examining two decades of { \ it chandra } and { \ it xmm - newton } observations. the short binary period ( 7. 21 hr ) and high luminosity ( $ l _ { \ rm x } \ approx 10 ^ { 40 } $ erg s $ ^ { - 1 } $ ) suggest a wolf - rayet donor, close to filling its roche lobe ; this is the most luminous wolf - rayet x - ray binary known to - date, and a potential progenitor of a gravitational - wave merger. we phase - connect all observations, and show an intriguing dipping pattern in the x - ray lightcurve, variable from orbit to orbit. we interpret the dips as partial occultation of the x - ray emitting region by fast - moving clumps of compton - thick gas. we suggest that the occulting clouds are fragments of the dense shell swept - up by a bow shock ahead of the compact object, as it orbits in the wind of the more massive donor.
arxiv:1903.02327
this paper examines the value of a cancellable european option in a finite time horizon setting. the specifications of this generalized european option allow the seller to cancel the option at any point in time for a fixed penalty paid directly to the holder. here, we provide an explicit valuation formula for the european game call where the early cancellation time is obtained iteratively.
arxiv:1304.5962
when a small number of poisoned samples are injected into the training dataset of a deep neural network, the network can be induced to exhibit malicious behavior during inferences, which poses potential threats to real - world applications. while they have been intensively studied in classification, backdoor attacks on semantic segmentation have been largely overlooked. unlike classification, semantic segmentation aims to classify every pixel within a given image. in this work, we explore backdoor attacks on segmentation models to misclassify all pixels of a victim class by injecting a specific trigger on non - victim pixels during inferences, which is dubbed influencer backdoor attack ( iba ). iba is expected to maintain the classification accuracy of non - victim pixels and mislead classifications of all victim pixels in every single inference and could be easily applied to real - world scenes. based on the context aggregation ability of segmentation models, we proposed a simple, yet effective, nearest - neighbor trigger injection strategy. we also introduce an innovative pixel random labeling strategy which maintains optimal performance even when the trigger is placed far from the victim pixels. our extensive experiments reveal that current segmentation models do suffer from backdoor attacks, demonstrate iba ' s real - world applicability, and show that our proposed techniques can further increase attack performance.
arxiv:2303.12054
we study the phase transition between the quantum hall liquid state and the insulating state within the framework of the chern - simons - landau - ginzburg theory of the quantum hall effect. for the transition induced by a background periodic potential in the absence of disorder, the model is described by a relativistic scalar field coupled to the chern - simons gauge field. for this system, we show that the transition is of the first order, induced by the fluctuations of the gauge field, rather than a second - order transition with a statistical - angle - dependent scaling exponent.
arxiv:cond-mat/9403038
curvature of nanomagnets can be used to induce chiral textures in the magnetization field. here we perform analytical calculations and micromagnetic simulations aiming to analyze the stability of in - surface magnetization configurations in toroidal nanomagnets. we find that although toroidal vortex - like configurations are highly stable in magnetic nanotori, the interplay between geometry and magnetic properties promotes the competition between effective interactions yielding the development of a core in a vortex state when the aspect ratio between internal and external radii of the nanotorus is $ \ gtrsim0. 75 $.
arxiv:1812.03729
we apply the gradient - newton - galerkin - algorithm ( gnga ) of neuberger & swift to find solutions to a semilinear elliptic dirichlet problem on the region whose boundary is the koch snowflake. in a recent paper, we described an accurate and efficient method for generating a basis of eigenfunctions of the laplacian on this region. in that work, we used the symmetry of the snowflake region to analyze and post - process the basis, rendering it suitable for input to the gnga. the gnga uses newton ' s method on the eigenfunction expansion coefficients to find solutions to the semilinear problem. this article introduces the bifurcation digraph, an extension of the lattice of isotropy subgroups. for our example, the bifurcation digraph shows the 23 possible symmetry types of solutions to the pde and the 59 generic symmetry - breaking bifurcations among these symmetry types. our numerical code uses continuation methods, and follows branches created at symmetry - breaking bifurcations, so the human user does not need to supply initial guesses for newton ' s method. starting from the known trivial solution, the code automatically finds at least one solution with each of the symmetry types that we predict can exist. such computationally intensive investigations necessitated the writing of automated branch following code, whereby symmetry information was used to reduce the number of computations per gnga execution and to make intelligent branch following decisions at bifurcation points.
arxiv:1010.1054
motivated by the classical picture of heat flow we construct a stationary temperature gradient in a relativistic microscopic transport model. employing the relativistic navier - stokes ansatz we extract the heat conductivity { \ kappa } for a massless boltzmann gas using only binary collisions with isotropic cross sections. we compare the numerical results to analytical expressions from different theories and discuss the final results. the directly extracted value for the heat conductivity can serve as a literature reference within the numerical uncertainties.
arxiv:1301.1190
hierarchical vaes have emerged in recent years as a reliable option for maximum likelihood estimation. however, instability issues and demanding computational requirements have hindered research progress in the area. we present simple modifications to the very deep vae to make it converge up to $ 2. 6 \ times $ faster, save up to $ 20 \ times $ in memory load and improve stability during training. despite these changes, our models achieve comparable or better negative log - likelihood performance than current state - of - the - art models on all $ 7 $ commonly used image datasets we evaluated on. we also make an argument against using 5 - bit benchmarks as a way to measure hierarchical vae ' s performance due to undesirable biases caused by the 5 - bit quantization. additionally, we empirically demonstrate that roughly $ 3 \ % $ of the hierarchical vae ' s latent space dimensions is sufficient to encode most of the image information, without loss of performance, opening up the doors to efficiently leverage the hierarchical vaes ' latent space in downstream tasks. we release our source code and models at https : / / github. com / rayhane - mamah / efficient - vdvae.
arxiv:2203.13751
we present extensive numerical simulations of the axelrod ' s model for social influence, aimed at understanding the formation of cultural domains. this is a nonequilibrium model with short range interactions and a remarkably rich dynamical behavior. we study the phase diagram of the model and uncover a nonequilibrium phase transition separating an ordered ( culturally polarized ) phase from a disordered ( culturally fragmented ) one. the nature of the phase transition can be continuous or discontinuous depending on the model parameters. at the transition, the size of cultural regions is power - law distributed.
arxiv:cond-mat/0003111
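the dynamics of axelrod's model can be sketched in a few lines: each agent carries F cultural features with q possible traits each; a randomly chosen agent interacts with a neighbor with probability equal to their cultural overlap, and on interaction copies one differing feature. the 1d ring geometry and parameter values below are illustrative choices, not the paper's setup.

```python
import numpy as np

# minimal sketch of axelrod's model of cultural dissemination on a 1d ring.
# each of n agents holds F features, each taking one of q traits.
def axelrod_step(culture, rng):
    n, F = culture.shape
    i = rng.integers(n)
    j = (i + 1) % n                      # right neighbor on the ring
    same = culture[i] == culture[j]
    overlap = same.mean()                # fraction of shared features
    # interact with probability = overlap, only if not identical/disjoint
    if 0 < overlap < 1 and rng.random() < overlap:
        f = rng.choice(np.flatnonzero(~same))
        culture[i, f] = culture[j, f]    # copy one differing trait

rng = np.random.default_rng(1)
culture = rng.integers(0, 10, size=(50, 5))   # n=50 agents, F=5, q=10
for _ in range(20000):
    axelrod_step(culture, rng)
```

running such a simulation to an absorbing state (every neighbor pair either identical or fully distinct) and measuring the distribution of cultural region sizes across many realizations is how the ordered/disordered phases described above are mapped out.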
most vision-and-language pretraining research focuses on english tasks. however, the creation of multilingual multimodal evaluation datasets (e.g. multi30k, xgqa, xvnli, and marvl) poses a new challenge in finding high-quality training data that is both multilingual and multimodal. in this paper, we investigate whether machine translating english multimodal data can be an effective proxy for the lack of readily available multilingual data. we call this framework td-mml: translated data for multilingual multimodal learning, and it can be applied to any multimodal dataset and model. we apply it to both pretraining and fine-tuning data with a state-of-the-art model. in order to prevent models from learning from low-quality translated text, we propose two metrics for automatically removing such translations from the resulting datasets. in experiments on five tasks across 20 languages in the iglue benchmark, we show that translated data can provide a useful signal for multilingual multimodal learning, both at pretraining and fine-tuning.
arxiv:2210.13134
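the abstract does not specify the two filtering metrics td-mml proposes, so the sketch below uses a purely illustrative length-ratio heuristic to show the general shape of such a translation filter: score each (source, translation) pair and drop pairs outside a plausibility window.

```python
# hedged sketch of automatic filtering of low-quality machine translations.
# the actual td-mml metrics are not specified in the abstract; this
# token-length-ratio heuristic is a hypothetical stand-in.
def keep_translation(src, tgt, low=0.5, high=2.0):
    """keep a (source, translation) pair if the token-length ratio is sane."""
    ratio = max(len(tgt.split()), 1) / max(len(src.split()), 1)
    return low <= ratio <= high

pairs = [("a cat on a mat", "un chat sur un tapis"),
         ("a cat on a mat", "chat")]        # degenerate, truncated translation
kept = [p for p in pairs if keep_translation(*p)]   # only the first survives
```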
there is no denying the tremendous leap in the performance of machine learning methods in the past half-decade. some might even say that specific sub-fields in pattern recognition, such as machine vision, are as good as solved, reaching human and super-human levels. arguably, lack of training data and computation power are all that stand between us and solving the remaining ones. in this position paper we highlight cases in vision which are challenging to machines and even to human observers, in order to show limitations of contemporary models that are hard to ameliorate by following the current trend of increasing training data, network capacity or computational power. moreover, we claim that attempting to do so is in principle a suboptimal approach. we provide a taster of such examples in the hope of encouraging and challenging the machine learning community to develop new directions to solve the said difficulties.
arxiv:1802.04834
the main objective of this article is a constructive generalization of the holomorphic power and laurent series expansions in c to dimension 3, using the framework of hypercomplex function theory. for this reason, the first part of this article deals with generalized fourier and taylor series expansions in the space of square-integrable quaternion-valued functions which possess peculiar properties regarding the hypercomplex derivative and primitive. in analogy to the complex one-dimensional case, both series expansions are orthogonal series with respect to the unit ball in r^3, and their series coefficients can be explicitly (one-to-one) linked with each other. furthermore, very compact and efficient representation formulae (recurrence, closed-form) for the elements of the orthogonal bases are presented. the latter results are then used to construct a new orthonormal basis of outer solid spherical monogenics in the space of square-integrable quaternion-valued functions. this finally leads to the definition of a generalized laurent series expansion for the spherical shell.
arxiv:1007.1764
we propose a type of thermal interface material incorporating insulating nanowires with partially metallic coating in an insulating polymer matrix. large thermal conductivity can be obtained due to thermal percolation while electrical insulation is maintained by controlling $c_{\rm m}\varphi < \varphi_{\rm c}^{\rm e}$ and $\varphi > \varphi_{\rm c}^{\rm th}$, where $\varphi$ is the volume fraction of filler, $c_{\rm m}$ is the metallic coating fraction, and $\varphi_{\rm c}^{\rm e}$ and $\varphi_{\rm c}^{\rm th}$ are the electrical and thermal percolation thresholds, respectively. the electrical conductivity of such composite materials can further be regulated by the coating configuration. in this regard, we propose the concept of "thermal-percolation electrical-insulation", providing a guide to designing efficient hybrid thermal interface materials.
arxiv:2109.09921
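the design window stated above, electrical insulation via $c_{\rm m}\varphi < \varphi_{\rm c}^{\rm e}$ together with thermal percolation via $\varphi > \varphi_{\rm c}^{\rm th}$, can be checked directly. the threshold values in the example are arbitrary placeholders, not values from the paper.

```python
# illustrative check of the stated design window: electrical insulation
# requires c_m * phi < phi_c_e, thermal percolation requires phi > phi_c_th.
# the numerical thresholds below are hypothetical placeholders.
def in_design_window(phi, c_m, phi_c_e, phi_c_th):
    return (c_m * phi < phi_c_e) and (phi > phi_c_th)

# with hypothetical thresholds phi_c_e = 0.16 and phi_c_th = 0.10:
ok = in_design_window(phi=0.20, c_m=0.5, phi_c_e=0.16, phi_c_th=0.10)   # True
bad = in_design_window(phi=0.20, c_m=1.0, phi_c_e=0.16, phi_c_th=0.10)  # False
```

the second call fails because a fully coated nanowire filler ($c_{\rm m} = 1$) pushes the effective metallic fraction past the electrical percolation threshold, which is exactly the regime the partial coating is designed to avoid.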
we present an overview of eclipsing systems of the hw virginis type, based on space observations from the tess mission. we perform a detailed analysis of the properties of aa dor, which was monitored for almost a full year. this excellent time-series dataset permitted us to search for both stellar pulsations and eclipse timing variations. in addition, we used the high-precision trigonometric parallax from gaia early data release 3 to make an independent determination of the fundamental stellar parameters. no convincing pulsations were detected down to a limit of 76 parts per million; however, we detected one peak with a false alarm probability of 0.2%. the 20-second cadence data being collected during year 3 should confirm or reject this detection. from eclipse timing measurements we were able to confirm that the orbital period is stable, with an upper limit to any period change of $5.75 \cdot 10^{-13}$ s/s. the apparent offset of the secondary eclipse is consistent with the predicted rømer delay when the primary mass is that of a canonical extended horizontal branch star. using the parallax and a spectral energy distribution corroborates that the mass of the primary in aa dor is canonical, and that its radius and luminosity are consistent with an evolutionary state beyond core helium burning. the mass of the secondary is found to be at the limit of hydrogen burning.
arxiv:2105.01074
audio-visual question answering (avqa) requires reference to video content and auditory information, followed by correlating the question to predict the most precise answer. although mining deeper layers of audio-visual information to interact with questions facilitates the multimodal fusion process, the redundancy of audio-visual parameters tends to reduce the generalization of the inference engine to multiple question-answer pairs in a single video. indeed, the natural heterogeneous relationship between audio-visual content and text makes perfect fusion challenging. to prevent high-level audio-visual semantics from weakening the network's adaptability to diverse question types, we propose a framework for performing mutual correlation distillation (mcd) to aid question inference. mcd is divided into three main steps: 1) first, a residual structure is utilized to enhance the audio-visual soft associations based on self-attention; then key local audio-visual features relevant to the question context are captured hierarchically by shared aggregators and coupled in the form of clues with specific question vectors. 2) second, knowledge distillation is enforced to align audio-visual-text pairs in a shared latent space to narrow the cross-modal semantic gap. 3) finally, the audio-visual dependencies are decoupled by discarding the decision-level integrations. we evaluate the proposed method on two publicly available datasets containing multiple question-and-answer pairs, i.e., music-avqa and avqa. experiments show that our method outperforms other state-of-the-art methods, and one interesting finding is that removing deep audio-visual features during inference can effectively mitigate overfitting. the source code is released at http://github.com/rikeilong/mcd-foravqa.
arxiv:2403.06679
in this work we provide an exact and efficient numerical approach to simulate multi-time correlation functions in the mahan-nozières-de dominicis model, which crudely mimics the spectral properties of doped two-dimensional semiconductors such as monolayer transition metal dichalcogenides. we apply this approach to study the coherent two-dimensional electronic spectra of the model. we show that several experimentally observed phenomena, such as peak asymmetry and coherent oscillations in the waiting-time dependence of the trion-exciton cross peaks of the two-dimensional rephasing spectrum, emerge naturally in our approach. additional features are also present which find no correspondence with experimentally expected behavior. we trace these features to the infinite-hole-mass property of the model. we use this understanding to construct an efficient approach which filters out configurations associated with the lack of exciton recoil, enabling the connection to previous work and providing a route to the construction of realistic two-dimensional spectra over a broad doping range in two-dimensional semiconductors.
arxiv:2206.01799
wood has been with us since time immemorial, as part of our environment, housing and tools, and now it has gained momentum, as it is clear that wood improves our lifestyle. because of its healthiness, resistance, ecological value and comfort, wood is important for all of us, no matter what our lifestyle is. the woodtouch project aims to open a completely new market for furniture and interior design companies, enabling touch interaction between the user and wooden furniture surfaces. why not switch on or dim the lights by touching a wooden table? why not turn on the heating system? why not use wood as a touch-sensitive surface for domotic control? furniture designed with this novel technology offers a wooden outer image and has different touch-sensitive areas through which the user is able to control all sorts of electric appliances by touching a wooden surface.
arxiv:1307.0951
breakthrough experiments have recently realized fractional chern insulators (fcis) in moiré materials. however, all states observed are abelian; the possible existence of more exotic non-abelian fcis remains controversial both experimentally and theoretically. here, we investigate the competition between charge density wave (cdw) order, gapless composite fermion liquids and non-abelian moore-read states at half-filling of a moiré band. although groundstate (quasi-)degeneracies and spectral flow are not sufficient for distinguishing between charge order and moore-read states, we find evidence using entanglement spectroscopy that both these states of matter can be realized with realistic coulomb interactions. in a double twisted bilayer graphene model, we find transitions between these phases by tuning the coupling strength between the layers: at weak coupling there is a composite fermi liquid phase, and at strong coupling a low-entanglement state signaling cdw order emerges. remarkably, however, there is compelling evidence for a non-abelian moore-read fci phase at intermediate coupling.
arxiv:2405.08887