Searching for unfamiliar American Sign Language (ASL) signs is challenging for learners because, unlike spoken languages, they cannot type a text-based query to look up an unfamiliar sign. Advances in isolated sign recognition have enabled the creation of video-based dictionaries, allowing users to submit a video and receive a list of the closest matching signs. Previous HCI research using Wizard-of-Oz prototypes has explored interface designs for ASL dictionaries. Building on these studies, we incorporate their design recommendations and leverage state-of-the-art sign-recognition technology to develop an automated video-based dictionary. We also present findings from an observational study with twelve novice ASL learners who used this dictionary during video-comprehension and question-answering tasks. Our results address human-AI interaction challenges not covered in previous WoZ research, including recording and resubmitting signs, unpredictable outputs, system latency, and privacy concerns. These insights offer guidance for designing and deploying video-based ASL dictionary systems.
arxiv:2504.05857
The kinematic moment of inertia of rare-earth even-even nuclei was calculated using a three-parameter, energy-based expression. The plot of the kinematic moment of inertia versus nuclear spin shows better sensitivity to backbending than the energy plot.
arxiv:1302.3875
Monads are extensively used nowadays to abstractly model a wide range of computational effects such as nondeterminism, statefulness, and exceptions. It turns out that equipping a monad with a (uniform) iteration operator satisfying a set of natural axioms allows for modelling iterative computations just as abstractly. The emerging monads are called complete Elgot monads. It has been shown recently that extending complete Elgot monads with free effects (e.g. operations of sending/receiving messages over channels) canonically leads to generalized coalgebraic resumption monads, previously used as semantic domains for non-wellfounded guarded processes. In this paper, we continue the study of the relationship between abstract complete Elgot monads and those that capture coalgebraic resumptions by comparing the corresponding categories of (Eilenberg-Moore) algebras. To this end we first provide a characterization of the latter category; even more generally, we formulate this characterization in terms of Uustalu's parametrized monads. This is further used for establishing a characterization of complete Elgot monads as precisely those monads whose algebras are coherently equipped with the structure of algebras of coalgebraic resumption monads.
arxiv:1603.02148
In this work, we have analytically investigated the insulator/superconductor phase transition in the presence of a $d$-dimensional Gauss-Bonnet AdS soliton background. Using the Sturm-Liouville eigenvalue method, we have calculated the value of the critical chemical potential $\mu_c$ in any arbitrary dimension $d \geq 5$. We have then studied the condensation operator values and charge density in terms of the chemical potential, and discussed the $d = 5, 6, 7$ cases using our general results in $d$ dimensions. Our analytical results agree very well with the numerical findings in the literature.
arxiv:1901.10538
We present an extension and generalization of Sahlqvist-van Benthem correspondence to the case of distribution-free modal logic, with or without negation and/or implication connectives. We follow a reductionist strategy, reducing the correspondence problem at hand to the same problem, but for a suitable system of sorted modal logic (the modal companion of the distribution-free system). The reduction, via a fully abstract translation, builds on a duality between normal lattice expansions and sorted residuated frames with relations (a generalization of classical Kripke frames with relations). The approach is scalable and can be generalized to other systems, with or without distribution, such as distributive modal logic, or substructural logics with or without additional modal operators.
arxiv:2503.12523
We report the discovery of a double-peaked Lyman-$\alpha$ (Ly$\alpha$) emitter (LAE) at $z = 3.2177 \pm 0.0001$ in VLT/MUSE data. The galaxy is strongly lensed by the galaxy cluster RXC J0018.5+1626, recently observed in the RELICS survey, and the double-peaked Ly$\alpha$ emission is clearly detected in the two counter images in the MUSE field of view. We measure a relatively high Ly$\alpha$ rest-frame equivalent width (EW) of $\mathrm{EW}_{\mathrm{Ly}\alpha,0} = (63 \pm 2)\,\mathring{\mathrm{A}}$. Additional near-infrared (NIR) spectroscopy allows us to measure the H$\beta$, [OIII]$\lambda 4959\,\mathring{\mathrm{A}}$, and [OIII]$\lambda 5007\,\mathring{\mathrm{A}}$ emission lines, which show moderate rest-frame EWs of the order of a few $\sim 10-100\,\mathring{\mathrm{A}}$, an [OIII]$\lambda 5007\,\mathring{\mathrm{A}}$/H$\beta$ ratio of $4.8 \pm 0.7$, and a lower limit on the [OIII]/[OII] ratio of $> 5.6$. The galaxy has very blue UV-continuum slopes of $\beta_{\mathrm{FUV}} = -2.23 \pm 0.06$ and $\beta_{\mathrm{NUV}} = -3.0 \pm 0.2$, and is magnified by factors $\mu \sim 7-10$ in each of the two images, thus enabling a view into a low-mass ($M_{\star} \simeq 10^{7.5}\,\mathrm{M}_{\odot}$) high-redshift galaxy analog. Notably, the blue peak of the Ly$\alpha$ profile is significantly stronger than the red peak, which suggests an inflow of matter and possibly very low
arxiv:2204.09668
A protocol is provided to reconstruct the Wigner function of the motional state of a trapped ion via fluorescence detection on another ion in the same trap. This "sympathetic tomography" of a dark ion without optical transitions suitable for state measurements is based on mapping its motional state onto one of the collective modes of the ion pair. The quantum state of this vibrational eigenmode is subsequently measured through sideband excitation of the bright ion. Physical processes to implement the desired state transfer and read-out are derived, and the feasibility of the scheme for different mass ratios is evaluated.
arxiv:1110.4804
Information engines can use structured environments as a resource to generate work by randomizing ordered inputs and leveraging the increased Shannon entropy to transfer energy from a thermal reservoir to a work reservoir. We give a broadly applicable expression for the work production of an information engine, generally modeled as a memoryful channel that communicates inputs to outputs as it interacts with an evolving environment. The expression establishes that an information engine must have more than one memory state in order to leverage input environment correlations. To emphasize this functioning, we designed an information engine powered solely by temporal correlations and not by statistical biases, as employed by previous engines. Key to this is the engine's ability to synchronize: the engine automatically returns to a desired dynamical phase when thrown into an unwanted, dissipative phase by corruptions in the input, that is, by unanticipated environmental fluctuations. This self-correcting mechanism is robust up to a critical level of corruption, beyond which the system fails to act as an engine. We give explicit analytical expressions for both work and critical corruption level, and summarize engine performance via a thermodynamic-function phase diagram over engine control parameters. The results reveal a new thermodynamic mechanism based on nonergodicity that underlies error correction as it operates to support resilient engineered and biological systems.
arxiv:1606.08506
Let $\mu$ be a non-trivial probability measure on the unit circle $\partial\mathbb{D}$, $w$ the density of its absolutely continuous part, $\alpha_n$ its Verblunsky coefficients, and $\phi_n$ its monic orthogonal polynomials. In this paper we compute the coefficients of $\phi_n$ in terms of the $\alpha_n$. If the function $\log w$ is in $L^1(d\theta)$, we do the same for its Fourier coefficients. As an application we prove that if $\alpha_n \in \ell^4$ and $Q(z) = \sum_{m=0}^{n} q_m z^m$ is a polynomial, then with $\bar{Q}(z) = \sum_{m=0}^{n} \bar{q}_m z^m$ and $S$ the left shift operator on sequences, we have $|Q(e^{i\theta})|^2 \log w(\theta) \in L^1(d\theta)$ if and only if $\{\bar{Q}(S)\alpha\}_n \in \ell^2$. We also study relative ratio asymptotics of the reversed polynomials $\phi_{n+1}^*(\mu)/\phi_n^*(\mu) - \phi_{n+1}^*(\nu)/\phi_n^*(\nu)$ and provide a necessary and sufficient condition, in terms of the Verblunsky coefficients of the measures $\mu$ and $\nu$, for this difference to converge to zero uniformly on compact subsets of $\mathbb{D}$.
arxiv:math/0509192
We introduce a simple extensive-form algorithm for finding equilibria of two-player, zero-sum games. The algorithm is realization-equivalent to a generalized form of fictitious play. We compare its performance to that of a similar extensive-form fictitious play algorithm and a counterfactual regret minimization algorithm. All three algorithms share the same advantages over normal-form fictitious play in terms of reduced storage requirements and computational complexity. The new algorithm is intuitive and straightforward to implement, making it an appealing option for those looking for a quick and easy game-solving tool.
arxiv:2310.09658
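For context on the baseline named above: classical normal-form fictitious play can be sketched in a few lines. This is the textbook algorithm, not the paper's extensive-form variant, and the payoff matrix (matching pennies) is just an illustrative example:

```python
import numpy as np

def fictitious_play(payoff, iters=20000):
    """Classical normal-form fictitious play for a two-player zero-sum game.

    payoff[i][j] is the row player's payoff when row plays i and column
    plays j. Each player best-responds to the opponent's empirical mixture;
    in zero-sum games the empirical mixtures converge to a Nash equilibrium.
    """
    A = np.asarray(payoff, dtype=float)
    m, n = A.shape
    row_counts = np.zeros(m)  # how often each row action has been played
    col_counts = np.zeros(n)
    row_counts[0] += 1        # arbitrary initial actions
    col_counts[0] += 1
    for _ in range(iters):
        # Best responses to the opponent's empirical strategy so far.
        i = np.argmax(A @ (col_counts / col_counts.sum()))
        j = np.argmin((row_counts / row_counts.sum()) @ A)
        row_counts[i] += 1
        col_counts[j] += 1
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

# Matching pennies: the unique equilibrium is (1/2, 1/2) for both players.
row, col = fictitious_play([[1, -1], [-1, 1]])
```

The storage cost here is one full payoff matrix plus two count vectors, which is exactly what the extensive-form variants discussed in the abstract aim to reduce for large games.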
The evolution of an accretion disk, formed as a consequence of the disruption of a star by a black hole, is followed by solving the hydrodynamic equations numerically. The present investigation aims to study the dependence of the resulting light curves on the dynamical and physical properties of such a transient disk during its existence. One of the main results derived from our simulations is that black-body fits of X-ray data tend to overestimate the true mean disk temperature. The temperature derived from black-body fits should be identified with the color X-ray temperature rather than with the average value derived from the true temperature distribution along the disk. The time interval between the beginning of the circularization of the bound debris and the beginning of the accretion process by the black hole is determined by the viscous timescale, which also fixes the rising part of the resulting light curve. The luminosity peak coincides with the beginning of matter accretion by the black hole, and the late evolution of the light curve depends on the evolution of the debris fallback rate. Peak bolometric luminosities are in the range 10^45 - 10^46 erg s^-1, whereas peak luminosities in soft X-rays (0.2 - 2.0 keV) are typically one order of magnitude lower. The timescale derived from our preferred models for the flare luminosity to decay by two orders of magnitude is about 3-4 years. Predicted soft X-ray light curves were fitted to data on galaxies in which variable X-ray emission, related to tidal events, was detected.
arxiv:1105.2060
Effects of inhomogeneities on observations have been vastly studied using perturbative methods, N-body simulations, and Swiss-cheese solutions to the Einstein equations. In nearly all cases, such setups assume vanishing spatial background curvature. While a spatially flat Friedmann-Lemaitre-Robertson-Walker model is in accordance with observations, a non-vanishing curvature is not ruled out. It is therefore important to note that, as has been pointed out in the literature, one-dimensional averages might not converge to volume averages in non-Euclidean space. If this is indeed the case, it will affect the interpretation of observations in spacetimes with non-vanishing average spatial curvature. This possibility is therefore studied here by computing the integrated expansion rate and shear, the accumulated density contrast, and fluctuations in the redshift-distance relation in Swiss-cheese models with different background curvatures. It is found that differences in the mean and dispersion of these quantities in the different models are small and naturally attributable to differences in background expansion rate and density contrasts. Thus, the study does not yield an indication that the relationship between one-dimensional spatial averages and volume averages depends significantly on background curvature.
arxiv:1909.00610
Deep neural networks are prone to various bias issues, jeopardizing their applications for high-stakes decision-making. Existing fairness methods typically offer a fixed accuracy-fairness trade-off, since the weights of the well-trained model are a fixed point (the fairness optimum) in the weight space. Nevertheless, more flexible accuracy-fairness trade-offs at inference time are practically desirable, since: 1) the stakes of the same downstream task can vary for different individuals, and 2) different regions have diverse laws or regulations for fairness. With previous fairness methods, we would have to train multiple models, each offering a specific level of accuracy-fairness trade-off. This is often computationally expensive, time-consuming, and difficult to deploy, making it less practical for real-world applications. To address this problem, we propose You Only Debias Once (YODO) to achieve in-situ flexible accuracy-fairness trade-offs at inference time, using a single model trained only once. Instead of pursuing one individual fixed point (the fairness optimum) in the weight space, we aim to find a "line" in the weight space that connects the accuracy-optimum and fairness-optimum points using a single model. Points (models) on this line implement varying levels of accuracy-fairness trade-off. At inference time, by manually selecting the specific position on the learned "line", our proposed method can achieve arbitrary accuracy-fairness trade-offs for different end-users and scenarios. Experimental results on tabular and image datasets show that YODO achieves flexible trade-offs between model accuracy and fairness at ultra-low overhead. For example, if we need 100 levels of trade-off on the \acse dataset, YODO takes 3.53 seconds while training 100 fixed models consumes 425 seconds. The code is available at https://github.com/ahxt/yodo.
arxiv:2503.07066
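The weight-space "line" idea above can be illustrated with a minimal sketch. The weight vectors here are hypothetical stand-ins; the actual method learns the two endpoints jointly for a real network, which this sketch does not attempt:

```python
import numpy as np

# Two (hypothetical) weight vectors for the same architecture: one tuned
# for accuracy, one tuned for fairness. The values are purely illustrative.
w_accuracy = np.array([0.9, -1.2, 0.4])
w_fairness = np.array([0.5, -0.8, 0.1])

def weights_on_line(t):
    """Pick a model on the line connecting the two optima.

    t = 0.0 -> accuracy-optimum weights, t = 1.0 -> fairness-optimum
    weights; intermediate t -> intermediate accuracy-fairness trade-offs.
    """
    return (1.0 - t) * w_accuracy + t * w_fairness

# 100 trade-off levels cost 100 interpolations of one model's weights,
# rather than 100 separate training runs.
levels = [weights_on_line(t) for t in np.linspace(0.0, 1.0, 100)]
```

This is why the overhead comparison in the abstract is seconds of interpolation versus minutes of repeated training: each level is a single vector operation on already-trained weights.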
In 3D LiDAR-based robot self-localization, pole-like landmarks are gaining popularity as lightweight and discriminative landmarks. This work introduces a novel approach called "discriminative rotation-invariant poles," which enhances the discriminability of pole-like landmarks while maintaining their lightweight nature. Unlike conventional methods that model a pole landmark as a 3D line segment perpendicular to the ground, we propose a simple yet powerful approach that includes not only the line segment's main body but also its surrounding local region of interest (RoI) as part of the pole landmark. Specifically, we describe the appearance, geometry, and semantic features within this RoI to improve the discriminability of the pole landmark. Since such pole landmarks are no longer rotation-invariant, we introduce a novel rotation-invariant convolutional neural network that automatically and efficiently extracts rotation-invariant features from input point clouds for recognition. Furthermore, we train a pole dictionary through unsupervised learning and use it to compress poles into compact pole words, thereby significantly reducing real-time costs while maintaining optimal self-localization performance. Monte Carlo localization experiments using the publicly available NCLT dataset demonstrate that the proposed method improves on a state-of-the-art pole-based localization framework.
arxiv:2406.11266
We develop a dynamic multi-agent model of an interbank payment system where banks choose their level of available funds on the basis of private payoff maximisation. The model consists of the repetition of a simultaneous-move stage game with incomplete information, incomplete monitoring, and stochastic payoffs. Adaptation takes place with Bayesian updating, with banks maximizing immediate payoffs. We carry out numerical simulations to solve the model and investigate two special scenarios: an operational incident and exogenous throughput guidelines for payment submission. We find that the demand for intraday credit is an S-shaped function of the cost ratio between intraday credit costs and the costs associated with delaying payments. We also find that the demand for liquidity is increased both under operational incidents and in the presence of effective throughput guidelines.
arxiv:0705.3050
Let $\{X_i(t), t \ge 0\}$, $1 \le i \le n$, be independent centered stationary Gaussian processes with unit variance and almost surely continuous sample paths. For given positive constants $u, T$, define the set of conjunctions $C_{[0,T],u} := \{t \in [0,T] : \min_{1 \le i \le n} X_i(t) \ge u\}$. Motivated by some applications in brain mapping and digital communication systems, we obtain an exact asymptotic expansion of $P(C_{[0,T],u} \neq \varphi)$ as $u \to \infty$. Moreover, we establish the Berman sojourn limit theorem for the random process $\{\min_{1 \le i \le n} X_i(t), t \ge 0\}$ and derive the tail asymptotics of the supremum of each order statistics process.
arxiv:1312.7129
In recent years, machine learning techniques have shown great potential in various problems from a multitude of disciplines, including materials design and drug discovery. The high computational speed on the one hand, and accuracy comparable to that of DFT on the other, make machine learning algorithms efficient for high-throughput screening through chemical and configurational space. However, the machine learning algorithms available in the literature require large training datasets to reach chemical accuracy, and also show large errors for so-called outliers: out-of-sample molecules not well represented in the training set. In the present paper we propose a new machine learning algorithm for predicting molecular properties that addresses these two issues: it is based on a local model of interatomic interactions providing high accuracy when trained on relatively small training sets, and on an active learning algorithm for optimally choosing the training set that significantly reduces the errors for the outliers. We compare our model to the other state-of-the-art algorithms from the literature on widely used benchmark tests.
arxiv:1709.07082
The emerging trend of edge computing has led several cloud providers to release their own platforms for performing computation at the 'edge' of the network. We compare two such platforms, Amazon AWS Greengrass and Microsoft Azure IoT Edge, using a new benchmark comprising a suite of performance metrics. We also compare the performance of the edge frameworks to cloud-only implementations available in their respective cloud ecosystems. Amazon AWS Greengrass and Azure IoT Edge use different underlying technologies, edge Lambda functions vs. containers, and so we also elaborate on the platform features available to developers. Our study shows that both of these edge platforms provide comparable performance, which nevertheless differs in important ways for key types of workloads used in edge applications. Finally, we discuss several current issues and challenges we faced in deploying these platforms.
arxiv:1811.05948
Gravitational lensing is a powerful tool for quantifying the mass content and distribution in distant galaxies. By using milliarcsecond angular resolution observations of radio-loud gravitationally lensed sources, it is also possible to detect and quantify small deviations from a smooth mass density distribution, which can be due to low-mass substructures in the lensing galaxy. We present high-resolution global VLBI observations of the gravitationally lensed radio source MG J0751+2716 (at z = 3.2), which shows evidence of both compact and extended structure (core-jet morphology) across several gravitational arcs. These data provide a wealth of observational constraints that are used to determine the inner (baryonic and dark matter) mass profile of a group of galaxies, and also to investigate the smoothness of the dark matter distribution on mas scales, which is sensitive to possible structures of $10^{6-7}$ M$_{\odot}$ within the lensing halo or along the line of sight. Our lens modelling finds evidence for astrometric anomalies in this system, which suggests the presence of extra mass structure in the lens model. To date, such detailed studies of gravitational lensing systems like MG J0751+2716 have been limited by the currently small sample of radio-loud gravitational lenses. In this context, we also present a new pilot gravitational lens search in the VLBI survey mJIVE-20, in view of future surveys with the next generation of radio interferometers.
arxiv:1902.07046
There are two paradigms for studying nanoscale engines in stochastic and quantum thermodynamics: autonomous models, which do not rely on any external time dependence, and models that make use of time-dependent control fields, often combined with dividing the control protocol into idealized strokes of a thermodynamic cycle. While the latter paradigm offers theoretical simplifications, its utility in practice has been questioned due to the approximations involved. Here, we bridge the two paradigms by constructing an autonomous model which implements a thermodynamic cycle in a certain parameter regime. This effect is made possible by self-oscillations, realized in our model by the well-studied electron shuttling mechanism. Based on experimentally realistic values, we find that a thermodynamic cycle analysis for a single-electron working fluid is {\it not} justified, but a few-electron working fluid could suffice to justify it. Furthermore, additional open challenges remain to autonomously implement the more studied Carnot and Otto cycles.
arxiv:2101.05027
Crowd workers are distributed and decentralized. While decentralization is designed to utilize independent judgment to promote high-quality results, it paradoxically undercuts behaviors and institutions that are critical to high-quality work. Reputation is one central example: crowdsourcing systems depend on reputation scores from decentralized workers and requesters, but these scores are notoriously inflated and uninformative. In this paper, we draw inspiration from historical worker guilds (e.g., in the silk trade) to design and implement crowd guilds: centralized groups of crowd workers who collectively certify each other's quality through double-blind peer assessment. A two-week field experiment compared crowd guilds to a traditional decentralized crowd work model. Crowd guilds produced reputation signals more strongly correlated with ground-truth worker quality than signals available on current crowd working platforms, and more accurate than in the traditional model.
arxiv:1611.01572
This work investigates the accuracy and numerical stability of CUR decompositions with oversampling. The CUR decomposition approximates a matrix using a subset of its columns and rows. When the number of columns and rows is the same, the CUR decomposition can become unstable and less accurate due to the presence of a matrix inverse in the core matrix. Nevertheless, we demonstrate that the CUR decomposition can be implemented in a numerically stable manner, and illustrate that oversampling, which increases either the number of columns or rows in the CUR decomposition, can enhance its accuracy and stability. Additionally, this work devises an algorithm for oversampling motivated by the theory of the CUR decomposition and the cosine-sine decomposition, whose competitiveness is illustrated through experiments.
arxiv:2405.06375
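The decomposition described above can be sketched generically with a pseudoinverse-based core. This is not the paper's oversampling algorithm: the index sets here are chosen arbitrarily for a synthetic low-rank matrix, whereas in practice they would come from leverage scores or pivoted factorizations:

```python
import numpy as np

def cur(A, col_idx, row_idx):
    """CUR approximation A ~= C @ U @ R with the stabilized core
    U = pinv(C) @ A @ pinv(R), avoiding an explicit inverse of the
    (possibly ill-conditioned) intersection submatrix A[row_idx, col_idx]."""
    C = A[:, col_idx]
    R = A[row_idx, :]
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
    return C, U, R

rng = np.random.default_rng(0)
r = 5
# Exactly rank-5 test matrix (product of 60x5 and 5x40 Gaussian factors).
A = rng.standard_normal((60, r)) @ rng.standard_normal((r, 40))

# Oversampling: select r + 3 columns and rows. For an exactly rank-r matrix
# any r independent columns/rows suffice; the extra indices illustrate how
# oversampled C and R still reconstruct A through the pseudoinverse core.
C, U, R = cur(A, col_idx=np.arange(r + 3), row_idx=np.arange(r + 3))
err = np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A)
```

With this core, C @ U @ R is the projection of A onto the column space of C and the row space of R, so the reconstruction is exact (up to rounding) whenever both subsets capture the full rank of A.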
Metallic quantum critical phenomena are believed to play a key role in many strongly correlated materials, including high-temperature superconductors. Theoretically, the problem of quantum criticality in the presence of a Fermi surface has proven to be highly challenging. However, it has recently been realized that many models used to describe such systems are amenable to numerically exact solution by quantum Monte Carlo (QMC) techniques, without suffering from the fermion sign problem. In this article, we review the status of the understanding of metallic quantum criticality and the recent progress made by QMC simulations. We focus on the cases of spin-density-wave and Ising-nematic criticality. We describe the results obtained so far and their implications for superconductivity, non-Fermi-liquid behavior, and transport in the vicinity of metallic quantum critical points. Some of the outstanding puzzles and future directions are highlighted.
arxiv:1804.01988
We show that Entropy-SGD (Chaudhari et al., 2017), when viewed as a learning algorithm, optimizes a PAC-Bayes bound on the risk of a Gibbs (posterior) classifier, i.e., a randomized classifier obtained by a risk-sensitive perturbation of the weights of a learned classifier. Entropy-SGD works by optimizing the bound's prior, violating the hypothesis of the PAC-Bayes theorem that the prior be chosen independently of the data. Indeed, available implementations of Entropy-SGD rapidly obtain zero training error on random labels, and the same holds of the Gibbs posterior. In order to obtain a valid generalization bound, we rely on a result showing that data-dependent priors obtained by stochastic gradient Langevin dynamics (SGLD) yield valid PAC-Bayes bounds provided the target distribution of SGLD is $\epsilon$-differentially private. We observe that test error on MNIST and CIFAR10 falls within the (empirically nonvacuous) risk bounds computed under the assumption that SGLD reaches stationarity. In particular, Entropy-SGLD can be configured to yield relatively tight generalization bounds and still fit real labels, although these same settings do not obtain state-of-the-art performance.
arxiv:1712.09376
High-precision measurements of radial velocities of M67 cluster members are used to calculate the radial-velocity dispersions in the stellar groups found earlier in the cluster's corona. The previously detected feature in one of the groups (group 60), consisting of stars with almost identical space velocities, was confirmed. The possibility of more accurate future studies of the parameters of star groups using the Gaia catalogues is discussed.
arxiv:1704.04410
...a), and linear discriminant analysis (LDA), and selecting the most relevant features for model training based on importance scores and correlation matrices. Features vary in significance. Even relatively insignificant features may contribute to a model. Feature selection can reduce the number of features to prevent a model from becoming too specific to the training data set (overfitting). Feature explosion occurs when the number of identified features is too large for effective model estimation or optimization. Common causes include:

Feature templates - implementing feature templates instead of coding new features
Feature combinations - combinations that cannot be represented by a linear system

Feature explosion can be limited via techniques such as regularization, kernel methods, and feature selection.

== Automation ==

Automation of feature engineering is a research topic that dates back to the 1990s. Machine learning software that incorporates automated feature engineering has been commercially available since 2016. Related academic literature can be roughly separated into two types: multi-relational decision tree learning (MRDTL), which uses a supervised algorithm similar to a decision tree, and deep feature synthesis, which uses simpler methods.

=== Multi-relational decision tree learning (MRDTL) ===

Multi-relational decision tree learning (MRDTL) extends traditional decision tree methods to relational databases, handling complex data relationships across tables. It innovatively uses selection graphs as decision nodes, refined systematically until a specific termination criterion is reached. Most MRDTL studies base implementations on relational databases, which results in many redundant operations. These redundancies can be reduced by using techniques such as tuple ID propagation.
=== Open-source implementations ===

There are a number of open-source libraries and tools that automate feature engineering on relational data and time series:

Featuretools is a Python library for transforming time series and relational data into feature matrices for machine learning.
MCMD: an open-source feature engineering algorithm for joint clustering of multiple datasets.
OneBM, or One-Button Machine, combines feature transformations and feature selection on relational data with feature selection techniques. [OneBM] helps data scientists reduce data exploration time, allowing them to try out many ideas in a short time. On the other hand, it enables non-experts, who are not familiar with data science, to quickly extract value from their data with little effort, time, and cost.
getML community is an open-source tool for automated feature engineering on time series and relational data. It is implemented in C/C++ with a Python interface. It has been shown to be at least
https://en.wikipedia.org/wiki/Feature_engineering
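The kind of feature these tools generate from relational data can be sketched without any library: for each row of a parent table, apply aggregation primitives to the linked rows of a child table. The table names, columns, and values below are purely illustrative, and this toy function only mimics the single-level aggregation step of deep feature synthesis:

```python
from statistics import mean

# Toy relational data: a parent table (customers) and a child table
# (transactions) linked by customer_id.
customers = [{"customer_id": 1}, {"customer_id": 2}]
transactions = [
    {"customer_id": 1, "amount": 10.0},
    {"customer_id": 1, "amount": 30.0},
    {"customer_id": 2, "amount": 5.0},
]

# Aggregation primitives applied automatically to every linked column.
AGGREGATES = {"mean": mean, "max": max, "count": len}

def synthesize_features(parents, children, key, value):
    """For each parent row, apply every aggregation primitive to the
    linked child rows, producing features such as mean(amount)."""
    features = []
    for p in parents:
        child_vals = [c[value] for c in children if c[key] == p[key]]
        row = dict(p)
        for name, fn in AGGREGATES.items():
            row[f"{name}({value})"] = fn(child_vals) if child_vals else 0
        features.append(row)
    return features

feats = synthesize_features(customers, transactions, "customer_id", "amount")
# feats[0] -> {'customer_id': 1, 'mean(amount)': 20.0,
#              'max(amount)': 30.0, 'count(amount)': 2}
```

Libraries like Featuretools generalize this pattern by stacking such primitives across multiple linked tables and depths, which is where the "deep" in deep feature synthesis comes from.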
Every animal cell is filled with a cytoskeleton, a dynamic gel made of inextensible fibers, such as microtubules, actin fibers, and intermediate filaments, all suspended in a viscous fluid. Numerical simulation of this gel is challenging because the fiber aspect ratios can be as large as $10^4$. We describe a new method for rapidly computing the dynamics of inextensible slender filaments in periodically sheared Stokes flow. The dynamics of the filaments are governed by a nonlocal slender body theory which we partially reformulate in terms of the Rotne-Prager-Yamakawa hydrodynamic tensor. To enforce inextensibility, we parameterize the space of inextensible fiber motions and strictly confine the dynamics to the manifold of inextensible configurations. To do this, we introduce a set of Lagrange multipliers for the tensile force densities on the filaments and impose the constraint of no virtual work in an $L^2$ weak sense. We augment this approach with a spectral discretization of the local and nonlocal slender body theory operators which is linear in the number of unknowns and gives improved spatial accuracy over approaches based on solving a line tension equation. For dynamics, we develop a second-order semi-implicit temporal integrator which requires at most a few evaluations of nonlocal hydrodynamics and a few block-diagonal linear solves per time step. After demonstrating the improved accuracy and robustness of our approach through numerical examples, we apply our formulation to a permanently cross-linked actin mesh in a background oscillatory shear flow. We observe a characteristic frequency at which the network transitions from quasi-static, primarily elastic behavior to dynamic, primarily viscous behavior. We find that nonlocal hydrodynamics increases the viscous modulus by as much as 25%, even for semi-dilute fiber suspensions.
arxiv:2007.11728
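as an illustration of the rotne - prager - yamakawa tensor mentioned above, here is a minimal sketch of its standard non - overlapping pair - mobility form for two spheres of radius a ( thermal prefactor absorbed into the units ) ; this is the textbook tensor, not the authors' slender - body reformulation.

```python
import math

def rpy_tensor(r_vec, a=1.0, eta=1.0):
    """rotne-prager-yamakawa pair mobility for two non-overlapping spheres
    of radius a separated by r_vec (|r| >= 2a), viscosity eta.
    returns a 3x3 nested list:
    M = 1/(8 pi eta r) * [ (1 + 2a^2/(3r^2)) I + (1 - 2a^2/r^2) rhat rhat ]."""
    r = math.sqrt(sum(c * c for c in r_vec))
    assert r >= 2 * a, "this is the non-overlapping branch of the tensor"
    rhat = [c / r for c in r_vec]
    c1 = 1.0 + 2.0 * a * a / (3.0 * r * r)
    c2 = 1.0 - 2.0 * a * a / (r * r)
    pref = 1.0 / (8.0 * math.pi * eta * r)
    return [[pref * (c1 * (1.0 if i == j else 0.0) + c2 * rhat[i] * rhat[j])
             for j in range(3)] for i in range(3)]

M = rpy_tensor([3.0, 0.0, 0.0])
```

the tensor is symmetric, and mobility along the line of centers exceeds the transverse mobility, as expected for stokes flow.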
we ask if it is possible to positively influence social behavior with no risk of unintentionally incentivizing pathological behavior. in network routing problems, if network traffic is composed of many individual agents, it is known that self - interested behavior among the agents can lead to suboptimal network congestion. we study situations in which a system planner charges monetary tolls for the use of network links in an effort to incentivize efficient routing choices by the users, but in which the users ' sensitivity to tolls is heterogeneous and unknown. we seek locally - computed tolls that are guaranteed not to incentivize worse network routing than in the un - influenced case. our main result is to show that if networks are sufficiently complex and populations sufficiently diverse, perverse incentives cannot be systematically avoided : any taxation mechanism that improves outcomes on one network must necessarily degrade them on another. nonetheless, for the simple class of parallel networks, non - perverse taxes do exist ; we fully characterize all such taxation mechanisms, showing that they are a generalized version of traditional marginal - cost tolls.
arxiv:1911.10181
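the marginal - cost tolls mentioned in the last sentence can be illustrated on pigou's classic two - link parallel network, assuming homogeneous toll sensitivity ( the paper's heterogeneous setting is more subtle ) : link 1 has latency l1 ( x ) = x, link 2 has constant latency 1, and a unit mass of selfish traffic splits between them.

```python
def total_latency(x):
    """total travel time with fraction x on the variable link l1(x) = x
    and fraction 1 - x on the constant link l2 = 1 (pigou's example)."""
    return x * x + (1.0 - x) * 1.0

def equilibrium(toll=lambda x: 0.0, steps=10001):
    """selfish users enter link 1 while its perceived cost l1(x) + toll(x)
    stays below link 2's cost of 1; scan for the first x where the costs
    equalize (if they never do, everyone takes link 1)."""
    for i in range(steps):
        x = i / (steps - 1)
        if x + toll(x) >= 1.0:
            return x
    return 1.0

x_self = equilibrium()                   # no tolls: everyone on link 1
x_toll = equilibrium(toll=lambda x: x)   # marginal-cost toll x * l1'(x) = x
```

without tolls the equilibrium puts all traffic on link 1 ( total latency 1 ), while the marginal - cost toll shifts it to the socially optimal split x = 1 / 2 ( total latency 3 / 4 ).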
the steinitz exchange lemma is a basic theorem in linear algebra used, for example, to show that any two bases for a finite - dimensional vector space have the same number of elements. the result is named after the german mathematician ernst steinitz. we present another proof of the result of n. j. s. hughes \ cite { 1 } on steinitz ' exchange theorem for infinite bases. in our proof we assume the kuratowski - zorn maximum principle instead of well - ordering. we present some examples of dependence spaces of a general nature, together with possible applications of the result in other areas of mathematics, such as linear or universal algebra. the lecture was presented at the 77th workshop on general algebra, 24th conference for young algebraists, in potsdam ( germany ) on 21 march 2009.
arxiv:0906.1132
we shall introduce a notion of free coactions of a finite dimensional $ c ^ * $ - hopf algebra on a $ c ^ * $ - algebra, modifying the notion of free actions of a discrete group on a $ c ^ * $ - algebra, and we shall study several properties of coactions of a finite dimensional $ c ^ * $ - hopf algebra on $ c ^ * $ - algebras which relate to strong morita equivalence for inclusions of $ c ^ * $ - algebras.
arxiv:2003.11695
while reinforcement learning from human feedback ( rlhf ) is widely used to align large language models ( llms ) with human preferences, it typically assumes homogeneous preferences across users, overlooking diverse human values and minority viewpoints. although personalized preference learning addresses this by tailoring separate preferences for individual users, the field lacks standardized methods to assess its effectiveness. we present a multi - faceted evaluation framework that measures not only performance but also fairness, unintended effects, and adaptability across varying levels of preference divergence. through extensive experiments comparing eight personalization methods across three preference datasets, we demonstrate that performance differences between methods could reach 36 % when users strongly disagree, and personalization can introduce up to 20 % safety misalignment. these findings highlight the critical need for holistic evaluation approaches to advance the development of more effective and inclusive preference learning systems.
arxiv:2502.19158
our understanding of the structural features of foams and emulsions has advanced significantly over the last 20 years. however, in the search for " super - stable " liquid dispersions, foam and emulsion science employs increasingly complex formulations which create solid - like visco - elastic layers at the bubble / drop surfaces. these lead to elastic, adhesive and frictional forces between bubbles / drops, strongly impacting how they pack and deform against each other, and calling for an adaptation of the currently available structural description. the possibility to modify the interfacial properties systematically makes these dispersions ideal systems for the exploration of soft granular materials with complex interactions. we present here a first systematic analysis of the structural features of such a system using a model silicone emulsion containing millimetre - sized polyethylene glycol ( peg ) drops. solid - like drop surfaces are obtained by polymeric cross - linking reactions at the peg - silicone interface. using a novel droplet - micromanipulator, we highlight the presence of elastic, adhesive and frictional interactions between two drops. we then provide for the first time a full tomographic analysis of the structural features of these emulsions. an in - depth analysis of the angle of repose, local volume fraction distributions, pair correlation functions and the drop deformations for different skin formulations allows us to highlight the striking difference with " ordinary " emulsions having fluid - like drop surfaces. while strong analogies with frictional hard - sphere systems can be drawn, these systems display a set of unique features, due to the high deformability of the drops, which await systematic exploration.
arxiv:1809.07002
the trend towards autonomous driving and the continuous research in the automotive area, like advanced driver assistance systems ( adas ), requires an accurate localization under all circumstances. an accurate estimation of the vehicle state is a basic requirement for any trajectory - planning algorithm. still, even when the introduction of the gps l5 band promises lane - accuracy, coverage limitations in roofed areas still have to be addressed. in this work, a method for high precision indoor positioning using a lidar is presented. the method is based on the combination of motion models with lidar measurements, and uses infrastructural elements as positioning references. this allows to estimate the orientation, velocity over ground and position of a vehicle in a local tangent plane ( ltp ) reference frame. when the outputs of the proposed method are compared to those of an automotive dynamic motion analyzer ( adma ), mean errors of 1 degree, 0. 1 m / s and of 4. 7 cm respectively are obtained. the method can be implemented by using a lidar sensor as a stand - alone unit. a median runtime of 40. 77 us on an intel i7 - 6820hq cpu signals the possibility of real - time processing.
arxiv:2005.06798
luminous red galaxies ( lrgs ) and blue star - forming emission - line galaxies ( elgs ) are key tracers of large - scale structure used by cosmological surveys. theoretical predictions for such data are often done via simplistic models for the galaxy - halo connection. in this work, we use the large, high - fidelity hydrodynamical simulation of the millenniumtng project ( mtng ) to inform a new phenomenological approach for obtaining an accurate and flexible galaxy - halo model on small scales. our aim is to study lrgs and elgs at two distinct epochs, $ z = 1 $ and $ z = 0 $, and recover their clustering down to very small scales, $ r \ sim 0. 1 \, { \ rm mpc } / h $, i. e., the one - halo regime, while a companion paper extends this to a two - halo model for larger distances. the occupation statistics of elgs in mtng inform us that : ( 1 ) the satellite occupations exhibit a slightly super - poisson distribution, contrary to commonly made assumptions, and ( 2 ) haloes containing at least one elg satellite are twice as likely to host a central elg. we propose simple recipes for modeling these effects, each of which calls for the addition of a single free parameter to simpler halo occupation models. to construct a reliable satellite population model, we explore the lrg and elg satellite radial and velocity distributions and compare them with those of subhalos and particles in the simulation. we find that elgs are anisotropically distributed within halos, which together with our occupation results provides strong evidence for cooperative galaxy formation ( manifesting itself as one - halo galaxy conformity ) ; i. e., galaxies with similar properties form in close proximity to each other. our refined galaxy - halo model represents a useful improvement of commonly used analysis tools and thus can be of help in increasing the constraining power of large - scale structure surveys.
arxiv:2210.10068
the impressive success of style - based gans ( stylegans ) in high - fidelity image synthesis has motivated research to understand the semantic properties of their latent spaces. in this paper, we approach this problem through a geometric analysis of latent spaces as a manifold. in particular, we propose a local dimension estimation algorithm for arbitrary intermediate layers in a pre - trained gan model. the estimated local dimension is interpreted as the number of possible semantic variations from this latent variable. moreover, this intrinsic dimension estimation enables unsupervised evaluation of disentanglement for a latent space. our proposed metric, called distortion, measures an inconsistency of intrinsic tangent space on the learned latent space. distortion is purely geometric and does not require any additional attribute information. nevertheless, distortion shows a high correlation with the global - basis - compatibility and supervised disentanglement score. our work is the first step towards selecting the most disentangled latent space among various latent spaces in a gan without attribute labels.
arxiv:2205.13182
we develop a high - power narrow - linewidth fiber amplifier based on a tapered yb - doped fiber. in the experiment, the narrow end of the tapered fiber is used as the input port for both the pump source and the signal light, forming a robust all - fiber configuration. stimulated brillouin scattering ( sbs ) is effectively suppressed owing to the gradually increasing mode area as well as the continuously changing brillouin frequency shift, which depends on the core diameter. as a result, an output power of 260 w without sbs or mode instability ( mi ) is obtained, with a slope efficiency of 71. 3 % and a narrow linewidth of ~ 2 ghz. however, mi rather than sbs is observed when the output power reaches 260 w. the features of mi are experimentally studied in detail. it is pointed out that mi seems to be the primary and persistent factor limiting high - power narrow - linewidth output from the tapered fiber amplifier, although the taper has the advantage of reducing other nonlinear effects. to the best of our knowledge, this is the first detailed experimental study of mi in a tapered fiber. finally, based on the present and previous results, we discuss the optimization of the tapered fiber and the whole system for suppressing both sbs and mi.
arxiv:1612.08842
the measurement of phi mesons provides a unique and complementary method for exploring properties of the hot and dense medium created in relativistic heavy ion collisions. the phi meson has a relatively small hadronic interaction cross section and is sensitive to the increase of strangeness ( strangeness enhancement ), a phenomenon associated with soft particles in bulk matter. measurements in the dilepton channels are especially interesting since leptons interact only electromagnetically, thus carrying the information from their production phase directly to the detector. measurements in different nucleus - nucleus collisions allow us to perform a systematic study of nuclear medium effects on phi meson production. the phenix detector provides the capabilities to measure phi meson production in a wide range of transverse momentum and rapidity to study these effects. in these proceedings, we present measurements of phi mesons in a variety of collision systems at $ \ sqrt { s _ { nn } } = 200 $ gev. in the case of small systems, the data are compared with ampt calculations to study the various cold nuclear medium effects involved in phi meson production.
arxiv:1710.06689
superconducting transmon qubits with strong anharmonicity and insensitivity to offset charge are highly desirable for low - error implementation. in this work we propose a c - axis junction, comprising triplet superconductors, and set at a relative twist angle. invoking spin - orbit coupling and spin polarization, which are known to occur in the material platform of choice, we examine the resulting transmon hamiltonian. this junction allows for direct control of the single and double cooper pair tunneling strength, and most remarkably, an anomalous magnetic flux - - i. e. a phase offset equivalent to magnetic flux, yet in zero magnetic field. having control over these three parameters - - single and double pair tunneling and anomalous flux - - allows for optimal design of the transmon qubit. interestingly, in this architecture, the anomalous flux is determined by the twist angle of the junction, thereby offering a novel zero - field functionality. our key results rely on symmetry arguments, for concreteness we demonstrate the implementation of our concept using a model of moir \ ' e graphene - based c - axis junctions.
arxiv:2403.07215
this paper presents an educational code, written using fenics and based on the level set method, to perform compliance minimization in structural optimization. we use the concept of distributed shape derivative to compute a descent direction for the compliance, which is defined as a shape functional. the use of the distributed shape derivative is facilitated by fenics, which makes it possible to handle complicated partial differential equations with a simple implementation. the code is written for compliance minimization in the framework of linearized elasticity, and can be easily adapted to tackle other functionals and partial differential equations. we also provide an extension of the code for compliant mechanisms. we start by explaining how to compute shape derivatives, and discuss the differences between the distributed and boundary expressions of the shape derivative. then we describe the implementation in detail, and show the application of this code to some classical benchmarks of topology optimization. the code is available at http://antoinelaurain.com/compliance.htm, and the main file is also given in the appendix.
arxiv:1705.01442
reanalysis datasets, which combine numerical physics models and limited observations to generate a synthesised estimate of variables in an earth system, are prone to biases against ground truth. biases identified in the nasa modern - era retrospective analysis for research and applications, version 2 ( merra - 2 ) aerosol optical depth ( aod ) dataset, against the aerosol robotic network ( aeronet ) ground measurements in previous studies, motivated the development of a deep learning based global aod prediction model. this study combines a convolutional neural network ( cnn ) with merra - 2, tested against all aeronet sites. the new hybrid cnn - based model provides estimates that validate better against aeronet ground truth than merra - 2 reanalysis alone.
arxiv:1910.06789
this paper is an announcement of the minimal model theory for log surfaces in all characteristics and contains some related results including a simplified proof of the artin - keel contraction theorem in the surface case.
arxiv:1205.2464
the lack of large - scale, labeled data sets impedes progress in developing robust and generalized predictive models for on - body sensor - based human activity recognition ( har ). labeled data in human activity recognition is scarce and hard to come by, as sensor data collection is expensive, and the annotation is time - consuming and error - prone. to address this problem, we introduce imutube, an automated processing pipeline that integrates existing computer vision and signal processing techniques to convert videos of human activity into virtual streams of imu data. these virtual imu streams represent accelerometry at a wide variety of locations on the human body. we show how the virtually - generated imu data improves the performance of a variety of models on known har datasets. our initial results are very promising, but the greater promise of this work lies in a collective approach by the computer vision, signal processing, and activity recognition communities to extend this work in ways that we outline. this should lead to on - body, sensor - based har becoming yet another success story in large - dataset breakthroughs in recognition.
arxiv:2006.05675
in this paper a novel hybrid position / force control with autonomous switching between both control modes is introduced for hydraulic actuators. a hybrid position / force control structure is designed with feed - forwarding, full - state feedback including the integral control error, a pre - compensator for the deadzone, and low - pass filtering of the control value. controller gains are obtained via local linearization and pole placement, accomplished separately for the position and force control. a hysteresis - based autonomous switching is integrated into the closed control loop, and a multiple - lyapunov - function approach is applied for the stability analysis of the entire hybrid control system. an experimental evaluation is shown on the developed test setup with standard industrial hydraulic cylinders, for different motion and load profiles.
arxiv:2006.03670
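the hysteresis - based autonomous switching can be sketched as a simple two - threshold rule ; the thresholds and force trace below are hypothetical, and the sketch only illustrates why a hysteresis band avoids chattering near a single threshold, not the paper's full controller.

```python
def hybrid_mode_sequence(forces, f_lo=2.0, f_hi=5.0):
    """hysteresis-based switching between position ('pos') and force ('frc')
    control: switch to force mode when the measured contact force exceeds
    f_hi, and back to position mode only when it falls below f_lo.
    the band [f_lo, f_hi] prevents rapid mode chattering."""
    mode, out = "pos", []
    for f in forces:
        if mode == "pos" and f > f_hi:
            mode = "frc"
        elif mode == "frc" and f < f_lo:
            mode = "pos"
        out.append(mode)
    return out

# note: the force 4.0 crosses back below f_hi without triggering a switch
modes = hybrid_mode_sequence([0.0, 4.0, 6.0, 4.0, 3.0, 1.0, 6.0])
```

with a single threshold at, say, 5.0, the samples near that value would flip the mode back and forth; the band keeps the mode stable until the force clearly leaves the contact regime.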
a construction of the real 4d minkowski space - time starting from quantum harmonic oscillators is proposed. first, a 2d spinor space and its dual are derived from the standard commutation relations obeyed by the ladder operators of two independent 1d harmonic oscillators. the complex 4d minkowski vector space v is then constructed from these spinor spaces. the flat, real 4d minkowski manifold is finally built as an approximate description of a manifold of unitary operators constructed from v. lorentz invariance is recovered and several possible extensions are discussed, with connections to quantum optics and condensed matter physics.
arxiv:2310.01447
let $ \ mathbf { k } $ be a field and let $ v : \ mathscr { c } \ to \ mathbf { k } \ textup { - mod } $ be a point - wise finite dimensional persistence module, where $ \ mathscr { c } $ is a small category. assume that for all local artinian $ \ mathbf { k } $ - algebras $ r $ with residue field isomorphic to $ \ mathbf { k } $, there is a generalized persistence module $ m : \ mathscr { c } \ to r \ textup { - mod } $, such that for all $ x \ in \ mathrm { ob } ( \ mathscr { c } ) $, $ m ( x ) $ is free over $ r $ with finite rank and $ \ mathbf { k } \ otimes _ r m ( x ) \ cong v ( x ) $. if $ v $ is a direct sum of indecomposable persistence modules $ v _ i : \ mathscr { c } \ to \ mathbf { k } \ textup { - mod } $ with endomorphism ring isomorphic to $ \ mathbf { k } $, then $ m $ is a direct sum of indecomposables $ m _ i : \ mathscr { c } \ to r \ textup { - mod } $ with endomorphism ring isomorphic to $ r $.
arxiv:2403.05032
in arxiv : 1501. 03019 [ hep - th ], the areas of certain complex extremal surfaces in de sitter space were found to have resemblance with entanglement entropy in appropriate dual euclidean non - unitary cfts, with the area being real and negative in $ ds _ 4 $. in this paper, we study some toy models of 2 - dim ghost conformal field theories with negative central charge with a view to exploring this further from the cft point of view. in particular we consider $ bc $ - ghost systems with central charge $ c = - 2 $ and study the replica formulation for entanglement entropy for a single interval, and associated issues arising in this case, notably pertaining to ( i ) the $ sl ( 2 ) $ vacuum coinciding with the ghost ground state, and ( ii ) the background charge inherent in these systems which leads to particular forms for the norms of states ( involving zero modes ). this eventually gives rise to negative entanglement entropy. we also discuss a ( logarithmic ) cft of anti - commuting scalars, with similarities in some features. finally we discuss a simple toy model of two " ghost - spins " which mimics some of these features.
arxiv:1602.06505
downsampling or under - sampling is a technique that is utilized in the context of large and highly imbalanced classification models. we study optimal downsampling for imbalanced classification using generalized linear models ( glms ). we propose a pseudo maximum likelihood estimator and study its asymptotic normality in the context of increasingly imbalanced populations relative to an increasingly large sample size. we provide theoretical guarantees for the introduced estimator. additionally, we compute the optimal downsampling rate using a criterion that balances statistical accuracy and computational efficiency. our numerical experiments, conducted on both synthetic and empirical data, further validate our theoretical results, and demonstrate that the introduced estimator outperforms commonly available alternatives.
arxiv:2410.08994
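the basic mechanics of downsampling for imbalanced glms can be sketched as follows : fit a logistic regression on data where the majority ( negative ) class is subsampled with probability `keep`, then shift the intercept by log ( keep ) to recover full - population probabilities. this is the classical case - control intercept correction, not the paper's pseudo maximum likelihood estimator ; the data and all parameters below are synthetic.

```python
import math
import random

def fit_logistic(xs, ys, lr=1.0, epochs=500):
    """plain one-feature logistic regression via full-batch gradient descent."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (p - y) * x
            gb += (p - y)
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

random.seed(0)
# synthetic imbalanced population: true model p(y=1|x) = sigmoid(2x - 4)
xs = [random.gauss(0.0, 1.0) for _ in range(20000)]
ys = [1 if random.random() < 1.0 / (1.0 + math.exp(-(2.0 * x - 4.0))) else 0
      for x in xs]

# downsample: keep every positive, keep each negative with probability `keep`
keep = 0.05
sub = [(x, y) for x, y in zip(xs, ys) if y == 1 or random.random() < keep]
w_hat, b_hat = fit_logistic([x for x, _ in sub], [y for _, y in sub])

# case-control correction: shift the intercept back to the full population
b_corrected = b_hat + math.log(keep)
```

the slope is unaffected by the subsampling ( asymptotically ), while the fitted intercept is inflated by exactly - log ( keep ), which the correction removes.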
we present an analysis of several high - resolution chandra grating observations of the x - ray binary pulsar her x - 1. with a total exposure of 170 ks, the observations are separated by years and cover three combinations of orbital and super - orbital phases. our goal is to determine distinct properties of the photoionized emission and its dependence on phase - dependent variations of the continuum. we find that the continua can be described by a partial covering model which above 2 kev is consistent with recent results from rxte studies and at low energies is consistent with recent xmm - newton and beppo - sax studies. besides a powerlaw with fixed index, an additional thermal blackbody of 114 ev is required to fit wavelengths above 12 å ( $ \ sim $ 1 kev ). we find that likely all the variability is caused by highly variable absorption columns in the range ( 1 - - 3 ) $ \ times 10 ^ { 23 } $ cm $ ^ { - 2 } $. strong fe k line fluorescence in almost all observations reveals that dense, cool material is present not only in the outer regions of the disk but interspersed throughout the disk. most spectra show strong line emission stemming from a photoionized accretion disk corona. we model the line emission with generic thermal plasma models as well as with the photoionization code xstar and investigate changes of the ionization balance with orbital and super - orbital phases. most accretion disk coronal properties such as disk radii, temperatures, and plasma densities are consistent with previous findings for the low state. we find that these properties change negligibly with respect to orbital and super - orbital phases. a couple of the higher energy lines exhibit emissivities that are significantly in excess of expectations from a static accretion disk corona.
arxiv:0905.3773
we identify new sufficiency conditions for coercivity of general multivariate polynomials $ f \ in \ mathbb { r } [ x ] $ which are expressed in terms of their newton polytopes at infinity and which consist of a system of affine - linear inequalities in the space of polynomial coefficients. by sharpening the already existing necessary conditions for coercivity for a class of gem irregular polynomials we provide a characterization of coercivity of circuit polynomials, which extends the known results on this well studied class of polynomials. for the already existing sufficiency conditions for coercivity which contain a description involving a set projection operation, we identify an equivalent description involving a single posynomial inequality. this makes them more easy to apply and hence also more appealing from the practical perspective. we relate our results to the existing literature and we illustrate our results with several examples.
arxiv:2001.03262
by saying that the operation of taking the complement is an involution. an inclusion of sets a ⊆ b is turned into an inclusion in the opposite direction between the complements : the complement of b is contained in the complement of a. given two subsets a and b of s, a is contained in the complement of b if and only if b is contained in the complement of a. this duality appears in topology as a duality between open and closed subsets of some fixed topological space x : a subset u of x is closed if and only if its complement in x is open. because of this, many theorems about closed sets are dual to theorems about open sets. for example, any union of open sets is open, so dually, any intersection of closed sets is closed. the interior of a set is the largest open set contained in it, and the closure of the set is the smallest closed set that contains it. because of the duality, the complement of the interior of any set u is equal to the closure of the complement of u. = = = dual cone = = = a duality in geometry is provided by the dual cone construction. given a set c { \ displaystyle c } of points in the plane r 2 { \ displaystyle \ mathbb { r } ^ { 2 } } ( or more generally points in r n { \ displaystyle \ mathbb { r } ^ { n } } ), the dual cone is defined as the set c ∗ ⊆ r 2 { \ displaystyle c ^ { * } \ subseteq \ mathbb { r } ^ { 2 } } consisting of those points ( x 1, x 2 ) { \ displaystyle ( x _ { 1 }, x _ { 2 } ) } satisfying x 1 c 1 + x 2 c 2 ≥ 0 { \ displaystyle x _ { 1 } c _ { 1 } + x _ { 2 } c _ { 2 } \ geq 0 } for all points ( c 1, c 2 ) { \ displaystyle ( c _ { 1 }, c _ { 2 } ) } in c { \ displaystyle c }, as illustrated in the diagram. unlike for the complement of sets mentioned above, it is not in general true that applying the dual cone construction twice gives back the original set c { \ displaystyle c }. instead, c ∗ ∗ { \ displaystyle c ^ { * * } } is the smallest cone containing c { \ displaystyle c }, which may be bigger than c { \ displaystyle c }. therefore this duality is weaker than the
https://en.wikipedia.org/wiki/Duality_(mathematics)
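the dual cone definition above translates directly into a membership test : a point x lies in c ∗ iff its inner product with every point of c is non - negative. a minimal sketch for a finite set c in the plane :

```python
def in_dual_cone(x, c_points):
    """x is in the dual cone c* iff <x, c> >= 0 for every c in the set."""
    return all(x[0] * c[0] + x[1] * c[1] >= 0 for c in c_points)

# take c to be just two points in the plane
C = [(1.0, 0.0), (0.0, 1.0)]

# c* is the closed first quadrant
assert in_dual_cone((2.0, 3.0), C)
assert not in_dual_cone((-1.0, 0.5), C)

# c** is generated by the same two rays here, and it contains points
# such as (1, 1) that are not in the original two-point set c --
# illustrating that c** can be strictly bigger than c
assert in_dual_cone((1.0, 1.0), C) and (1.0, 1.0) not in C
```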
many companies hire interns, often university or college students during a summer break, or offer externships. specializations include analysts, architects, developers, testers, technical support, middleware analysts, project managers, software product managers, educators, and researchers. most software engineers and programmers work 40 hours a week, but about 15 percent of software engineers and 11 percent of programmers worked more than 50 hours a week in 2008. potential injuries in these occupations are possible because, like other workers who spend long periods sitting in front of a computer terminal typing at a keyboard, engineers and programmers are susceptible to eyestrain, back discomfort, thrombosis, obesity, and hand and wrist problems such as carpal tunnel syndrome. = = = = united states = = = = the u. s. bureau of labor statistics ( bls ) counted 1, 365, 500 software developers holding jobs in the u. s. in 2018. due to its relative newness as a field of study, formal education in software engineering is often taught as part of a computer science curriculum, and many software engineers hold computer science degrees. the bls estimates that from 2023 to 2033 employment in computer software engineering will increase by 17 %. this is down from the 2022 to 2032 bls estimate of 25 % for software engineering, and further down from their 30 % estimate for 2010 to 2020. due to this trend, job growth may not be as fast as during the last decade, as jobs that would have gone to computer software engineers in the united states would instead be outsourced to computer software engineers in countries such as india and other foreign countries. in addition, for computer programmers, the bls occupational outlook predicted a decline of 7 percent from 2016 to 2026, a further decline of 9 percent from 2019 to 2029, a decline of 10 percent from 2021 to 2031, and then a decline of 11 percent from 2022 to 2032.
since computer programming can be done from anywhere in the world, companies sometimes hire programmers in countries where wages are lower. furthermore, the ratio of women in many software fields has also been declining over the years as compared to other engineering fields. then there is the additional concern that recent advances in artificial intelligence might impact the demand for future generations of software engineers. however, this trend may change or slow in the future as many current software engineers in the u. s. market flee the profession or age out of
https://en.wikipedia.org/wiki/Software_engineering
emulsions are paramount in various interdisciplinary topical areas, yet a satisfactory understanding of their behavior in buoyancy - driven thermal flows has not been established. in the present work, we unravel the dynamical regimes of thermal convection in emulsions by leveraging a large set of mesoscale numerical simulations. emulsions are prepared with a given volume fraction of the initially dispersed phase, $ \ phi $, ranging from dilute ( low values of $ \ phi $ ) to jammed emulsions ( high values of $ \ phi $ ), resulting in different rheological responses, i. e., from newtonian to non - newtonian yield - stress behaviors, respectively. we then characterize the dynamics of the emulsions in the paradigmatic setup of rayleigh - b \ ' enard convection, i. e., when confined between two parallel walls at different temperatures under the effect of buoyancy forces, the latter encoded in the dimensionless rayleigh number ra. we thoroughly investigate the emulsion dynamics as $ \ phi $ and ra are varied. for a given $ \ phi $, at increasing ra, we observe that the emulsion exhibits convection states where structural changes may appear ( i. e., droplet breakup, coalescence, or phase inversion ), which inevitably impact the emulsion rheology. for sufficiently high values of ra, two states of convection are observed : for low / moderate values of $ \ phi $ ( newtonian emulsions ), we observe breakup - dominated dynamics, whereas for high values of $ \ phi $ ( non - newtonian emulsions ), we observe phase - inverted states. for both scenarios, the droplet size distribution depends on ra, and scaling laws for the average droplet size are analyzed and quantified. our results offer unprecedented insights into the rich dynamics of emulsions under thermal convection, offering the first detailed characterization of the various dynamic regimes to be expected and their relation with structural changes occurring in such complex fluids.
arxiv:2411.11553
we formulate and apply a continuum model that incorporates elasticity, yield stress, plasticity and viscous drag. it is motivated by the two - dimensional foam rheology experiments of debregeas et al. [ g. debregeas, h. tabuteau, and j. - m. di meglio, phys. rev. lett. 87, 178305 ( 2001 ) ] and wang et al. [ y. wang, k. krishan, and m. dennin, phys. rev. e 73, 031401 ( 2006 ) ], and is successful in exhibiting their principal features : an exponentially decaying velocity profile and strain localisation. transient effects are also identified.
arxiv:cond-mat/0602021
new developments of the theory of grothendieck polynomials, based on an exponential solution of the yang - baxter equation in the algebra of projectors, are given.
arxiv:hep-th/9306005
we present an analysis of the 15 february 2011 x - class solar flare, previously reported to produce the first sunquake in solar cycle 24 ( kosovichev 2011 ). using acoustic holography, we confirm the first, and report a second, weaker, seismic source associated with this flare. we find that the two sources are located at either end of a sigmoid which indicates the presence of a flux rope. contrary to the majority of previously reported sunquakes, the acoustic emission precedes the peak of major hard x - ray ( hxr ) sources by several minutes. furthermore, the strongest hard x - ray footpoints derived from rhessi data are found to be located away from the seismic sources in the flare ribbons. we account for these discrepancies within the context of a phenomenological model of a flux rope eruption and accompanying two - ribbon flare. we propose that the sunquakes are triggered at the footpoints of the erupting flux rope at the start of the flare impulsive phase and eruption onset, while the main hard x - ray sources appear later at the footpoints of the flare loops formed under the rising flux rope. possible implications of this scenario for the theoretical interpretation of the forces driving sunquakes are discussed.
arxiv:1110.2005
accurately predicting the onset of specific activities within defined timeframes holds significant importance in several applied contexts. in particular, accurate prediction of the number of future users that will be exposed to an intervention is an important piece of information for experimenters running online experiments ( a / b tests ). in this work, we propose a novel approach to predict the number of users that will be active in a given time period, as well as the temporal trajectory needed to attain a desired user participation threshold. we model user activity using a bayesian nonparametric approach which allows us to capture the underlying heterogeneity in user engagement. we derive closed - form expressions for the number of new users expected in a given period, and a simple monte carlo algorithm targeting the posterior distribution of the number of days needed to attain a desired number of users ; the latter is important for experimental planning. we illustrate the performance of our approach via several experiments on synthetic and real world data, in which we show that our novel method outperforms existing competitors.
arxiv:2401.14722
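as a toy illustration of the experimental - planning use case above, the monte carlo estimate of the number of days needed to reach a user threshold can be sketched as follows. the fixed - rate poisson arrival model and all parameter names here are hypothetical stand - ins, not the paper's bayesian nonparametric model:

```python
import math
import random

def days_to_reach(threshold, daily_rate, n_sims=2000, seed=0):
    """Monte Carlo estimate of the mean number of days until the
    cumulative count of new users reaches `threshold`, assuming
    (hypothetically) Poisson arrivals at `daily_rate` per day."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's method; adequate for the small rates used here
        l, k, p = math.exp(-lam), 0, 1.0
        while True:
            k += 1
            p *= rng.random()
            if p <= l:
                return k - 1

    samples = []
    for _ in range(n_sims):
        users, day = 0, 0
        while users < threshold:
            day += 1
            users += poisson(daily_rate)
        samples.append(day)
    return sum(samples) / len(samples)
```

the actual method replaces the fixed - rate model with a posterior over heterogeneous user - level engagement, so this only mirrors the overall shape of the computation.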
recently a prescription to compute the four - dimensional n = 2 superconformal index in the presence of certain bps surface defects has been given. these surface defects are labelled by symmetric representations of su ( n ). in the present paper we give a prescription to compute the superconformal index in the presence of surface defects labelled by arbitrary representations of su ( n ). furthermore, we extend the dictionary between the n = 2 superconformal schur - index and correlators of q - deformed yang - mills to incorporate such surface defects.
arxiv:1303.4460
cayley ' s formula states that the number of labelled trees on $ n $ vertices is $ n ^ { n - 2 } $, and many of the current proofs involve complex structures or rigorous computation. we present a bijective proof of the formula by providing an elementary calculation of the probability that a cycle occurs in a random map from an $ n $ - element set to an $ n + 1 $ - element set.
arxiv:1409.1614
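cayley's formula is easy to check by brute force for small $ n $, independently of the bijective proof in the paper: enumerate all $ ( n - 1 ) $ - edge subsets of the complete graph and count the acyclic ones (an acyclic graph with $ n - 1 $ edges on $ n $ vertices is a spanning tree):

```python
from itertools import combinations

def count_labelled_trees(n):
    """Brute-force count of labelled trees on n vertices: enumerate
    all (n-1)-edge subsets of K_n and keep the acyclic ones."""
    vertices = range(n)
    all_edges = list(combinations(vertices, 2))
    count = 0
    for edges in combinations(all_edges, n - 1):
        parent = list(vertices)  # union-find for the acyclicity check

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        acyclic = True
        for u, v in edges:
            ru, rv = find(u), find(v)
            if ru == rv:       # edge closes a cycle
                acyclic = False
                break
            parent[ru] = rv
        if acyclic:            # n-1 edges and no cycle => spanning tree
            count += 1
    return count

for n in range(2, 6):
    assert count_labelled_trees(n) == n ** (n - 2)
```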
the integration of new modalities into frontier ai systems offers exciting capabilities, but also increases the possibility such systems can be adversarially manipulated in undesirable ways. in this work, we focus on a popular class of vision - language models ( vlms ) that generate text outputs conditioned on visual and textual inputs. we conducted a large - scale empirical study to assess the transferability of gradient - based universal image ` ` jailbreaks " using a diverse set of over 40 open - parameter vlms, including 18 new vlms that we publicly release. overall, we find that transferable gradient - based image jailbreaks are extremely difficult to obtain. when an image jailbreak is optimized against a single vlm or against an ensemble of vlms, the jailbreak successfully jailbreaks the attacked vlm ( s ), but exhibits little - to - no transfer to any other vlms ; transfer is not affected by whether the attacked and target vlms possess matching vision backbones or language models, whether the language model underwent instruction - following and / or safety - alignment training, or many other factors. only two settings display partially successful transfer : between identically - pretrained and identically - initialized vlms with slightly different vlm training data, and between different training checkpoints of a single vlm. leveraging these results, we then demonstrate that transfer can be significantly improved against a specific target vlm by attacking larger ensembles of ` ` highly - similar " vlms. these results stand in stark contrast to existing evidence of universal and transferable text jailbreaks against language models and transferable adversarial attacks against image classifiers, suggesting that vlms may be more robust to gradient - based transfer attacks.
arxiv:2407.15211
the non - hermitian skin effect refers to the accumulation of eigenstates near the boundary in open boundary lattice models, which can be systematically characterized using the non - bloch band theory. here, we apply the non - bloch band theory to investigate the stochastic reaction - diffusion process by mapping it to a non - hermitian kitaev chain. we exactly obtain the open boundary spectrum and the generalized brillouin zone, and identify a robust zero mode arising from the non - bloch topology. notably, distinct from its hermitian counterpart in the quantum context, the zero mode supports anomalous dynamical crossover in the markov process. we quantitatively demonstrate the intriguing dynamical effects through the spectral decomposition of the hamiltonian on the non - bloch eigenstates, and confirm our findings by conducting stochastic simulations with high accuracy. our study highlights the significant and general role of non - bloch topology in non - equilibrium dynamics.
arxiv:2306.11105
in the laser welding and additive manufacturing ( am ) communities, the balling defect is primarily attributed to the action of fluid instabilities with a few authors suggesting other mechanisms. without commenting on the validity of the fluid instability driven \ textit { mechanism } of balling in am, this work intends to present the most realistic analytical discussion of the balling defect driven purely by fluid instabilities. synchrotron - based x - ray radiography of thin samples indicate that fluid instability growth rates and solidification can be comparable in magnitude and thus compete. neglecting the action of fluid flows and heat transport, this work presents an analytical formalism which accounts for fluid instabilities and solidification competition, giving a continuous transition from balling to non - balling which is lacking in current literature. we adapt a rivulet instability model from the fluid physics community to account for the stabilizing effects of the substrate which the plateau - rayleigh instability model does not account for, and estimate the instability growth rate. our model predicts instability growth at higher wavelengths and shallower melt pool depths relative to width, as well as strong sensitivity to the solidification front curvature. deviations between model predictions and our experimental results demonstrate the importance of fluid flows and heat transport in the balling process. our experiments further demonstrate at least one mechanism by which the melt pool length and balling wavelength are not equivalent, as commonly claimed.
arxiv:2502.13341
quantification is the supervised learning task that consists of training predictors of the class prevalence values of sets of unlabelled data, and is of special interest when the labelled data on which the predictor has been trained and the unlabelled data are not iid, i. e., suffer from dataset shift. to date, quantification methods have mostly been tested only on a special case of dataset shift, i. e., prior probability shift ; the relationship between quantification and other types of dataset shift remains, by and large, unexplored. in this work we carry out an experimental analysis of how current quantification algorithms behave under different types of dataset shift, in order to identify limitations of current approaches and hopefully pave the way for the development of more broadly applicable methods. we do this by proposing a fine - grained taxonomy of types of dataset shift, by establishing protocols for the generation of datasets affected by these types of shift, and by testing existing quantification methods on the datasets thus generated. one finding that results from this investigation is that many existing quantification methods that had been found robust to prior probability shift are not necessarily robust to other types of dataset shift. a second finding is that no existing quantification method seems to be robust enough to dealing with all the types of dataset shift we simulate in our experiments. the code needed to reproduce all our experiments is publicly available at https : / / github. com / pglez82 / quant _ datasetshift.
arxiv:2310.04565
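a concrete baseline pair helps fix ideas here: classify - and - count ( cc ) and its adjusted variant ( acc ), the latter being exact under pure prior probability shift but, as the experiments above suggest, not necessarily under other shift types. this is a textbook sketch of the two estimators, not the authors' code:

```python
def classify_and_count(predictions):
    """CC: estimated prevalence = fraction of positive predictions."""
    return sum(predictions) / len(predictions)

def adjusted_classify_and_count(predictions, tpr, fpr):
    """ACC: correct CC using the classifier's tpr/fpr measured on
    held-out labelled data; unbiased under prior probability shift."""
    cc = classify_and_count(predictions)
    p = (cc - fpr) / (tpr - fpr)
    return min(1.0, max(0.0, p))  # clip the estimate to [0, 1]
```

with true prevalence 0.3, tpr 0.8 and fpr 0.1, the expected positive - prediction rate is 0.3 * 0.8 + 0.7 * 0.1 = 0.31, and acc recovers 0.3 exactly.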
we present a slight generalization of the notion of completely integrable systems to get them being integrable by quadratures. we use this generalization to integrate dynamical systems on double lie groups.
arxiv:math/9906202
we study a metric on the set of finite graphs in which two graphs are considered to be similar if they have similar bounded dimensional " factors ". we show that limits of convergent graph sequences in this metric can be represented by symmetric borel measures on $ [ 0, 1 ] ^ 2 $. this leads to a generalization of dense graph limit theory to sparse graph sequences.
arxiv:1610.05719
tight binaries discovered in young, nearby associations, with known distances, are ideal targets to provide dynamical mass measurements to test the physics of evolutionary models at young ages and very low masses. we report for the first time the binarity of twa22, a possible new dynamical calibrator for evolutionary models at young ages. based on an accurate trigonometric distance ( 17. 53 + - 0. 21 pc ) determination, we infer a total dynamical mass of 220 + - 21 mjup for the system. from the resolved near - infrared integral - field spectroscopy, we find an effective temperature teff = 2900 + 200 - 200 k for twa22 a and teff = 2900 + 200 - 100 k for twa22 b and surface gravities between 4. 0 and 5. 5 dex. from our photometry and a m6 + - 1 spectral type for both components, we find luminosities of log ( l / lsun ) = - 2. 11 + - 0. 13 dex and log ( l / lsun ) = - 2. 30 + - 0. 16 dex for twa22 a and b respectively. by comparing these parameters with evolutionary models, we question the age and the multiplicity of this system. we also discuss a possible underestimation of the mass predicted by evolutionary models for young stars close to the substellar boundary.
arxiv:0906.1799
in this third part of our calculation of the qcd nlo corrections to the photon impact factor we combine our previous results for the real corrections with the singular pieces of the virtual corrections and present finite analytic expressions for the quark - antiquark - gluon intermediate state inside the photon impact factor. we begin with a list of the infrared singular pieces of the virtual correction, obtained in the first step of our program. we then list the complete results for the real corrections ( longitudinal and transverse photon polarization ). in the next step we define, for the real corrections, the collinear and soft singular regions and calculate their contributions to the impact factor. we then subtract the contribution due to the central region. finally, we combine the real corrections with the singular pieces of the virtual corrections and obtain our finite results.
arxiv:hep-ph/0208130
the main result of this article is the decomposition of tensor products of representations of sl ( 2 ) in the sum of irreducible representations parametrized by outerplanar graphs. an outerplanar graph is a graph with the vertices 0, 1, 2,..., m, edges of which can be drawn in the upper half - plane without intersections. i allow for a graph to have multiple edges, but don ' t allow loops.
arxiv:math/9712259
nucleon weak matrix elements can be extracted from nucleon correlation functions with lattice qcd simulations. the signal - to - noise ratio prohibits the analysis at large source - sink separations and as a consequence, excited state contamination affects the extraction of the nucleon matrix elements. chiral perturbation theory ( chpt ) suggests that the dominant contamination in some of these channels is due to $ n \ pi $ states where the pion carries the same momentum of the current. in this talk, we report updates on the variational analysis with $ qqq $ - operators ( nucleon - like ) and $ ( qqq ) ( \ bar { q } q ) $ - operators ( nucleon - pion - like ) where we report for the first time some preliminary results of $ \ langle n \ pi | \ mathcal { j } | n \ rangle $, modulo some kinematic and volume factors, and we compare the results against chpt. this pilot study is performed on a cls ensemble with $ n _ f = 3 $, $ m _ \ pi \ approx 420 ~ \ mathrm { mev } $, $ a \ approx 0. 1 ~ \ mathrm { fm } $ and $ t = 2l \ approx 4. 8 ~ \ mathrm { fm } $.
arxiv:2405.20875
laser trapping and interfacing of laser - cooled atoms in an optical fiber network is an important capability for quantum information science. following the pioneering work of balykin et al. and vetsch et al., we propose a robust method of trapping single cesium atoms with a two - color state - insensitive evanescent wave around a dielectric nanofiber. specifically, we show that vector light shifts ( i. e., effective inhomogeneous zeeman broadening of the ground states ) induced by the inherent ellipticity of the forward - propagating evanescent wave can be effectively canceled by a backward - propagating evanescent wave. furthermore, by operating the trapping lasers at the magic wavelengths, we remove the differential scalar light shift between ground and excited states, thereby allowing for resonant driving of the optical d2 transition. this scheme provides a promising approach to trap and probe neutral atoms with long trap and coherence lifetimes with realistic experimental parameters.
arxiv:1110.5372
we consider ill - posed inverse problems where the forward operator $ t $ is unknown, and instead we have access to training data consisting of functions $ f _ i $ and their noisy images $ tf _ i $. this is a practically relevant and challenging problem which current methods are able to solve only under strong assumptions on the training set. here we propose a new method that requires minimal assumptions on the data, and prove reconstruction rates that depend on the number of training points and the noise level. we show that, in the regime of " many " training data, the method is minimax optimal. the proposed method employs a type of convolutional neural networks ( u - nets ) and empirical risk minimization in order to " fit " the unknown operator. in a nutshell, our approach is based on two ideas : the first is to relate u - nets to multiscale decompositions such as wavelets, thereby linking them to the existing theory, and the second is to use the hierarchical structure of u - nets and the low number of parameters of convolutional neural nets to prove entropy bounds that are practically useful. a significant difference with the existing works on neural networks in nonparametric statistics is that we use them to approximate operators and not functions, which we argue is mathematically more natural and technically more convenient.
arxiv:2108.02744
a new exponent characterizing the rounding of crystal facets is found by mapping a crystal surface onto the asymmetric six - vertex model ( i. e. with external fields h and v ) and using the bethe ansatz to obtain appropriate expansions of the free energy close to criticality. leading order exponents in \ delta h, \ delta v are determined along the whole phase boundary and in an arbitrary direction. a possible experimental verification of this result is discussed.
arxiv:cond-mat/9802152
we present a new algorithm for single camera 3d reconstruction, or 3d input for human - computer interfaces, based on precise tracking of an elongated object, such as a pen, having a pattern of colored bands. to configure the system, the user provides no more than one labelled image of a handmade pointer, measurements of its colored bands, and the camera ' s pinhole projection matrix. other systems are of much higher cost and complexity, requiring combinations of multiple cameras, stereocameras, and pointers with sensors and lights. instead of relying on information from multiple devices, we examine our single view more closely, integrating geometric and appearance constraints to robustly track the pointer in the presence of occlusion and distractor objects. by probing objects of known geometry with the pointer, we demonstrate acceptable accuracy of 3d localization.
arxiv:1809.04704
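the pinhole projection step the tracker relies on can be sketched directly; the focal length and principal point below are made - up values for illustration, not calibration data from the paper:

```python
import numpy as np

def project(P, X):
    """Project homogeneous 3D points X (n x 4) through a 3x4 pinhole
    projection matrix P; returns pixel coordinates (n x 2)."""
    x = (P @ X.T).T              # n x 3 homogeneous image points
    return x[:, :2] / x[:, 2:3]  # divide out the depth coordinate

# hypothetical camera: P = K [I | 0], focal length 100, center (50, 50)
K = np.array([[100.0, 0, 50], [0, 100.0, 50], [0, 0, 1]])
P = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
X = np.array([[0.0, 0.0, 2.0, 1.0]])  # point on the optical axis, depth 2
print(project(P, X))                  # lands at the principal point (50, 50)
```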
turning on background fields in string theory sometimes has an alternative interpretation as a deformation of the target space geometry. a particularly well - known case is the ns - ns two form b, which gives rise to space - time non - commutativity. in this note we point out that this phenomenon extends to ten - dimensional superspace when employing a covariant quantization of the superstring, generalizing an observation by ooguri and vafa in four dimensions. in particular, we will find that rr field strengths give rise to a non - zero $ \ { \ theta, \ theta \ } $ anti - commutator, just as in four dimensions, whereas the gravitino yields a non - zero value for $ [ x, \ theta ] $.
arxiv:hep-th/0302078
the scores obtained by students that have performed the enem exam, the brazilian high school national examination used to admit students at the brazilian universities, are analyzed. the average high schools ' scores are compared between different disciplines through the pearson correlation coefficient. the results show a very large correlation between the performance in the different subjects. even though the students ' scores in the enem form a gaussian due to the standardization, we show that the high schools ' scores form a bimodal distribution that can not be used to evaluate and compare performance over time. we also show that this high schools distribution reflects the correlation between school performance and economic level of the students. the enem ' s scores are compared with a brazilian non standardized exam, the entrance exam at the universidade federal do rio grande do sul. comparing the performance of the same individuals in both tests shows that the two tests not only select different abilities but choose a different set of individuals. our results indicate that standardized exams might be an interesting tool to compare performance over the years, but only of individuals and not of institutions.
arxiv:1509.04390
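the pearson correlation coefficient used above to compare subject scores is short enough to sketch self - contained (standard definition, not the authors' analysis code):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

perfectly aligned scores give r = 1, perfectly opposed scores give r = -1.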
we prove that linear instability implies non - linear instability in the energy norm for the critically dissipative quasi - geostrophic equation.
arxiv:0903.1391
the quantum nonlinear schr $ \ ddot { o } $ dinger ( nls ) model with four - component fermions exhibits a $ y ( so ( 5 ) ) $ symmetry when considered on an infinite interval. the constructed generators of yangian are proved to satisfy the drinfel ' d formula and furthermore, the $ rtt $ relation with the general form of rational r - matrix given by yang - baxterization associated with $ so ( 5 ) $ algebraic structure.
arxiv:math-ph/0111030
motivated by applications to renewal theory, erd \ h { o } s, de bruijn and kingman posed a problem on boundedness of reciprocals $ ( 1 - z ) / ( 1 - f ( z ) ) $ in the unit disc for probability generating functions $ f ( z ) $. it was solved by ibragimov in $ 1975 $ by constructing a counterexample. in this paper, we provide much stronger counterexamples showing that the problem does not allow for a positive answer even under rather restrictive additional assumptions. moreover, we pursue a systematic study of $ l ^ p $ - integrability properties for the reciprocals. in particular, we show that while the boundedness of $ ( 1 - z ) / ( 1 - f ( z ) ) $ fails in general, the reciprocals do possess certain $ l ^ p $ - integrability properties under mild conditions on $ f $. we also study the same circle of problems in the continuous - time setting.
arxiv:1701.04357
properties of four quintic theta functions are developed in parallel with those of the classical jacobi null theta functions. the quintic theta functions are shown to satisfy analogues of jacobi ' s quartic theta function identity and counterparts of jacobi ' s principles of duplication, dimidiation and change of sign formulas. the resulting library of quintic transformation formulas is used to describe series multisections for modular forms in terms of simple matrix operations. these efforts culminate in a formal technique for deducing congruences modulo powers of five for a variety of combinatorial generating functions, including the partition function. further analysis of the quintic theta functions is undertaken by exploring their modular properties and their connection to eisenstein series. the resulting relations lead to a coupled system of differential equations for the quintic theta functions.
arxiv:1304.0684
multi - agent reinforcement learning involves multiple agents interacting with each other and a shared environment to complete tasks. when rewards provided by the environment are sparse, agents may not receive immediate feedback on the quality of actions that they take, thereby affecting learning of policies. in this paper, we propose a method called shaping advice in deep multi - agent reinforcement learning ( sam ) to augment the reward signal from the environment with an additional reward termed shaping advice. the shaping advice is given by a difference of potential functions at consecutive time - steps. each potential function is a function of observations and actions of the agents. the shaping advice needs to be specified only once at the start of training, and can be easily provided by non - experts. we show through theoretical analyses and experimental validation that shaping advice provided by sam does not distract agents from completing tasks specified by the environment reward. theoretically, we prove that convergence of policy gradients and value functions when using sam implies convergence of these quantities in the absence of sam. experimentally, we evaluate sam on three tasks in the multi - agent particle world environment that have sparse rewards. we observe that using sam results in agents learning policies to complete tasks faster, and obtain higher rewards than : i ) using sparse rewards alone ; ii ) a state - of - the - art reward redistribution method.
arxiv:2103.15941
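the shaping advice above is a difference of potential functions at consecutive time - steps; the sketch below (with a placeholder potential, not one from the paper) also checks the telescoping property that underlies the non - distraction guarantee:

```python
def shaping_advice(potential, obs, act, next_obs, next_act, gamma=0.99):
    """Shaping advice as a difference of potential functions at
    consecutive time-steps; `potential` maps an observation-action
    pair to a scalar and is a user-supplied placeholder here."""
    return gamma * potential(next_obs, next_act) - potential(obs, act)

def discounted_shaping_sum(potential, trajectory, gamma=0.99):
    """Sum of discounted shaping terms along a trajectory of
    (obs, act) pairs. The sum telescopes to
    gamma^T * Phi(last) - Phi(first), so it cannot change which
    policies are optimal -- the property behind the guarantee."""
    total = 0.0
    for t in range(len(trajectory) - 1):
        (o, a), (o2, a2) = trajectory[t], trajectory[t + 1]
        total += gamma ** t * shaping_advice(potential, o, a, o2, a2, gamma)
    return total
```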
summer 2015 marked the 100th anniversary of the excavation by j. w. fewkes of the sun temple in mesa verde national park, colorado ; an ancient ceremonial complex of unknown purpose, prominently located atop a mesa, constructed by the pueblo indians approximately 1000 years ago. in this analysis we perform a digital survey of the site, and examine the possibility that four key tower - like elements of the complex were used for observation of the rise or set of celestial bodies known to be sacred to the pueblo indians. we find statistically significant evidence that the site was used for astronomical observation of the rise and / or set of nearly all such bodies. the sun temple appears to represent the most comprehensive prehistoric astronomical observatory yet uncovered.
arxiv:1610.07463
we present a data - driven framework for reconstructing physics - consistent reduced models of multiscale systems. our approach addresses a fundamental challenge in model reduction : decomposing the deterministic drift into its conservative ( reversible ) and non - conservative ( irreversible ) components while preserving the system ' s invariant measure and autocorrelation. we leverage the k - means gaussian - mixture method ( kgmm ) to robustly estimate the score function - - the gradient of the logarithm of the steady - state probability density. this score function is then combined with a finite - volume discretization of the perron - frobenius generator to identify a drift matrix whose symmetric part captures the conservative dynamics and whose antisymmetric part encodes the minimal irreversible circulation required by the empirical data. the result is a reduced langevin model that preserves the stationary density, reproduces short - time correlations by construction, and maintains computational efficiency. we validate our framework on three increasingly complex systems : a one - dimensional nonlinear stochastic system with multiplicative noise, a two - dimensional asymmetric four - well potential system with non - gradient drift, and the stochastic lorenz - 63 attractor. in all cases, our surrogate models accurately recover both the stationary distribution and autocorrelation structure of the original dynamics, while providing an interpretable separation of conservative and non - conservative forces.
arxiv:2505.01895
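at the matrix level, the decomposition of the identified drift into conservative and irreversible - circulation parts is the symmetric / antisymmetric split; a minimal numpy sketch with a made - up 2x2 drift, not output of the kgmm pipeline:

```python
import numpy as np

def split_drift(A):
    """Split a drift matrix into its symmetric (conservative,
    gradient-like) and antisymmetric (irreversible circulation)
    parts; A = S + W exactly, with S = S^T and W = -W^T."""
    S = 0.5 * (A + A.T)
    W = 0.5 * (A - A.T)
    return S, W

A = np.array([[-1.0, 2.0], [0.0, -3.0]])
S, W = split_drift(A)
assert np.allclose(S + W, A)
assert np.allclose(S, S.T) and np.allclose(W, -W.T)
```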
this study explores a comprehensive approach to obstacle detection using advanced yolo models, specifically yolov8, yolov7, yolov6, and yolov5. leveraging deep learning techniques, the research focuses on the performance comparison of these models in real - time detection scenarios. the findings demonstrate that yolov8 achieves the highest accuracy with improved precision - recall metrics. detailed training processes, algorithmic principles, and a range of experimental results are presented to validate the model ' s effectiveness.
arxiv:2410.10096
the antineutrino scattering channel $ \ bar { \ nu } _ { \ mu } \, \ text { ch } \ rightarrow \ mu ^ { + } \, \ pi ^ { - } \, x $ ( nucleon ( s ) ) is analyzed in the incident energy range 1. 5 to 10 gev using the minerva detector at fermilab. differential cross sections are reported as functions of $ \ mu ^ { + } $ momentum and production angle, $ \ pi ^ { - } $ kinetic energy and production angle, and antineutrino energy and squared four - momentum transfer. distribution shapes are generally reproduced by simulations based on the genie, nuwro, and gibuu event generators, however genie ( gibuu ) overestimates ( underestimates ) the cross - section normalizations by 8 % ( 10 % ). comparisons of data with the genie - based reference simulation probe conventional treatments of cross sections and pion intranuclear rescattering. the distribution of non - track vertex energy is used to decompose the signal sample into reaction categories, and cross sections are determined for the exclusive reactions $ \ mu ^ { + } \ pi ^ { - } n $ and $ \ mu ^ + \ pi ^ { - } p $. a similar treatment applied to the published minerva sample $ \ bar { \ nu } _ { \ mu } \, \ text { ch } \ rightarrow \ mu ^ { + } \, \ pi ^ { 0 } \, x $ ( nucleon ( s ) ) has determined the $ \ mu ^ { + } \ pi ^ { 0 } n $ cross section, and the latter is used with $ \ sigma ( \ pi ^ { - } n ) $ and $ \ sigma ( \ pi ^ { - } p ) $ to carry out an isospin decomposition of $ \ bar { \ nu } _ { \ mu } $ - induced cc ( $ \ pi $ ). the ratio of magnitudes and relative phase for isospin amplitudes $ a _ { 3 } $ and $ a _ { 1 } $ thereby obtained are : $ r ^ { \ bar { \ nu } } = 0. 99 \ pm 0. 19 $ and $ \ phi ^ { \ bar { \ nu } } = 93 ^ { \ circ } \ pm 7 ^ { \ circ } $. our results are in agreement with bubble chamber measurements made four decades ago.
arxiv:1906.08300
we find the metric of small black holes on cylinders, i. e. neutral and static black holes with a small mass in d - dimensional minkowski - space times a circle. the metric is found using an ansatz for black holes on cylinders proposed in hep - th / 0204047. we use the new metric to compute corrections to the thermodynamics which is seen to deviate from that of the ( d + 1 ) - dimensional schwarzschild black hole. moreover, we compute the leading correction to the relative binding energy which is found to be non - zero. we discuss the consequences of these results for the general understanding of black holes and we connect the results to the phase structure of black holes and strings on cylinders.
arxiv:hep-th/0310259
the first step in most empirical work in multilingual nlp is to construct maps of the correspondence between texts and their translations ( { \ bf bitext maps } ). the smooth injective map recognizer ( simr ) algorithm presented here is a generic pattern recognition algorithm that is particularly well - suited to mapping bitext correspondence. simr is faster and significantly more accurate than other algorithms in the literature. the algorithm is robust enough to use on noisy texts, such as those resulting from ocr input, and on translations that are not very literal. simr encapsulates its language - specific heuristics, so that it can be ported to any language pair with a minimal effort.
arxiv:cmp-lg/9706025
vision transformer ( vit ) - based models have shown state - of - the - art performance ( e. g., accuracy ) in vision - based ai tasks. however, realizing their capability in resource - constrained embedded ai systems is challenging due to their inherent large memory footprints and complex computations, thereby incurring high power / energy consumption. recently, spiking vision transformer ( svit ) - based models have emerged as alternate low - power vit networks. however, their large memory footprints still hinder their applicability for resource - constrained embedded ai systems. therefore, there is a need for a methodology to compress svit models without degrading the accuracy significantly. to address this, we propose qsvit, a novel design methodology to compress the svit models through a systematic quantization strategy across different network layers. to do this, our qsvit employs several key steps : ( 1 ) investigating the impact of different precision levels in different network layers, ( 2 ) identifying the appropriate base quantization settings for guiding bit precision reduction, ( 3 ) performing a guided quantization strategy based on the base settings to select the appropriate quantization setting, and ( 4 ) developing an efficient quantized network based on the selected quantization setting. the experimental results demonstrate that our qsvit methodology achieves 22. 75 % memory saving and 21. 33 % power saving, while also maintaining high accuracy within 2. 1 % from that of the original non - quantized svit model on the imagenet dataset. these results highlight the potential of the qsvit methodology to pave the way toward efficient svit deployments on resource - constrained embedded ai systems.
arxiv:2504.00948
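one building block of such a layer - wise strategy, uniform symmetric quantization of a single tensor at a chosen bit width, can be sketched as follows; the function names and the int8 cast are illustrative, not qsvit's actual pipeline:

```python
import numpy as np

def quantize(weights, bits):
    """Uniform symmetric quantization of a weight tensor to the given
    bit width; returns integer codes and the float scale factor."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(weights).max() / qmax
    q = np.clip(np.round(weights / scale), -qmax - 1, qmax)
    return q.astype(np.int8 if bits <= 8 else np.int32), scale

def dequantize(q, scale):
    """Map integer codes back to (approximate) float weights."""
    return q.astype(np.float64) * scale

w = np.array([-0.5, -0.1, 0.0, 0.3, 0.5])
q, s = quantize(w, 8)
err = np.abs(dequantize(q, s) - w).max()
assert err <= s / 2  # round-trip error bounded by half a step
```

a guided search over bit widths per layer, as described above, would then trade this per - tensor error against memory and power budgets.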
we investigate the properties of a mobile ion immersed in a bose - einstein condensate ( bec ) using different theoretical approaches. a coherent state variational ansatz predicts that the ion spectral function exhibits several branches in addition to polaronic quasiparticle states, and we employ a diagrammatic analysis of the ion - atom scattering in the bec to identify them as arising from the binding of an increasing number of bosons to the ion. we develop a simplified model describing the formation of these molecular ions showing that their spectral weight scales with the number of bound atoms. the number of atoms in the dressing cloud around the ion is calculated from thermodynamic arguments, and we finally show that the dynamics following the injection of an ion into the bec exhibits various regimes governed by coherent quasiparticle propagation and decay.
arxiv:2012.11436
research on extending deep reinforcement learning ( drl ) to the multi - agent field has solved many complicated problems and made great achievements. however, almost all these studies focus only on discrete or continuous action spaces, and few works have applied multi - agent deep reinforcement learning to real - world problems, which mostly have a hybrid action space. therefore, in this paper, we propose two algorithms : deep multi - agent hybrid soft actor - critic ( mahsac ) and multi - agent hybrid deep deterministic policy gradients ( mahddpg ) to fill this gap. these two algorithms follow the centralized training and decentralized execution ( ctde ) paradigm and can handle hybrid action space problems. our experiments run on the multi - agent particle environment, a simple multi - agent particle world with some basic simulated physics. the experimental results show that these algorithms have good performance.
arxiv:2208.14447
in recent years, reward machines ( rms ) have stood out as a simple yet effective automata - based formalism for exposing and exploiting task structure in reinforcement learning settings. despite their relevance, little to no attention has been directed to the study of their security implications and robustness to adversarial scenarios, likely due to their recent appearance in the literature. with my thesis, i aim to provide the first analysis of the security of rm - based reinforcement learning techniques, with the hope of motivating further research in the field, and i propose and evaluate a novel class of attacks on rm - based techniques : blinding attacks.
arxiv:2311.09014
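A reward machine of the kind the abstract above studies is a finite automaton whose transitions fire on high - level propositions observed in the environment and emit rewards. Below is a minimal sketch; the two - step "get key, then open door" task is an illustrative example, not one from the thesis.

```python
# Reward machine as a transition table:
# (rm_state, proposition) -> (next_rm_state, reward)
RM = {
    ("u0", "key"):  ("u1", 0.0),
    ("u1", "door"): ("u2", 1.0),  # terminal state: task completed
}

def step(rm_state, proposition):
    """Advance the RM on one proposition; self-loop with zero reward otherwise."""
    return RM.get((rm_state, proposition), (rm_state, 0.0))

state, total = "u0", 0.0
for prop in ["door", "key", "door"]:  # opening the door before the key earns nothing
    state, r = step(state, prop)
    total += r
print(state, total)  # prints: u2 1.0
```

A blinding attack in this setting would tamper with the propositions the machine observes, so that transitions such as `("u1", "door")` never fire even though the underlying event occurred.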
using the chern character, we construct a natural transformation from the local hilbert functor to a functor of artin rings defined from hochschild homology, which allows us to reconstruct the semi - regularity map and the infinitesimal abel - jacobi map. combining this construction of the semi - regularity map with the obstruction theory of functors of artin rings, we give a different proof of a theorem of bloch stating that the semi - regularity map annihilates certain obstructions to embedded deformations of a closed subvariety which is a locally complete intersection.
arxiv:2109.10626
in \ cite { ka14 } we produced an algorithm for deciding whether or not an element $ \ phi \ in out ( f _ n ) $ is an iwip ( " fully irreducible " ) automorphism. at several points that algorithm was rather inefficient as it involved some general enumeration procedures as well as running several abstract processes in parallel. in this paper we refine the algorithm from \ cite { ka14 } by eliminating these inefficient features, and also by eliminating any use of mapping class groups algorithms. our main result is to produce, for any fixed $ n \ ge 3 $, an algorithm which, given a topological representative $ f $ of an element $ \ phi $ of $ out ( f _ n ) $, decides in polynomial time in terms of the " size " of $ f $, whether or not $ \ phi $ is fully irreducible. in addition, we provide a train track criterion of being fully irreducible which covers all fully irreducible elements of $ out ( f _ n ) $, including both atoroidal and non - atoroidal ones. we also give an algorithm, alternative to that of turner, for finding all the indivisible nielsen paths in an expanding train track map, and estimate the complexity of this algorithm. an appendix by mark bell provides a polynomial upper bound, in terms of the size of the topological representative, on the complexity of the bestvina - handel algorithm \ cite { bh92 } for finding either an irreducible train track representative or a topological reduction.
arxiv:1609.03820
we construct examples of volume - preserving uniquely ergodic ( and hence minimal ) real - analytic diffeomorphisms on odd - dimensional spheres.
arxiv:1309.3137
self - resonance in the atomic vibration occurs when the average wavelength of the phonon thermal vibration equals, or is a harmonic of, the diameters of the atoms. it is suggested that applying pressure at a temperature corresponding to the self - resonance should effectively reduce the number of vacancies. this theoretical prediction is tested on niobium by measuring the magnetic susceptibility of untreated and treated samples. the applied pressure - temperature treatment increased the critical temperature of niobium by about 30 percent, which was also accompanied by a volume increase.
arxiv:0911.2728
a time to digital converter ( tdc ) based system, to be used for most sub - detectors in the high - flux rare - decay experiment na62 at cern sps, was built as part of the na62 fully digital trigger and data acquisition system ( tdaq ), in which the tdc board ( tdcb ) and a general - purpose motherboard ( tel62 ) will play a fundamental role. while tdcbs, housing four high performance time to digital converters ( hptdc ), measure hit times from sub - detectors, the motherboard processes and stores them in a buffer, produces trigger primitives from different detectors and extracts only data related to the lowest trigger level decision, once this is taken on the basis of the trigger primitives themselves. the features of the tdcb board developed by the pisa na62 group are extensively discussed and performance data is presented in order to show its compliance with the experiment requirements.
arxiv:1407.2456
artificial intelligence ( ai ) is already part of our daily lives and is playing a key role in defining the economic and social shape of the future. in 2018, the european commission introduced its ai strategy, aiming to compete in the coming years with world powers such as china and the us while relying on respect for european values and fundamental rights. as a result, most member states have published their own national strategies with the aim of working on a coordinated plan for europe. in this paper, we present an ongoing study on how european countries are approaching the field of artificial intelligence, with its promises and risks, through the lens of their national ai strategies. in particular, we aim to investigate how european countries are investing in ai and to what extent the stated plans can contribute to the benefit of the whole society. this paper reports the main findings of a qualitative analysis of the investment plans reported in 15 european national strategies.
arxiv:2011.12863
large language models ( llms ) have the potential to enhance k - 12 stem education by improving both teaching and learning processes. while previous studies have shown promising results, there is still a lack of comprehensive understanding regarding how llms are effectively applied, specifically through prompt engineering - the process of designing prompts to generate desired outputs. to address this gap, our study investigates empirical research published between 2021 and 2024 that explores the use of llms combined with prompt engineering in k - 12 stem education. following the prisma protocol, we screened 2, 654 papers and selected 30 studies for analysis. our review identifies the prompting strategies employed, the types of llms used, methods of evaluating effectiveness, and limitations in prior work. results indicate that while simple and zero - shot prompting are commonly used, more advanced techniques like few - shot and chain - of - thought prompting have demonstrated positive outcomes for various educational tasks. gpt - series models are predominantly used, but smaller and fine - tuned models ( e. g., blender 7b ) paired with effective prompt engineering outperform prompting larger models ( e. g., gpt - 3 ) in specific contexts. evaluation methods vary significantly, with limited empirical validation in real - world settings.
arxiv:2410.11123
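The prompting strategies compared in the review above (zero - shot, few - shot, chain - of - thought) differ only in how the prompt text is assembled. The templates below are a minimal illustrative sketch under assumed formats; no LLM is called, and the arithmetic task is a placeholder.

```python
def zero_shot(task):
    """Bare instruction with no examples."""
    return f"Task: {task}\nAnswer:"

def few_shot(task, examples):
    """Prepend worked question/answer pairs before the new question."""
    shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\nQ: {task}\nA:"

def chain_of_thought(task):
    """Ask the model to reason step by step before answering."""
    return f"Task: {task}\nLet's think step by step."

examples = [("2+2", "4"), ("3+5", "8")]
print(few_shot("7+6", examples))
```

The review's finding that few - shot and chain - of - thought prompting often outperform simple prompting corresponds, in this sketch, to choosing the second or third template over the first for the same underlying task.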
we consider equilibrium level occupation numbers in a fermi gas with a fixed number of particles, n, and finite level spacing. using the method of generating functions and the cumulant expansion, we derive a recurrence relation for the canonical partition function and an explicit formula for the occupation numbers in terms of the single - particle partition function at n different temperatures. we apply this result to a model with an equidistant non - degenerate spectrum and obtain closed - form expressions in terms of q - polynomials and the rogers - ramanujan partial theta function. deviations from the standard fermi - dirac distribution can be interpreted in terms of a gap in the chemical potential between the particle and the hole excitations, with additional correlations at temperatures comparable to the level spacing.
arxiv:1110.6264
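The recurrence the abstract above refers to can be illustrated with the standard fermionic relation $Z_N(\beta) = \frac{1}{N}\sum_{k=1}^{N}(-1)^{k+1} z(k\beta)\, Z_{N-k}(\beta)$, where $z$ is the single - particle partition function, together with the canonical occupation numbers $\langle n_j\rangle = Z_N^{-1}\sum_{k=1}^{N}(-1)^{k+1} e^{-k\beta\epsilon_j} Z_{N-k}$. The sketch below implements these textbook formulas for a truncated equidistant spectrum; the truncation to 12 levels and the parameter values are illustrative choices, not from the paper.

```python
import math

def canonical_Z(levels, beta, N):
    """Return [Z_0, ..., Z_N] for spinless fermions on the given levels
    via Z_n = (1/n) * sum_{k=1}^{n} (-1)^(k+1) z(k*beta) Z_{n-k}."""
    def z1(b):  # single-particle partition function at inverse temperature b
        return sum(math.exp(-b * e) for e in levels)
    Z = [1.0]
    for n in range(1, N + 1):
        Z.append(sum((-1) ** (k + 1) * z1(k * beta) * Z[n - k]
                     for k in range(1, n + 1)) / n)
    return Z

def occupations(levels, beta, N):
    """Canonical <n_j> = (1/Z_N) sum_k (-1)^(k+1) exp(-k beta e_j) Z_{N-k}."""
    Z = canonical_Z(levels, beta, N)
    return [sum((-1) ** (k + 1) * math.exp(-k * beta * e) * Z[N - k]
                for k in range(1, N + 1)) / Z[N]
            for e in levels]

levels = [float(j) for j in range(12)]  # truncated equidistant spectrum, unit spacing
occ = occupations(levels, beta=1.0, N=4)
print(sum(occ))  # particle number is conserved: sums to N (up to truncation error)
```

Note that the occupations are evaluated from $Z_{N-k}$ at the single inverse temperature $\beta$, consistent with the abstract's statement that they are expressible through the single - particle partition function evaluated at n different temperatures $k\beta$.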
the effect of epitaxial strain on the phonon spectra, crystal structure, spontaneous polarization, dielectric, piezoelectric, and elastic properties of ( 001 ) - oriented ferroelectric ( batio $ _ 3 $ ) $ _ m $ / ( srtio $ _ 3 $ ) $ _ n $ superlattices ( $ m = n = { } $ 1 - 4 ) was studied using the first - principles density - functional theory. the ground state of free - standing superlattices is the monoclinic $ cm $ polar phase. under the in - plane biaxial compressive strain, it transforms to tetragonal $ p4mm $ polar phase, and under the in - plane biaxial tensile strain, it transforms to orthorhombic $ amm2 $ polar phase. when changing the in - plane lattice parameter, a softening of several optical and acoustic modes appears at the boundaries between the polar phases, and corresponding components of dielectric, piezoelectric, and elastic tensors diverge critically. the comparison of the mixing enthalpy of disordered ba $ _ { 0. 5 } $ sr $ _ { 0. 5 } $ tio $ _ 3 $ solid solution modeled using two special quasirandom structures sqs - 4 with the mixing enthalpy of the superlattices reveals a tendency of the batio $ _ 3 $ - srtio $ _ 3 $ system to short - range ordering and shows that these superlattices are thermodynamically quite stable.
arxiv:1105.5828
we consider an inverse boundary value problem for a model time - harmonic equation of acoustic tomography of moving fluid with variable current velocity, sound speed, density and absorption. in the present article it is assumed that at fixed frequency the coefficients of this equation are already recovered modulo an appropriate gauge transformation using some reconstruction method from boundary measurements presented in the literature. our main result consists in formulas and equations that allow to get rid of this gauge non - uniqueness and recover the fluid parameters using boundary measurements at several frequencies.
arxiv:1512.06367
transverse momentum spectra of identified particles produced in heavy - ion collisions at the large hadron collider are described with relativistic fluid dynamics. we perform a systematic comparison of experimental data for pions, kaons and protons up to a transverse momentum of 3 gev $ / c $ with calculations using the fluidum code package to solve the evolution equations of fluid dynamics, the trento model to describe the initial state and the fastreso code to take resonance decays into account. using data in five centrality classes at the center - of - mass collision energy per nucleon pair $ \ sqrt { s _ \ text { nn } } = 2. 76 \, \ text { tev } $, we determine systematically the most likely parameters of our theoretical model including the shear and bulk viscosity to entropy ratios, the initialization time, initial density and freeze - out temperature through a global search and quantify their posterior probability. this is facilitated by the very efficient numerical implementation of fluidum and fastreso. based on the most likely model parameters we present predictions for the transverse momentum spectra of multi - strange hadrons as well as identified particle spectra from pb - pb collisions at $ \ sqrt { s _ \ text { nn } } = 5. 02 \, \ text { tev } $.
arxiv:1909.10485