Columns: text (string, lengths 1–3.65k) · source (string, lengths 15–79)
A partial complement of the graph $G$ is a graph obtained from $G$ by complementing all the edges in one of its induced subgraphs. We study the following algorithmic question: for a given graph $G$ and graph class $\mathcal{G}$, is there a partial complement of $G$ which is in $\mathcal{G}$? We show that this problem can be solved in polynomial time for various choices of the graph class $\mathcal{G}$, such as bipartite, degenerate, or cographs. We complement these results by proving that the problem is NP-complete when $\mathcal{G}$ is the class of $r$-regular graphs.
arxiv:1804.10920
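To make the partial-complement operation above concrete, here is a small Python sketch (our own illustration, not code from the paper): it complements the edges inside a chosen vertex subset, and then brute-forces over all subsets to test whether some partial complement is bipartite. The paper's point is that this question admits polynomial-time algorithms for several classes; the exponential search below only illustrates the definition.

```python
# Naive reference implementation of the "partial complement" operation.
# Brute force is exponential and only illustrates the definition; it is not
# the paper's polynomial-time algorithm.
from itertools import combinations
import networkx as nx

def partial_complement(G, S):
    """Return a copy of G with all edges inside the induced subgraph G[S] complemented."""
    H = G.copy()
    for u, v in combinations(S, 2):
        if H.has_edge(u, v):
            H.remove_edge(u, v)
        else:
            H.add_edge(u, v)
    return H

def has_bipartite_partial_complement(G):
    """Brute-force check: is some partial complement of G bipartite?"""
    nodes = list(G.nodes)
    for r in range(len(nodes) + 1):
        for S in combinations(nodes, r):
            if nx.is_bipartite(partial_complement(G, S)):
                return True
    return False

if __name__ == "__main__":
    G = nx.complete_graph(4)                      # K4 is not bipartite
    print(nx.is_bipartite(G))                     # False
    print(has_bipartite_partial_complement(G))    # True: complementing a triangle leaves a star
```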
The physical mechanism through which the outgoing material of massive red supergiants is accelerated above the escape velocity is unclear. Thanks to the transparency of its circumstellar envelope, the nearby red supergiant Betelgeuse gives the opportunity to probe the innermost layers of the envelope of a typical red supergiant down to the photosphere, i.e. where the acceleration of the wind is expected to occur. We took advantage of the SPHERE/ZIMPOL adaptive optics imaging polarimeter to resolve the visible photosphere and close envelope of Betelgeuse. We detect an asymmetric gaseous envelope inside a radius of 2 to 3 times the near-infrared photospheric radius of the star (R*), and a significant H-alpha emission mostly contained within 3 R*. From the polarimetric signal, we also identify the signature of dust scattering in an asymmetric and incomplete dust shell located at a similar radius. The presence of dust so close to the star may have a significant impact on the wind acceleration through radiative pressure on the grains. The 3 R* radius emerges as a major interface between the hot gaseous and dusty envelopes. The detected asymmetries strengthen previous indications that the mass loss of Betelgeuse is likely tied to the vigorous convective motions in its atmosphere.
arxiv:1511.04451
This paper sheds new light on several interrelated topics of second-order variational analysis, both in finite and infinite-dimensional settings. We establish new relationships between second-order growth conditions on functions, the basic properties of metric regularity and subregularity of the limiting subdifferential, tilt-stability of local minimizers, and positive-definiteness/semidefiniteness properties of the second-order subdifferential (or generalized Hessian).
arxiv:1304.7385
The authors apply the generalized master equation to analyze time-dependent transport through a finite quantum wire with an embedded subsystem. The parabolic quantum wire and the leads with several subbands are described by a continuous model. We use an approach originally developed for a tight-binding description, selecting the relevant states for transport around the bias window defined by the values of the chemical potential in the left and right leads, in order to capture the effects of the nontrivial geometry of the system on the transport. We observe a partial current reflection as a manifestation of a quasi-bound state in an embedded well, and the formation of a resonance state between an off-set potential hill and the boundary of the system.
arxiv:0903.3491
Inference over tails is performed by applying only the results of extreme value theory. Whilst such theory is well defined and flexible enough in the univariate case, multivariate inferential methods often require the imposition of arbitrary constraints not fully justified by the underlying theory. In contrast, our approach uses only the constraints imposed by theory. We build on previous, theoretically justified work for marginal exceedances over a high, unknown threshold, by combining it with flexible, semiparametric copula specifications to investigate extreme dependence. Whilst giving probabilistic judgements about the extreme regime of all marginal variables, our approach formally uses the full dataset and allows for a variety of patterns of dependence, be they extremal or not. A new probabilistic criterion quantifying the possibility that the data exhibit asymptotic independence is introduced and its robustness empirically studied. Estimation of functions of interest in extreme value analyses is performed via MCMC algorithms. Attention is also devoted to the prediction of new extreme observations. Our approach is evaluated through a series of simulations, applied to real data sets and assessed against competing approaches. The evidence demonstrates that the bulk of the data does not bias, and in fact improves, the inferential process for the extremal dependence.
arxiv:1707.00877
In this paper, we attempt to link the inner workings of a neural language model to linguistic theory, focusing on a complex phenomenon well discussed in formal linguistics: (negative) polarity items. We briefly discuss the leading hypotheses about the licensing contexts that allow negative polarity items and evaluate to what extent a neural language model has the ability to correctly process a subset of such constructions. We show that the model finds a relation between the licensing context and the negative polarity item and appears to be aware of the scope of this context, which we extract from a parse tree of the sentence. With this research, we hope to pave the way for other studies linking formal linguistics to deep learning.
arxiv:1808.10627
Kernel-based classification methods, particularly the support vector machine (SVM), are among the most common algorithms for hyperspectral data classification. The radial basis function (RBF) kernel has earned great popularity in hyperspectral data classification due to its superior performance among the available kernel functions. Nonetheless, the cross-validation technique usually used for tuning the RBF parameter can be time-consuming and may result in sub-optimal values for the parameter. This paper proposes the cluster-based random radial basis function (CRRBF) kernel as an alternative to the RBF kernel, achieving similar performance with a more manageable parameter, namely the number of clusters. The CRRBF kernel initially clusters the hyperspectral bands and then constructs an RBF kernel from each cluster of bands, with a randomly assigned value as the kernel parameter. The final CRRBF kernel is constructed by adding up these basis RBF kernels. We have designed several experiments to evaluate the performance of an SVM trained with the CRRBF kernel for different numbers of clusters and training samples, using three hyperspectral data sets. The obtained results showed that the CRRBF kernel can provide comparable or better results than the RBF. The results also showed that the classification performance is quite robust to the number of clusters, the only open parameter of the CRRBF kernel.
arxiv:2409.05013
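The following sketch shows one plausible reading of the kernel construction described above, with all names and choices (KMeans for band clustering, the range of random widths) being our assumptions rather than the authors' code: cluster the spectral bands, draw one random RBF parameter per cluster, sum the per-cluster RBF kernels, and pass the precomputed kernel to an SVM.

```python
# Sketch of a cluster-based random RBF (CRRBF)-style kernel, per our reading
# of the abstract; assumptions, not the authors' implementation.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def crrbf_kernel(X, Y, n_clusters=5, rng=None):
    rng = np.random.default_rng(rng)
    # Cluster the spectral bands (columns) by their profiles across samples.
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X.T)
    K = np.zeros((X.shape[0], Y.shape[0]))
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        gamma = rng.uniform(1e-3, 1.0)            # randomly assigned kernel parameter
        K += rbf_kernel(X[:, idx], Y[:, idx], gamma=gamma)
    # (For test data one would reuse the clustering and gammas fixed on the training set.)
    return K

# Usage with a precomputed-kernel SVM on synthetic stand-in data (100 pixels, 50 bands).
X = np.random.rand(100, 50)
y = (X[:, :25].mean(axis=1) > 0.5).astype(int)
K_train = crrbf_kernel(X, X, n_clusters=5, rng=0)
clf = SVC(kernel="precomputed").fit(K_train, y)
print(clf.score(K_train, y))
```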
We provide a homological model for a family of quantum representations of mapping class groups arising from non-semisimple TQFTs (topological quantum field theories). Our approach gives a new geometric point of view on these representations, and it gathers into one theory two of the most promising constructions for investigating linearity of mapping class groups. More precisely, if $\varsigma_{g,1}$ is a surface of genus $g$ with $1$ boundary component, we consider a (crossed) action of its mapping class group $\mathrm{Mod}(\varsigma_{g,1})$ on the homology of its configuration space $\mathrm{Conf}_n(\varsigma_{g,1})$ with twisted coefficients in the Heisenberg quotient $\mathbb{H}_g$ of its surface braid group $\pi_1(\mathrm{Conf}_n(\varsigma_{g,1}))$. We show that this action intertwines an action of the quantum group of $\mathfrak{sl}_2$, that we define by purely homological means. For a finite-dimensional linear representation of $\mathbb{H}_g$ (depending on a root of unity $\zeta$), we tweak the construction to obtain a projective representation of $\mathrm{Mod}(\varsigma_{g,1})$. Finally, we identify, by an explicit isomorphism, a subrepresentation of $\mathrm{Mod}(\varsigma_{g,1})$ that is equivalent to the quantum representation arising from the non-semisimple TQFT associated with quantum $\mathfrak{sl}_2$ at $\zeta$. In the process, we provide concrete bases and explicit formulas for the actions of all the standard generators of $\mathrm{Mod}(\varsigma_{g,1})$ and of quantum $\mathfrak{sl}_2$ on both sides of the equivalence, and answer a question by Crivelli, Felder, and Wieczerkowski. We also make sure that the restriction of these representations to the Torelli group $\mathcal{I}(\varsigma_{g,1}
arxiv:2212.10940
Galaxies obey a number of empirical correlations between their radio, $\gamma$-ray, and infrared emission, but the physical origins of these correlations remain uncertain. Here we use the CONGRuENTS model for broadband non-thermal emission from star-forming galaxies, which self-consistently calculates energy-dependent transport and non-thermal emission from cosmic ray hadrons and leptons, to predict radio and $\gamma$-ray emission for a synthetic galaxy population with properties drawn from a large deep-field survey. We show that our synthetic galaxies reproduce observed relations such as the FIR-radio correlation, the FIR-$\gamma$ correlation, and the distribution of radio spectral indices, and we use the model to explain the physical origins of these relations. Our results show that the FIR-radio correlation arises because the amount of cosmic ray electron power ultimately radiated as synchrotron emission varies only weakly with galaxy star formation rate, as a result of the constraints imposed on gas properties by hydrostatic balance and turbulent dynamo action; the same physics dictates the extent of proton calorimetry in different galaxies, and thus sets the FIR-$\gamma$-ray correlation. We further show that galactic radio spectral indices result primarily from competition between thermal free-free emission and energy-dependent loss of cosmic ray electrons to bremsstrahlung and escape into galactic halos, with shaping of the spectrum by inverse Compton, synchrotron, and ionisation processes typically playing a sub-dominant role. In addition to explaining existing observations, we use our analysis to predict a heretofore unseen correlation between the curvature of galaxies' radio spectra and their pion-driven $\gamma$-ray emission, a prediction that will be testable with upcoming facilities.
arxiv:2310.05693
The recent success of the CLIP model has shown its potential to be applied to a wide range of vision and language tasks. However, this only establishes an embedding-space relationship between language and images, not the video domain. In this paper, we propose a novel approach to map the video embedding space to natural language. We propose a two-stage approach that first extracts visual features from each frame of a video using a pre-trained CNN, and then uses the CLIP model to encode the visual features for the video domain, along with the corresponding text descriptions. We evaluate our method on two benchmark datasets, UCF101 and HMDB51, and achieve state-of-the-art performance on both tasks.
arxiv:2303.14584
This paper is motivated by recent research in the $d$-dimensional stochastic linear bandit literature, which has revealed an unsettling discrepancy: algorithms like Thompson Sampling and Greedy demonstrate promising empirical performance, yet this contrasts with their pessimistic theoretical regret bounds. The challenge arises from the fact that while these algorithms may perform poorly in certain problem instances, they generally excel in typical instances. To address this, we propose a new data-driven technique that tracks the geometric properties of the uncertainty ellipsoid around the main problem parameter. This methodology enables us to formulate a data-driven frequentist regret bound, which incorporates the geometric information, for a broad class of base algorithms, including Greedy, OFUL, and Thompson Sampling. This result allows us to identify and ``course-correct'' problem instances in which the base algorithms perform poorly. The course-corrected algorithms achieve the minimax optimal regret of order $\tilde{\mathcal{O}}(d\sqrt{T})$ for a $T$-period decision-making scenario, effectively maintaining the desirable attributes of the base algorithms, including their empirical efficacy. We present simulation results to validate our findings using synthetic and real data.
arxiv:2306.14872
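To make "the geometric properties of the uncertainty ellipsoid" concrete: in stochastic linear bandits the ellipsoid at time t is usually defined through the regularized Gram matrix $V_t = \lambda I + \sum_s x_s x_s^\top$, whose eigenvalues govern the lengths of its axes. The snippet below only tracks that quantity on random unit-norm actions; it is a generic illustration, not the paper's course-correction procedure.

```python
# Generic illustration: track the eigenvalues of the regularized Gram matrix
# V_t = lam*I + sum_s x_s x_s^T defining the confidence ellipsoid in linear
# bandits.  Not the paper's algorithm; only the quantity it reasons about.
import numpy as np

d, T, lam = 5, 200, 1.0
rng = np.random.default_rng(0)
V = lam * np.eye(d)
min_eigs = []
for t in range(T):
    x = rng.normal(size=d)
    x /= np.linalg.norm(x)        # unit-norm action, as in standard bandit setups
    V += np.outer(x, x)           # rank-one Gram-matrix update
    min_eigs.append(np.linalg.eigvalsh(V).min())

# A well-spread action sequence makes the smallest eigenvalue of V_t grow
# (the narrowest axis of the ellipsoid shrink), which is the kind of geometric
# information a data-driven regret bound can exploit.
print(min_eigs[0], min_eigs[-1])
```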
We present the results of a theoretical study of surface state properties in a two-dimensional model for triplet $p$-wave superconductors. We derive boundary conditions for the Eilenberger equations at rough interfaces and develop an approach for the self-consistent solution of the spatial dependence of the $p_x$- and $p_x + i~p_y$-wave pair potentials. In the $p_x$ case we demonstrate the robustness of the zero-energy peak in the density of states (DOS) with respect to surface roughness, in contrast to the suppression of such a peak in the case of $d_{xy}$ symmetry. This effect is due to the stability of the odd-frequency pairing state at the surface with respect to disorder. In the case of the chiral $p_x + i~p_y$ state we demonstrate the appearance of a complex multi-peak subgap structure in the spectrum with increasing surface roughness.
arxiv:1408.5480
We consider the application of relative self-calibration using overlap regions to spectroscopic galaxy surveys that use slitless spectroscopy. This method is based on that developed for the SDSS by Padmanabhan et al. (2008), in that we consider jointly fitting and marginalising over calibrator brightness, rather than treating these as free parameters. However, we separate the detector-to-detector calibration from the full-focal-plane exposure-to-exposure calibration. To demonstrate how the calibration procedure will work, we simulate the procedure for a potential implementation of the spectroscopic component of the wide Euclid survey. We study the change of coverage and the determination of relative multiplicative errors in flux measurements for different dithering configurations. We use the new method to study the case where the flat-field across each exposure or detector is measured precisely and only exposure-to-exposure or detector-to-detector variation in the flux error remains. We consider several base dither patterns and find that they strongly influence the ability to calibrate using this methodology. To enable self-calibration, it is important that the survey strategy connects different observations with at least a minimum amount of overlap, and we propose an "S"-pattern for dithering that fulfills this requirement. The final survey strategy adopted by Euclid will have to optimise for a number of different science goals and requirements. The large-scale calibration of the spectroscopic galaxy survey is clearly cosmologically crucial, but is not the only one.
arxiv:1606.07061
A number of supernova remnants (SNRs) show nonthermal X-rays assumed to be synchrotron emission from shock-accelerated TeV electrons. The existence of these TeV electrons strongly suggests that the shocks in SNRs are sources of galactic cosmic rays (CRs). In addition, there is convincing evidence from broad-band studies of individual SNRs and elsewhere that the particle acceleration process in SNRs can be efficient and nonlinear. If SNR shocks are efficient particle accelerators, the production of CRs impacts the thermal properties of the shock-heated, X-ray emitting gas and the SNR evolution. We report on a technique that couples nonlinear diffusive shock acceleration, including the backreaction of the accelerated particles on the structure of the forward and reverse shocks, with a hydrodynamic simulation of SNR evolution. Compared to models which ignore CRs, the most important hydrodynamical effects of placing a significant fraction of the shock energy into CRs are larger shock compression ratios and lower temperatures in the shocked gas. We compare our results, which use an approximate description of the acceleration process, with a more complete model where the full CR transport equations are solved (i.e., Berezhko et al., 2002), and find excellent agreement for the CR spectrum summed over the SNR lifetime and the evolving shock compression ratio. The importance of the coupling between particle acceleration and SNR dynamics for the interpretation of broad-band continuum and thermal X-ray observations is discussed.
arxiv:astro-ph/0308308
Comparing data defined over space and time is notoriously hard, because it involves quantifying both spatial and temporal variability while at the same time taking into account the chronological structure of the data. Dynamic time warping (DTW) computes an optimal alignment between time series in agreement with the chronological order, but is inherently blind to spatial shifts. In this paper, we propose spatio-temporal alignments (STA), a new differentiable formulation of DTW in which spatial differences between time samples are accounted for using regularized optimal transport (OT). Our temporal alignments are handled through a smooth variant of DTW called soft-DTW, for which we prove a new property: soft-DTW increases quadratically with time shifts. The cost matrices within soft-DTW are computed using unbalanced OT, to handle the case in which observations are not normalized probabilities. Experiments on handwritten letters and brain imaging data confirm our theoretical findings and illustrate the effectiveness of STA as a dissimilarity measure for spatio-temporal data.
arxiv:1910.03860
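For readers unfamiliar with soft-DTW, the sketch below implements the standard smoothed-min recursion over a precomputed cost matrix. The STA method described above would fill that matrix with unbalanced optimal-transport costs between spatial samples; the squared-Euclidean placeholder cost used here is our simplification, not the paper's.

```python
# Soft-DTW value for a precomputed cost matrix C (n x m), via the standard
# smoothed-min recursion.  STA would fill C with unbalanced-OT costs; here a
# placeholder squared-Euclidean cost is used instead.
import numpy as np

def softmin(a, b, c, gamma):
    z = -np.array([a, b, c]) / gamma
    zmax = z.max()
    return -gamma * (zmax + np.log(np.exp(z - zmax).sum()))

def soft_dtw(C, gamma=1.0):
    n, m = C.shape
    R = np.full((n + 1, m + 1), np.inf)
    R[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            R[i, j] = C[i - 1, j - 1] + softmin(R[i - 1, j], R[i, j - 1], R[i - 1, j - 1], gamma)
    return R[n, m]

# Toy usage: two shifted sine time series with a placeholder cost matrix.
x = np.sin(np.linspace(0, 2 * np.pi, 40))[:, None]
y = np.sin(np.linspace(0, 2 * np.pi, 50) + 0.3)[:, None]
C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
print(soft_dtw(C, gamma=0.1))
```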
In this article, we study the radiative decays among the charmonium states with heavy quark effective theory, and make predictions for the ratios among the radiative decay widths from one multiplet to another. The predictions can be confronted with experimental data in the future and put additional constraints on the identification of the $X$, $Y$, $Z$ charmonium-like mesons.
arxiv:1101.0474
We explore the possibility of detecting many-body entanglement using time-of-flight (TOF) momentum correlations in ultracold atomic Fermi gases. In analogy to the vacuum correlations responsible for Bekenstein-Hawking black hole entropy, a partitioned atomic gas will exhibit particle-hole correlations responsible for entanglement entropy. The signature of these momentum correlations might be detected by a sensitive TOF-type experiment.
arxiv:1008.1258
This paper proposes, for every $n$, linear-time reductions of the word and conjugacy problems on the braid group $B_n$ to the corresponding problems on the braid monoid $B_n^+$, moreover using only positive word representations.
arxiv:0709.3887
The impulsive phase of solar flares is a time of rapid energy deposition and heating in the lower solar atmosphere, leading to changes in the temperature and density structure of the region. We use an O V density diagnostic formed from the 192 Å to 248 Å line ratio, provided by Hinode EIS, to determine the density of flare footpoint plasma at O V formation temperatures of 250,000 K, giving a constraint on the properties of the heated transition region. Hinode EIS rasters from 2 small flare events in December 2007 were used. Raster images were co-aligned to identify and establish the footpoint pixels, multiple-component Gaussian line fitting of the spectra was carried out to isolate the diagnostic pair, and the density was calculated for several footpoint areas. The assumptions of equilibrium ionization and optically thin radiation for the O V lines were found to be acceptable. Properties of the electron distribution, for one event, were deduced from earlier RHESSI hard X-ray observations and used to calculate the plasma heating rate delivered by an electron beam, adopting collisional thick-target assumptions, for 2 model atmospheres. Electron number densities of at least log n = 12.3 cm$^{-3}$ were measured during the flare impulsive phase, far higher than previously expected. For one footpoint, the radiative loss rate for this plasma was found to exceed that which can be delivered by an electron beam implied by the RHESSI data. However, when assuming a completely ionised target atmosphere the heating rate exceeded the losses. A chromospheric thickness of 70-700 km was found to be required to balance a conductive input to the O V-emitting region with radiative losses. The analysis shows that for heating by collisional electrons, it is difficult or impossible to raise the temperature of the chromosphere to explain the observed densities without assuming a completely ionised atmosphere.
arxiv:1411.4603
Applying machine learning techniques to graph drawing has become an emergent area of research in visualization. In this paper, we interpret graph drawing as a multi-agent reinforcement learning (MARL) problem. We first demonstrate that a large number of classic graph drawing algorithms, including force-directed layouts and stress majorization, can be interpreted within the framework of MARL. Using this interpretation, a node in the graph is assigned to an agent with a reward function. Via multi-agent reward maximization, we obtain an aesthetically pleasing graph layout that is comparable to the outputs of classic algorithms. The main strength of a MARL framework for graph drawing is that it not only unifies a number of classic drawing algorithms in a general formulation but also supports the creation of novel graph drawing algorithms by introducing a diverse set of reward functions.
arxiv:2011.00748
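As a toy analogue of the per-node reward view described above (our illustration, not the paper's MARL formulation or reward set), the sketch below treats each node as an agent that greedily improves a stress-based reward, i.e. the negative squared deviation between layout distances and graph-theoretic distances, which reduces to a simple stress-minimizing update.

```python
# Toy "one agent per node" layout: each node greedily improves its own reward
# (negative stress against graph-theoretic distances), recovering a simple
# stress-minimizing update.  Illustration only.
import numpy as np
import networkx as nx

G = nx.cycle_graph(10)
D = dict(nx.all_pairs_shortest_path_length(G))      # target distances
nodes = list(G.nodes)
rng = np.random.default_rng(0)
pos = {v: rng.normal(size=2) for v in nodes}

def reward(v):
    # Agent v's reward: negative stress of its own position w.r.t. all others.
    return -sum((np.linalg.norm(pos[v] - pos[u]) - D[v][u]) ** 2
                for u in nodes if u != v)

lr = 0.01
for _ in range(300):
    for v in nodes:                                   # each agent acts in turn
        grad = np.zeros(2)
        for u in nodes:
            if u == v:
                continue
            diff = pos[v] - pos[u]
            dist = np.linalg.norm(diff) + 1e-9
            grad += 2.0 * (dist - D[v][u]) * diff / dist
        pos[v] = pos[v] - lr * grad                   # step that increases agent v's reward

print("total stress:", -0.5 * sum(reward(v) for v in nodes))
```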
With the boom in modern software development, open-source software has become an integral part of various industries, driving progress in computer science. However, the immense complexity and diversity of the open-source ecosystem also pose a series of challenges, including issues of quality, security, management, maintenance, compliance, and sustainability. Existing open-source governance approaches, while excelling in community building and collaboration, still face shortcomings in decentralized management, security, and maintenance. To address these challenges, inspired by the Human Genome Project, we treat the software source code as software DNA and propose the \textbf{Software Genome Project}, which is geared towards the secure monitoring and exploitation of open-source software. By identifying and labeling integrated and classified code features at a fine-grained level, and effectively identifying safeguards for functional implementations and non-functional requirements at different levels of granularity, the Software Genome Project builds a complete set of software genome maps to help developers and managers gain a deeper understanding of software complexity and diversity. By dissecting and summarizing functional and undesirable genes, the Software Genome Project helps facilitate targeted software remediation and optimization, provides valuable insight and understanding of the entire software ecosystem, and supports critical development tasks such as technology selection and open-source governance. This project is expected to drive the evolution of software development towards more efficient, reliable, and sustainable software solutions.
arxiv:2311.09881
One of the most common things that a genealogist is tasked with is the gathering of a person's initial family history, normally via in-person interviews or with the use of a platform such as Ancestry.com, as this can provide a strong foundation upon which a genealogist may build. However, the ability to conduct these interviews can often be hindered by both geographical constraints and the technical proficiency of the interviewee, as the interviewee in these types of interviews is most often an elderly person with a lower than average level of technical proficiency. With this in mind, this study presents what we believe, based on prior research, to be the first chatbot geared entirely towards the gathering of family histories, and explores the viability of utilising such a chatbot by comparing the performance and usability of such a method with the aforementioned alternatives. With a chatbot-based approach, we show that, though the average time taken to conduct an interview may be longer than if the user had used Ancestry.com or participated in an in-person interview, the number of mistakes made and the level of confusion from the user regarding the UI and process required is lower than with the other two methods. Note that the final metric regarding the user's confusion is not applicable for the in-person interview sessions due to their lack of a UI. With refinement, we believe this use of a chatbot could be a valuable tool for genealogists, especially when dealing with interviewees who are based in other countries where it is not possible to conduct an in-person interview.
arxiv:2309.03223
Brightness variations due to dark spots on the stellar surface encode information about stellar surface rotation and magnetic activity. In this work, we analyze the Kepler long-cadence data of 26,521 main-sequence stars of spectral types M and K in order to measure their surface rotation and photometric activity level. Rotation-period estimates are obtained by the combination of a wavelet analysis and the autocorrelation function of the light curves. Reliable rotation estimates are determined by comparing the results from the different rotation diagnostics and four data sets. We also measure the photometric activity proxy Sph using the amplitude of the flux variations on an appropriate timescale. We report rotation periods and photometric activity proxies for about 60 per cent of the sample, including 4,431 targets for which McQuillan et al. (2013a, 2014) did not report a rotation period. For the common targets with rotation estimates in this study and in McQuillan et al. (2013a, 2014), our rotation periods agree within 99 per cent. In this work, we also identify potential polluters, such as misclassified red giants and classical pulsator candidates. Within the parameter range we study, there is a mild tendency for hotter stars to have shorter rotation periods. The photometric activity proxy spans a wider range of values with increasing effective temperature. The rotation period and photometric activity proxy are also related, with Sph being larger for fast rotators. Similar to McQuillan et al. (2013a, 2014), we find a bimodal distribution of rotation periods.
arxiv:1908.05222
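A toy version of the autocorrelation-function step of such a rotation pipeline is sketched below: build a synthetic spot-modulated light curve at roughly Kepler long cadence, compute its ACF, and read the rotation period off the dominant peak beyond zero lag. The wavelet analysis, the multi-data-set comparison, and the Sph computation used in the paper are not reproduced, and all parameter values are illustrative.

```python
# Toy autocorrelation-based rotation-period estimate for a spot-modulated
# light curve; the actual pipeline also uses a wavelet analysis.
import numpy as np

p_true = 12.3                               # injected rotation period [days]
cadence = 29.4 / (60 * 24)                  # ~Kepler long-cadence sampling [days]
t = np.arange(0, 90, cadence)               # roughly one observing quarter
rng = np.random.default_rng(1)
flux = 1.0 + 0.01 * np.sin(2 * np.pi * t / p_true) + 0.002 * rng.normal(size=t.size)

f = flux - flux.mean()
acf = np.correlate(f, f, mode="full")[f.size - 1:]   # lags 0 .. N-1
acf /= acf[0]

lags = np.arange(acf.size) * cadence
mask = lags > 2.0                            # skip the zero-lag peak region
p_est = lags[mask][np.argmax(acf[mask])]     # dominant ACF peak ~ rotation period
print(f"estimated period: {p_est:.2f} d (true {p_true} d)")
```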
In this survey article we consider the directed last-passage percolation model on the planar square lattice with nearest-neighbor steps and general i.i.d. weights on the vertices, outside of the class of exactly solvable models. We show how stationary cocycles are constructed from queueing fixed points and how these cocycles characterize the limit shape, yield existence of Busemann functions in directions where the shape has some regularity, describe the direction of the competition interface, and answer questions on existence, uniqueness, and coalescence of directional semi-infinite geodesics, and on nonexistence of doubly infinite geodesics.
arxiv:1804.05715
This paper studies the fundamental learning problem of the energy-based model (EBM). Learning the EBM can be achieved using maximum likelihood estimation (MLE), which typically involves Markov chain Monte Carlo (MCMC) sampling, such as Langevin dynamics. However, noise-initialized Langevin dynamics can be challenging in practice and hard to mix. This motivates the exploration of joint training with a generator model, where the generator serves as a complementary model to bypass MCMC sampling. However, such a method can be less accurate than MCMC and result in biased EBM learning. While the generator can also serve as an initializer model for better MCMC sampling, its learning can be biased since it only matches the EBM and has no access to empirical training examples. Such biased generator learning may limit the potential of learning the EBM. To address this issue, we present a joint learning framework that interweaves the maximum likelihood learning algorithm for both the EBM and the complementary generator model. In particular, the generator model is learned by MLE to match both the EBM and the empirical data distribution, making it a more informative initializer for MCMC sampling of the EBM. Learning the generator from observed examples typically requires inference of the generator posterior. To ensure accurate and efficient inference, we adopt MCMC posterior sampling and introduce a complementary inference model to initialize such latent MCMC sampling. We show that the three separate models can be seamlessly integrated into our joint framework through two (dual-) MCMC teaching, enabling effective and efficient EBM learning.
arxiv:2312.02469
We prove the validity over $\mathbb{R}$ of a commutative differential graded algebra model of configuration spaces for simply connected closed smooth manifolds, answering a conjecture of Lambrechts--Stanley. We get as a result that the real homotopy type of such configuration spaces only depends on the real homotopy type of the manifold. We moreover prove, if the dimension of the manifold is at least $4$, that our model is compatible with the action of the Fulton--MacPherson operad (weakly equivalent to the little disks operad) when the manifold is framed. We use this more precise result to get a complex computing factorization homology of framed manifolds. Our proofs use the same ideas as Kontsevich's proof of the formality of the little disks operads.
arxiv:1608.08054
We study the FIR and UV-visible properties of nearby star-forming galaxies. This comparison is performed using the local luminosity functions at UV and FIR wavelengths and on individual starburst galaxies for which photometric data from UV to NIR and FIR are available. The comparison of the FIR and UV local luminosity functions argues for a moderate extinction in nearby disk galaxies. For a sample of 22 starburst galaxies, it is found that the UV (912-3650 Å), the visible (3600-12500 Å) and the NIR (12500-22000 Å) wavelength ranges contribute 30%, 50% and 20% respectively to the total emerging stellar emission. The mean ratio of the dust to bolometric luminosity of these galaxies is 0.37 +/- 0.22, similar to the ratio found for normal spiral galaxies. The mean extinction at 2000 Å is found to be ~1.2 mag, although with a large dispersion. The conversion factor of the stellar emission into dust emission is found to correlate with the luminosity of the galaxies, brighter galaxies having a higher conversion factor. We conclude that a very large conversion of the stellar light into dust emission can no longer be assumed as a general property of starburst galaxies, at least in the local universe. We compare the UV properties of our local starburst galaxies to those of high-redshift galaxies. The larger extinction found in the distant galaxies is consistent with the trend we find for the nearby starburst galaxies, namely the brighter the galaxies the lower the escape fraction of stellar light.
arxiv:astro-ph/9803156
The modeling of nonlinear dynamics based on Koopman operator theory, which is originally applicable only to autonomous systems with no control, is extended to non-autonomous control systems without approximating the input matrix B. Prevailing methods using a least-squares estimate of the B matrix may result in an erroneous input matrix, misinforming the controller about the structure of the input matrix in the lifted space. Here, a new method for constructing a Koopman model that comprises the exact input matrix B is presented. A set of state variables is introduced so that the control inputs are linearly involved in the dynamics of the actuators. With these variables, a lifted linear model with the exact control matrix, called a control-coherent Koopman model, is constructed by superposing control input terms, which are linear in the local actuator dynamics, onto the Koopman operator of the associated autonomous nonlinear system. The proposed method is applied to multi-degree-of-freedom robotic arms and multi-cable manipulation systems, and model predictive control is applied to the former. It is demonstrated that the prevailing dynamic mode decomposition with control (DMDc), using an approximate control matrix B, does not provide a satisfactory result, while the control-coherent Koopman model performs well with the correct B matrix.
arxiv:2403.16306
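For context, the baseline the abstract argues against, dynamic mode decomposition with control (DMDc), identifies A and B jointly from snapshot data in a single least-squares solve. The sketch below shows that baseline on a toy linear system; the paper's control-coherent construction, which keeps the exact B, is not reproduced here, and all matrices are illustrative.

```python
# Baseline DMD-with-control (DMDc) identification on a toy linear system:
# stack snapshots and solve X' ~= A X + B U in one least-squares problem.
import numpy as np

rng = np.random.default_rng(0)
n, m, T = 3, 1, 500
A_true = np.array([[0.9, 0.1, 0.0],
                   [0.0, 0.8, 0.2],
                   [0.0, 0.0, 0.7]])
B_true = np.array([[0.0], [0.5], [1.0]])

X = np.zeros((n, T + 1))
U = rng.normal(size=(m, T))
for t in range(T):
    X[:, t + 1] = A_true @ X[:, t] + (B_true @ U[:, t]).ravel() + 0.01 * rng.normal(size=n)

# Least-squares fit of [A B] from X_next = [A B] @ [X; U].
Omega = np.vstack([X[:, :-1], U])
G = X[:, 1:] @ np.linalg.pinv(Omega)
A_hat, B_hat = G[:, :n], G[:, n:]
print(np.round(B_hat, 2))   # close to B_true on this linear toy; on lifted
                            # nonlinear systems the estimated B can be misleading
```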
We study the gravitational field of a spinning radiation beam-pulse (a gyraton) in a d-dimensional asymptotically AdS spacetime. It is shown that the Einstein equations for such a system reduce to a set of two linear equations in a (d-2)-dimensional space. By solving these equations we obtain a metric which is an exact solution of the gravitational equations with a (negative) cosmological constant. The explicit metrics for 4D and 5D gyratons in asymptotically AdS spacetime are given and their properties are discussed.
arxiv:hep-th/0509044
suborbital and orbital space launch capabilities and sustains ground station solutions to support its evolving fleet of spacecraft and remote systems. ==== Deep Space Network (1963–present) ==== The NASA Deep Space Network (DSN) serves as the primary ground station solution for NASA's interplanetary spacecraft and select Earth-orbiting missions. The system employs ground station complexes near Barstow, California, in Spain near Madrid, and in Australia near Canberra. The placement of these ground stations approximately 120 degrees apart around the planet provides the ability for communications to spacecraft throughout the Solar System even as the Earth rotates about its axis on a daily basis. The system is controlled at a 24x7 operations center at JPL in Pasadena, California, which manages recurring communications linkages with up to 40 spacecraft. The system is managed by the Jet Propulsion Laboratory. ==== Near Space Network (1983–present) ==== The Near Space Network (NSN) provides telemetry, commanding, ground-based tracking, data and communications services to a wide range of customers with satellites in low Earth orbit (LEO), geosynchronous orbit (GEO), highly elliptical orbits (HEO), and lunar orbits. The NSN accumulates ground station and antenna assets from the Near Earth Network and the Tracking and Data Relay Satellite System (TDRS), which operates in geosynchronous orbit providing continuous real-time coverage for launch vehicles and low Earth orbit NASA missions. The NSN consists of 19 ground stations worldwide operated by the US government and by contractors including Kongsberg Satellite Services (KSAT), Swedish Space Corporation (SSC), and the South African National Space Agency (SANSA). The ground network averages between 120 and 150 spacecraft contacts a day, with TDRS engaging with systems on a near-continuous basis as needed; the system is managed and operated by the Goddard Space Flight Center. ==== Sounding Rocket Program (1959–present) ==== The NASA Sounding Rocket Program (NSRP) is located at the Wallops Flight Facility and provides launch capability, payload development and integration, and field operations support to execute suborbital missions. The program has been in operation since 1959 and is managed by the Goddard Space Flight Center using a combined US government and contractor team. The NSRP team conducts approximately 20 missions per year from both Wallops and other launch locations worldwide to allow scientists to collect data "where it occurs". The program supports the strategic vision of the
https://en.wikipedia.org/wiki/NASA
The Landauer principle sets a fundamental thermodynamic constraint on the minimum amount of heat that must be dissipated to erase one logical bit of information through a quasi-statically slow protocol. For finite-time information erasure, the thermodynamic costs depend on the specific physical realization of the logical memory and how the information is erased. Here we treat the problem within the paradigm of a Brownian particle in a symmetric double-well potential. The two minima represent the two values of a logical bit, 0 and 1, and the particle's position is the current state of the memory. The erasure protocol is realized by applying an external time-dependent tilting force. We derive analytical tools to evaluate the work required to erase a classical bit of information in finite time via an arbitrary continuous erasure protocol, which is a relevant setting for practical applications. Importantly, our method is not restricted to the average work, but instead gives access to the full work distribution arising from many independent realizations of the erasure process. Using the common example of an erasure protocol that changes linearly with time acting on a double-parabolic potential, we explicitly calculate all relevant quantities and verify them numerically.
arxiv:2206.02064
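A rough numerical counterpart to this setting (our illustration, with a quartic double well standing in for the paper's double-parabolic potential and arbitrary parameter values): overdamped Langevin trajectories under a linearly ramped tilt, with the stochastic work accumulated as $W = \int (\partial U/\partial t)\,dt$ along each trajectory, giving access to the full work distribution rather than just its mean.

```python
# Overdamped Langevin simulation of finite-time bit erasure by a linear tilt:
# U(x, t) = x^4/4 - x^2/2 + f(t) x, with f ramping linearly from 0 to f_max.
# Per-trajectory work is accumulated as dW = (dU/dt) dt = x df.
# Quartic well and parameter values are illustrative, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)
kT, dt, tau, f_max, n_traj = 0.2, 1e-3, 5.0, 1.0, 2000
steps = int(tau / dt)
f = np.linspace(0.0, f_max, steps)           # linear erasure protocol
df = f_max / steps

# Start half the trajectories in each well (bit = 0 or 1).
x = np.where(rng.random(n_traj) < 0.5, -1.0, 1.0)
work = np.zeros(n_traj)
for k in range(steps):
    force = -(x**3 - x + f[k])               # -dU/dx
    x = x + force * dt + np.sqrt(2 * kT * dt) * rng.normal(size=n_traj)
    work += df * x                           # work increment for each trajectory

# The full empirical work distribution is available, not just its mean;
# kT*ln2 is the Landauer reference scale for a quasi-static erasure cycle.
print("mean work:", work.mean(), "| kT ln2 =", kT * np.log(2))
```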
A new extended gamma-ray source, named Source A, in the southwest of the Galactic supernova remnant (SNR) G306.3$-$0.9, was detected with a significance of $\sim$13$\sigma$ at the location R.A. (J2000) = 13$^{\rm h}$17$^{\rm m}$52$^{\rm s}\!\!$.80, Decl. (J2000) = $-$63$^{\circ}$55$'$48$"\!\!$.00, using about 9 years of Fermi-LAT data. In order to investigate this unidentified gamma-ray source in multi-wavelengths, we performed Swift observations of Source A. In this presentation we summarize the published gamma-ray results, report on the recent ToO Swift observations of Source A, and show our preliminary results of the gamma-ray analysis that we conducted using the new X-ray data.
arxiv:1712.06415
We propose a new algorithm to identify a Wiener-Hammerstein system. This model represents a communication channel where two linear filters are separated by a non-linear function modelling an amplifier. The algorithm enables recovery of each parameter of the model, namely the two linear filters and the non-linear function. This is in contrast with estimation algorithms which identify the equivalent Volterra system. The algorithm is composed of three main steps and uses three distinct pilot sequences. The estimation of the parameters is done in the time domain via several instances of the least-squares algorithm. However, arguments based on the spectral representation of the signals and filters are used to design the pilot sequences. We also provide an analysis of the proposed algorithm. We estimate, via theory and simulations, the minimum required size of the pilot sequences to achieve a target mean squared error between the output of the true channel and the output of the estimated model. We find that the new method requires reduced-size pilot sequences: the sum of the lengths of the pilot sequences is approximately that needed to estimate the convolutional product of the two linear filters, with a back-off. A comparison with the Volterra approach is also provided.
arxiv:2408.17269
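To fix ideas, the sketch below simulates a Wiener-Hammerstein channel (linear filter, static nonlinearity, linear filter) and shows the kind of time-domain least-squares building block such an identification scheme relies on: fitting the equivalent FIR response from a low-amplitude pilot, where the amplifier nonlinearity is nearly inactive. The paper's three-step procedure and pilot-sequence designs are not reproduced; the filter coefficients and the cubic nonlinearity are illustrative.

```python
# Toy Wiener-Hammerstein channel: h1 -> static nonlinearity -> h2.
# One least-squares building block: with a low-amplitude pilot the channel is
# ~linear, so the combined FIR response h1*h2 can be fit by ordinary LS.
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
h1 = np.array([1.0, 0.5, 0.2])                  # first linear filter (illustrative)
h2 = np.array([1.0, -0.3, 0.1])                 # second linear filter (illustrative)

def amp(v):                                      # mild amplifier nonlinearity (illustrative)
    return v + 0.05 * v**3

def channel(x):
    return lfilter(h2, [1.0], amp(lfilter(h1, [1.0], x)))

N, L = 400, 5                                    # pilot length, assumed FIR length
x = 0.05 * rng.normal(size=N)                    # low-amplitude pilot
y = channel(x) + 1e-4 * rng.normal(size=N)
X = np.column_stack([np.concatenate([np.zeros(k), x[:N - k]]) for k in range(L)])
h_hat = np.linalg.lstsq(X, y, rcond=None)[0]
print(np.round(h_hat, 3))
print(np.round(np.convolve(h1, h2), 3))          # compare with the true h1*h2
```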
Within ten years, the era of large-scale systematic surveys will draw to a close thanks to a complete census of exoplanetary systems within 200 pc of the Sun. With first light foreseen between 2024 and 2028, the new generation of extremely large telescopes and planet imagers will arrive at a propitious time to exploit this manna of discoveries to characterize the formation, the evolution, and the physics of giant and telluric planets, with the ultimate goal of searching for and discovering bio-signatures. In that perspective, I will briefly summarize the main characteristics of the direct imaging instruments of the ELTs dedicated to the study of exoplanets, and I will review the key science cases (from the initial conditions of planetary formation, to the architecture of planetary systems, and the physics and atmospheres of giant and telluric planets) that they will address given their predicted performances.
arxiv:1810.02031
An analysis of free-recall datasets from two independent experiments allows us to identify two anomalous instances of non-monotonicity in free recall: a maximum in the dependence of the inter-response intervals on the serial-position lags, and a minimum in the rate of contiguous recall near the beginning of the recall process. Both effects, it is argued, may stem from a hierarchical search protocol in the space of memories. An elementary random-walk model on binary strings is used to test this hypothesis.
arxiv:1612.03649
The problem of identifying a probabilistic context-free grammar has two aspects: the first is determining the grammar's topology (the rules of the grammar) and the second is estimating probabilistic weights for each rule. Given the hardness results for learning context-free grammars in general, and probabilistic grammars in particular, most of the literature has concentrated on the second problem. In this work we address the first problem. We restrict attention to structurally unambiguous weighted context-free grammars (SUWCFG) and provide a query learning algorithm for structurally unambiguous probabilistic context-free grammars (SUPCFG). We show that SUWCFGs can be represented using co-linear multiplicity tree automata (CMTA), and provide a polynomial learning algorithm that learns CMTAs. We show that the learned CMTA can be converted into a probabilistic grammar, thus providing a complete algorithm for learning a structurally unambiguous probabilistic context-free grammar (both the grammar topology and the probabilistic weights) using structured membership queries and structured equivalence queries. We demonstrate the usefulness of our algorithm in learning PCFGs over genomic data.
arxiv:2011.07472
We specialise the construction of orbifold graph TQFTs introduced in Carqueville et al., arXiv:2101.02482, to Reshetikhin-Turaev defect TQFTs. We explain that the modular fusion category ${\mathcal{C}}_{\mathcal{A}}$ constructed in Mulevi\v{c}ius-Runkel, arXiv:2002.00663, from an orbifold datum $\mathcal{A}$ in a given modular fusion category $\mathcal{C}$ is a special case of the Wilson line ribbon categories introduced as part of the general theory of orbifold graph TQFTs. Using this, we prove that the Reshetikhin-Turaev TQFT obtained from ${\mathcal{C}}_{\mathcal{A}}$ is equivalent to the orbifold of the TQFT for $\mathcal{C}$ with respect to the orbifold datum $\mathcal{A}$.
arxiv:2109.04754
The chiral structure of supersymmetric particle couplings involving third-generation Standard Model fermions depends on left-right squark and slepton mixings as well as gaugino-higgsino mixings. The shapes and intercorrelations of invariant mass distributions of a first- or second-generation lepton with bottoms and taus arising from adjacent branches of SUSY cascade decays are shown to be a sensitive probe of this chiral structure. All possible cascade decays that can give rise to such correlations within the MSSM are considered. For bottom-lepton correlations the distinctive structure of the invariant mass distributions distinguishes between decays originating from stop or sbottom squarks through either an intermediate chargino or neutralino. For decay through a chargino the spins of the stop and chargino are established by the form of the distribution. When the bottom charge is signed through soft muon tagging, the structure of the same-sign and opposite-sign invariant mass distributions depends on a set function of left-right and gaugino-higgsino mixings, as well as establishes the spins of all the superpartners in the sequential two-body cascade decay. Tau-lepton and tau-tau invariant mass distributions arising from MSSM cascade decays are likewise systematically considered, with particular attention to their dependence on tau polarization. All possible tau-lepton and tau-tau distributions are plotted using a semi-analytic model for hadronic one-prong taus. Algorithms for fitting tau-tau and tau-lepton distributions to data are suggested.
arxiv:0811.4445
Beryllium was recently discovered to harbor a Dirac nodal line (DNL) in its bulk phase and DNL-induced non-trivial drumhead-like surface states (DNSSs) on its (0001) surface, rationalizing several already-existing historic puzzles [Phys. Rev. Lett. \textbf{117}, 096401 (2016)]. However, to date the underlying mechanism, as to why its (0001) surface exhibits an anomalously large electron-phonon coupling effect ($\lambda_{e-ph}^s$ $\approx$ 1.0), remains unresolved. Here, by means of first-principles calculations we have evidenced that the coupling of the DNSSs with the phononic states mainly contributes to this novel surface \emph{e-ph} enhancement. Besides the fact that the experimentally observed $\lambda_{e-ph}^s$ and the main Eliashberg coupling function (ECF) peaks have been reproduced well, we have decomposed the ECF, $\alpha^{2}F(\emph{k}, \textbf{\emph{q}}; \emph{v})$, and the \emph{e-ph} coupling strength $\lambda(\emph{k}, \textbf{\emph{q}}; \emph{v})$ as a function of each electron momentum (\emph{k}), each phonon momentum (\textbf{\emph{q}}) and each phonon mode ($v$), evidencing the robust connection between the DNSSs and both $\alpha^{2}F(\emph{k}, \textbf{\emph{q}}; \emph{v})$ and $\lambda(\emph{k}, \textbf{\emph{q}}; \emph{v})$. The results reveal the strong \emph{e-ph} coupling between the DNSSs and the phonon modes, which contributes over 80$\%$ of the $\lambda_{e-ph}^s$ coefficient on the Be (0001) surface. It highlights that the anomalously large \emph{
arxiv:1907.08554
Protein interaction networks are a promising type of data for studying complex biological systems. However, despite the rich information embedded in these networks, they face important data quality challenges of noise and incompleteness that adversely affect the results obtained from their analysis. Here, we explore the use of the concept of common neighborhood similarity (CNS), which is a form of local structure in networks, to address these issues. Although several CNS measures have been proposed in the literature, an understanding of their relative efficacies for the analysis of interaction networks has been lacking. We follow the framework of graph transformation to convert the given interaction network into a transformed network corresponding to each of the CNS measures evaluated. The effectiveness of each measure is then estimated by comparing the quality of protein function predictions obtained from its corresponding transformed network with those from the original network. Using a large set of S. cerevisiae interactions, and a set of 136 GO terms, we find that several of the transformed networks produce more accurate predictions than those obtained from the original network. In particular, the $hc.cont$ measure proposed here performs particularly well for this task. Further investigation reveals that the two major factors contributing to this improvement are the abilities of CNS measures, especially $hc.cont$, to prune out noisy edges and introduce new links between functionally related proteins.
arxiv:1210.6912
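A small sketch of the graph-transformation framework described above, using a generic common-neighborhood similarity (Jaccard overlap of neighbor sets) as the weight of the transformed network; the paper's $hc.cont$ measure is not reproduced, and the 0.2 threshold is an arbitrary illustration.

```python
# Graph transformation with a generic common-neighborhood similarity (CNS):
# weight each candidate edge by the Jaccard overlap of the two nodes'
# neighbor sets.  Not the hc.cont measure; threshold is illustrative.
import networkx as nx

def cns_transform(G, threshold=0.2):
    H = nx.Graph()
    H.add_nodes_from(G.nodes)
    nodes = list(G.nodes)
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            nu, nv = set(G[u]), set(G[v])
            union = nu | nv
            sim = len(nu & nv) / len(union) if union else 0.0
            if sim >= threshold:
                H.add_edge(u, v, weight=sim)   # may prune noisy edges and add new links
    return H

G = nx.karate_club_graph()                      # stand-in for a protein interaction network
H = cns_transform(G)
print(G.number_of_edges(), "->", H.number_of_edges())
```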
The Shapley additive global importance (SAGE) value is a theoretically appealing interpretability method that fairly attributes global importance to a model's features. However, its exact calculation requires the computation of the feature's surplus performance contributions over an exponential number of feature sets. This is computationally expensive, particularly because estimating the surplus contributions requires sampling from conditional distributions. Thus, SAGE approximation algorithms only take a fraction of the feature sets into account. We propose $d$-SAGE, a method that accelerates SAGE approximation. $d$-SAGE is motivated by the observation that conditional independencies (CIs) between a feature and the model target imply zero surplus contributions, such that their computation can be skipped. To identify CIs, we leverage causal structure learning (CSL) to infer a graph that encodes (conditional) independencies in the data as $d$-separations. This is computationally more efficient because the expense of the one-time graph inference and the $d$-separation queries is negligible compared to the expense of surplus contribution evaluations. Empirically we demonstrate that $d$-SAGE enables the efficient and accurate estimation of SAGE values.
arxiv:2304.03113
We consider the inverse problem of determining the time-independent scalar potential of the dynamic Schr\"odinger equation in an infinite cylindrical domain from one boundary Neumann observation of the solution. We prove H\"older stability by choosing the Dirichlet boundary condition suitably.
arxiv:1311.5323
Vortex rings are ubiquitous in fluids, with smoke rings being a familiar example. The interaction of multiple vortex rings produces complex dynamical behaviour, such as the leapfrogging motion first analysed by Helmholtz more than a century and a half ago. Here we report on numerical investigations of vortex ring dynamics in a different setting from fluids, namely, as solutions of the Landau-Lifshitz equation that models the evolution of the local magnetization in a ferromagnetic medium. We present the results of the first study on the dynamics of interacting magnetic vortex rings and provide a novel link between fluids and magnetism, by showing that a range of phenomena familiar in fluids are reproduced in ferromagnets. This includes the leapfrogging motion of a pair of vortex rings and evidence for the chaotic dynamics of a trio of rings.
arxiv:1402.6165
We provide an alternative to the gauge covariant horizontality condition which is responsible for the derivation of the nilpotent (anti-)BRST symmetry transformations for the gauge and (anti-)ghost fields of a (3+1)-dimensional (4D) interacting 1-form non-Abelian gauge theory in the framework of the usual superfield approach to the Becchi-Rouet-Stora-Tyutin (BRST) formalism. The above covariant horizontality condition is replaced by a gauge invariant restriction on the (4,2)-dimensional supermanifold, parameterized by a set of four spacetime coordinates $x^\mu$ ($\mu = 0, 1, 2, 3$) and a pair of Grassmannian variables $\theta$ and $\bar\theta$. The latter condition enables us to derive the nilpotent (anti-)BRST symmetry transformations for all the fields of an interacting 4D 1-form non-Abelian gauge theory where there is an explicit coupling between the gauge field and the Dirac fields. The key differences and striking similarities between the above two conditions are pointed out clearly.
arxiv:hep-th/0603049
In the present study, we generalize the possible ghost field configurations within the framework of $k$-essence theory to the Simpson-Visser metric with area function $\sigma^2 = x^2 + a^2$. Our analysis encompasses field configurations for the region-defined metric function $da_\pm$ as well as the general solution that asymptotically behaves as Schwarzschild-de Sitter for $x \to -\infty$. Specifically, we investigate two scalar field configurations and define the associated potential for each one. Through rigorous calculations, we verify that all equations of motion are satisfied. Notably, our findings indicate that even when proposing new configurations of ghost scalar fields, the energy conditions remain unchanged. This result serves to validate the wormhole solutions obtained in previous studies.
arxiv:2405.07455
In this paper, we consider approximability issues of the following four problems: triangle packing, full sibling reconstruction, maximum profit coverage and 2-coverage. All of them are generalized or specialized versions of set-cover and have applications in biology ranging from full-sibling reconstructions in wild populations to biomolecular clusterings; however, as this paper shows, their approximability properties differ considerably. Our inapproximability constant for the triangle packing problem improves upon the previous results; this is done by directly transforming the inapproximability gap of Håstad for the problem of maximizing the number of satisfied equations for a set of equations over GF(2), and is interesting in its own right. Our approximability results on the full sibling reconstruction problem answer questions originally posed by Berger-Wolf et al., and our results on the maximum profit coverage problem provide almost matching upper and lower bounds on the approximation ratio, answering a question posed by Hassin and Or.
arxiv:1102.1006
We investigate the paramagnetic-metal to antiferromagnetic-metal and antiferromagnetic-metal to antiferromagnetic-insulator transitions using a slave-boson mean-field theory. To this effect, we discuss the ground state of the half-filled Hubbard model as a function of $t'/t$ and correlation strength $U$, where $t$ and $t'$ are the hopping amplitudes between nearest and next-nearest neighbors, respectively. The metal-insulator transition at a critical $U_{MIT}$ is of second order for small levels of magnetic frustration, $t'/t < 0.06$, and of first order for large ones, $t'/t > 0.06$. The insulator is always antiferromagnetically ordered, while the metal exhibits a second-order transition from a paramagnetic to an antiferromagnetic state up to $t'/t = 0.14$, as $U$ is increased. We also contrast these findings with what we obtain in the Hartree-Fock approximation.
arxiv:cond-mat/9908074
In this paper, we study a projectable Ho\v{r}ava-Lifshitz cosmology without the detailed balance condition, minimally coupled to a non-linear self-coupling scalar field. In the minisuperspace framework, the super-Hamiltonian of the presented model is constructed, by means of which some classical solutions for the scale factor and scalar field are obtained. Since these solutions exhibit various types of singularities, we turn to the quantization of the model in the context of the Wheeler-DeWitt approach to quantum cosmology. The resulting quantum wave functions are then used to investigate the possibility of the avoidance of classical singularities due to quantum effects, which become important near these singularities.
arxiv:2102.00187
Smart cities often rely on technological innovation to improve citizens' safety and quality of life. This paper presents a novel smart mobility system that aims to facilitate people accessing public mobility while preserving their privacy. The system is based on a zero-interaction approach whereby a person can use public transport services without any need to perform explicit actions. Operations related to ticket purchase and validation have been fully automated. The system is also designed with the privacy-by-design paradigm in mind, to preserve user privacy as much as possible. Throughout the paper several technical details are discussed as well, to describe a prototype version of the system that was implemented. The prototype has been successfully tested in the city of Imola (Emilia-Romagna, Italy) in order to prove the system's validity in the field.
arxiv:2111.10307
Nonlinear topological photonics, which explores topics common to the fields of topological phases and nonlinear optics, is expected to open up a new paradigm in topological photonics. Here, we demonstrate second-harmonic generation (SHG) via nonlinear interaction of double topological valley-Hall kink modes in all-dielectric photonic crystals (PhCs). We first show that two topological frequency bandgaps can be created around a pair of frequencies, $\omega_0$ and $2\omega_0$, by gapping out the corresponding Dirac points in two-dimensional honeycomb PhCs. Valley-Hall kink modes along a kink-type domain wall interface between two PhCs placed together in a mirror-symmetric manner are generated within the two frequency bandgaps. Importantly, through full-wave simulations and mode dispersion analysis, we demonstrate that tunable, bi-directional phase-matched SHG via nonlinear interaction of the valley-Hall kink modes inside the two bandgaps can be achieved. In particular, by using Stokes parameters associated with the magnetic part of the valley-Hall kink modes, we introduce a new concept, SHG directional dichroism, which is employed to characterize optical probes for sensing chiral molecules. Our work opens up new avenues towards topologically protected nonlinear frequency mixing and active photonic devices implemented in all-dielectric material platforms.
arxiv:2007.04875
rational approximation of fractional order ( fo ) differ - integrators via continued fraction expansion ( cfe ) is a well - known technique. in this paper, the nominal structures of various generating functions are optimized using a genetic algorithm ( ga ) to minimize the deviation in magnitude and phase response between the original fo element and the rationalized discrete time filter in infinite impulse response ( iir ) structure. the optimized filter - based realizations show better approximation of the fo elements in comparison with the existing methods, as demonstrated by the frequency response of the iir filters.
arxiv:1202.5693
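a rough sketch of the kind of objective such a ga would minimize is given below. it is not the paper ' s exact setup : the generating function, filter order, sampling period, frequency grid and candidate coefficients are all illustrative assumptions ; the sketch only shows how the magnitude and phase deviation between an ideal fractional - order differentiator $ ( j \ omega ) ^ { \ alpha } $ and a candidate discrete - time iir filter could be scored.

```python
# Hypothetical objective for fitting an IIR filter to an ideal fractional-order
# differentiator s**alpha; all numerical choices here are illustrative.
import numpy as np
from scipy.signal import freqz

alpha, T = 0.5, 0.01                             # fractional order, sampling period (assumed)
w = np.logspace(-1, np.log10(np.pi / T), 200)    # analog frequency grid, rad/s

def objective(coeffs, order=3):
    """Mean magnitude (dB) and phase (rad) deviation between the ideal
    response (j*w)**alpha and the candidate IIR filter b/a."""
    b, a = coeffs[:order + 1], coeffs[order + 1:]
    _, h = freqz(b, a, worN=w * T)               # evaluate at w*T rad/sample
    ideal = (1j * w) ** alpha
    mag_err = np.abs(20.0 * np.log10(np.abs(h) / np.abs(ideal)))
    phase_err = np.abs(np.angle(h) - np.angle(ideal))
    return mag_err.mean() + phase_err.mean()

# a GA (or any other optimizer) would search over `coeffs`; here we simply
# score one made-up candidate of order 3 (4 numerator + 4 denominator terms).
print(objective(np.array([1.0, -0.5, 0.1, 0.0, 1.0, 0.2, 0.0, 0.0])))
```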
we discuss the low - energy analysis of models involving quarks and four - fermion couplings. the relation with qcd and with other models of mesons and meson plus quarks at low energies is discussed. a short description of how the heat - kernel expansion can be used to get regularization independent information, is given. the anomaly within this class of models and a physical prescription to obtain the correct flavour anomaly while keeping as much of the vmd aspects as possible is discussed. the major part is the discussion within this framework of the order $ p ^ 4 $ action and of two and some three - point functions to all orders in momenta and quark masses. some results on hadronic matrix elements are given.
arxiv:hep-ph/9502335
flat - spectrum radio quasar pks ~ 1229 $ - $ 02 with a knotty and asymmetric radio morphology was identified as the optical and radio counterpart of a $ \ gamma $ - ray source. in this paper, we study the properties of the jet in pks ~ 1229 $ - $ 02, e. g. its morphology, opacity, polarization and kinematics, using radio interferometry. with our results, we find that the knotty and asymmetric morphology of this source was probably shaped by the interaction between its anterograde jet and the nonuniform dense ambient medium. by reproducing a spectral energy distribution of pks ~ 1229 $ - $ 02 with the obtained kinematic parameters, we find that the relativistic beaming effect in pks ~ 1229 $ - $ 02 is not strong enough to produce the reported $ \ gamma $ - ray emission, i. e. pks ~ 1229 $ - $ 02 may not be a $ \ gamma $ - ray agn. the misidentification is probably due to the poor spatial resolution of the previous - generation $ \ gamma $ - ray detector.
arxiv:1907.03442
we perform a set of 38 numerical simulations of equal - mass bh binaries in a configuration where the bh spins in the binary are equal in both magnitude and direction, to study precession effects. we vary the initial direction of the total spin s with respect to the orbital angular momentum l, covering the 2 dimensional space of orientation angles with 38 configurations consisting of 36 configurations distributed in the azimuthal angle phi and polar angle theta, and two configurations on the poles. in all cases, we set the initial dimensionless bh spins to 0. 8. we observe that during the late - inspiral stage, the total angular momentum of the system j remains within 5 deg of its original direction, with the largest changes in direction occurring when the spins are nearly counter - aligned with the orbital angular momentum. we also observe that the angle between s and l is nearly conserved during the inspiral phase. these two dynamical properties allow us to propose a new phenomenological formula for the final mass and spin of merged bhs in terms of the individual masses and spins of the progenitor binary at far separations. we determine coefficients of this formula ( in the equal - mass limit ) using a least - squares fit to the results of this new set of 38 runs, an additional set of five new configurations with spins aligned / counteraligned with the orbital angular momentum, and over 100 recent simulations. we find that our formulas reproduce the remnant mass and spin of these simulations to within a relative error of 2. 5 %. we discuss the region of validity of this dynamical picture for precessing unequal - mass binaries. finally, we perform a statistical study to see the consequence of this new formula for distributions of spin - magnitudes and remnant masses with applications to bh - spin distributions and gravitational radiation in cosmological scenarios involving several mergers.
arxiv:1312.5775
a method for selecting events with densely populated narrow regions or spikes in a given data sample is discussed. applying this method to 200 a gev / c 32s - agbr and 32s - gold collision data, a few events having " hot regions " are chosen for further analysis. the findings reveal that a systematic study of particle density fluctuations, carried out in terms of scaled factorial moments and compared with the results for the analysis of correlation - free monte carlo events, would be useful in identifying events with large dynamical fluctuations. the formation of clusters or jet - like structures in multihadronic final states in the selected spiky events is also examined and compared with the predictions of ampt and independent emission hypothesis models by carrying out monte carlo simulation. the findings suggest that the clustering or jet - like algorithm adopted in the present study may also serve as an important tool for triggering different classes of events.
arxiv:1510.03176
selecting the optimal recommender via online exploration - exploitation is attracting increasing attention, since traditional a / b testing can be slow and costly and offline evaluations are prone to the bias of historical data. finding the optimal online experiment is nontrivial since both the users and displayed recommendations carry contextual features that are informative to the reward. while the problem can be formalized via the lens of multi - armed bandits, the existing solutions are found to be less satisfactory because the general methodologies do not account for the case - specific structures, particularly for the e - commerce recommendation we study. to fill in the gap, we leverage the \ emph { d - optimal design } from the classical statistics literature to achieve the maximum information gain during exploration, and reveal how it fits seamlessly with the modern infrastructure of online inference. to demonstrate the effectiveness of the optimal designs, we provide semi - synthetic simulation studies with published code and data for reproducibility purposes. we then use our deployment example on walmart. com to fully illustrate the practical insights and effectiveness of the proposed methods.
arxiv:2110.12132
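for readers unfamiliar with d - optimal designs, the sketch below is a minimal, self - contained illustration of the idea ( not the deployed system ) : the classical wynn - fedorov exchange algorithm picks exploration weights over candidate feature vectors so as to maximize the log - determinant of the information matrix. the feature dimension, number of candidates and iteration count are arbitrary assumptions.

```python
# Minimal D-optimal design via the Wynn-Fedorov exchange update (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))            # 50 candidate recommendations, 5 contextual features

def d_optimal_design(X, iters=500):
    n, d = X.shape
    p = np.full(n, 1.0 / n)             # start from the uniform design
    for _ in range(iters):
        A = X.T @ (p[:, None] * X)      # information matrix sum_i p_i x_i x_i^T
        lev = np.einsum("ij,jk,ik->i", X, np.linalg.inv(A), X)
        i = np.argmax(lev)              # candidate with the largest predictive variance
        gamma = max((lev[i] / d - 1.0) / (lev[i] - 1.0), 0.0)   # line-search optimal step
        p *= (1.0 - gamma)
        p[i] += gamma
    return p

p = d_optimal_design(X)
print("support size of the design:", int(np.sum(p > 1e-3)))
```

the resulting weights concentrate on a small informative subset of candidates, which is exactly the property that makes such designs attractive for exploration with limited online traffic.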
two intermetallic feal compounds with al content of 70. 68 and 72. 17 at. pct were studied using m \ " ossbauer spectroscopy ( 5 to 296 k ) and x - ray diffraction ( 15 to 300 k ). the compounds were found to crystallize in the orthorhombic cmcm space group ( eta - phase ). the collected data revealed that the dynamics of the fe atoms ( harmonic in the entire temperature range ) is significantly different from that of the al atoms. for the latter, strong anharmonicity was evidenced. moreover, it was found that partial filling of the different al sites leads to the occurrence of low - and high - symmetry coordination of fe atoms, which was reflected in the occurrence of two distinct doublets in the m \ " ossbauer spectra. all spectral parameters of the doublets as well as the debye temperature, force constant, kinetic and potential energies of vibrations were determined. these results revealed significant differences between both alloys, likely originating from approaching the stability boundary of the eta - phase for the fe - al 72. 17 at. pct alloy.
arxiv:2101.03158
we present an analysis of the mass of the x ( 3872 ) reconstructed via its decay to j / psi pi + pi - using 2. 4 fb ^ - 1 of integrated luminosity from ppbar collisions at sqrt ( s ) = 1. 96 tev, collected with the cdf ii detector at the fermilab tevatron. the possible existence of two nearby mass states is investigated. within the limits of our experimental resolution the data are consistent with a single state, and having no evidence for two states we set upper limits on the mass difference between two hypothetical states for different assumed ratios of contributions to the observed peak. for equal contributions, the 95 % confidence level upper limit on the mass difference is 3. 6 mev / c ^ 2. under the single - state model the x ( 3872 ) mass is measured to be 3871. 61 + - 0. 16 ( stat ) + - 0. 19 ( syst ) mev / c ^ 2, which is the most precise determination to date.
arxiv:0906.5218
the geometric optics approximation provides an interpretation for eikonal correspondence that, in black - hole - containing spacetimes, connects high - frequency black hole quasinormal modes with closed photon orbits around said black hole. this correspondence has been identified explicitly for schwarzschild, reissner - nordstr \ " om, kerr, and kerr - newman black holes, the violation of which can be a potential hint toward physics beyond general relativity. notably, the aforementioned black hole spacetimes have sufficient symmetries such that both the geodesic equations and the master wave equations are separable. the identification of the correspondence seems to largely rely on these symmetries. one naturally asks how the eikonal correspondence would appear if the spacetime were less symmetric. for a pioneering work in this direction, we consider in this paper a deformed schwarzschild spacetime retaining only axisymmetry and stationarity. we show that up to the first order of spacetime deformations the eikonal correspondence manifests through the definition of the \ textit { averaged } radius of trapped photon orbits along their one period. this averaged radius overlaps the potential peak in the master wave equation, which can be defined up to the first order of spacetime deformations, allowing the explicit identification of the eikonal correspondence.
arxiv:2205.02433
we present in this paper a novel query formulation using dynamic anchor boxes for detr ( detection transformer ) and offer a deeper understanding of the role of queries in detr. this new formulation directly uses box coordinates as queries in transformer decoders and dynamically updates them layer - by - layer. using box coordinates not only helps using explicit positional priors to improve the query - to - feature similarity and eliminate the slow training convergence issue in detr, but also allows us to modulate the positional attention map using the box width and height information. such a design makes it clear that queries in detr can be implemented as performing soft roi pooling layer - by - layer in a cascade manner. as a result, it leads to the best performance on ms - coco benchmark among the detr - like detection models under the same setting, e. g., ap 45. 7 \ % using resnet50 - dc5 as backbone trained in 50 epochs. we also conducted extensive experiments to confirm our analysis and verify the effectiveness of our methods. code is available at \ url { https : / / github. com / slongliu / dab - detr }.
arxiv:2201.12329
the self - annihilation of dark matter particles with mass in the mev range can produce gamma rays via prompt or secondary radiation. the annihilation rate for such light dark matter particles is however tightly constrained by cosmic microwave background ( cmb ) data. here we explore the possibility of discovering mev dark matter annihilation with future mev gamma - ray telescopes taking into account the latest and future cmb constraints. we study the optimal energy window as a function of the dominant annihilation final state. we consider both the ( conservative ) case of the dwarf spheroidal galaxy draco and the ( more optimistic ) case of the galactic center. we find that for certain channels, including those with one or two monochromatic photon ( s ) and one or two neutral pion ( s ), a detectable gamma - ray signal is possible for both targets under consideration, and compatible with cmb constraints. for other annihilation channels, however, including all leptonic annihilation channels and two charged pions, cmb data rule out any significant signal of dark matter annihilation at future mev gamma - ray telescopes from dwarf galaxies, but possibly not for the galactic center.
arxiv:1705.00777
we show that cosmological quantum relaxation predicts an anisotropic primordial power spectrum with a specific dependence on wavenumber k. we explore some of the consequences for precision measurements of the cosmic microwave background ( cmb ). quantum relaxation is a feature of the de broglie - bohm pilot - wave formulation of quantum theory, which allows the existence of more general physical states that violate the born probability rule. recent work has shown that relaxation to the born rule is suppressed for long - wavelength field modes on expanding space, resulting in a large - scale power deficit with a characteristic inverse - tangent dependence on k. because the quantum relaxation dynamics is independent of the direction of the wave vector for the relaxing field mode, in the limit of weak anisotropy we are able to derive an expression for the anisotropic power spectrum that is determined by the power deficit function. as a result, the off - diagonal terms in the cmb covariance matrix are also determined by the power deficit. we show that the lowest - order l - ( l + 1 ) inter - multipole correlations have a characteristic scaling with multipole moment l. our derived spectrum also predicts a residual statistical anisotropy at small scales, with an approximate consistency relation between the scaling of the l - ( l + 1 ) correlations and the scaling of the angular power spectrum at high l. we also predict a relationship between the l - ( l + 1 ) correlations at large and small scales. cosmological quantum relaxation appears to provide a single physical mechanism that predicts both a large - scale power deficit and a range of statistical anisotropies, together with potentially testable relationships between them.
arxiv:1510.02523
shape inference is classically ill - posed, because it involves a map from the ( 2d ) image domain to the ( 3d ) world. standard approaches regularize this problem by either assuming a prior on lighting and rendering or restricting the domain, and develop differential equations or optimization solutions. while elegant, the solutions that emerge in these situations are remarkably fragile. we exploit the observation that people infer shape qualitatively ; that there are quantitative differences between individuals. the consequence is a topological approach based on critical contours and the morse - smale complex. this paper provides a developmental review of that theory, emphasizing the motivation at different stages of the research.
arxiv:2008.08622
a surface with no singular point is called regular or non - singular. the study of surfaces near their singular points and the classification of the singular points is singularity theory. a singular point is isolated if there is no other singular point in a neighborhood of it. otherwise, the singular points may form a curve. this is in particular the case for self - crossing surfaces. = = algebraic surface = = originally, an algebraic surface was a surface which could be defined by an implicit equation $ f ( x, y, z ) = 0 $, where f is a polynomial in three indeterminates, with real coefficients. the concept has been extended in several directions, by defining surfaces over arbitrary fields, and by considering surfaces in spaces of arbitrary dimension or in projective spaces. abstract algebraic surfaces, which are not explicitly embedded in another space, are also considered. = = = surfaces over arbitrary fields = = = polynomials with coefficients in any field are accepted for defining an algebraic surface. however, the field of coefficients of a polynomial is not well defined, as, for example, a polynomial with rational coefficients may also be considered as a polynomial with real or complex coefficients. therefore, the concept of a point of the surface has been generalized in the following way. given a polynomial f ( x, y, z ), let $ k $ be the smallest field containing the coefficients, and $ K $ be an algebraically closed extension of $ k $, of infinite transcendence degree. then a point of the surface is an element of $ K ^ { 3 } $ which is a solution of the equation $ f ( x, y, z ) = 0 $. if the polynomial has real coefficients, the field $ K $ is the complex field, and a point of the surface that belongs to $ \ mathbb { R } ^ { 3 } $ ( a usual point ) is called a real point. a point that belongs to $ K ^ { 3 } $ is called rational over $ k $, or simply a rational point, if $ k $ is the field of rational numbers. = = = projective surface = = = a projective surface in a projective space of dimension three is the set of points whose homogeneous coordinates are zeros of a single homogeneous polynomial in four variables. more generally, a projective surface is a subset of a projective space, which is a projective variety of dimension two. projective surfaces are strongly related to affine surfaces ( that is, ordinary algebraic surfaces ). one passes from a projective surface
https://en.wikipedia.org/wiki/Surface_(mathematics)
3d gaussian splatting ( 3dgs ) has demonstrated outstanding performance in novel view synthesis, achieving a balance between rendering quality and real - time performance. 3dgs employs adaptive density control ( adc ) to increase the number of gaussians. however, the clone and split operations within adc are not sufficiently efficient, impacting optimization speed and detail recovery. additionally, overfitted gaussians that affect rendering quality may exist, and the original adc is unable to remove them. to address these issues, we propose two key innovations : ( 1 ) long - axis split, which precisely controls the position, shape, and opacity of child gaussians to minimize the difference before and after splitting. ( 2 ) recovery - aware pruning, which leverages differences in recovery speed after resetting opacity to prune overfitted gaussians, thereby improving generalization performance. experimental results show that our method significantly enhances rendering quality. code is available at https : / / github. com / xiaobin2001 / edc.
arxiv:2411.10133
particle transport in markov mixtures can be addressed by the so - called chord length sampling ( cls ) methods, a family of monte carlo algorithms taking into account the effects of stochastic media on particle propagation by generating on - the - fly the material interfaces crossed by the random walkers during their trajectories. such methods enable a significant reduction of computational resources as opposed to reference solutions obtained by solving the boltzmann equation for a large number of realizations of random media. cls solutions, which neglect correlations induced by the spatial disorder, are faster albeit approximate, and might thus show discrepancies with respect to reference solutions. in this work we propose a new family of algorithms ( called ' poisson box sampling ', pbs ) aimed at improving the accuracy of the cls approach for transport in $ d $ - dimensional binary markov mixtures. in order to probe the features of pbs methods, we will focus on three - dimensional markov media and revisit the benchmark problem originally proposed by adams, larsen and pomraning and extended by brantley : for these configurations we will compare reference solutions, standard cls solutions and the new pbs solutions for scalar particle flux, transmission and reflection coefficients. pbs will be shown to perform better than cls at the expense of a reasonable increase in computational time.
arxiv:1708.04260
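to make the cls idea concrete, here is a deliberately simplified one - dimensional toy version ( not the paper ' s three - dimensional poisson box sampling, and with pure absorption instead of scattering ) : material interfaces are sampled on the fly from exponential chord - length distributions while the particle streams through a binary slab. the cross sections, mean chord lengths and slab width are made - up numbers.

```python
# Toy 1D chord length sampling: estimate transmission through a binary Markov slab.
# Purely illustrative parameters; absorption only (no scattering).
import numpy as np

rng = np.random.default_rng(1)
L = 10.0                           # slab thickness
sigma_t = np.array([1.0, 0.1])     # total cross sections of materials 0 and 1
lam = np.array([0.5, 1.5])         # mean chord lengths of the two materials

def transmitted(n_particles=20_000):
    count = 0
    vol_frac = lam / lam.sum()     # volume fractions of the two materials
    for _ in range(n_particles):
        x = 0.0
        mat = rng.choice(2, p=vol_frac)              # material at the entry point
        d_interface = rng.exponential(lam[mat])      # distance to the next interface
        while True:
            d_coll = rng.exponential(1.0 / sigma_t[mat])   # distance to next collision
            x += min(d_coll, d_interface)
            if x >= L:
                count += 1        # particle leaks out of the slab: transmitted
                break
            if d_coll < d_interface:
                break             # collision: absorbed in this toy model
            mat = 1 - mat         # crossed an interface: switch material on the fly
            d_interface = rng.exponential(lam[mat])
    return count / n_particles

print("CLS transmission estimate:", transmitted())
```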
we apply the migdal - eliashberg theory of superconductivity to heavy - fermion and mixed valence materials. specifically, we extend the anderson lattice model to a case when there exists a strong coupling between itinerant electrons and lattice vibrations. using the saddle - point approximation, we derive a set of coupled nonlinear equations which describe competition between the crossover to a heavy - fermion or mixed - valence regimes and conventional superconductivity. we find that superconductivity at strong coupling emerges on par with the development of the many - body coherence in a kondo lattice. superconductivity is gradually suppressed with the onset of the kondo screening and for strong electron - phonon coupling the kondo screening exhibits a characteristic re - entrant behavior. even though for both weak and strong coupling limits the suppression of superconductivity is weaker in the mixed - valence regime compared to the local moment one, superconducting critical temperature still remains nonzero. in the weak coupling limit the onset of the many body coherence develops gradually, in the strong coupling limit it emerges abruptly in the mixed valence regime while in the local moment regime the $ f $ - electrons remain effectively decoupled from the conduction electrons. possibility of experimental realization of these effects in ce - based compounds is also discussed.
arxiv:2401.05486
microcanonical thermodynamics ( mcth ) is contrasted with canonical thermodynamics ( cth ). at first - order phase transitions the two ensembles are not equivalent even in the thermodynamic limit. energy fluctuations do not vanish and phase separations are suppressed in cth. a proper treatment of fluctuations is necessary. mcth makes it possible to address even isolated small systems where phase transitions can be clearly classified into first order and continuous ones. the microcanonical caloric curve t ( e ) determines the transition temperature, latent heat and surface entropy / tension. for systems of ca. 1000 na -, k -, or fe - atoms at 1 atm. all 3 quantities can be calculated. with rising size, the three parameters approach the known bulk values. there is nothing that demands the use of the thermodynamic limit. within microcanonical thermodynamics of finite systems there are fundamental differences between conserved extensive variables and ensemble related ones like entropy, temperature and pressure. this is discussed in detail.
arxiv:cond-mat/9805391
unsupervised anomaly detection is a challenging computer vision task, in which 2d - based anomaly detection methods have been extensively studied. however, multimodal anomaly detection based on rgb images and 3d point clouds requires further investigation. the existing methods are mainly inspired by memory bank based methods commonly used in 2d - based anomaly detection, which may cost extra memory for storing multimodal features. in the present study, a novel memoryless method mdss is proposed for multimodal anomaly detection, which employs a lightweight student - teacher network and a signed distance function to learn from rgb images and 3d point clouds respectively, and complements the anomaly information from the two modalities. specifically, a student - teacher network is trained with normal rgb images and masks generated from point clouds by a dynamic loss, and the anomaly score map can be obtained from the discrepancy between the output of student and teacher. furthermore, the signed distance function learns from normal point clouds to predict the signed distances between points and surface, and the obtained signed distances are used to generate an anomaly score map. subsequently, the anomaly score maps are aligned to generate the final anomaly score map for detection. the experimental results indicate that mdss is comparable to, but more stable than, the sota memory bank based method shape - guided, and furthermore performs better than other baseline methods.
arxiv:2409.05378
in - context learning ( icl ) enables large language models ( llms ) to achieve rapid task adaptation by learning from demonstrations. with the increase in available context length of llms, recent experiments have shown that the performance of icl does not necessarily scale well in many - shot ( demonstration ) settings. we theoretically and experimentally confirm that the reason lies in more demonstrations dispersing the model attention from the query, hindering its understanding of key content. inspired by how humans learn from examples, we propose a training - free method focusicl, which conducts triviality filtering to avoid attention being diverted by unimportant contents at token - level and operates hierarchical attention to further ensure sufficient attention towards current query at demonstration - level. we also design an efficient hyperparameter searching strategy for focusicl based on model perplexity of demonstrations. comprehensive experiments validate that focusicl achieves an average performance improvement of 5. 2 % over vanilla icl and scales well with many - shot demonstrations.
arxiv:2408.13987
in this paper, we develop the kinetic and hydrodynamic theories of the convective mesoscale flows driven by the spatially inhomogeneous electrostatic ion cyclotron parametric microturbulence in the pedestal plasma with a sheared poloidal flow. the developed kinetic theory predicts the generation of the sheared poloidal convective flow, and of the radial compressed flow with a radial flow velocity gradient. the developed hydrodynamic theory of the convective flows reveals the radial compressed convective flow as the dominant factor in the formation of the steep pedestal density profile with a density gradient growing exponentially in time. this density - gradient growth is limited by the formation of a radial ion outflow, oscillating in time, from the pedestal plasma to the scrape - off layer.
arxiv:2202.00983
$ p ( t ) = { \ frac { n k _ { \ text { b } } t } { v } }, $ where now n and v are also regarded as constants. mathematically, this constitutes a partial application of the earlier function p. this illustrates how independent variables and constants are largely dependent on the point of view taken. one could even regard kb as a variable to obtain a function $ p ( v, n, t, k _ { \ text { b } } ) = { \ frac { n k _ { \ text { b } } t } { v } } $. = = moduli spaces = = considering constants and variables can lead to the concept of moduli spaces. for illustration, consider the equation for a parabola, $ y = a x ^ { 2 } + b x + c, $ where a, b, c, x and y are all considered to be real. the set of points ( x, y ) in the 2d plane satisfying this equation traces out the graph of a parabola. here, a, b and c are regarded as constants, which specify the parabola, while x and y are variables. then instead regarding a, b and c as variables, we observe that each set of 3 - tuples ( a, b, c ) corresponds to a different parabola. that is, they specify coordinates on the ' space of parabolas ' : this is known as a moduli space of parabolas. = = see also = = lambda calculus observable variable physical constant propositional variable = = references = = = = bibliography = =
https://en.wikipedia.org/wiki/Variable_(mathematics)
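the partial - application step in the passage above can be mirrored directly in code ; the following tiny python sketch ( with an illustrative volume and particle number ) fixes n and v so that the remaining callable depends on t alone.

```python
# Partial application of p(V, N, T) = N * k_B * T / V, fixing N and V.
from functools import partial

K_B = 1.380649e-23          # Boltzmann constant, J/K

def pressure(V, N, T, k_B=K_B):
    return N * k_B * T / V

# regard N and V as constants: what is left is a function of T alone
p_of_T = partial(pressure, 1.0e-3, 6.022e23)   # V = 1 litre, N = one mole (illustrative)
print(p_of_T(300.0))        # pressure at 300 K, in pascals
```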
this paper proposes a novel hypothesis about the foundation of tenochtitlan by combining digital elevation modeling with historical and symbolic analysis. using geospatial data from earthexplorer, we simulate various historical water levels in the valley of mexico. the resulting lake configurations reveal possible locations for ancient settlements near now - vanished shorelines, suggesting a dynamic transformation of sacred geography that aligns with key mexica myths. we identify santa mar \ ' ia aztahuacan as a strong candidate for the historical aztlan and propose a reinterpretation of foundational codices in light of geomythical correlations.
arxiv:2504.03787
we study recurrence in the real quadratic family and give a sufficient condition on the recurrence rate $ ( \ delta _ n ) $ of the critical orbit such that, for almost every nonregular parameter $ a $, the set of $ n $ such that $ \ vert f ^ n ( 0 ; a ) \ vert < \ delta _ n $ is infinite. in particular, when $ \ delta _ n = n ^ { - 1 } $, this extends an earlier result of avila and moreira.
arxiv:2103.17200
vision - language models ( vlms ) like clip have demonstrated remarkable applicability across a variety of downstream tasks, including zero - shot image classification. recently, the use of prompts or adapters for efficient transfer learning ( etl ) has gained significant attention for effectively adapting to downstream tasks. however, previous studies have overlooked the challenge of varying transfer difficulty of downstream tasks. in this paper, we empirically analyze how each etl method behaves with respect to transfer difficulty. our observations indicate that utilizing vision prompts and text adapters is crucial for adaptability and generalizability in domains with high difficulty. also, by applying an adaptive ensemble approach that integrates task - adapted vlms with pre - trained vlms and strategically leverages more general knowledge in low - difficulty and less in high - difficulty domains, we consistently enhance performance across both types of domains. based on these observations, we propose an adaptive ensemble method that combines visual prompts and text adapters with pre - trained vlms, tailored by transfer difficulty, to achieve optimal performance for any target domain. upon experimenting with extensive benchmarks, our method consistently outperforms all baselines, particularly on unseen tasks, demonstrating its effectiveness.
arxiv:2311.15569
further mathematics is the title given to a number of advanced secondary mathematics courses. the term " higher and further mathematics ", and the term " advanced level mathematics ", may also refer to any of several advanced mathematics courses at many institutions. in the united kingdom, further mathematics describes a course studied in addition to the standard mathematics as - level and a - level courses. in the state of victoria in australia, it describes a course delivered as part of the victorian certificate of education ( see § australia ( victoria ) for a more detailed explanation ). globally, it describes a course studied in addition to gce as - level and a - level mathematics, or one which is delivered as part of the international baccalaureate diploma. in other words, more mathematics can also be referred to as part of advanced mathematics, or advanced level math. = = united kingdom = = = = = background = = = a qualification in further mathematics involves studying both pure and applied modules. whilst the pure modules ( formerly known as pure 4 – 6 or core 4 – 6, now known as further pure 1 – 3, where 4 exists for the aqa board ) build on knowledge from the core mathematics modules, the applied modules may start from first principles. the structure of the qualification varies between exam boards. with regard to mathematics degrees, most universities do not require further mathematics, and may incorporate foundation math modules or offer " catch - up " classes covering any additional content. exceptions are the university of warwick, the university of cambridge which requires further mathematics to at least as level ; university college london requires or recommends an a2 in further maths for its maths courses ; imperial college requires an a in a level further maths, while other universities may recommend it or may promise lower offers in return. some schools and colleges may not offer further mathematics, but online resources are available. although the subject has about 60 % of its cohort obtaining " a " grades, students choosing the subject are assumed to be more proficient in mathematics, and there is much more overlap of topics compared to base mathematics courses at a level. some medicine courses do not count maths and further maths as separate subjects for the purposes of making offers. this is due to the overlap in content, and the potentially narrow education a candidate with maths, further maths and just one other subject may have. = = = support = = = there are numerous sources of support for both teachers and students. the amsp ( formerly fmsp ) is a government - funded organisation that offers professional development, enrichment activities and is a source
https://en.wikipedia.org/wiki/Further_Mathematics
generative retrieval introduces a new approach to information retrieval by reframing it as a constrained generation task, leveraging recent advancements in autoregressive ( ar ) language models. however, ar - based generative retrieval methods suffer from high inference latency and cost compared to traditional dense retrieval techniques, limiting their practical applicability. this paper investigates fully non - autoregressive ( nar ) language models as a more efficient alternative for generative retrieval. while standard nar models alleviate latency and cost concerns, they exhibit a significant drop in retrieval performance ( compared to ar models ) due to their inability to capture dependencies between target tokens. to address this, we question the conventional choice of limiting the target token space to solely words or sub - words. we propose pixar, a novel approach that expands the target vocabulary of nar models to include multi - word entities and common phrases ( up to 5 million tokens ), thereby reducing token dependencies. pixar employs inference optimization strategies to maintain low inference latency despite the significantly larger vocabulary. our results demonstrate that pixar achieves a relative improvement of 31. 0 % in mrr @ 10 on ms marco and 23. 2 % in hits @ 5 on natural questions compared to standard nar models with similar latency and cost. furthermore, online a / b experiments on a large commercial search engine show that pixar increases ad clicks by 5. 08 % and revenue by 4. 02 %.
arxiv:2406.06739
we investigate the existence and non - existence of maximal green sequences for quivers arising from weighted projective lines. let $ q $ be the gabriel quiver of the endomorphism algebra of a basic cluster - tilting object in the cluster category $ \ mathcal { c } _ \ mathbb { x } $ of a weighted projective line $ \ mathbb { x } $. it is proved that there exists a quiver $ q ' $ in the mutation equivalence class $ \ operatorname { mut } ( q ) $ such that $ q ' $ admits a maximal green sequence. on the other hand, there is a quiver in $ \ operatorname { mut } ( q ) $ which does not admit a maximal green sequence if and only if $ \ mathbb { x } $ is of wild type.
arxiv:2106.06985
the paper introduces supervised embedding and clustering anomaly detection ( semc - ad ), a method designed to efficiently identify faulty alarm logs in a mobile network and alleviate the challenges of manual monitoring caused by the growing volume of alarm logs. semc - ad employs a supervised embedding approach based on deep neural networks, utilizing historical alarm logs and their labels to extract numerical representations for each log, effectively addressing the issue of imbalanced classification due to a small proportion of anomalies in the dataset without employing one - hot encoding. the robustness of the embedding is evaluated by plotting the two most significant principal components of the embedded alarm logs, revealing that anomalies form distinct clusters with similar embeddings. multivariate normal gaussian clustering is then applied to these components, identifying clusters with a high ratio of anomalies to normal alarms ( above 90 % ) and labeling them as the anomaly group. to classify new alarm logs, we check if their embedded vectors ' two most significant principal components fall within the anomaly - labeled clusters. if so, the log is classified as an anomaly. performance evaluation demonstrates that semc - ad outperforms conventional random forest and gradient boosting methods without embedding. semc - ad achieves 99 % anomaly detection, whereas random forest and xgboost only detect 86 % and 81 % of anomalies, respectively. while supervised classification methods may excel in labeled datasets, the results demonstrate that semc - ad is more efficient in classifying anomalies in datasets with numerous categorical features, significantly enhancing anomaly detection, reducing operator burden, and improving network maintenance.
arxiv:2310.06779
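a stripped - down sketch of the clustering and classification stage described above is given below ( the supervised deep embedding network itself is omitted and replaced by synthetic embeddings ) : the embedded logs are projected onto their two leading principal components, clustered with a gaussian mixture, and clusters whose anomaly ratio exceeds 90 % are flagged. the data, the number of mixture components and the thresholds are all illustrative assumptions.

```python
# Simplified anomaly-cluster labeling on synthetic embeddings (illustrative only).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(950, 16))      # embeddings of normal alarm logs
anomal = rng.normal(4.0, 0.5, size=(50, 16))       # embeddings of anomalous logs
X = np.vstack([normal, anomal])
y = np.array([0] * 950 + [1] * 50)

pca = PCA(n_components=2).fit(X)                   # two leading principal components
Z = pca.transform(X)

gmm = GaussianMixture(n_components=4, random_state=0).fit(Z)
labels = gmm.predict(Z)
anomaly_clusters = [c for c in range(4) if y[labels == c].mean() > 0.9]

def is_anomaly(new_embedding):
    z = pca.transform(new_embedding.reshape(1, -1))
    return gmm.predict(z)[0] in anomaly_clusters

print(is_anomaly(rng.normal(4.0, 0.5, size=16)))   # a new anomalous-looking log
```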
the minimal supersymmetric extension of standard model ( mssm ) is examined by analyzing its quantum effects on the precision electroweak measurements and the muon g - 2. we examine carefully the effects of light charginos and neutralinos that are found to improve the fit to the electroweak data. we identify two distinct regions on the $ ( \ mu, m _ 2 ) $ - plane that fit well to the electroweak data and give significant contribution to muon g - 2.
arxiv:hep-ph/0001229
nonparametric mixture models based on the pitman - yor process represent a flexible tool for density estimation and clustering. natural generalization of the popular class of dirichlet process mixture models, they allow for more robust inference on the number of components characterizing the distribution of the data. we propose a new sampling strategy for such models, named importance conditional sampling ( ics ), which combines appealing properties of existing methods, including easy interpretability and a within - iteration parallelizable structure. an extensive simulation study highlights the efficiency of the proposed method which, unlike other conditional samplers, shows stable performances for different specifications of the parameters characterizing the pitman - yor process. we further show that the ics approach can be naturally extended to other classes of computationally demanding models, such as nonparametric mixture models for partially exchangeable data.
arxiv:1906.08147
we introduce the polymer analysis and discovery array ( panda ), an automated system for high - throughput electrodeposition and functional characterization of polymer films. the panda is a custom, modular, and low - cost system based on a cnc gantry that we have modified to include a syringe pump, potentiostat, and camera with a telecentric lens. this system can perform fluid handling, electrochemistry, and transmission optical measurements on samples in custom 96 - well plates that feature transparent and conducting bottoms. we begin by validating this platform through a series of control fluid handling and electrochemistry experiments to quantify the repeatability, lack of cross - contamination, and accuracy of the system. as a proof - of - concept experimental campaign to study the functional properties of a model polymer film, we optimize the electrochromic switching of electrodeposited poly ( 3, 4 - ethylenedioxythiophene ) : poly ( styrene sulfonate ) ( pedot : pss ) films. in particular, we explore the monomer concentration, deposition time, and deposition voltage using an array of experiments selected by latin hypercube sampling. subsequently, we run an active learning campaign based upon bayesian optimization to find the processing conditions that lead to the highest electrochromic switching of pedot : pss. this self - driving lab integrates optical and electrochemical characterization to constitute a novel, automated approach for studying functional polymer films.
arxiv:2406.17725
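as a brief sketch of how such an experiment array could be drawn, the snippet below uses latin hypercube sampling over three process parameters, one point per well of a 96 - well plate. the parameter names and ranges are illustrative assumptions, not the actual panda campaign settings.

```python
# Latin hypercube sample of 96 deposition conditions (illustrative ranges).
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=3, seed=0)
unit = sampler.random(n=96)                        # one point per well, in [0, 1]^3
# scale to (monomer concentration [M], deposition time [s], deposition voltage [V])
lower = [0.005, 10.0, 0.8]
upper = [0.05, 300.0, 1.5]
experiments = qmc.scale(unit, lower, upper)
print(experiments[:3])
```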
as a special class of array codes, $ ( n, k, m ) $ piggybacking codes are mds codes ( i. e., any $ k $ out of $ n $ nodes can retrieve all data symbols ) that can achieve low repair bandwidth for single - node failure with low sub - packetization $ m $. in this paper, we propose two new piggybacking codes that have lower repair bandwidth than the existing piggybacking codes given the same parameters. our first piggybacking codes can support flexible sub - packetization $ m $ with $ 2 \ leq m \ leq n - k $, where $ n - k > 3 $. we show that our first piggybacking codes have lower repair bandwidth for any single - node failure than the existing piggybacking codes when $ n - k = 8, 9 $, $ m = 6 $ and $ 30 \ leq k \ leq 100 $. moreover, we propose second piggybacking codes such that the sub - packetization is a multiple of the number of parity nodes ( i. e., $ ( n - k ) | m $ ), by jointly designing the piggyback function for data node repair and transformation function for parity node repair. we show that the proposed second piggybacking codes have lowest repair bandwidth for any single - node failure among all the existing piggybacking codes for the evaluated parameters $ k / n = 0. 75, 0. 8, 0. 9 $ and $ n - k \ geq 4 $.
arxiv:2209.09691
we propose and demonstrate a method to achieve large effective soret coefficient in colloids by suitably mixing two different particles, e. g., silica beads and fe3o4 nanoparticles. it is shown that the thermophoretic motion of fe3o4 nanoparticles out of the heating region results in a large nonequilibrium depletion force for silica beads. consequently, silica beads are driven quickly to the heating region, forming a three - dimensional crystal with few defects and dislocations. the binding of silica beads is so tight that a colloidal photonic crystal can be achieved after the complete evaporation of the solvent, water. thus, for the fabrication of defect - free colloidal pcs, periodic structures for molecular sieves, among others, the proposed technique could be a low - cost alternative. in addition, as we use biocompatible materials, this technique could be a tool for biophysics studies where the potential of large effective soret coefficient could be useful.
arxiv:1012.3025
we give a construction of a nuclear $ c ^ \ ast $ - algebra associated with an amalgamated free product of groups, generalizing spielberg ' s construction of a certain cuntz - krieger algebra associated with a finitely generated free product of cyclic groups. our nuclear $ c ^ \ ast $ - algebras can be identified with certain cuntz - krieger - pimsner algebras. we will also show that our algebras can be obtained by the crossed product construction of the canonical actions on the hyperbolic boundaries, which proves a special case of adams ' result about amenability of the boundary action for hyperbolic groups. we will also give an explicit formula of the $ k $ - groups of our algebras. finally we will investigate the relationship between the kms states of the generalized gauge actions on our $ c ^ \ ast $ algebras and random walks on the groups.
arxiv:math/0010097
we numerically investigate the deterministic generation of a perfect soliton crystal ( psc ) in an optical microresonator functionalized with a saturable absorber ( sa ). the sa allows the direct formation of a psc from an initial, periodic turing roll. it prevents passage through a chaotic state, which induces a stochastic nature as regards the number of generated dissipative kerr solitons. we show that pscs form deterministically, and the number is controlled by adjusting the input power and sa parameter. our work provides a simple approach for obtaining a stable psc that offers an ultra - high repetition rate and a high comb output power.
arxiv:2112.12336
in this work, a modified neural architecture search ( nas ) based physics - informed deep learning model is presented for stochastic analysis in heterogeneous porous material. a monte carlo method based on a randomized spectral representation is first employed to construct a stochastic model for the simulation of flow through porous media. to solve the governing equations for the stochastic groundwater flow problem, we build a modified nas model based on physics - informed neural networks ( pinns ) with transfer learning that is able to fit different partial differential equations ( pdes ) with less calculation. the performance estimation strategy adopted is constructed from an error estimation model using the method of manufactured solutions. a sensitivity analysis is performed to obtain prior knowledge of the pinns model and narrow down the range of parameters for the search space, and hyper - parameter optimization algorithms are used to further determine the values of the parameters. further, the nas - based pinns model also saves the weights and biases of the most favorable architectures, which are then used in the fine - tuning process. it is found that the log - conductivity field using a gaussian correlation function performs much better than the exponential correlation case and is better suited to the pinns model, and the modified neural architecture search based pinns model shows great potential in approximating solutions to pdes. moreover, a three dimensional stochastic flow model is built to provide a benchmark for the simulation of groundwater flow in highly heterogeneous aquifers. the nas model based deep collocation method is verified to be effective and accurate through numerical examples in different dimensions using different manufactured solutions.
arxiv:2010.12344
in this paper, we consider the entire solutions of the nonlinear difference equation $ $ f ^ 3 + q ( z ) \ delta f = p _ 1 e ^ { \ alpha _ 1 z } + p _ 2 e ^ { \ alpha _ 2 z } $ $ where $ q $ is a polynomial, and $ p _ 1, p _ 2, \ alpha _ 1, \ alpha _ 2 $ are nonzero constants with $ \ alpha _ 1 \ neq \ alpha _ 2 $. it is shown that if $ f $ is a non - constant entire solution to the above equation with $ \ rho _ 2 ( f ) < 1 $, then $ f ( z ) = e _ 1e ^ { \ frac { \ alpha _ 1 z } { 3 } } + e _ 2e ^ { \ frac { \ alpha _ 2 z } { 3 } }, $ where $ e _ 1 $ and $ e _ 2 $ are two constants. meanwhile, we give an affirmative answer to the conjecture posed by zhang et al. in [ 18 ].
arxiv:2007.12311
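a quick expansion ( a consistency check, not the paper ' s argument ) shows why solutions of this exponential form are natural candidates : writing $ f ( z ) = e _ 1 e ^ { \ alpha _ 1 z / 3 } + e _ 2 e ^ { \ alpha _ 2 z / 3 } $, one has

```latex
% expanding the candidate solution (a consistency check, not the paper's proof)
\[
  f(z)^3 = e_1^3 e^{\alpha_1 z}
         + 3 e_1^2 e_2 \, e^{(2\alpha_1 + \alpha_2) z/3}
         + 3 e_1 e_2^2 \, e^{(\alpha_1 + 2\alpha_2) z/3}
         + e_2^3 e^{\alpha_2 z},
\]
\[
  \Delta f(z) = f(z+1) - f(z)
              = e_1 \bigl( e^{\alpha_1/3} - 1 \bigr) e^{\alpha_1 z/3}
              + e_2 \bigl( e^{\alpha_2/3} - 1 \bigr) e^{\alpha_2 z/3},
\]
```

so the $ e ^ { \ alpha _ 1 z } $ and $ e ^ { \ alpha _ 2 z } $ terms can reproduce the right - hand side, while the remaining cross terms and the $ q ( z ) \ delta f $ contribution have to cancel among themselves ; comparing the exponents of these terms is the natural starting point of the analysis.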
we study the interactions of an elementary pion with a nucleon made of constituent quarks and show that the enforcement of chiral symmetry requires the use of a two - body operator, whose form does not depend on the choice of the pion - quark coupling. the coordinate space nn effective potential in the pion exchange channel is given as a sum of terms involving two gradients, that operate on both the usual yukawa function and the confining potential. we also consider an application to the case of quarks bound by a harmonic potential and show that corrections due to the symmetry are important.
arxiv:hep-ph/9612230
the use of laboratory automation by all researchers may substantially accelerate scientific activities by humans, including those in the life sciences. however, computer programs to operate robots should be written to implement laboratory automation, which requires technical knowledge and skills that may not be part of a researcher ' s training or expertise. in the last few years, there has been remarkable development in large language models ( llms ) such as gpt - 4, which can generate computer codes based on natural language instructions. in this study, we used llms, including gpt - 4, to generate scripts for robot operations in biological experiments based on ambiguous instructions. gpt - 4 successfully generates scripts for ot - 2, an automated liquid - handling robot, from simple instructions in natural language without specifying the robotic actions. conventionally, translating the nuances of biological experiments into low - level robot actions requires researchers to understand both biology and robotics, imagine robot actions, and write robotic scripts. our results showed that gpt - 4 can connect the context of biological experiments with robot operation through simple prompts with expert - level contextual understanding and inherent knowledge. replacing robot script programming, which is a tedious task for biological researchers, with natural - language llm instructions that do not consider robot behavior significantly increases the number of researchers who can benefit from automating biological experiments.
arxiv:2304.10267
class activation maps are widely used for explaining deep neural networks. due to their ability to highlight regions of interest, they have evolved in recent years into a key step in weakly supervised learning. a major limitation to the performance of the class activation maps is the small spatial resolution of the feature maps in the last layer of the convolutional neural network. therefore, we expect to generate high - resolution feature maps that result in high - quality semantic information. in this paper, we rethink the properties of semantic information in shallow feature maps. we find that the shallow feature maps still have fine - grained non - discriminative features while mixing considerable non - target noise. furthermore, we propose a simple gradient - based denoising method to filter the noise by truncating the positive gradient. our proposed scheme can be easily deployed in other cam - related methods, enabling these methods to obtain higher - quality class activation maps. we evaluate the proposed approach through a weakly - supervised semantic segmentation task, and a large number of experiments demonstrate the effectiveness of our approach.
arxiv:2308.02118
having a perfect model to compute the optimal policy is often infeasible in reinforcement learning. it is important in high - stakes domains to quantify and manage risk induced by model uncertainties. entropic risk measure is an exponential utility - based convex risk measure that satisfies many reasonable properties. in this paper, we propose an entropic risk constrained policy gradient and actor - critic algorithms that are risk - averse to the model uncertainty. we demonstrate the usefulness of our algorithms on several problem domains.
arxiv:2006.11679
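as a small numerical illustration of the entropic risk measure mentioned above, the snippet below evaluates $ \ rho _ { \ theta } ( c ) = \ frac { 1 } { \ theta } \ log \ mathbb { e } [ e ^ { \ theta c } ] $ for an exponentially distributed cost ; the cost distribution and the values of $ \ theta $ are arbitrary choices, and larger $ \ theta $ penalizes bad outcomes more heavily.

```python
# Entropic risk of a random cost: rho_theta(C) = log(E[exp(theta * C)]) / theta.
import numpy as np

rng = np.random.default_rng(0)
cost = rng.exponential(scale=1.0, size=100_000)   # illustrative cost samples, mean 1

def entropic_risk(c, theta):
    return np.log(np.mean(np.exp(theta * c))) / theta

print("expected cost     :", cost.mean())                 # ~1.00
print("entropic risk 0.1 :", entropic_risk(cost, 0.1))    # ~ln(1/0.9)/0.1 = 1.05
print("entropic risk 0.3 :", entropic_risk(cost, 0.3))    # ~ln(1/0.7)/0.3 = 1.19
```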
we report the abundance analysis of new high s / n spectra of the most metal - poor ( [ fe / h ] $ = - 2. 95 $ ) star presently known to be a member of a dwarf galaxy, the draco dsph red giant, d119. no absorption lines for elements heavier than ni are detected in two keck hires spectra covering the $ \ lambda \ lambda $ 3850 - - 6655 \ aa { } wavelength range, a phenomenon not previously noted in any other metal - poor star. we present upper limits for several heavy element abundances. the most stringent limits, based on the non - detection of \ ion { sr } { 2 } and \ ion { ba } { 2 } lines, indicate that the total s - and r - process enrichment of d119 is at least 100 times smaller than in galactic stars of similar metallicity. the light element abundances are consistent with the star having formed out of material enriched primarily by massive type ii supernovae ( m $ > 20 $ - - 25 m $ _ { \ odot } $ ). if this is the case, we are forced to conclude that massive, metal - poor type ii supernovae did not contribute to the r - process in the proto - draco environment. we compare the abundance pattern observed in d119 to current predictions of prompt enrichment and pair - instability supernovae and find that the model predictions fail by an order of magnitude or more for many elements.
arxiv:astro-ph/0409646
the performance of the atlas inner detector alignment has been studied using $ pp $ collision data at $ \ sqrt { s } = 13 $ tev collected by the atlas experiment during run 2 ( 2015 to 2018 ) of the large hadron collider ( lhc ). the goal of the detector alignment is to determine the detector geometry as accurately as possible and correct for time - dependent movements. the inner detector alignment is based on the minimization of track - hit residuals in a sequence of hierarchical levels, from global mechanical assembly structures to local sensors. subsequent levels have increasing numbers of degrees of freedom ; in total there are almost 750000. the alignment determines detector geometry on both short and long timescales, where short timescales describe movements within an lhc fill. the performance and possible track parameter biases originating from systematic detector deformations are evaluated. momentum biases are studied using resonances decaying to muons or to electrons. the residual sagitta bias and momentum scale bias after alignment are reduced to less than $ \ sim $ 0. 1 tev $ ^ { - 1 } $ and 0. 9 $ \ times10 ^ { - 3 } $, respectively. impact parameter biases are also evaluated using tracks within jets.
arxiv:2007.07624
as robots are deployed in human spaces, it is important that they are able to coordinate their actions with the people around them. part of such coordination involves ensuring that people have a good understanding of how a robot will act in the environment. this can be achieved through explanations of the robot ' s policy. much prior work in explainable ai and rl focuses on generating explanations for single - agent policies, but little has been explored in generating explanations for collaborative policies. in this work, we investigate how to generate multi - agent strategy explanations for human - robot collaboration. we formulate the problem using a generic multi - agent planner, show how to generate visual explanations through strategy - conditioned landmark states and generate textual explanations by giving the landmarks to an llm. through a user study, we find that when presented with explanations from our proposed framework, users are able to better explore the full space of strategies and collaborate more efficiently with new robot partners.
arxiv:2311.11955
narrow linewidth is a long - pursuing goal in precision measurement and sensing. we propose a parity - time ( pt ) - symmetric feedback method to narrow the linewidths of resonance systems. by using a quadrature measurement - feedback loop, we transform a dissipative resonance system into a pt - symmetric system. unlike the conventional pt - symmetric systems which typically require two or more modes, here the pt - symmetric feedback system contains only a single resonance mode, which greatly extends the scope of applications. the method enables remarkable linewidth narrowing and enhancement of measurement sensitivity. we illustrate the concept in a thermal ensemble of atoms, achieving a 48 - fold narrowing of the magnetic resonance linewidth. by applying the method in magnetometry, we realize a 22 - times improvement of the measurement sensitivity. this work opens the avenue for studying non - hermitian physics and high - precision measurements in resonance systems with feedback.
arxiv:2304.07475
paper withdrawn due to a crucial algebraic error in section 3.
arxiv:hep-th/0205049
the following short note provides an alternative proof of a result of coornaert : namely, that given a non - elementary word - hyperbolic group $ g $ with a finite generating set $ x $, there exist constants $ \ lambda, d > 1 $ such that \ [ d ^ { - 1 } \ lambda ^ n \ leq | b _ { g, x } ( n ) | \ leq d \ lambda ^ n \ ] for all $ n \ geq 0 $, where $ b _ { g, x } ( n ) $ is the ball of radius $ n $ in the cayley graph $ \ gamma ( g, x ) $.
arxiv:1901.10321
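a concrete instance of this growth bound ( a standard example, not taken from the note ) is the free group $ f _ 2 $ with its standard generating set, which is a non - elementary word - hyperbolic group :

```latex
% ball sizes in the free group F_2 with the standard generating set X
\[
  |B_{F_2, X}(n)| \;=\; 1 + \sum_{m=1}^{n} 4 \cdot 3^{m-1} \;=\; 2 \cdot 3^{n} - 1,
  \qquad\text{hence}\qquad
  \tfrac{1}{2}\, 3^{n} \;\le\; |B_{F_2, X}(n)| \;\le\; 2 \cdot 3^{n}
  \quad\text{for all } n \ge 0,
\]
```

so the constants $ \ lambda = 3 $ and $ d = 2 $ witness the stated bounds in this particular case.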
to enhance the reasoning capabilities of large language models ( llms ), self - consistency has gained significant popularity by combining multiple sampling with majority voting. however, the state - of - the - art self - consistency approaches consume substantial computational resources and lead to significant additional time costs due to the multiple sampling. this prevents its full potential from being realized in scenarios where computational resources are critical. to improve the inference efficiency, this paper introduces \ textit { path - consistency }, a method that leverages the confidence of answers generated in earlier branches to identify the prefix of the most promising path. by dynamically guiding the generation of subsequent branches based on this prefix, the \ textit { path - consistency } mitigates both the errors and redundancies from random or less useful sampling in self - consistency. as a result, it can significantly accelerate the inference process by reducing the number of tokens generated. our extensive empirical evaluation shows that the \ textit { path - consistency } achieves significant acceleration in inference latency ranging from $ 7. 8 \ % $ to $ 40. 5 \ % $, while maintaining or even improving task accuracy across different datasets, including mathematical reasoning, common sense reasoning, symbolic reasoning, and code generation.
arxiv:2409.01281
in this article, we study the de rham cohomology of the first cover in the drinfel ' d tower. in particular, we get a purely local proof that the supercuspidal part realizes the local jacquet - langlands correspondence for $ { \ rm gl } _ n $ by comparing it to the rigid cohomology of some deligne - lusztig varieties. the representations obtained are analogous to the ones appearing in the $ \ ell $ - adic cohomology if we forget the action of the weil group. the proof relies on the generalization of an excision result of grosse - kl \ " onne and on the explicit description of the first cover as a cyclic cover obtained by the author on a previous work.
arxiv:2204.06363