| text | source |
|---|---|
| The automatic feature extraction domain has witnessed the application of many intelligent methodologies over the past decade; however, the detection accuracy of these approaches was limited because object geometry and contextual knowledge were not given enough consideration. In this paper, we propose a framework for accurate detection of features, along with automatic interpolation and interpretation, by modeling feature shape as well as contextual knowledge using advanced techniques such as SVRF, cellular neural networks (CNN), coresets, and MACA. The developed methodology has been compared with contemporary methods using different statistical measures. Investigations over various satellite images revealed that considerable success was achieved with the CNN approach. The CNN has been effective in modeling different complex features, and the complexity of the approach has been considerably reduced using coreset optimization. The system dynamically uses spectral and spatial information for representing contextual knowledge via a CNN-Prolog approach. The system has also proved effective in providing intelligent interpolation and interpretation of random features. | arxiv:1303.6711 |
| Mixed-valence spinels provide a fertile playground for the interplay between charge, spin, and orbital degrees of freedom in strongly correlated electrons on a geometrically frustrated lattice. Among them, AlV$_2$O$_4$ and LiV$_2$O$_4$ exhibit contrasting and puzzling behavior: self-organization of seven-site clusters and heavy-fermion behavior, respectively. We theoretically perform a comparative study of charge-spin-orbital fluctuations in these two compounds, on the basis of multiband Hubbard models constructed using the maximally-localized Wannier functions obtained from ab initio band calculations. Performing an eigenmode analysis of the generalized susceptibility, we find that, in AlV$_2$O$_4$, the relevant fluctuation appears in the charge sector in $\sigma$-bonding-type orbitals. In contrast, in LiV$_2$O$_4$, optical-type spin fluctuations in the $a_{\rm 1g}$ orbital are enhanced at an incommensurate wave number at low temperature. Implications of the comparative study are discussed for the contrasting behavior, including the metal-insulator transition under pressure in LiV$_2$O$_4$. | arxiv:1506.06023 |
| We consider a wire network of ferromagnetic impurities on the surface of an $s$-wave superconductor with strong Rashba spin-orbit interaction. Within the topological phase, zero-energy Majorana fermions appear at wire end-points as well as at junctions between an odd number of wire segments, while no low-energy states are present at junctions between an even number of wire segments, providing strong experimentally accessible signatures for Majorana fermions. We also investigate the quasiparticle energy gap with respect to varying Rashba spin-orbit coupling and magnetic impurity strength. | arxiv:1605.00696 |
| Let $\Gamma$ be a lattice in $G = \mathrm{SL}(n, \mathbb{R})$ and $X = G/S$ a homogeneous space of $G$, where $S$ is a closed subgroup of $G$ which contains a real algebraic subgroup $H$ such that $G/H$ is compact. We establish uniform distribution of orbits of $\Gamma$ in $X$, analogous to the classical equidistribution on the torus. To obtain this result, we first prove an ergodic theorem along balls in the connected component of a Borel subgroup of $G$. | arxiv:math/0310233 |
| Context. The origin of giant planets at moderate separations $\simeq 1$-$10$ au is still not fully understood because numerical studies of type II migration in protoplanetary disks often predict a decay of the semi-major axis that is too fast. According to recent 2D simulations, inward migration of a gap-opening planet can be slowed down or even reversed if the outer gap edge becomes heated by irradiation from the central star, and puffed up. Aims. Here we study how stellar irradiation reduces the disk-driven torque and affects migration in more realistic 3D disks. Methods. Using 3D hydrodynamic simulations with radiation transfer, we investigated the static torque acting on a single gap-opening planet embedded in a passively heated accretion disk. Results. Our simulations confirm that a temperature inversion is established at the irradiated outer gap edge and that the local increase of the scale height reduces the magnitude of the negative outer Lindblad torque. However, the temperature excess is smaller than assumed in 2D simulations, and the torque reduction only becomes prominent for specific parameters. For the viscosity $\alpha = 10^{-3}$, the total torque is reduced for planetary masses ranging from $0.1$ to $0.7$ Jupiter masses, with the strongest reduction being by a factor of $-0.17$ (implying outward migration) for a Saturn-mass planet. For a Jupiter-mass planet, the torque reduction becomes stronger with increasing $\alpha$ (the torque is halved when $\alpha = 5\times10^{-3}$). Conclusions. We conclude that planets that open moderately wide and deep gaps are subject to the largest torque modifications, and their type II migration can be stalled due to gap edge illumination. We then argue that the torque reduction can help to stabilize the orbits of giant planets forming at $\gtrsim 1$ au. | arxiv:2009.14142 |
| The proliferation of open knowledge graphs has led to a surge in scholarly research on the topic over the past decade. This paper presents a bibliometric analysis of the scholarly literature on open knowledge graphs published between 2013 and 2023. The study aims to identify the trends, patterns, and impact of research in this field, as well as the key topics and research questions that have emerged. The work uses bibliometric techniques to analyze a sample of 4445 scholarly articles retrieved from Scopus. The findings reveal an ever-increasing number of publications on open knowledge graphs published every year, particularly in developed countries (+50 per year). These outputs are published in highly-refereed scholarly journals and conferences. The study identifies three main research themes: (1) knowledge graph construction and enrichment, (2) evaluation and reuse, and (3) fusion of knowledge graphs into NLP systems. Within these themes, the study identifies specific tasks that have received considerable attention, including entity linking, knowledge graph embedding, and graph neural networks. | arxiv:2306.13186 |
| We suggest a relatively simple and totally geometric conjectural description of uncolored DAHA superpolynomials of arbitrary algebraic knots (conjecturally coinciding with the reduced stable Khovanov-Rozansky polynomials) via the flagged Jacobian factors (new objects) of the corresponding unibranch plane curve singularities. This generalizes the Cherednik-Danilenko conjecture on the Betti numbers of Jacobian factors, the Gorsky combinatorial conjectural interpretation of superpolynomials of torus knots, and that by Gorsky-Mazin for their constant term. The paper mainly focuses on non-torus algebraic knots. A connection with the conjecture due to Oblomkov-Rasmussen-Shende is possible, but our approach is different. A motivic version of our conjecture is related to p-adic orbital A-type integrals for anisotropic centralizers. | arxiv:1605.00978 |
| We present the open-source Pyrat Bay framework for exoplanet atmospheric modeling, spectral synthesis, and Bayesian retrieval. The modular design of the code allows users to generate atmospheric 1D parametric models of the temperature, abundances (in thermochemical equilibrium or constant-with-altitude), and altitude profiles in hydrostatic equilibrium; sample ExoMol and HITRAN line-by-line cross sections with custom resolving power and line-wing cutoff values; compute emission or transmission spectra considering cross sections from molecular line transitions, collision-induced absorption, Rayleigh scattering, gray clouds, and alkali resonance lines; and perform Markov chain Monte Carlo atmospheric retrievals for a given transit or eclipse dataset. We benchmarked the Pyrat Bay framework by reproducing line-by-line sampling of ExoMol cross sections, producing transmission and emission spectra consistent with petitRADTRANS models, accurately retrieving the atmospheric properties of simulated transmission and emission observations generated with TauREx models, and closely reproducing AURA retrieval analyses of the space-based transmission spectrum of HD 209458b. Finally, we present a retrieval analysis of a population of transiting exoplanets, focusing on those observed in transmission with the HST WFC3/G141 grism. We found that this instrument alone can confidently identify when a dataset shows H2O-absorption features; however, it cannot distinguish whether a muted H2O feature is caused by clouds, high atmospheric metallicity, or low H2O abundance. Our results are consistent with previous retrieval analyses. The Pyrat Bay code is available at PyPI (pip install pyratbay) and conda. The code is heavily documented (https://pyratbay.readthedocs.io) and tested to provide maximum accessibility to the community and long-term development stability. | arxiv:2105.05598 |
| We introduce a hybrid "modified genetic algorithm - multilevel stochastic gradient descent" (MGA-MSGD) training algorithm that considerably improves the accuracy and efficiency of solving 3D mechanical problems, described in strong form by PDEs, via ANNs (artificial neural networks). The presented approach allows the selection of a number of locations of interest at which the state variables are expected to fulfil the governing equations associated with a physical problem. Unlike classical PDE approximation methods such as finite differences or the finite element method, there is no need to establish and reconstruct the physical field quantity throughout the computational domain in order to predict the mechanical response at specific locations of interest. The basic idea of MGA-MSGD is the manipulation of the components of the learnable parameters responsible for the error explosion, so that the network can be trained with relatively larger learning rates while avoiding trapping in local minima. The proposed training approach is less sensitive to the learning rate value, the density and distribution of training points, and the random initial parameters. The distance function to minimise is where we introduce the PDEs, including any physical laws and conditions (the so-called physics-informed ANN). The genetic algorithm is modified to be suitable for this type of ANN, in which a coarse-level stochastic gradient descent (CSGD) is exploited to make the decision on offspring qualification. Employing the presented approach, a considerable improvement in both accuracy and efficiency, compared with standard training algorithms such as classical SGD and the Adam optimiser, is observed. The local displacement accuracy is studied and ensured by introducing the results of the finite element method (FEM) at a sufficiently fine mesh as the reference displacements. A slightly more complex problem is solved, ensuring its feasibility. | arxiv:2012.11517 |
| Given an ideal $\mathfrak{a}$ in $A[x_1, \ldots, x_n]$, where $A$ is a Noetherian integral domain, we propose an approach to compute the Krull dimension of $A[x_1, \ldots, x_n]/\mathfrak{a}$ when the residue class polynomial ring is a free $A$-module. When $A$ is a field, the Krull dimension of $A[x_1, \ldots, x_n]/\mathfrak{a}$ has several equivalent algorithmic definitions by which it can be computed. But this is not true in the case of arbitrary Noetherian rings. For a Noetherian integral domain $A$, we introduce the notion of the combinatorial dimension of $A[x_1, \ldots, x_n]/\mathfrak{a}$ and give a Gr\"obner basis method to compute it for residue class polynomial rings that have a free $A$-module representation w.r.t. a lexicographic ordering. For such $A$-algebras, we derive a relation between the Krull dimension and the combinatorial dimension of $A[x_1, \ldots, x_n]/\mathfrak{a}$. An immediate application of this relation is that it gives a uniform method, the first of its kind, to compute the dimension of $A[x_1, \ldots, x_n]/\mathfrak{a}$ without having to consider individual properties of the ideal. For $A$-algebras that have a free $A$-module representation w.r.t. degree-compatible monomial orderings, we introduce the concepts of Hilbert function, Hilbert series, and Hilbert polynomials and show that Gr\"obner basis methods can be used to compute these quantities. We then proceed to show that the combinatorial dimension of such $A$-algebras is equal to the degree of the Hilbert polynomial. This enables us to extend the relation between Krull dimension and combinatorial dimension to $A$-algebras with a free $A$-module representation w.r.t. a degree-compatible ordering as well. | arxiv:1602.04300 |
| We explore the possibility of simultaneous determination of the neutrino mass hierarchy and the CP-violating phase by using two identical detectors placed at different baseline distances. We focus on a possible experimental setup using a neutrino beam from the J-PARC facility in Japan with a beam power of 4 MW and megaton (Mton)-class water Cherenkov detectors, one placed in Kamioka and the other somewhere in Korea. We demonstrate, under reasonable assumptions on systematic uncertainties, that the two-detector complex, with a fiducial volume of 0.27 Mton each, has the potential of resolving the neutrino mass hierarchy up to $\sin^2 2\theta_{13} > 0.03$ ($0.055$) at $2\sigma$ ($3\sigma$) CL for any value of $\delta$, and at the same time has sensitivity to CP violation, with 4+4 years of running of $\nu_e$ and $\bar{\nu}_e$ appearance measurements. The significantly enhanced sensitivity is due to clean detection of the modulation of the neutrino energy spectrum, which is enabled by the cancellation of systematic uncertainties between two identical detectors that receive the neutrino beam with the same energy spectrum in the absence of oscillations. | arxiv:hep-ph/0504026 |
| Quadrature formulas for $\int_a^b f(x)\,dx$ in which derivative terms need only be evaluated at $a$ and $b$ in the composite rule are identified. Error bounds are given when $f : [a, b] \to \mathbb{R}$ is such that $f^{(n-1)}$ is absolutely continuous, so that $f^{(n)} \in L^p([a, b])$, and when $f^{(n-1)}$ is merely continuous. | arxiv:1109.0326 |
| In light of the recent neutrino experiment results from the Daya Bay and RENO collaborations, we study the phenomenology of neutrino mixing angles in the type III seesaw model with a discrete $A_4 \times Z_2$ symmetry, whose spontaneous breaking scale is much higher than the electroweak scale. At tree level, the tri-bimaximal (TBM) form of the lepton mixing matrix can be obtained from leptonic Yukawa interactions in a natural way. We introduce all possible effective dimension-5 operators, invariant under the standard model gauge group and $A_4 \times Z_2$, and explicitly show that they induce a deviation of the lepton mixing from the TBM mixing matrix, which can explain a large mixing angle $\theta_{13}$ together with small deviations of the solar and atmospheric mixing angles from the TBM values. Two possible scenarios are investigated, taking into account either negligible or sizable contributions from the light charged lepton sector to the lepton mixing matrix. Especially, it is found in the latter scenario that all the neutrino experimental data, including the recent best-fit value of $\theta_{13} = 8.68^{\circ}$, can be accommodated. The leptonic CP violation characterized by the Jarlskog invariant $J_{CP}$ has a non-vanishing value, indicating a signal of maximal CP violation. | arxiv:1103.0657 |
| This article is the author's PhD thesis. After a review of string vacua obtained through compactification (with and without fluxes), it presents and describes various aspects of the landscape of string vacua. First it gives an introduction and an overview of the statistical study of the set of four-dimensional string vacua, with a detailed study of one corner of this set (G2-holonomy compactifications of M-theory). It then presents the ten-dimensional approach to string vacua, concentrating on the ten-dimensional description of type IIA flux vacua. Finally, it gives two examples of models having some interesting and characteristic phenomenological features, belonging to two different corners of the landscape: warped compactifications of type IIB string theory and M-theory compactifications on G2-holonomy manifolds. | arxiv:0801.0584 |
| The problem of robust extraction of visual odometry from a sequence of images obtained by an eye-in-hand camera configuration is addressed. A novel approach to solving planar-template-based tracking is proposed, which performs a non-linear image alignment for successful retrieval of camera transformations. In order to obtain the global optimum, a bio-metaheuristic is used for optimization of the similarity among the planar regions. The proposed method is validated on image sequences with real as well as synthetic transformations and is found to be resilient to intensity variations. A comparative analysis of the various similarity measures as well as various state-of-the-art methods reveals that the algorithm succeeds in tracking the planar regions robustly and has good potential to be used in real applications. | arxiv:1401.4648 |
| Visual crowding makes it difficult to identify patterns in peripheral vision, but the neural mechanism for this phenomenon is still unclear because of differing opinions. In order to study the separation effect of V1 under different crowding conditions, single-pulse transcranial magnetic stimulation (TMS) was applied within the right V1. The experimental design includes two factors: TMS intensity (10%, 65%, and 90% of the phosphene threshold) and crowding condition (high and low). The accuracy results show that there is a strong interaction between the crowding condition and the TMS condition. When the TMS stimulation intensity is lower than the phosphene threshold, more crowding is perceived under the high crowding condition, and less crowding is perceived under the low crowding condition. These results indicate that the high and low crowding conditions are separated by TMS stimulation. The results support the assumption that crowding is related to V1 and occurs in the visual coding phase. | arxiv:1905.10023 |
| We investigate the monogamy relations related to the concurrence, the entanglement of formation, convex-roof extended negativity, Tsallis-q entanglement, and R\'enyi-$\alpha$ entanglement, as well as the polygamy relations related to the entanglement of formation, Tsallis-q entanglement, and R\'enyi-$\alpha$ entanglement. Monogamy and polygamy inequalities are obtained for arbitrary multipartite qubit systems, which are proved to be tighter than the existing ones. Detailed examples are presented. | arxiv:2112.15410 |
| We consider a natural generalization of an abelian hidden subgroup problem where the subgroups and their cosets correspond to graphs of linear functions over a finite field $F$ with $d$ elements. The hidden functions of the generalized problem are not restricted to be linear but can also be $m$-variate polynomial functions of total degree $n \geq 2$. The problem of identifying hidden $m$-variate polynomials of degree less than or equal to $n$, for fixed $n$ and $m$, is hard on a classical computer, since $\Omega(\sqrt{d})$ black-box queries are required to guarantee a constant success probability. In contrast, we present a quantum algorithm that correctly identifies such hidden polynomials for all but a finite number of values of $d$ with constant probability, and that has a running time that is only polylogarithmic in $d$. | arxiv:0706.1219 |
| Network flow interdiction analysis studies by how much the value of a maximum flow in a network can be diminished by removing components of the network, subject to some budget. Although this problem is strongly NP-complete on general networks, pseudo-polynomial algorithms have been found for planar networks with a single source and a single sink and without the possibility of removing vertices. In this work we introduce pseudo-polynomial algorithms that overcome some of the restrictions of previous methods. We propose a planarity-preserving transformation that allows vertex removals and vertex capacities to be incorporated in pseudo-polynomial interdiction algorithms for planar graphs. Additionally, a pseudo-polynomial algorithm is introduced for the problem of determining the minimal interdiction budget needed to make it impossible to satisfy the demand of all sink nodes, on planar networks with multiple sources and sinks satisfying that the sum of the supplies at the source nodes equals the sum of the demands at the sink nodes. Furthermore, we show that the k-densest subgraph problem on planar graphs can be reduced to a network flow interdiction problem on a planar graph with multiple sources and sinks and polynomially bounded input numbers. However, it is still not known whether either of these problems can be solved in polynomial time. | arxiv:0801.1737 |
| Collaborative training of neural networks leverages distributed data by exchanging gradient information between different clients. Although training data entirely resides with the clients, recent work shows that training data can be reconstructed from such exchanged gradient information. To enhance privacy, gradient perturbation techniques have been proposed. However, they come at the cost of reduced model performance, increased convergence time, or increased data demand. In this paper, we introduce PRECODE, a privacy-enhancing module that can be used as a generic extension for arbitrary model architectures. We propose a simple yet effective realization of PRECODE using variational modeling. The stochastic sampling induced by variational modeling effectively prevents privacy leakage from gradients and in turn preserves the privacy of data owners. We evaluate PRECODE using state-of-the-art gradient inversion attacks on two different model architectures trained on three datasets. In contrast to commonly used defense mechanisms, we find that our proposed modification consistently reduces the attack success rate to 0% while having almost no negative impact on model training and final performance. As a result, PRECODE reveals a promising path towards privacy-enhancing model extensions. | arxiv:2108.04725 |
| GJ 1214b is one of the few known transiting super-Earth-sized exoplanets with a measured mass and radius. It orbits an M-dwarf only 14.55 pc away, making it a favorable candidate for follow-up studies. However, the composition of GJ 1214b's mysterious atmosphere has yet to be fully unveiled. Our goal is to distinguish between the various atmospheric models proposed to explain the properties of GJ 1214b: hydrogen-rich or a hydrogen-He mix, or a heavy-molecular-weight atmosphere with reflecting high clouds, as the latest studies have suggested. Wavelength-dependent planetary radii measurements from the transit depths in the optical/NIR are the best tool to investigate the atmosphere of GJ 1214b. We present here (i) photometric transit observations with a narrow-band filter centered on 2.14 microns and a broad-band I-Bessel filter centered on 0.8665 microns, and (ii) transmission spectroscopy in the H and K atmospheric windows covering three transits. The obtained photometric and spectrophotometric time series were analyzed with MCMC simulations to measure the planetary radii as a function of wavelength. We determined radii ratios of 0.1173 for I-Bessel and 0.11735 at 2.14 microns. Our measurements indicate a flat transmission spectrum, in agreement with the latest atmospheric models that favor featureless spectra with clouds and high-molecular-weight compositions. | arxiv:1403.2723 |
| Because substitution of the BH4- anion with Br- can stabilize the hexagonal structure of LiBH4 at room temperature, leading to a high Li-ion conductivity, its thermodynamic stability has been investigated in this work. The binary LiBH4-LiBr system has been explored by means of X-ray diffraction and differential scanning calorimetry, combined with an assessment of thermodynamic properties. The monophasic zone of the hexagonal Li(BH4)1-x(Br)x solid solution has been defined from x = 0.30 to x = 0.55 at room temperature. Solubility limits have been determined by in-situ X-ray diffraction at various temperatures. For the formation of the h-Li(BH4)0.6(Br)0.4 solid solution, a value of the enthalpy of mixing has been determined experimentally, equal to 1.0 kJ/mol. In addition, the enthalpy of melting has been measured for various compositions. Lattice stabilities of LiBH4 and LiBr have been determined by ab initio calculations using the CRYSTAL and VASP codes. Combining the results of experiments and theoretical calculations, the LiBH4-LiBr phase diagram has been determined over the full composition and temperature range by the CALPHAD method. | arxiv:2105.07677 |
| Chest X-rays (CXR) often reveal rare diseases, demanding precise diagnosis. However, current computer-aided diagnosis (CAD) methods focus on common diseases, leading to inadequate detection of rare conditions due to the absence of comprehensive datasets. To overcome this, we present a novel benchmark for long-tailed multi-label classification in CXRs, encapsulating both common and rare thoracic diseases. Our approach includes developing the "LTML-MIMIC-CXR" dataset, an augmentation of MIMIC-CXR with 26 additional rare diseases. We propose a baseline method for this classification challenge, integrating adaptive negative regularization to address the over-suppression of negative logits in tail classes, and a large loss reconsideration strategy for correcting noisy labels from automated annotations. Our evaluation on LTML-MIMIC-CXR demonstrates significant advancements in rare disease detection. This work establishes a foundation for robust CAD methods, achieving a balance in identifying a spectrum of thoracic diseases in CXRs. Access to our code and dataset is provided at: https://github.com/laihaoran/LTML-MIMIC-CXR. | arxiv:2311.17334 |
| A stationary Josephson effect in a weak link between misorientated nonunitary triplet superconductors is investigated theoretically. The non-self-consistent quasiclassical Eilenberger equation for this system has been solved analytically. As an application of this analytical calculation, the current-phase diagrams are plotted for the junction between two nonunitary bipolar $f$-wave superconducting banks. A spontaneous current parallel to the interface between the superconductors has been observed. Also, the effect of misorientation between the crystals on the Josephson and spontaneous currents is studied. Such experimental investigations of the current-phase diagrams can be used to test the pairing symmetry in the above-mentioned superconductors. | arxiv:cond-mat/0505120 |
| The first phase of the MYRRHA (Multi-purpose hYbrid Research Reactor for High-tech Applications) project, MINERVA, was launched in September 2018. Through collaboration with SCK-CEN, IN2P3 laboratories have taken charge of the development of several parts of the accelerator, including a fully equipped spoke cryomodule prototype and a cold valves box. This cryomodule will integrate two superconducting single-spoke cavities operating at 2 K, the RF power couplers, and the associated cold tuning systems. For control and regulation purposes, a MTCA LLRF system prototype is being implemented and is presented here alongside the hardware, VHDL, and EPICS developments that aim to fulfil MYRRHA's ambitious requirements. | arxiv:1909.08887 |
| We construct a virtual quandle for links in lens spaces $L(p, q)$ with $q = 1$. This invariant has two valuable advantages over an ordinary fundamental quandle for links in lens spaces: the virtual quandle is an essential invariant, and its presentation can be easily written from the band diagram of a link. | arxiv:1702.05964 |
| Transverse magneto-thermoelectric effects are studied in permalloy thin films grown on MgO and GaAs substrates and compared to those grown on suspended SiN membranes. The transverse voltage along platinum strips patterned on top of the permalloy films is measured versus the external magnetic field as a function of angle and temperature gradient. After identifying the contributions of the planar and anomalous Nernst effects, we find an upper limit for the transverse spin Seebeck effect, which is several orders of magnitude smaller than previously reported. | arxiv:1310.4045 |
| This paper presents a mixed-computation neural network processing approach for edge applications that incorporates low-precision (low-width) posit and low-precision fixed-point (FixP) number systems. This mixed-computation approach employs 4-bit posit (Posit4), which has higher precision around zero, for representing weights with high sensitivity, while it uses 4-bit FixP (FixP4) for representing other weights. A heuristic for analyzing the importance and the quantization error of the weights is presented to assign the proper number system to different weights. Additionally, a gradient approximation for the posit representation is introduced to improve the quality of weight updates in the backpropagation process. Due to the high energy consumption of fully posit-based computations, neural network operations are carried out in FixP or Posit/FixP. An efficient hardware implementation of a MAC operation, with a posit first operand and FixP for the second operand and accumulator, is presented. The efficacy of the proposed low-precision mixed-computation approach is extensively assessed on vision and language models. The results show that, on average, the accuracy of the mixed-computation approach is about 1.5% higher than that of FixP, at a cost of 0.19% energy overhead. | arxiv:2312.02210 |
| Textual redundancy is one of the main challenges to ensuring that legal texts remain comprehensible and maintainable. Drawing inspiration from the refactoring literature in software engineering, which has developed methods to expose and eliminate duplicated code, we introduce the duplicated phrase detection problem for legal texts and propose the Dupex algorithm to solve it. Leveraging the minimum description length principle from information theory, Dupex identifies a set of duplicated phrases, called patterns, that together best compress a given input text. Through an extensive set of experiments on the titles of the United States Code, we confirm that our algorithm works well in practice: Dupex will help you simplify your law. | arxiv:2110.00735 |
| At levels of laser intensity below the threshold for multiphoton ionization, the parametric generation of optical harmonics in gases and other isotropic media is subject to selection rules with origins in angular momentum conservation. The recently developed optics of vector polarization modes provides an unprecedented opportunity to exploit these principles in the production of high-harmonic beams with distinctive forms of transverse intensity profile, comprising discrete sub-wavelength filaments in crown-like arrays. A detailed analysis of the fundamental electrodynamics elicits the mechanism and delivers results illustrating the transverse structures and spatial dimensions of the harmonic output that can be achieved. | arxiv:1906.04978 |
$\mathbb{R}_+^{n\times n}$ denotes the set of $n\times n$ non-negative matrices. for $A \in \mathbb{R}_+^{n\times n}$ let $\Omega(A)$ be the set of all matrices that can be formed by permuting the elements within each row of $A$. formally: $$\Omega(A) = \{B \in \mathbb{R}_+^{n\times n} : \forall i\; \exists \text{ a permutation } \phi_i\; \text{s.t. } B_{i,j} = A_{i,\phi_i(j)}\; \forall j\}.$$ for $B \in \Omega(A)$ let $\rho(B)$ denote the spectral radius, or largest non-negative eigenvalue, of $B$. we show that the arithmetic mean of the row sums of $A$ is bounded by the maximum and minimum spectral radius of the matrices in $\Omega(A)$. formally, we show that $$\min_{B \in \Omega(A)} \rho(B) \leq \frac{1}{n} \sum_{i=1}^n \sum_{j=1}^n A_{i,j} \leq \max_{B \in \Omega(A)} \rho(B).$$ for positive $A$ we also obtain necessary and sufficient conditions for one of these inequalities (or, equivalently, both of them) to become an equality. we also give criteria which an irreducible matrix $C$ should satisfy to have $\rho(C) = \min_{B \in \Omega(A)} \rho(B)$ or $\rho(C) = \max_{B \in \Omega(A)} \rho(B)$. these criteria are used to derive algorithms for finding such $C$ when all the entries of $A$ are positive. | arxiv:2209.01991 |
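The stated bracket is easy to check numerically for a small positive matrix by enumerating one permutation per row (a brute-force sanity check, not the paper's algorithm; the matrix is a random example):

```python
import numpy as np
from itertools import permutations, product

# Brute-force check of the bracket: every B in Omega(A) is formed by
# permuting the entries within each row of A independently.
rng = np.random.default_rng(1)
n = 3
A = rng.uniform(0.1, 1.0, (n, n))

rhos = []
for perms in product(permutations(range(n)), repeat=n):  # one perm per row
    B = np.array([A[i, list(p)] for i, p in enumerate(perms)])
    rhos.append(max(abs(np.linalg.eigvals(B))))          # spectral radius

mean_row_sum = A.sum() / n
print(min(rhos), mean_row_sum, max(rhos))   # min <= mean row sum <= max
```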
we introduce and systematically study an expansive class of " orbifold higgs " theories in which the weak scale is protected by accidental symmetries arising from the orbifold reduction of continuous symmetries. the protection mechanism eliminates quadratic sensitivity of the higgs mass to higher scales at one loop ( or more ) and does not involve any new states charged under the standard model. the structures of the higgs and top sectors are universal and determined exclusively by group theoretical considerations. the twin higgs model fits within our framework as the simplest example of an orbifold higgs. our models admit uv completions as geometric orbifolds in higher dimensions, and fit naturally within frameworks of low scale gauge coupling unification. | arxiv:1411.7393 |
purpose: to investigate the use of a vision transformer (vit) to reconstruct/denoise gaba-edited magnetic resonance spectroscopy (mrs) from a quarter of the typically acquired number of transients using spectrograms. theory and methods: a quarter of the typically acquired number of transients collected in gaba-edited mrs scans are pre-processed and converted to a spectrogram image representation using the short-time fourier transform (stft). the image representation of the data allows the adaptation of a pre-trained vit for reconstructing gaba-edited mrs spectra (spectro-vit). the spectro-vit is fine-tuned and then tested using \textit{in vivo} gaba-edited mrs data. the spectro-vit performance is compared against other models in the literature using spectral quality metrics and estimated metabolite concentration values. results: the spectro-vit model significantly outperformed all other models in four out of five quantitative metrics (mean squared error, shape score, gaba+/water fit error, and full width at half maximum). the metabolite concentrations estimated (gaba+/water, gaba+/cr, and glx/water) were consistent with the metabolite concentrations estimated using typical gaba-edited mrs scans reconstructed with the full amount of typically collected transients. conclusion: the proposed spectro-vit model achieved state-of-the-art results in reconstructing gaba-edited mrs, and the results indicate these scans could be up to four times faster. | arxiv:2311.15386 |
we establish minimax optimal rates of convergence for estimation in a high dimensional additive model assuming that it is approximately sparse. our results reveal an interesting phase transition behavior universal to this class of high dimensional problems. in the {\it sparse regime}, when the components are sufficiently smooth or the dimensionality is sufficiently large, the optimal rates are identical to those for high dimensional linear regression, and therefore there is no additional cost to entertain a nonparametric model. otherwise, in the so-called {\it smooth regime}, the rates coincide with the optimal rates for estimating a univariate function, and therefore they are immune to the "curse of dimensionality". | arxiv:1503.02817 |
let $\mathbb{F}_q$ denote the finite field of $q$ elements and $\mathbb{F}_{q^n}$ the degree $n$ extension of $\mathbb{F}_q$. a normal basis of $\mathbb{F}_{q^n}$ over $\mathbb{F}_q$ is a basis of the form $\{\alpha, \alpha^q, \dots, \alpha^{q^{n-1}}\}$. an irreducible polynomial in $\mathbb{F}_q[x]$ is called an $n$-polynomial if its roots are linearly independent over $\mathbb{F}_q$. let $p$ be the characteristic of $\mathbb{F}_q$. pelis et al. showed that every monic irreducible polynomial with degree $n$ and nonzero trace is an $n$-polynomial provided that $n$ is either a power of $p$ or a prime different from $p$ and $q$ is a primitive root modulo $n$. chang et al. proved that the converse is also true. by comparing the number of $n$-polynomials with that of irreducible polynomials with nonzero traces, we present an alternative treatment of this problem and show that all the results mentioned above can be easily deduced from our main theorem. | arxiv:1807.09927 |
certain operator algebras a on a hilbert space have the property that every densely defined linear transformation commuting with a is closable. such algebras are said to have the closability property. they are important in the study of the transitive algebra problem. more precisely, if a is a two - transitive algebra with the closability property, then a is dense in the algebra of all bounded operators, in the weak operator topology. in this paper we focus on algebras generated by a completely nonunitary contraction, and produce several new classes of algebras with the closability property. we show that this property follows from a certain strict cyclicity property, and we give very detailed information on the class of completely nonunitary contractions satisfying this property, as well as a stronger property which we call confluence. | arxiv:0908.0729 |
we discuss the equivalence between the euclidean bipartite matching problem on the line and on the circumference and the brownian bridge process on the same domains. the equivalence allows us to compute the correlation function and the optimal cost of the original combinatorial problem in the thermodynamic limit; moreover, we also solve the minimax problem on the line and on the circumference. the properties of the average cost and correlation functions are discussed. | arxiv:1406.7565 |
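On the line, computations of the optimal cost rest on the classical fact that, for convex costs such as $|x-y|$, the optimal bipartite matching pairs the two point sets in sorted order. A small brute-force check with illustrative random points confirms this ordered-matching property:

```python
import random
from itertools import permutations

# Check that on the line the sorted (ordered) matching achieves the
# optimal cost for the |x - y| cost function. Points are random examples.
random.seed(2)
n = 6
xs = sorted(random.random() for _ in range(n))
ys = sorted(random.random() for _ in range(n))

brute = min(sum(abs(x - ys[p[i]]) for i, x in enumerate(xs))
            for p in permutations(range(n)))          # all n! assignments
ordered = sum(abs(x - y) for x, y in zip(xs, ys))     # sorted pairing
print(brute, ordered)   # identical costs
```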
we establish the existence of the phase transition in site percolation on pseudo-random $d$-regular graphs. let $G = (V, E)$ be an $(n, d, \lambda)$-graph, that is, a $d$-regular graph on $n$ vertices in which all eigenvalues of the adjacency matrix, but the first one, are at most $\lambda$ in absolute value. form a random subset $R$ of $V$ by putting every vertex $v \in V$ into $R$ independently with probability $p$. then for any small enough constant $\epsilon > 0$, if $p = \frac{1-\epsilon}{d}$, then with high probability all connected components of the subgraph of $G$ induced by $R$ are of size at most logarithmic in $n$, while for $p = \frac{1+\epsilon}{d}$, if the eigenvalue ratio $\lambda/d$ is small enough as a function of $\epsilon$, then typically $R$ spans a connected component of size at least $\frac{\epsilon n}{d}$ and a path of length proportional to $\frac{\epsilon^2 n}{d}$. | arxiv:1404.5731 |
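The sub/supercritical contrast around $p = 1/d$ can be illustrated with a quick simulation. The configuration-model random $d$-regular graph below is only a stand-in for a genuine $(n,d,\lambda)$-graph, and $n$, $d$, and the two values of $p$ are arbitrary choices for the sketch:

```python
import random
from collections import deque

# Illustrative percolation experiment on a configuration-model d-regular
# graph (not a verified (n, d, lambda)-graph).
def random_regular(n, d, rng):
    stubs = [v for v in range(n) for _ in range(d)]
    rng.shuffle(stubs)
    adj = [[] for _ in range(n)]
    for a, b in zip(stubs[::2], stubs[1::2]):
        if a != b:                       # drop self-loops for simplicity
            adj[a].append(b)
            adj[b].append(a)
    return adj

def largest_component(adj, p, rng):
    """Largest connected component after keeping each vertex with prob. p."""
    n = len(adj)
    alive = [rng.random() < p for _ in range(n)]
    seen = [False] * n
    best = 0
    for s in range(n):
        if alive[s] and not seen[s]:
            seen[s], queue, size = True, deque([s]), 0
            while queue:
                v = queue.popleft()
                size += 1
                for w in adj[v]:
                    if alive[w] and not seen[w]:
                        seen[w] = True
                        queue.append(w)
            best = max(best, size)
    return best

rng = random.Random(3)
adj = random_regular(4000, 10, rng)
sub = largest_component(adj, 0.05, rng)   # p = 0.5/d: only small components
sup = largest_component(adj, 0.20, rng)   # p = 2.0/d: one linear-size component
print(sub, sup)
```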
according to conventional wisdom, ambiguity accelerates optimal timing by decreasing the value of waiting in comparison with the unambiguous benchmark case. we study this mechanism in a multidimensional setting and show that in a multifactor model ambiguity does not only influence the rate at which the underlying processes are expected to grow, it also affects the rate at which the problem is discounted. this mechanism where nature also selects the rate at which the problem is discounted cannot appear in a one - dimensional setting and as such we identify an indirect way of how ambiguity affects optimal timing. | arxiv:1905.05429 |
the audio source separation tasks, such as speech enhancement, speech separation, and music source separation, have achieved impressive performance in recent studies. the powerful modeling capabilities of deep neural networks give us hope for more challenging tasks. this paper launches a new multi - task audio source separation ( mtass ) challenge to separate the speech, music, and noise signals from the monaural mixture. first, we introduce the details of this task and generate a dataset of mixtures containing speech, music, and background noises. then, we propose an mtass model in the complex domain to fully utilize the differences in spectral characteristics of the three audio signals. in detail, the proposed model follows a two - stage pipeline, which separates the three types of audio signals and then performs signal compensation separately. after comparing different training targets, the complex ratio mask is selected as a more suitable target for the mtass. the experimental results also indicate that the residual signal compensation module helps to recover the signals further. the proposed model shows significant advantages in separation performance over several well - known separation models. | arxiv:2107.06467 |
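The complex ratio mask selected above as the training target can be sketched in a few lines: in the complex STFT domain, the ideal mask for a source is the element-wise ratio of source to mixture, and applying it recovers both magnitude and phase. The arrays below are random stand-ins with an arbitrary (freq bins x frames) shape, not STFTs of real audio:

```python
import numpy as np

# Toy sketch of the ideal complex ratio mask: M = S / Y in the complex
# STFT domain, applied by element-wise complex multiplication.
rng = np.random.default_rng(4)
shape = (257, 100)   # assumed freq bins x frames
speech = rng.normal(size=shape) + 1j * rng.normal(size=shape)
music = rng.normal(size=shape) + 1j * rng.normal(size=shape)
noise = rng.normal(size=shape) + 1j * rng.normal(size=shape)
mix = speech + music + noise

crm_speech = speech / mix     # ideal complex ratio mask for the speech track
recovered = crm_speech * mix  # complex masking restores magnitude AND phase
print(np.allclose(recovered, speech))   # True
```

Unlike a magnitude-only mask, the complex ratio carries phase, which is why it suits the complex-domain two-stage pipeline described in the abstract.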
we discuss the construction of toric kaehler metrics on symplectic 2n - manifolds with a hamiltonian n - torus action and present a simple derivation of the guillemin formula for a distinguished kaehler metric on any such manifold. the results also apply to orbifolds. | arxiv:math/0310243 |
with an increasing focus on precision medicine in medical research, numerous studies have been conducted in recent years to clarify the relationship between treatment effects and patient characteristics. the treatment effects for patients with different characteristics are always heterogeneous, and various heterogeneous treatment effect machine learning estimation methods have been proposed owing to their flexibility and high prediction accuracy. however, most machine learning methods rely on black-box models, preventing direct interpretation of the relationship between patient characteristics and treatment effects. moreover, most of these studies have focused on continuous or binary outcomes, although survival outcomes are also important in medical research. to address these challenges, we propose a heterogeneous treatment effect estimation method for survival data based on rulefit, an interpretable machine learning method. numerical simulation results confirmed that the prediction performance of the proposed method was comparable to that of existing methods. we also applied the proposed method to a dataset from an hiv study, the aids clinical trials group protocol 175 dataset, to illustrate its interpretability using real data. consequently, the proposed method established an interpretable model with sufficient prediction accuracy. | arxiv:2309.11914 |
self - dual yang - mills theory admits an underlying infinite dimensional symmetry algebra, which has been obtained from mode expansion of mellin transformed 4d scattering amplitudes and separately, koszul duality on twistor space. in this paper, we propose to derive an explicit 2d realization of the algebra by performing a particular gauge transformation on the twistor action for self - dual yang - mills. the gauge parameter used in the transformation generates pure gauge connections corresponding to large gauge transformations on 4d minkowski space, which localises part of the twistor action to a cp1 after scaling reduction of twistor space. under a projection, it can be mapped to the celestial sphere at the light - cone cut of the origin on null infinity. geometrically, this is the common boundary celestial sphere shared by euclidean ads3 or lorentzian ds3 slices of minkowski space. we comment on the geometric meaning of the derivation from the perspective of minitwistor spaces of the 3d slices embedded in 4d minkowski space. using the action functional of this 2d cft, we compute its stress - energy tensor and central charge. by a further marginal deformation, we calculate correlation functions of current algebra generators purely from the 2d side which reproduce 4d mhv form factors. | arxiv:2310.17457 |
an effective operational approach to quantum mechanics is to focus on the evolution of wave-packets, for which the wave-function can be seen in the semi-classical regime as representing a classical motion dressed with extra degrees of freedom describing the shape of the wave-packet and its fluctuations. these quantum dressings are independent degrees of freedom, mathematically encoded in the higher moments of the wave-function. we review how to extract the effective dynamics for gaussian wave-packets evolving according to the schrodinger equation with time-dependent potential in a 1+1-dimensional spacetime, and derive the equations of motion for the quadratic uncertainty. we then show how to integrate the evolution of all the higher moments for a general wave-function in a time-dependent harmonic potential. | arxiv:2305.03847 |
we theoretically study scanning gate microscopy ( sgm ) of electron and hole trajectories in a quantum point contact ( qpc ) embedded in a normal - superconductor ( ns ) junction. at zero voltage bias, the electrons and holes transported through the qpc form angular lobes and are subject to self - interference, which marks the sgm conductance maps with interference fringes analogously as in normal systems. we predict that for an ns junction at non - zero bias a beating pattern is to occur in the conductance probed with the use of the sgm technique owing to a mismatch of the fermi wavevectors of electrons and holes. moreover, the sgm technique exposes a pronounced disturbance in the angular conductance pattern, as the retroreflected hole does not retrace the electron path due to wavevector difference. | arxiv:2310.03523 |
personal health devices can enable continuous monitoring of health parameters. however, the benefit of these devices is often directly related to the frequency of use. therefore, adherence to personal health devices is critical. this paper takes a data mining approach to study continuous glucose monitor use in diabetes management. we evaluate two independent datasets from a total of 44 subjects for 60-270 days. our results show that: 1) missed target goals (i.e. suboptimal outcomes) are a factor associated with wearing behavior of personal health devices, and 2) longer duration of non-adherence, identified through missing data or data gaps, is significantly associated with poorer outcomes. more specifically, we found that up to 33% of data gaps occurred when users were in abnormal blood glucose categories. the longest data gaps occurred in the most severe (i.e. very low / very high) glucose categories. additionally, subjects with poorly-controlled diabetes had longer average data gap duration than subjects with well-controlled diabetes. this work contributes to the literature on the design of context-aware systems that can leverage data-driven approaches to understand factors that influence non-wearing behavior. the results can also support targeted interventions to improve health outcomes. | arxiv:2006.04947 |
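The gap analysis described above comes down to scanning consecutive timestamps for spacings beyond the expected sampling interval. A minimal sketch, assuming a 5-minute CGM sampling rate and an illustrative tolerance; timestamps are made up:

```python
from datetime import datetime, timedelta

# Minimal data-gap extraction: any spacing between consecutive samples
# beyond slack * expected interval counts as a non-adherence gap.
def find_gaps(timestamps, expected=timedelta(minutes=5), slack=2.0):
    gaps = []
    for a, b in zip(timestamps, timestamps[1:]):
        if b - a > slack * expected:
            gaps.append((a, b, b - a))   # (start, end, duration)
    return gaps

t0 = datetime(2020, 6, 1, 8, 0)
ts = [t0 + timedelta(minutes=5 * i) for i in range(12)]   # 8:00-8:55, adherent
ts += [ts[-1] + timedelta(hours=3)]                       # sensor off: 3 h gap
gaps = find_gaps(ts)
print(len(gaps), gaps[0][2])   # 1 gap of 3:00:00
```

In a real analysis each gap would then be joined against the glucose category at the gap boundaries, as in the abstract's comparison of gap duration across glycemic categories.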
the low-lying isovector dipole strengths in neutron-rich nuclei $^{26}$Ne and $^{28}$Ne are investigated in the quasiparticle relativistic random phase approximation. nuclear ground state properties are calculated in an extended relativistic mean-field theory plus BCS method where the contribution of the resonant continuum to pairing correlations is properly treated. numerical calculations are tested in the case of isovector dipole and isoscalar quadrupole modes in the neutron-rich nucleus $^{22}$O. it is found that in the present calculation low-lying isovector dipole strengths at $E_x < 10$ MeV in the nuclei $^{26}$Ne and $^{28}$Ne exhaust about 4.9% and 5.8% of the thomas-reiche-kuhn dipole sum rule, respectively. the centroid energy of the low-lying dipole excitation is located at 8.3 MeV in $^{26}$Ne and 7.9 MeV in $^{28}$Ne. | arxiv:nucl-th/0501016 |
we study the following fractional schrödinger equation \begin{equation}\label{eq0.1} \varepsilon^{2s} (-\Delta)^s u + Vu = |u|^{p-2}u, \ \ x \in \mathbb{R}^n. \end{equation} we show that if the external potential $V \in C(\mathbb{R}^n; [0,\infty))$ has a local minimum and $p \in (2 + 2s/(n-2s), 2^*_s)$, where $2^*_s = 2n/(n-2s)$, $n \ge 2s$, the problem has a family of solutions concentrating at the local minimum of $V$ provided that $\liminf_{|x|\to\infty} V(x)|x|^{2s} > 0$. the proof is based on variational methods and a penalized technique. {\textbf{key words}:} fractional schrödinger; vanishing potential; penalized technique; variational methods. | arxiv:1711.10655 |
in this paper we explore the connection between the ranks of the magnitude homology groups of a graph and the structure of its subgraphs. to this end, we introduce variants of magnitude homology called eulerian magnitude homology and discriminant magnitude homology. leveraging the combinatorics of the differential in magnitude homology, we illustrate a close relationship between the ranks of the eulerian magnitude homology groups on the first diagonal and counts of subgraphs which fall in specific classes. we leverage these tools to study the limiting behavior of the eulerian magnitude homology groups for erdos-renyi random graphs and random geometric graphs, producing for both models a vanishing threshold for the eulerian magnitude homology groups on the first diagonal. this in turn provides a characterization of the generators for the corresponding magnitude homology groups. finally, we develop an explicit asymptotic estimate of the expected rank of eulerian magnitude homology along the first diagonal for these random graph models. | arxiv:2403.09248 |
in this paper we establish that the well - known arithmetic system is consistent in the traditional sense. the proof is done within this arithmetic system. | arxiv:1803.11072 |
in 2013, beck and braun proved and generalized multiple identities involving permutation statistics via discrete geometry. namely, they recognized the identities as specializations of integer point transform identities for certain polyhedral cones. they extended many of their proof techniques to obtain identities involving wreath products, but some identities were resistant to their proof attempts. in this article, we provide a geometric justification of one of these wreath product identities, which was first established by biagioli and zeng. | arxiv:1712.00839 |
campos et al. [phys. rev. lett. 97 (2006) 217204] claim that in the 3d heisenberg spin glass, the chiral and spin sector ordering temperatures are identical. we point out that key assumptions made in their analysis of their numerical data are unjustified. | arxiv:cond-mat/0703369 |
growing interests in rgb - d salient object detection ( rgb - d sod ) have been witnessed in recent years, owing partly to the popularity of depth sensors and the rapid progress of deep learning techniques. unfortunately, existing rgb - d sod methods typically demand large quantity of training images being thoroughly annotated at pixel - level. the laborious and time - consuming manual annotation has become a real bottleneck in various practical scenarios. on the other hand, current unsupervised rgb - d sod methods still heavily rely on handcrafted feature representations. this inspires us to propose in this paper a deep unsupervised rgb - d saliency detection approach, which requires no manual pixel - level annotation during training. it is realized by two key ingredients in our training pipeline. first, a depth - disentangled saliency update ( dsu ) framework is designed to automatically produce pseudo - labels with iterative follow - up refinements, which provides more trustworthy supervision signals for training the saliency network. second, an attentive training strategy is introduced to tackle the issue of noisy pseudo - labels, by properly re - weighting to highlight the more reliable pseudo - labels. extensive experiments demonstrate the superior efficiency and effectiveness of our approach in tackling the challenging unsupervised rgb - d sod scenarios. moreover, our approach can also be adapted to work in fully - supervised situation. empirical studies show the incorporation of our approach gives rise to notably performance improvement in existing supervised rgb - d sod models. | arxiv:2205.07179 |
x - ray grating spectra provide the confirmation of continued mass loss from novae in the super - soft source ( sss ) phase of the outburst. in this work expanding nova atmosphere models are developed and used to study the effect of mass loss on the sss spectra. the very high temperatures combined with high expansion velocities and large radial extension make nova in the sss phase very interesting but also difficult objects to model. the radiation transport code phoenix was applied to sss novae before, but careful analysis of the old results has revealed a number of problems which lead to new methods and improvements to the code : 1 ) an improved nlte module ( a new opacity formalism, rate matrix solver, global iteration scheme, and temperature correction method ) ; 2 ) a new hybrid hydrostatic - dynamic nova atmosphere setup ; 3 ) the models are treated in pure nlte ( no lte approximation for any opacity ). with the new framework a modest amount of models ( limited by computation time ) are calculated. these show : 1 ) systematic behaviour for various atmospheric conditions, 2 ) the effect of expansion on the model spectrum is significant, and 3 ) the spectra are sensitive to the details of the atmospheric structure. the models are compared to the ten well - exposed grating spectra presently available : 5x v4743 sgr, 3x rs oph, and 2x v2491 cyg. although the models are on a coarse grid they do match the observations surprisingly well. also, hydrostatic models are computed. the reproduction of the data is clearly inferior to the expanding models and, more importantly, their interpretation with hydrostatic models leads to conclusions opposite to those from expanding models. the models enable the derivation of accurate constraints on the physical conditions deep in the nova atmosphere that are revealed only in the sss phase. | arxiv:1208.0846 |
although there is a clear indication that stages of residential decision making are characterized by their own stakeholders, activities, and outcomes, many studies on residential low - carbon technology adoption only implicitly address stage - specific dynamics. this paper explores stakeholder influences on residential photovoltaic adoption from a procedural perspective, so - called stakeholder dynamics. the major objective is the understanding of underlying mechanisms to better exploit the potential for residential photovoltaic uptake. four focus groups have been conducted in close collaboration with the independent institute for social science research sinus markt - und sozialforschung in east germany. by applying a qualitative content analysis, major influence dynamics within three decision stages are synthesized with the help of egocentric network maps from the perspective of residential decision - makers. results indicate that actors closest in terms of emotional and spatial proximity such as members of the social network represent the major influence on residential pv decision - making throughout the stages. furthermore, decision - makers with a higher level of knowledge are more likely to move on to the subsequent stage. a shift from passive exposure to proactive search takes place through the process, but this shift is less pronounced among risk - averse decision - makers who continuously request proactive influences. the discussions revealed largely unexploited potential regarding the stakeholders local utilities and local governments who are perceived as independent, trustworthy and credible stakeholders. public stakeholders must fulfill their responsibility in achieving climate goals by advising, assisting, and financing services for low - carbon technology adoption at the local level. supporting community initiatives through political frameworks appears to be another promising step. | arxiv:2104.14240 |
let $(k(n))_{n=1,2,\dots}$ be a strictly increasing sequence of positive integers. we consider a specific sequence of differential operators $T_{k(n),\lambda}$, $n=1,2,\dots$, on the space of entire functions, that depends on the sequence $(k(n))_{n=1,2,\dots}$ and the non-zero complex number $\lambda$. we establish the existence of an entire function $f$ such that, for every positive number $\lambda$, the set $\{T_{k(n),\lambda}(f),\ n=1,2,\dots\}$ is dense in the space of entire functions endowed with the topology of uniform convergence on compact subsets of the complex plane. this provides the best possible strengthened version of a corresponding result due to costakis and sambarino [9]. from this, and using a non-trivial result of weyl concerning the uniform distribution modulo 1 of certain sequences together with the cavalieri principle, we can extend our result to a subset of the set of complex numbers with full 2-dimensional lebesgue measure. | arxiv:1506.05241 |
healthcare data is sensitive and requires great protection. encrypted electronic health records ( ehrs ) contain personal and sensitive data such as names and addresses. having access to patient data benefits all of them. this paper proposes a blockchain - based distributed healthcare application platform for bangladeshi public and private healthcare providers. using data immutability and smart contracts, the suggested application framework allows users to create safe digital agreements for commerce or collaboration. thus, all enterprises may securely collaborate using the same blockchain network, gaining data openness and read / write capacity. the proposed application consists of various application interfaces for various system users. for data integrity, privacy, permission and service availability, the proposed solution leverages hyperledger fabric and blockchain as a service. everyone will also have their own profile in the portal. a unique identity for each person and the installation of digital information centres across the country have greatly eased the process. it will collect systematic health data from each person which will be beneficial for research institutes and health - related organisations. a national data warehouse in bangladesh is feasible for this application and it is also possible to keep a clean health sector by analysing data stored in this warehouse and conducting various purification algorithms using technologies like data science. given that bangladesh has both public and private health care, a straightforward digital strategy for all organisations is essential. | arxiv:2205.15416 |
the gibbons-maeda-garfinkle-horowitz-strominger (gmghs) black hole is an influential solution of the low energy heterotic string theory. as is well known, it presents a singular extremal limit. we construct a regular extension of the extremal gmghs black hole in a model with $\mathcal{O}(\alpha')$ corrections in the action, by solving the fully non-linear equations of motion. the de-singularization is supported by the $\mathcal{O}(\alpha')$-terms. the regularised extremal gmghs black holes are asymptotically flat, possess a regular (non-zero size) horizon of spherical topology, with an $AdS_2 \times S^2$ near horizon geometry, and their entropy is proportional to the electric charge. the near horizon solution is obtained analytically and some illustrative bulk solutions are constructed numerically. | arxiv:2103.00884 |
we investigate the response of a photonic gas interacting with a reservoir of pumped dye-molecules to quenches in the pump power. in addition to the expected dramatic critical slowing down of the equilibration time around phase transitions, we find extremely slow equilibration even far away from phase transitions. we provide a quantitative explanation for this non-critical slowing down in terms of fierce competition among cavity modes for access to the molecular environment. | arxiv:1809.08772 |
we consider the question of when a semigroup is the semigroup of a valuation dominating a two dimensional noetherian local domain, giving some surprising examples. we give a necessary and sufficient condition for the pair of a semigroup s and a field extension l/k to be the semigroup and residue field of a valuation dominating a regular local ring r of dimension two with residue field k, generalizing the theorem of spivakovsky for the case when there is no residue field extension. | arxiv:1105.1448 |
phenanthrene, a three-ring aromatic hydrocarbon, is used as a model substance for the study of intermolecular interactions in dilute solutions of organic solvents and water by spectroscopic methods. temperature dependencies of the shift of the S$_2 \leftarrow$ S$_0$ spectrum of phenanthrene dissolved in apolar solvents and in liquid water are studied in this work. the spectroscopic data are used for analysis of the cavity radius $r$ for a phenanthrene molecule in solvents. it is shown that the value of $r$ increases with temperature in organic solvents. in contrast, for the water solution, the value of $r$ initially grows with temperature increase (in the interval 273.3-296 K), but becomes constant at higher temperatures. the changes of water structure in the neighbourhood of a phenanthrene molecule and a probable hypothesis about the causes of the constancy of $r$ in the high-temperature region are discussed. we conclude that the water structure is strengthened in an aqueous cage containing a phenanthrene molecule. | arxiv:cond-mat/0306385 |
based on discrete truncated powers, the beautiful popoviciu formula for the restricted integer partition function is generalized. an explicit formula for two-dimensional multivariate truncated power functions is presented. therefore, a simplified explicit formula for two-dimensional vector partition functions is given. moreover, the generalized frobenius problem is also discussed. | arxiv:math/0511196 |
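The classical two-variable Popoviciu formula being generalized admits a direct numerical check: for coprime $a, b$, the number of representations $n = ax + by$ with $x, y \ge 0$ equals $n/(ab) - \{a^{-1}n/b\} - \{b^{-1}n/a\} + 1$, where $a^{-1}$ is the inverse of $a$ modulo $b$ and $\{t\}$ is the fractional part. This sketch covers only the classical case, not the paper's generalization:

```python
from math import gcd

# Popoviciu's formula for the restricted partition (denumerant) function
# p_{a,b}(n) = #{(x, y) : x, y >= 0, ax + by = n}, for coprime a, b.
def popoviciu(a, b, n):
    ainv = pow(a, -1, b)   # modular inverse, Python 3.8+
    binv = pow(b, -1, a)
    return n / (a * b) - (ainv * n % b) / b - (binv * n % a) / a + 1

def brute_count(a, b, n):
    return sum(1 for x in range(n // a + 1) if (n - a * x) % b == 0)

for a, b in [(3, 5), (4, 7), (5, 9)]:
    assert gcd(a, b) == 1
    for n in range(200):
        assert round(popoviciu(a, b, n)) == brute_count(a, b, n)
print("formula matches brute force")
```

For example, $3x + 5y = 8$ has the single solution $(1, 1)$, and the formula gives $8/15 - 1/5 - 1/3 + 1 = 1$.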
this paper presents an algorithm that makes novel use of distance measurements alongside a constrained kalman filter to accurately estimate pelvis, thigh, and shank kinematics for both legs during walking and other body movements using only three wearable inertial measurement units (imus). the distance measurement formulation also assumes a hinge knee joint and constant body segment lengths, helping produce estimates that are near or in the constraint space for better estimator stability. simulated experiments showed that inter-imu distance measurement is indeed a promising new source of information to improve the pose estimation of inertial motion capture systems under a reduced sensor count configuration. furthermore, experiments show that performance improved dramatically for dynamic movements even at high noise levels (e.g., $\sigma_{dist} = 0.2$ m), and that acceptable performance for normal walking was achieved at $\sigma_{dist} = 0.1$ m. nevertheless, further validation is recommended using actual distance measurement sensors. | arxiv:2003.10228 |
lindel\"of topological groups $g_1$, $h_1$, $g_2$, $h_2$ are constructed in such a way that the products $g_1 \times h_1$ and $g_2 \times h_2$ are not $\mathbb{R}$ - factorizable groups and ( 1 ) the group $g_1 \times h_1$ is not pseudo - $\aleph_1$ - compact ; ( 2 ) the group $g_2 \times h_2$ is a separable, non - normal group and contains a closed discrete subset of cardinality continuum. | arxiv:2303.06369 |
the cross section for the $^3$he ( e, e$'$d ) p reaction has been measured as a function of the missing momentum $p_m$ in $q\omega$ - constant kinematics at beam energies of 370 and 576 mev for values of the three - momentum transfer $q$ of 412, 504 and 604 mev / c. the l ( + tt ), t and lt structure functions have been separated for $q$ = 412 and 504 mev / c. the data are compared to three - body faddeev calculations, including meson - exchange currents ( mec ), and to calculations based on a covariant diagrammatic expansion. the influence of final - state interactions and meson - exchange currents is discussed. the $p_m$ - dependence of the data is reasonably well described by all calculations. however, the most advanced faddeev calculations, which employ the av18 nucleon - nucleon interaction and include mec, overestimate the measured cross sections, especially the longitudinal part, and at the larger values of $q$. the diagrammatic approach gives a fair description of the cross section, but underestimates the longitudinal and overestimates the transverse structure function. | arxiv:nucl-ex/0201011 |
traditional face alignment based on machine learning usually tracks the localizations of facial landmarks employing a static model trained offline, where all of the training data is available in advance. when new training samples arrive, the static model must be retrained from scratch, which is excessively time - consuming and memory - consuming. in many real - time applications, the training data is obtained one by one or batch by batch. as a result, the static model ' s performance is limited on sequential images with extensive variations. therefore, the most critical and challenging aspect in this field is dynamically updating the tracker ' s models to continuously enhance predictive and generalization capabilities. in order to address this question, we develop a fast and accurate online learning algorithm for face alignment. in particular, we incorporate the on - line sequential extreme learning machine into a parallel cascaded regression framework, coined incremental cascade regression ( icr ). to the best of our knowledge, this is the first incremental cascaded framework with a non - linear regressor. one main advantage of icr is that the tracker model can be quickly updated in an incremental way, without the entire retraining process, when a new input arrives. experimental results demonstrate that the proposed icr is more accurate and efficient on still or sequential images compared with recent state - of - the - art cascade approaches. furthermore, the incremental learning proposed in this paper can update the trained model in real time. | arxiv:1905.04010 |
in this paper, we aim to address the open questions raised in various recent papers regarding characterization of circulant graphs with three or four distinct eigenvalues in their spectra. our focus is on providing characterizations and constructing classes of graphs falling under this specific category. we present a characterization of circulant graphs with prime number order and unitary cayley graphs with arbitrary order, both of which possess spectra displaying three or four distinct eigenvalues. various constructions of circulant graphs with composite orders are provided whose spectra consist of four distinct eigenvalues. these constructions primarily utilize specific subgraphs of circulant graphs that already possess two or three eigenvalues in their spectra, employing graph operations like the tensor product, the union, and the complement. finally, we characterize the iterated line graphs of unitary cayley graphs whose spectra contain three or four distinct eigenvalues, and we show their non - circulant nature. | arxiv:2310.06203 |
world wide web ( springer ) : https://link.springer.com/journal/11280 ; web coding journal : http://www.web-code.org/ ; web reference : https://www.kevi.my/ ; special issues : web engineering, ieee multimedia, jan. – mar. 2001 ( part 1 ) and april – june 2001 ( part 2 ), http://csdl2.computer.org/persagen/dlpublication.jsp?pubtype=m&acronym=mu ; usability engineering, ieee software, january – february 2001 ; web engineering, cutter it journal, 14 ( 7 ), july 2001 ; testing e - business applications, cutter it journal, september 2001 ; engineering internet software, ieee software, march – april 2002 ; usability and the web, ieee internet computing, march – april 2002. | https://en.wikipedia.org/wiki/Web_engineering |
the dean - kawasaki model consists of a nonlinear stochastic partial differential equation featuring a conservative, multiplicative, stochastic term with non - lipschitz coefficient, and driven by space - time white noise ; this equation describes the evolution of the density function for a system of finitely many particles governed by langevin dynamics. well - posedness for the dean - kawasaki model is open except for specific diffusive cases, corresponding to overdamped langevin dynamics. there, it was recently shown by lehmann, konarovskyi, and von renesse that no regular ( non - atomic ) solutions exist. we derive and analyse a suitably regularised dean - kawasaki model of wave equation type driven by coloured noise, corresponding to second order langevin dynamics, in one space dimension. the regularisation can be interpreted as considering particles of finite size rather than describing them by atomic measures. we establish existence and uniqueness of a solution. specifically, we prove a high - probability result for the existence and uniqueness of mild solutions to this regularised dean - kawasaki model. | arxiv:1802.01716 |
financial potential is an important part of enterprise activities. a technique for assessing an enterprise ' s financial potential is offered in the paper. it is presented in particular stages, where each stage is related to a certain task. the characteristics of the company ' s financial potential, based on an analysis of the related literature, are determined, and each task is carried out. thus, the study proposes a mechanism for managing the financial potential of enterprises, which makes it possible to emphasize the elements that can be useful for economic development. it is based on the general strategic principles of enterprise management. the study results can be used to assess enterprise purposes and develop the formation goals of its financial potential. they can also help to forecast and separate the main directions of accumulation, formation, and distribution of financial resources. it should be noted that analysis of and control over the financial potential formation strategy, as well as the use of the analysis results for specifying the strategic directions of the enterprise ' s development, are of high importance. therefore, the management of financial potential is a system of rational management of business financing, which includes the formation of financial relations emerging as a result of the flow of financial resources. | arxiv:1912.05635 |
we present our analysis of a significant data artifact in the official 2019 / 2021 asvspoof challenge dataset. we identify an uneven distribution of silence duration in the training and test splits, which tends to correlate with the target prediction label. bonafide instances tend to have significantly longer leading and trailing silences than spoofed instances. in this paper, we explore this phenomenon and its impact in depth. we compare several types of models trained on a ) only the duration of the leading silence and b ) only the durations of the leading and trailing silence. results show that models trained on only the duration of the leading silence perform particularly well, achieving up to 85 % accuracy and an equal error rate ( eer ) of 15. 1 %. at the same time, we observe that trimming silence during pre - processing and then training established antispoofing models using signal - based features leads to comparatively worse performance. in that case, the eer increases from 3. 6 % ( with silence ) to 15. 5 % ( trimmed silence ). our findings suggest that previous work may, in part, have inadvertently learned the spoof / bonafide distinction by relying on the duration of silence as it appears in the official challenge dataset. we discuss the potential consequences this has for interpreting system scores in the challenge, and how the asv community may further consider this issue. | arxiv:2106.12914 |
the rapid pace of recent research in ai has been driven in part by the presence of fast and challenging simulation environments. these environments often take the form of games ; with tasks ranging from simple board games, to competitive video games. we propose a new benchmark - obstacle tower : a high fidelity, 3d, 3rd person, procedurally generated environment. an agent playing obstacle tower must learn to solve both low - level control and high - level planning problems in tandem while learning from pixels and a sparse reward signal. unlike other benchmarks such as the arcade learning environment, evaluation of agent performance in obstacle tower is based on an agent ' s ability to perform well on unseen instances of the environment. in this paper we outline the environment and provide a set of baseline results produced by current state - of - the - art deep rl methods as well as human players. these algorithms fail to produce agents capable of performing near human level. | arxiv:1902.01378 |
currently, we are in a stage where quantum computers surpass the size that can be simulated exactly on classical computers, and noise is the central issue in extracting their full potential. effective ways to characterize and measure their progress for practical applications are needed. in this work, we use the linear ramp quantum approximate optimization algorithm ( lr - qaoa ) protocol, a fixed quantum approximate optimization algorithm ( qaoa ) protocol, as an easy - to - implement scalable benchmarking methodology that assesses quantum process units ( qpus ) at different widths ( number of qubits ) and 2 - qubit gate depths. the benchmarking identifies the depth at which a fully mixed state is reached, and therefore, the results cannot be distinguished from those of a random sampler. we test this methodology using three graph topologies : 1d - chain, native layout, and fully connected graphs, on 19 different qpus from 5 vendors : ibm, iqm, ionq, quantinuum, and rigetti for problem sizes requiring from nq = 5 to nq = 156 qubits and lr - qaoa number of layers from p = 3 to p = 10, 000. in the case of 1d - chain and native graphs, ibm _ fez, the system with the largest number of qubits, performs best at p = 15 for problems involving nq = 100 and nq = 156 qubits and 1, 485 and 2, 640 2 - qubit gates, respectively. for the native graph problem, ibm _ fez still retains some coherent information at p = 200 involving 35, 200 fractional 2 - qubit gates. our largest implementation is a 1d - chain problem with p = 10, 000 involving 990, 000 2 - qubit gates on ibm _ fez. for fully connected graph problems, quantinuum _ h2 - 1 shows the best performance, passing the test with nq = 56 qubits at p = 3 involving 4, 620 2 - qubit gates with the largest 2 - qubit gate implementation for a problem with nq = 50 qubits and p = 10 involving 12, 250 2 - qubit gates but not passing the test. | arxiv:2502.06471 |
in this paper, we study the behaviour of the coupled subwavelength resonant modes when two high - contrast acoustic resonators are brought close together. we consider the case of spherical resonators and use bispherical coordinates to derive explicit representations for the capacitance coefficients which, we show, capture the system ' s resonant behaviour at leading order. we prove that the pair of resonators has two subwavelength resonant modes whose frequencies have different leading - order asymptotic behaviour. we, also, derive estimates for the rate at which the gradient of the scattered pressure wave blows up as the resonators are brought together. | arxiv:2001.04888 |
weighted logic is a powerful tool for the specification of calculations over semirings that depend on qualitative information. using a novel combination of weighted logic and here - and - there ( ht ) logic, in which this dependence is based on intuitionistic grounds, we introduce answer set programming with algebraic constraints ( asp ( ac ) ), where rules may contain constraints that compare semiring values to weighted formula evaluations. such constraints provide streamlined access to a manifold of constructs available in asp, like aggregates, choice constraints, and arithmetic operators. they extend some of them and provide a generic framework for defining programs with algebraic computation, which can be fruitfully used e. g. for provenance semantics of datalog programs. while undecidable in general, expressive fragments of asp ( ac ) can be exploited for effective problem - solving in a rich framework. this work is under consideration for acceptance in theory and practice of logic programming. | arxiv:2008.04008 |
in india, the national level common entrance examination ( nlcee ) utilized educational technology to provide free online coaching and scholarship opportunities. by leveraging digital platforms during the covid - 19 pandemic, nlcee ensured students, especially those from underprivileged backgrounds, could access quality education and career guidance remotely. modern educational technology can improve access to education, including full degree programs. it enables better integration for non - full - time students, particularly in continuing education, and improved interactions between students and instructors. learning materials can be used for long - distance learning and are accessible to a wider audience. course materials are easy to access. in 2010, 70. 3 % of american family households had access to the internet. in 2013, according to the canadian radio - television and telecommunications commission, 79 % of canadian homes had access to the internet. students can access and engage with numerous online resources at home. using online resources can help students spend more time on specific aspects of what they may be learning in school but at home. schools like the massachusetts institute of technology ( mit ) have made certain course materials free online. students appreciate the convenience of e - learning, but report greater engagement in face - to - face learning environments. colleges and universities are working towards combating this issue by utilizing web 2. 0 technologies as well as incorporating more mentorships between students and faculty members. according to james kulik, who studies the effectiveness of computers used for instruction, students usually learn more in less time when receiving computer - based instruction, and they like classes more and develop more positive attitudes toward computers in computer - based classes. students can independently solve problems. there are no intrinsic age - based restrictions on difficulty level, i. e. 
students can go at their own pace. students editing their written work on word processors improve the quality of their writing. according to some studies, students are better at critiquing and editing written work that is exchanged over a computer network with students they know. studies completed in " computer intensive " settings found increases in student - centric, cooperative, and higher - order learning, writing skills, problem - solving, and using technology. in addition, attitudes toward technology as a learning tool by parents, students, and teachers are also improved. employers ' acceptance of online education has risen over time. more than 50 % of human resource managers surveyed by shrm for an august 2010 report said that if two candidates with the same level of experience were applying for a job, it would not have any kind of effect whether the candidate ' | https://en.wikipedia.org/wiki/Educational_technology |
galaxy clusters are the largest objects in the universe held together by gravity. most of their baryonic content is made of a magnetized diffuse plasma. we investigate the impact of such a magnetized environment on ultra - high - energy cosmic - ray ( uhecr ) propagation. the intracluster medium is described according to the self - similar assumption, in which the gas density and pressure profiles are fully determined by the cluster mass and redshift. the magnetic field is scaled to the thermal components of the intracluster medium under different assumptions. we model the propagation of uhecrs in the intracluster medium using a modified version of the monte carlo code {\it simprop}, where hadronic processes and diffusion in the turbulent magnetic field are implemented. we provide a universal parametrization that approximates the uhecr fluxes escaping from the environment as a function of the most relevant quantities, such as the mass of the cluster, the position of the source with respect to the center of the cluster, and the nature of the accelerated particles. we show that galaxy clusters are an opaque environment, especially for uhecr nuclei. the role of the most massive nearby clusters in the context of the emerging uhecr astronomy is finally discussed. | arxiv:2309.04380 |
the emergence of the metaverse brings tremendous evolution to non - fungible tokens ( nfts ), which can certify the ownership of unique digital assets in the cyber world. the nft market has garnered unprecedented attention from investors and created billions of dollars in transaction volume. meanwhile, securing nfts is still a challenging issue. recently, numerous incidents of nft theft have been reported, leading to incalculable losses for holders. we propose a decentralized nft anti - theft mechanism called tokenpatronus, which supports the general erc - 721 standard and provides holders with strong property protection. tokenpatronus contains pre - event protection, in - event interruption, and post - event replevin enhancements for the complete stages of nft transactions. four modules are designed to make up the decentralized anti - theft mechanism : the decentralized access control ( dac ), the decentralized risk management ( drm ), the decentralized arbitration system ( das ), and the erc - 721g standard smart contract. tokenpatronus is running on the turtlecase nft project on ethereum and will support more blockchains in the future. | arxiv:2208.05168 |
the best algorithm for approximating steiner tree has performance ratio $\ln(4) + \epsilon \approx 1.386$ [ j. byrka et al., \textit{proceedings of the 42nd annual acm symposium on theory of computing ( stoc )}, 2010, pp. 583 - 592 ], whereas the inapproximability result stays at the factor $\frac{96}{95} \approx 1.0105$ [ m. chleb\'ik and j. chleb\'ikov\'a, \textit{proceedings of the 8th scandinavian workshop on algorithm theory ( swat )}, 2002, pp. 170 - 179 ]. in this article, we take a step forward to bridge this gap and show that there is no polynomial - time algorithm approximating steiner tree with constant ratio better than $\frac{19}{18} \approx 1.0555$ unless \textsf{p = np}. we also relate the problem to the unique games conjecture by showing that it is \textsf{ug} - hard to find a constant approximation ratio better than $\frac{17}{16} = 1.0625$. in the special case of quasi - bipartite graphs, we prove an inapproximability factor of $\frac{25}{24} \approx 1.0416$ unless \textsf{p = np}, which improves upon the previous bound of $\frac{128}{127} \approx 1.0078$. the reductions that we present for all the cases are of the same spirit, with appropriate modifications. our main technical contribution is an adaptation of a set - cover type reduction, in which the long code is used, to the geometric setting of the problems we consider. | arxiv:1702.02882 |
the aami ceased accepting new applicants in july 1999. the new, current clinical engineering certification ( cce ) started in 2002 under the sponsorship of the american college of clinical engineering ( acce ) and is administered by the acce healthcare technology foundation. in 2004, the first year the certification process was underway, 112 individuals were granted certification based upon their previous icc certification, and three individuals were awarded the new certification. by the time of the 2006 - 2007 ahtf annual report ( c. june 30, 2007 ), 147 individuals had become htf certified clinical engineers. = = definition and terminology = = a clinical engineer was defined by the acce in 1991 as " a professional who supports and advances patient care by applying engineering and managerial skills to healthcare technology. " clinical engineering is also recognized by the biomedical engineering society, the major professional organization for biomedical engineering, as being a branch within the field of biomedical engineering. there are at least two issues with the acce definition that often cause confusion. first, it is unclear how " clinical engineer " is a subset of " biomedical engineer ". the terms are often used interchangeably : some hospitals refer to their relevant departments as " clinical engineering " departments, while others call them " biomedical engineering " departments. the technicians are almost universally referred to as " biomedical equipment technicians, " regardless of the department they work under. however, the term biomedical engineer is generally thought to be more all - encompassing, as it includes engineers who design medical devices for manufacturers, or in academia. in contrast, clinical engineers generally work in hospitals solving problems close to where the equipment is actually used. clinical engineers in some countries, such as india, are trained to innovate and find technological solutions for clinical needs. 
the other issue, not evident from the acce definition, is the appropriate educational background for a clinical engineer. generally, certification programs expect applicants to hold an accredited bachelor ' s degree in engineering ( or at least engineering technology ). = = = potential new name = = = in 2011, aami arranged a meeting to discuss a new name for clinical engineering. after careful debate, the vast majority decided on " healthcare technology management ". due to confusion about the dividing line between clinical engineers ( engineers ) and bmets ( technicians ), the word engineering was deemed limiting from the administrator ' s perspective and unworkable from the educator ' s perspective. an abet - accredited college could not name an associate degree program " engineering ". also, the adjective, clinical, limited the scope of the field to hospitals. it | https://en.wikipedia.org/wiki/Clinical_engineering |
the increasing reliance on deep computer vision models that process sensitive data has raised significant privacy concerns, particularly regarding the exposure of intermediate results in hidden layers. while traditional privacy risk assessment techniques focus on protecting overall model outputs, they often overlook vulnerabilities within these intermediate representations. current privacy risk assessment techniques typically rely on specific attack simulations to assess risk, which can be computationally expensive and incomplete. this paper introduces a novel approach to measuring privacy risks in deep computer vision models based on the degrees of freedom ( dof ) and sensitivity of intermediate outputs, without requiring adversarial attack simulations. we propose a framework that leverages dof to evaluate the amount of information retained in each layer and combines this with the rank of the jacobian matrix to assess sensitivity to input variations. this dual analysis enables systematic measurement of privacy risks at various model layers. our experimental validation on real - world datasets demonstrates the effectiveness of this approach in providing deeper insights into privacy risks associated with intermediate representations. | arxiv:2412.00696 |
we study uncoordinated matching markets with additional local constraints that capture, e. g., restricted information, visibility, or externalities in markets. each agent is a node in a fixed matching network and strives to be matched to another agent. each agent has a complete preference list over all other agents it can be matched with. however, depending on the constraints and the current state of the game, not all possible partners are available for matching at all times. for correlated preferences, we propose and study a general class of hedonic coalition formation games that we call coalition formation games with constraints. this class includes and extends many recently studied variants of stable matching, such as locally stable matching, socially stable matching, or friendship matching. perhaps surprisingly, we show that all these variants are encompassed in a class of " consistent " instances that always allow a polynomial improvement sequence to a stable state. in addition, we show that for consistent instances there always exists a polynomial sequence to every reachable state. our characterization is tight in the sense that we provide exponential lower bounds when each of the requirements for consistency is violated. we also analyze matching with uncorrelated preferences, where we obtain a larger variety of results. while socially stable matching always allows a polynomial sequence to a stable state, for other classes different additional assumptions are sufficient to guarantee the same results. for the problem of reaching a given stable state, we show np - hardness in almost all considered classes of matching games. | arxiv:1409.4304 |
we have witnessed a persistent, puzzling anomaly in the muon magnetic moment that cannot be accounted for in the standard model, even considering the large hadronic uncertainties. a new measurement is forthcoming, and it might give rise to a $5\sigma$ claim for physics beyond the standard model. motivated by this, we explore the implications of the new result for five models based on the $su(3)_c \times su(3)_l \times u(1)_n$ gauge symmetry and put our conclusions into perspective with lhc bounds. we show that previous conclusions found in the context of such models change if more than one heavy particle runs in the loop. moreover, having in mind the projected precision aimed at by the g - 2 experiment at fermilab, we place lower mass bounds on the particles that contribute to the muon anomalous magnetic moment, assuming the anomaly is resolved otherwise. lastly, we discuss how these models could accommodate such an anomaly in agreement with existing bounds. | arxiv:2003.06440 |
signature - based malware detectors have proven to be insufficient, as even a small change in malignant executable code can bypass these signature - based detectors. many machine learning - based models have been proposed to efficiently detect a wide variety of malware. many of these models are found to be susceptible to adversarial attacks - attacks that work by generating intentionally designed inputs that can force these models to misclassify. our work aims to explore vulnerabilities of current state - of - the - art malware detectors to adversarial attacks. we train a transformers - based malware detector, carry out adversarial attacks resulting in a misclassification rate of 23. 9 %, and propose defenses that cut this misclassification rate in half. an implementation of our work can be found at https://github.com/yashjakhotiya/adversarial-attacks-on-transformers. | arxiv:2210.00008 |
the combination of more than two fields provides constraints on the systematic error of simultaneous observations. the concept is investigated in the context of the gravitation astrometric measurement experiment ( game ), which aims at measuring the ppn parameter $\gamma$ at the $10^{-7} - 10^{-8}$ level. robust self - calibration and control of systematic error are crucial to the achievement of the precision goal. the present work is focused on the concept investigation and practical implementation strategy of systematic error control over four simultaneously observed fields, implementing a " double differential " measurement technique. some basic requirements on geometry, observing, and calibration strategy are derived, and the fundamental characteristics of the proposed concept are discussed. | arxiv:1105.2740 |
aging in spin glasses is analyzed via the probability density function ( pdf ) of the heat transfer between system and bath over a small time $\delta t$. the pdf contains a gaussian part, describing reversible fluctuations, and an exponential tail caused by intermittent events. we find that the relative weight of these two parts depends, for fixed $\delta t$, on the ratio of the total sampling time to the age $t_w$. fixing this ratio, the intensity of the intermittent events is proportional to $\delta t / t_w$ and independent of the temperature. the gaussian part has a variance with the same temperature dependence as the variance of the equilibrium energy in a system with an exponential density of states. all these observations are explained by assuming that, for any $t_w$, intermittent events are triggered by local energy fluctuations exceeding those that previously occurred. | arxiv:cond-mat/0403212 |
in this paper, we design a controller for an interconnected system consisting of a linear stochastic differential equation ( sde ) actuated through a linear hyperbolic partial differential equation ( pde ). our approach aims to minimize the variance of the state of the sde component. we leverage a backstepping technique to transform the original pde into an uncoupled stochastic pde. as such, we reformulate our initial problem as the control of a delayed sde with a non - deterministic drift. under standard controllability assumptions, we design a controller steering the mean of the states to zero while keeping its covariance bounded. as a final step, we address the optimal control of the delayed sde, employing artstein ' s transformation and linear quadratic stochastic control techniques. | arxiv:2405.08600 |
rewards play a crucial role in reinforcement learning. to arrive at the desired policy, the design of a suitable reward function often requires significant domain expertise as well as trial - and - error. here, we aim to minimize the effort involved in designing reward functions for contact - rich manipulation tasks. in particular, we provide an approach capable of extracting dense reward functions algorithmically from robots ' high - dimensional observations, such as images and tactile feedback. in contrast to state - of - the - art high - dimensional reward learning methodologies, our approach does not leverage adversarial training, and is thus less prone to the associated training instabilities. instead, our approach learns rewards by estimating task progress in a self - supervised manner. we demonstrate the effectiveness and efficiency of our approach on two contact - rich manipulation tasks, namely, peg - in - hole and usb insertion. the experimental results indicate that the policies trained with the learned reward function achieves better performance and faster convergence compared to the baselines. | arxiv:2011.08458 |
big data have the characteristics of enormous volume, high velocity, diversity, value - sparsity, and uncertainty, which make knowledge learning from them full of challenges. with the emergence of crowdsourcing, versatile information can be obtained on - demand, so that the wisdom of crowds can be readily harnessed to facilitate the knowledge learning process. during the past thirteen years, researchers in the ai community have made great efforts to remove the obstacles in the field of learning from crowds. this concentrated survey paper comprehensively reviews the technical progress in crowdsourcing learning from a systematic perspective that includes three dimensions of data, models, and learning processes. in addition to reviewing existing important work, the paper places a particular emphasis on providing some promising blueprints on each dimension as well as discussing the lessons learned from our past research work, which will light the way for new researchers and encourage them to pursue new contributions. | arxiv:2206.09315 |
the parameters and elemental abundances of atmospheres for ten thick - disk red giants were determined from high - resolution spectra by the method of model stellar atmospheres. the results of a comparative analysis of the [ na / fe ] abundances in the atmospheres of the investigated stars and thin - disk red giants are presented. sodium in the atmospheres of thick - disk red giants is shown to have no overabundances typical of thin - disk red giants. | arxiv:1311.5040 |
in the scenario where only superpartners are produced at the large hadron collider, how could one determine whether the underlying supersymmetric model is 4 - dimensional or higher - dimensional? we propose and develop a series of tests for discriminating between a pure supersymmetry ( susy ) and a susy realized within the well - motivated warped geometry a la randall - sundrum ( rs ). two of these tests make use of characteristic patterns arising in the squark / slepton mass spectrum. the other distinctive rs susy feature is the possibly larger ( even dominant ) higgs boson decay branching ratios into sleptons, compared to pure susy. techniques for pinning down the presence of soft susy breaking terms on the tev - brane are also suggested, based on the analysis of stop pair production at the international linear collider. for all these phenomenological studies, we first had to derive the 4 - dimensional ( 4d ) effective couplings and mass matrices of the sfermions and higgs bosons in rs susy. the localization of higgs bosons, characteristic of rs, leads to singularities in their couplings which are regularized by the exchange contribution of infinite towers of kaluza - klein ( kk ) scalar modes with dirichlet - dirichlet boundary conditions. a general method is provided for this regularization, based on the completeness relation. the sfermion masses are obtained either by integrating out those specific kk towers or by treating their mixing effects. finally, we show at the one - loop level how all quadratic divergences in the higgs mass cancel out for any cut - off, due to 5d susy and to 5d anomaly cancellation ; the analytical approach followed here also justifies the infinite kk summation required for the so - called kk regularization in 5d susy, which has motivated a rich literature. | arxiv:1101.0634 |
passive synthetic aperture radar ( sar ) uses existing signals of opportunity such as communication and broadcasting signals. in our prior work, we developed a low - rank matrix recovery ( lrmr ) method that can reconstruct scenes with extended and densely distributed point targets, overcoming shortcomings of conventional methods. the approach is based on correlating two sets of bistatic measurements, which results in a linear mapping of the tensor product of the scene reflectivity with itself. recognizing this tensor product as a rank - one positive semi - definite ( psd ) operator, we pose passive sar image reconstruction as an lrmr problem with convex relaxation. in this paper, we present a performance analysis of the convex lrmr - based passive sar image reconstruction method. we use the restricted isometry property ( rip ) and show that exact reconstruction is guaranteed under the condition that the pixel spacing or resolution satisfies a certain lower bound. we show that for sufficiently large center frequencies, our method provides resolution superior to that of fourier - based methods, making it a super - resolution technique. additionally, we show that phaseless imaging is a special case of our passive sar imaging method. we present extensive numerical simulations to validate our analysis. | arxiv:1711.03232 |
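As a toy illustration of the rank-one PSD structure mentioned above (not the paper's convex LRMR solver), the following sketch factors an idealized, fully observed outer-product matrix back into the reflectivity vector, up to the global sign that phaseless measurements cannot determine. The sizes and values are assumptions for the sketch.

```python
import math
import random

def power_iteration(mat, iters=200):
    """Leading eigenpair of a symmetric PSD matrix (pure-Python sketch)."""
    n = len(mat)
    random.seed(0)
    v = [random.random() for _ in range(n)]
    for _ in range(iters):
        w = [sum(mat[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    lam = sum(v[i] * sum(mat[i][j] * v[j] for j in range(n)) for i in range(n))
    return lam, v

def recover_reflectivity(outer):
    """Factor a rank-one PSD matrix outer = rho rho^T back into rho,
    up to an unavoidable global sign (the phaseless ambiguity)."""
    lam, v = power_iteration(outer)
    scale = math.sqrt(max(lam, 0.0))
    return [scale * x for x in v]
```

In the actual passive SAR setting the operator is only observed through an undersampled linear map, which is why the paper needs convex relaxation and RIP-based guarantees rather than a direct factorization.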
generational improvements to commodity dram throughout half a century have long solidified its prevalence as main memory across the computing industry. however, overcoming today ' s dram technology scaling challenges requires new solutions driven by both dram producers and consumers. in this paper, we observe that the separation of concerns between producers and consumers specified by industry - wide dram standards is becoming a liability to progress in addressing scaling - related concerns. to understand the problem, we study four key directions for overcoming dram scaling challenges using system - memory cooperation : ( i ) improving memory access latencies ; ( ii ) reducing dram refresh overheads ; ( iii ) securely defending against the rowhammer vulnerability ; and ( iv ) addressing worsening memory errors. we find that the single most important barrier to advancement in all four cases is the consumer ' s lack of insight into dram reliability. based on an analysis of dram reliability testing, we recommend revising the separation of concerns to incorporate limited information transparency between producers and consumers. finally, we propose adopting this revision in a two - step plan, starting with immediate information release through crowdsourcing and publication and culminating in widespread modifications to dram standards. | arxiv:2401.16279 |
in this paper, we study the low temperature limit of the spherical crisanti - sommers variational problem. we identify the $ \ gamma $ - limit of the crisanti - sommers functionals, thereby establishing a rigorous variational problem for the ground state energy of spherical mixed $ p $ - spin glasses. as an application, we compute moderate deviations of the corresponding minimizers in the low temperature limit. in particular, for a large class of models this yields moderate deviations for the overlap distribution. we then analyze the ground state energy problem. we show that this variational problem is dual to an obstacle - type problem. this duality is at the heart of our analysis. we present the regularity theory of the optimizers of the primal and dual problems. this culminates in a simple method for constructing a finite dimensional space in which these optimizers live for any model. as a consequence of these results, we unify independent predictions of crisanti - leuzzi and auffinger - ben arous regarding the 1rsb phase in this limit. we find that the " positive replicon eigenvalue " and " pure - like " conditions are together necessary for optimality, but that neither are themselves sufficient, answering a question of auffinger and ben arous in the negative. we end by proving that these conditions completely characterize the 1rsb phase in $ 2 + p $ - spin models. | arxiv:1602.00657 |
we study gradient models on the lattice $ \ mathbb { z } ^ d $ with non - convex interactions. these gibbs fields ( lattice models with continuous spin ) emerge in various branches of physics and mathematics. in quantum field theory they appear as massless field theories. even though our motivation stems from considering vector - valued fields as displacements for atoms of crystal structures and the study of the cauchy - born rule for these models, our attention here is mostly devoted to interfaces, with the gradient field as an \ emph { effective } interface interaction. in this case we prove the strict convexity of the surface tension ( interface free energy ) for low temperatures and sufficiently small interface tilts using multi - scale ( renormalisation group analysis ) techniques following the approach of brydges and coworkers \ cite { b07 }. this is a complement to the study of the high temperature regime in \ cite { cdm09 } and an extension of funaki and spohn ' s result \ cite { fs97 }, valid for strictly convex interactions. | arxiv:1606.09541 |
in this letter, we consider the multiple statistical classification problem, in which a sequence of n independent and identically distributed observations, generated by one of m discrete sources, needs to be classified. the source distributions are not known ; however, one has access to labeled training sequences from each source. we consider the case where the unknown source distributions are estimated from the training sequences, and the estimates are then used as nominal distributions in a robust hypothesis test. specifically, we consider the robust dgl test due to devroye et al. and provide non - asymptotic exponential bounds, expressed as functions of the ratio of the training sequence length to the observation length, on the error probability of classification. | arxiv:2106.04824 |
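The plug-in idea described above, estimating each source distribution from its training sequence before testing, can be sketched with a basic maximum-likelihood classifier. The robust DGL test of the letter is more refined; the add-one smoothing here is an assumption made so the sketch handles unseen symbols.

```python
import math
from collections import Counter

def empirical_dist(seq, alphabet):
    """Add-one-smoothed empirical distribution from a training sequence."""
    counts = Counter(seq)
    total = len(seq) + len(alphabet)
    return {a: (counts[a] + 1) / total for a in alphabet}

def classify(test_seq, training_seqs, alphabet):
    """Assign test_seq to the source whose estimated (plug-in)
    distribution gives it the highest log-likelihood."""
    best, best_ll = None, -math.inf
    for label, train in training_seqs.items():
        p = empirical_dist(train, alphabet)
        ll = sum(math.log(p[x]) for x in test_seq)
        if ll > best_ll:
            best, best_ll = label, ll
    return best
```

With well-separated sources and long enough training sequences, this plug-in rule classifies correctly with high probability, which is the regime the letter's exponential error bounds quantify.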
we present a model reduction approach for the real - time solution of time - dependent nonlinear partial differential equations ( pdes ) with parametric dependencies. the approach integrates several ingredients to develop efficient and accurate reduced - order models. proper orthogonal decomposition is used to construct a reduced - basis ( rb ) space which provides a rapidly convergent approximation of the parametric solution manifold. the galerkin projection is employed to reduce the dimensionality of the problem by projecting the weak formulation of the governing pdes onto the rb space. a major challenge in model reduction for nonlinear pdes is the efficient treatment of nonlinear terms, which we address by unifying the implementation of several hyperreduction methods. we introduce a first - order empirical interpolation method to approximate the nonlinear terms and recover the computational efficiency. we demonstrate the effectiveness of our methodology through its application to the allen - cahn equation, which models phase separation processes, and the buckley - leverett equation, which describes two - phase fluid flow in porous media. numerical results highlight the accuracy, efficiency, and stability of the proposed approach. | arxiv:2410.02093 |
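A minimal sketch of the proper orthogonal decomposition step mentioned above, via the method of snapshots: the leading eigenvector of the snapshot Gram matrix gives the combination coefficients of the first POD mode. The paper's full pipeline adds Galerkin projection and hyperreduction on top of this; the pure-Python power iteration below is an illustrative simplification.

```python
import math

def gram(snapshots):
    """Gram matrix of inner products between snapshot vectors."""
    n = len(snapshots)
    return [[sum(a * b for a, b in zip(snapshots[i], snapshots[j]))
             for j in range(n)] for i in range(n)]

def leading_eigvec(mat, iters=500):
    """Leading eigenvector of a symmetric PSD matrix by power iteration."""
    n = len(mat)
    v = [1.0 / math.sqrt(n)] * n
    for _ in range(iters):
        w = [sum(mat[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w)) or 1.0
        v = [x / norm for x in w]
    return v

def first_pod_mode(snapshots):
    """First POD mode via the method of snapshots: combine the
    snapshots with the leading Gram eigenvector, then normalize."""
    a = leading_eigvec(gram(snapshots))
    mode = [sum(a[k] * s[i] for k, s in enumerate(snapshots))
            for i in range(len(snapshots[0]))]
    norm = math.sqrt(sum(x * x for x in mode))
    return [x / norm for x in mode]
```

Retaining the first few such modes yields the rapidly convergent reduced-basis space onto which the governing equations are Galerkin-projected.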
we present a long bepposax observation of abell 754 that reveals a nonthermal excess with respect to the thermal emission at energies greater than ~ 45 kev. a vla radio observation at 1. 4 ghz definitively confirms the existence of diffuse radio emission in the central region of the cluster, previously suggested by images at 74 and 330 mhz ( kassim et al 2001 ), and reveals additional features. in addition, our observation determines a steeper radio halo spectrum in the 330 - 1400 mhz frequency range with respect to the spectrum detected at lower frequencies, indicating the presence of a spectral cutoff. the presence of a radio halo in a754, considered the prototype of a merging cluster, reinforces the link between the formation of mpc - scale radio regions and very recent or current merger processes. the radio results combined with the hard x - ray excess detected by bepposax give information on the origin of the electron population responsible for nonthermal phenomena in galaxy clusters. we also discuss the possibility that 26w20, a tailed radio galaxy with bl lac characteristics located in the field of view of the pds, could be responsible for the observed nonthermal hard x - ray emission. | arxiv:astro-ph/0212408 |
a permutation $ \ boldsymbol w $ gives rise to a graph $ g _ { \ boldsymbol w } $ ; the vertices of $ g _ { \ boldsymbol w } $ are the letters in the permutation and the edges of $ g _ { \ boldsymbol w } $ are the inversions of $ \ boldsymbol w $. we find that the number of trees among permutation graphs with $ n $ vertices is $ 2 ^ { n - 2 } $ for $ n \ ge 2 $. we then study $ t _ n $, a uniformly random tree from this set of trees. in particular, we study the number of vertices of a given degree in $ t _ n $, the maximum degree in $ t _ n $, the diameter of $ t _ n $, and the domination number of $ t _ n $. denoting the number of degree - $ k $ vertices in $ t _ n $ by $ d _ k $, we find that $ ( d _ 1, \ dots, d _ m ) $ converges to a normal distribution for any fixed $ m $ as $ n \ to \ infty $. the vertex domination number of $ t _ n $ is also asymptotically normally distributed as $ n \ to \ infty $. the diameter of $ t _ n $ shifted by $ - 2 $ is binomially distributed with parameters $ n - 3 $ and $ 1 / 2 $. finally, we find the asymptotic distribution of the maximum degree in $ t _ n $, which is concentrated around $ \ log _ 2n $. | arxiv:1406.3958 |
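The count $ 2 ^ { n - 2 } $ stated in the abstract above can be checked by brute force for small $ n $: build the inversion graph of every permutation and count the distinct edge sets that form trees.

```python
from itertools import permutations

def inversion_graph_edges(w):
    """Edges of the permutation graph of w: one edge {w[i], w[j]}
    for every inversion (i < j with w[i] > w[j])."""
    return frozenset(frozenset((w[i], w[j]))
                     for i in range(len(w))
                     for j in range(i + 1, len(w))
                     if w[i] > w[j])

def is_tree(edges, n):
    """A graph on vertices 1..n is a tree iff it has n - 1 edges
    and is connected."""
    if len(edges) != n - 1:
        return False
    adj = {v: [] for v in range(1, n + 1)}
    for e in edges:
        a, b = tuple(e)
        adj[a].append(b)
        adj[b].append(a)
    seen, stack = {1}, [1]
    while stack:
        for u in adj[stack.pop()]:
            if u not in seen:
                seen.add(u)
                stack.append(u)
    return len(seen) == n

def count_tree_permutation_graphs(n):
    """Number of distinct trees arising as permutation graphs on n letters."""
    trees = set()
    for w in permutations(range(1, n + 1)):
        edges = inversion_graph_edges(w)
        if is_tree(edges, n):
            trees.add(edges)
    return len(trees)
```

For $ n = 2, \dots, 6 $ this enumeration returns 1, 2, 4, 8, 16, matching $ 2 ^ { n - 2 } $.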
management science ( or managerial science ) is a wide and interdisciplinary study of solving complex problems and making strategic decisions as it pertains to institutions, corporations, governments and other types of organizational entities. it is closely related to management, economics, business, engineering, management consulting, and other fields. it uses various scientific research - based principles, strategies, and analytical methods including mathematical modeling, statistics and numerical algorithms and aims to improve an organization ' s ability to enact rational and accurate management decisions by arriving at optimal or near optimal solutions to complex decision problems. management science looks to help businesses achieve goals using a number of scientific methods. the field was initially an outgrowth of applied mathematics, where early challenges were problems relating to the optimization of systems which could be modeled linearly, i. e., determining the optima ( maximum value of profit, assembly line performance, crop yield, bandwidth, etc. or minimum of loss, risk, costs, etc. ) of some objective function. today, the discipline of management science may encompass a diverse range of managerial and organizational activity concerning problems structured in mathematical or other quantitative form in order to derive managerially relevant insights and solutions. = = overview = = management science is concerned with a number of areas of study : developing and applying models and concepts that may prove useful in helping to illuminate management issues and solve managerial problems. the models used can often be represented mathematically, but sometimes computer - based, visual or verbal representations are used as well or instead. designing and developing new and better models of organizational excellence. helping to improve, stabilize or otherwise manage profit margins in enterprises. 
management science research can be done on three levels : the fundamental level lies in three mathematical disciplines : probability, optimization, and dynamical systems theory. the modeling level is about building models, analyzing them mathematically, gathering and analyzing data, implementing models on computers, solving them, experimenting with them — all this is part of management science research on the modeling level. this level is mainly instrumental, and driven mainly by statistics and econometrics. the application level, just as in any other engineering and economics disciplines, strives to make a practical impact and be a driver for change in the real world. the management scientist ' s mandate is to use rational, systematic and science - based techniques to inform and improve decisions of all kinds. the techniques of management science are not restricted to business applications but may be applied to military, medical, public administration, charitable groups, political groups or community groups | https://en.wikipedia.org/wiki/Management_science |
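The passage above notes that management science grew out of linearly modeled optimization problems. A toy profit-maximization linear program (hypothetical numbers) can be solved by enumerating the vertices of the feasible region, the textbook approach that practical solvers refine:

```python
from itertools import combinations

def solve_lp_2d(c, constraints):
    """Maximize c[0]*x + c[1]*y subject to a*x + b*y <= r for each
    ((a, b), r) in constraints, with x, y >= 0, by checking every
    intersection of constraint boundaries (vertex enumeration;
    suitable for toy two-variable problems only)."""
    # include the axes x = 0 and y = 0 as boundary lines
    lines = constraints + [((1, 0), 0), ((0, 1), 0)]
    best = None
    for ((a1, b1), r1), ((a2, b2), r2) in combinations(lines, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:  # parallel boundaries, no vertex
            continue
        x = (r1 * b2 - r2 * b1) / det
        y = (a1 * r2 - a2 * r1) / det
        if x < -1e-9 or y < -1e-9:
            continue
        if all(a * x + b * y <= r + 1e-9 for (a, b), r in constraints):
            val = c[0] * x + c[1] * y
            if best is None or val > best[0]:
                best = (val, (x, y))
    return best
```

For instance, maximizing a profit of 3 per unit of product x and 5 per unit of product y under the constraints x <= 4, 2y <= 12, and 3x + 2y <= 18 yields the optimum 36 at (x, y) = (2, 6).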