| text | source |
|---|---|
In a 2009 paper, Dave Benson gave a description in purely algebraic terms of the mod $p$ homology of $\Omega(BG^\wedge_p)$, when $G$ is a finite group, $BG^\wedge_p$ is the $p$-completion of its classifying space, and $\Omega(BG^\wedge_p)$ is the loop space of $BG^\wedge_p$. The main purpose of this work is to shed new light on Benson's result by extending it to a more general setting. As a special case, we show that if $\mathcal{C}$ is a small category, $|\mathcal{C}|$ is the geometric realization of its nerve, $R$ is a commutative ring, and $|\mathcal{C}|^+_R$ is a "plus construction" for $|\mathcal{C}|$ in the sense of Quillen (taken with respect to $R$-homology), then $H_*(\Omega(|\mathcal{C}|^+_R); R)$ can be described as the homology of a chain complex of projective $R\mathcal{C}$-modules satisfying a certain list of algebraic conditions that determine it uniquely up to chain homotopy. Benson's theorem is now the case where $\mathcal{C}$ is the category of a finite group $G$, $R = \mathbb{F}_p$ for some prime $p$, and $|\mathcal{C}|^+_R = BG^\wedge_p$. | arxiv:1807.02353 |
We consider extensive-form win-lose games over a complete binary tree of depth $n$ where players act in an alternating manner. We study arguably the simplest random structure of payoffs over such games, where the 0/1 payoffs at the leaves are drawn according to an i.i.d. Bernoulli distribution with probability $p$. Whenever $p$ differs from the golden ratio, asymptotically as $n \rightarrow \infty$, the winner of the game is determined. In the case where $p$ equals the golden ratio, we call such a random game a \emph{golden game}. In golden games the winner is the player that acts first, with probability equal to the golden ratio. We suggest the notion of \emph{fragility} as a measure of the "fairness" of a game's rules. Fragility counts how many leaf payoffs must be flipped in order to change the identity of the winning player. Our main result provides a recursive formula for the asymptotic fragility of golden games. Surprisingly, golden games are extremely fragile. For instance, with probability $\approx 0.77$ a losing player could flip a single payoff (out of $2^n$) and become the winner. With probability $\approx 0.999$ a losing player could flip 3 payoffs and become the winner. | arxiv:1909.04231 |
It is well known that ferromagnetism can be realized along zigzag graphene nanoribbon edges, but armchair graphene nanoribbon edges (AGNEs) are nonmagnetic. Here, we achieve Heisenberg antiferromagnetic spin chains through edge reconstruction along the AGNEs. The reconstructed edge consists of pentagonal carbon rings or a hybrid of pentagonal and hexagonal carbon rings. The resultant nanoribbons are narrow-gap semiconductors, and the band-edge states are either spin-degenerate edge states or nonmagnetic bulk states. The spin is located on the outermost carbon of the pentagonal ring, and the inter-spin exchange is the nearest-neighbor antiferromagnetic interaction. For finite chain lengths or nonzero magnetization, there are nonzero spin Drude weights, and thus ballistic quantum spin transport can be achieved along the reconstructed edges; this could be used for quantum spin information transfer and spintronic applications. | arxiv:2203.14288 |
Sums of the form $\sum_{n \ge 0} (-1)^n q^{n(n-1)/2} x^n$ are called partial theta functions. In his lost notebook, Ramanujan recorded many identities for those functions. In 2003, Warnaar found an elegant formula for a sum of two partial theta functions. Subsequently, Andrews and Warnaar established a similar result for the product of two partial theta functions. In this note, I discuss the relation between the Andrews-Warnaar identity and the (1986) product formula due to Gasper and Rahman. I employ a nonterminating extension of the Sears-Carlitz transformation for $_3\phi_2$ to provide a new elegant proof of a companion identity for the difference of two partial theta series. This difference formula first appeared in the work of Schilling-Warnaar (2002). Finally, I show that the Schilling-Warnaar (2002) and Warnaar (2003) formulas are, in fact, equivalent. | arxiv:0712.4087 |
High-quality annotation of fine-grained visual categories demands great expert knowledge, which is taxing and time-consuming. Alternatively, learning fine-grained visual representations from enormous collections of unlabeled images (e.g., species, brands) by self-supervised learning becomes a feasible solution. However, recent research finds that existing self-supervised learning methods are less qualified to represent fine-grained categories. The bottleneck lies in the fact that the pretext representation is built from every patch-wise embedding, while fine-grained categories are determined by only a few key patches of an image. In this paper, we propose a cross-level multi-instance distillation (CMD) framework to tackle this challenge. Our key idea is to consider the importance of each image patch in determining the fine-grained pretext representation by multiple-instance learning. To comprehensively learn the relation between informative patches and fine-grained semantics, multi-instance knowledge distillation is implemented both on the region/image-crop pairs from the teacher and student nets and on the region-image crops inside the teacher/student net, which we term intra-level multi-instance distillation and inter-level multi-instance distillation. Extensive experiments on CUB-200-2011, Stanford Cars, and FGVC Aircraft show that the proposed method outperforms the contemporary method by up to 10.14% and existing state-of-the-art self-supervised learning approaches by up to 19.78% on both top-1 accuracy and the rank-1 retrieval metric. | arxiv:2401.08860 |
When considering Navier-Stokes equations on Riemannian manifolds, one frequently encounters situations where the manifold is embedded in the ambient Euclidean space. In this context it is interesting to investigate the precise relationship between the diffusion operator in the ambient space and the diffusion operator on the manifold. The present paper gives a precise characterization of this situation for general surfaces in three-dimensional space. | arxiv:2405.11562 |
Massive objects (clumps) of cold dark matter (CDM) in the Galaxy can appear, due to CDM annihilation, as discrete sources of gamma radiation. Some of the unidentified regular gamma-ray sources observed by EGRET can be accounted for by massive CDM clumps. The future gamma-ray experiment GLAST, in combination with EGRET data, will make it possible to probe a wide range of models of clumped annihilating CDM. | arxiv:astro-ph/0507118 |
This paper has been withdrawn by the authors. | arxiv:1102.2596 |
In studies of complex heterogeneous networks, particularly of the Internet, significant attention has been paid to analyzing network failures caused by hardware faults or overload, where the network reaction is modeled as rerouting of traffic away from failed or congested elements. Here we model another type of network reaction to congestion: a sharp reduction of the input traffic rate through congested routes, which occurs on much shorter time scales. We consider the onset of congestion in the Internet, where a local mismatch between demand and capacity results in traffic losses, and show that it can be described as a phase transition characterized by strong non-Gaussian loss fluctuations at a mesoscopic time scale. The fluctuations, caused by noise in input traffic, are exacerbated by the heterogeneous nature of the network, manifested in a scale-free load distribution. They result in the network strongly overreacting to the first signs of congestion by significantly reducing input traffic along communication paths where congestion is utterly negligible. | arxiv:1110.5246 |
We propose a realistic scheme to implement discrete-time quantum walks in the Brillouin zone (i.e., in quasimomentum space) with a spinor Bose-Einstein condensate. Relying on a static optical lattice to suppress tunneling in real space, the condensate is displaced in quasimomentum space in discrete steps conditioned upon the internal state of the atoms, while short pulses periodically couple the internal states. We show that tunable twisted boundary conditions can be implemented in a fully natural way by exploiting the periodicity of the Brillouin zone. The proposed setup does not suffer from off-resonant scattering of photons and could allow a robust implementation of quantum walks with at least several tens of steps. In addition, onsite atom-atom interactions can be used to simulate interactions with infinitely long range in the Brillouin zone. | arxiv:1705.00512 |
Large-scale and high-dimensional permutation operations are important for various applications in, e.g., telecommunications and encryption. Here, we demonstrate the use of all-optical diffractive computing to execute a set of high-dimensional permutation operations between an input and output field-of-view through layer rotations in a diffractive optical network. In this reconfigurable multiplexed material designed by deep learning, every diffractive layer has four orientations: 0, 90, 180, and 270 degrees. Each unique combination of these rotatable layers represents a distinct rotation state of the diffractive design tailored for a specific permutation operation. Therefore, a k-layer rotatable diffractive material is capable of all-optically performing up to 4^k independent permutation operations. The original input information can be decrypted by applying the specific inverse permutation matrix to the output patterns, while applying other inverse operations leads to loss of information. We demonstrated the feasibility of this reconfigurable multiplexed diffractive design by approximating 256 randomly selected permutation matrices using k = 4 rotatable diffractive layers. We also experimentally validated this reconfigurable diffractive network using terahertz radiation and 3D-printed diffractive layers, obtaining a decent match to our numerical results. The presented rotation-multiplexed diffractive processor design is particularly useful due to its mechanical reconfigurability, offering multifunctional representation through a single fabrication process. | arxiv:2402.02397 |
Advances in CLIP and large multimodal models (LMMs) have enabled open-vocabulary and free-text segmentation, yet existing models still require predefined category prompts, limiting free-form category self-generation. Most segmentation LMMs also remain confined to sparse predictions, restricting their applicability in open-set environments. In contrast, we propose ROSE, a revolutionary open-set dense segmentation LMM, which enables dense mask prediction and open-category generation through patch-wise perception. Our method treats each image patch as an independent region-of-interest candidate, enabling the model to predict both dense and sparse masks simultaneously. Additionally, a newly designed instruction-response paradigm takes full advantage of the generation and generalization capabilities of LMMs, achieving category prediction independent of closed-set constraints or predefined categories. To further enhance mask detail and category precision, we introduce a conversation-based refinement paradigm, integrating the prediction result from the previous step with a textual prompt for revision. Extensive experiments demonstrate that ROSE achieves competitive performance across various segmentation tasks in a unified framework. Code will be released. | arxiv:2412.00153 |
We consider the motion of several rigid bodies immersed in a two-dimensional incompressible perfect fluid. The motion of the rigid bodies is given by Newton's laws with forces due to the fluid pressure, and the fluid motion is described by the incompressible Euler equations. Our analysis covers the case where the circulations of the fluid velocity around the bodies are nonzero and where the fluid vorticity is bounded. The whole system occupies a bounded simply connected domain with an external fixed boundary which is impermeable except on an open non-empty part where one allows some fluid to go in and out of the domain by controlling the normal velocity and the entering vorticity. We prove that it is possible to exactly achieve any non-colliding smooth motion of the rigid bodies by the remote action of a controlled normal velocity on the outer boundary, which takes the form of state feedback, with zero entering vorticity. This extends the result of (Glass, O., Kolumbán, J. J., Sueur, F. (2017). External boundary control of the motion of a rigid body immersed in a perfect two-dimensional fluid. Analysis & PDE), where the exact controllability of a single rigid body immersed in a 2D irrotational perfect incompressible fluid from an initial position and velocity to a final position and velocity was investigated. The proof relies on a nonlinear method to solve linear perturbations of nonlinear equations associated with a quadratic operator having a regular non-trivial zero. Here this method is applied to a quadratic equation satisfied by a class of boundary controls, which is obtained by extending the reformulation of the Newton equations performed in the uncontrolled case in (Glass, O., Lacave, C., Munnier, A., Sueur, F. (2019). Dynamics of rigid bodies in a two-dimensional incompressible perfect fluid. Journal of Differential Equations, 267(6), 3561-3577) to the case where a control acts on the external boundary. | arxiv:2007.05235 |
We evaluate optimized parallel sparse matrix-vector operations for two representative application areas on widespread multicore-based cluster configurations. First, the single-socket baseline performance is analyzed and modeled with respect to basic architectural properties of standard multicore chips. Going beyond the single node, parallel sparse matrix-vector operations often suffer from an unfavorable communication-to-computation ratio. Starting from the observation that nonblocking MPI is not able to hide communication cost using standard MPI implementations, we demonstrate that explicit overlap of communication and computation can be achieved by using a dedicated communication thread, which may run on a virtual core. We compare our approach to pure MPI and to the widely used "vector-like" hybrid programming strategy. | arxiv:1101.0091 |
We introduce a training method for both better word representations and better performance, which we call GROVER (gradual rumination on the vector with maskers). The method is to gradually and iteratively add random noise to word embeddings while training a model. GROVER first starts from the conventional training process and then extracts the fine-tuned representations. Next, we gradually add random noise to the word representations and repeat the training process from scratch, but initialize with the noised word representations. Through the re-training process, some of the noise can be compensated for, while the remaining noise can be exploited to learn better representations. As a result, we obtain word representations that are further fine-tuned and specialized to the task. When we experiment with our method on 5 text classification datasets, it improves model performance on most of the datasets. Moreover, we show that our method can be combined with other regularization techniques, further improving model performance. | arxiv:1911.03459 |
We consider Higgs massive gravity [1, 2] and investigate whether a nonlinear ghost in this theory can be avoided. We show that although the theory considered in [10, 11] is ghost-free in the decoupling limit, the ghost nevertheless reappears at fourth order away from the decoupling limit. We also demonstrate that there is no direct relation between the value of the Vainshtein scale and the existence of the nonlinear ghost. We discuss how massive gravity should be modified to avoid the appearance of the ghost. | arxiv:1011.0183 |
Here we present the first genome-wide statistical test for recessive selection. This test explicitly uses non-equilibrium demographic differences between populations to infer the mode of selection. By analyzing the transient response to a population bottleneck and subsequent re-expansion, we qualitatively distinguish between alleles under additive and recessive selection. We analyze the response of the average number of deleterious mutations per haploid individual and describe the time dependence of this quantity. We introduce a statistic, $B_r$, to compare the number of mutations in different populations, and detail its functional dependence on the strength of selection and the intensity of the population bottleneck. This test can be used to detect the predominant mode of selection at the genome-wide or regional level, as well as among a sufficiently large set of medically or functionally relevant alleles. | arxiv:1312.3028 |
We define BRST-invariant observables in the OSp-invariant closed string field theory for bosonic strings. We evaluate correlation functions of these observables and show that the S-matrix elements derived from them coincide with those of the light-cone gauge string field theory. | arxiv:hep-th/0703216 |
The main purpose of the present article is to give some new Hilbert-type sum inequalities, which in special cases yield the classical Hilbert inequalities. Our results provide some new estimates for these types of inequalities. | arxiv:2002.08190 |
In this manuscript, we introduce (symmetric) tetranacci polynomials $\xi_j$ as a twofold generalization of the ordinary tetranacci numbers, by considering both non-unity coefficients and generic initial values in their recursive definition. These polynomials arose in condensed matter physics, in the diagonalization of symmetric Toeplitz matrices having in total four non-zero off-diagonals. For the latter, the symmetric tetranacci polynomials are the basic entities of the associated eigenvectors; thus, treating the recursive structure determines the eigenvalues as well. Subsequently, we present a complete closed-form expression for any symmetric tetranacci polynomial. The key feature is a decomposition in terms of generalized Fibonacci polynomials. | arxiv:2208.10527 |
We develop the QCD description of the breakup of photons into forward dijets in small-$x$ deep inelastic scattering off nuclei in the saturation regime. Based on the color dipole approach, we derive a multiple-scattering expansion for intranuclear distortions of the jet-jet transverse momentum spectrum. Special attention is paid to the non-Abelian aspects of the propagation of color dipoles in a nuclear medium. We report a nonlinear $k_\perp$-factorization formula for the breakup of photons into dijets in terms of the collective Weizsäcker-Williams (WW) glue of nuclei as defined in Refs. \cite{saturation, nssdijet}. For hard dijets with transverse momenta above the saturation scale, the azimuthal decorrelation (acoplanarity) momentum is of the order of the nuclear saturation momentum $Q_A$. For minijets with transverse momentum below the saturation scale, the nonlinear $k_\perp$-factorization predicts a complete disappearance of the jet-jet correlation. We comment on the possible relevance of the nuclear decorrelation of jets to the experimental data from the STAR-RHIC collaboration. | arxiv:hep-ph/0303024 |
We present our study of the atomic, electronic, magnetic, and phonon properties of the one-dimensional honeycomb structure of molybdenum disulfide (MoS$_2$) using the first-principles plane-wave method. Calculated phonon frequencies of the bare armchair nanoribbon reveal the fourth acoustic branch and indicate stability. The force constant and in-plane stiffness calculated in the harmonic elastic deformation range signify that MoS$_2$ nanoribbons are stiff quasi-one-dimensional structures, but not as strong as graphene and BN nanoribbons. Bare MoS$_2$ armchair nanoribbons are nonmagnetic, direct-band-gap semiconductors. Bare zigzag MoS$_2$ nanoribbons become half-metallic as a result of the (2x1) reconstruction of edge atoms and are semiconducting for minority spins but metallic for majority spins. Their magnetic moments and spin polarizations at the Fermi level are reduced as a result of the passivation of edge atoms by hydrogen. The functionalization of MoS$_2$ nanoribbons by adatom adsorption and vacancy-defect creation is also studied. The nonmagnetic armchair nanoribbons attain a net magnetic moment depending on where the foreign atoms are adsorbed and what kind of vacancy defect is created. The magnetization of zigzag nanoribbons due to edge states is suppressed in the presence of vacancy defects. | arxiv:1009.5488 |
The Bose-Hubbard model in an external magnetic field is investigated with strong-coupling perturbation theory. The lowest-order secular equation leads to the problem of a charged particle moving on a lattice in the presence of a magnetic field, which was first treated by Hofstadter. We present phase diagrams for the two-dimensional square and triangular lattices, showing a change in shape of the phase lobes away from the well-known power-law behavior in zero magnetic field. Some qualitative agreement with experimental work on Josephson-junction arrays is found for the insulating-phase behavior at small fields. | arxiv:cond-mat/9812253 |
A universal relationship between scaled size and scaled energy is explored in five-body self-bound quantum systems. The ground-state binding energy and structure properties are obtained by means of the diffusion Monte Carlo method. We use pure estimators to eliminate any residual bias in the estimation of the cluster size. By strengthening the inter-particle interaction, we extend the exploration from the halo region to classical systems. A universal scaled-size versus scaled-energy line, which does not depend on the short-range potential details and binding strength, is found for homogeneous pentamers with interaction potentials decaying at long range predominantly as $r^{-6}$. For mixed pentamers, we discuss under which conditions the universal line can approximately describe the size-energy ratio. Our data are compatible with generalized Tjon lines, which assume a linear dependence between the binding energy of the pentamers and that of the tetramers, when both are divided by the trimer energies. | arxiv:2206.11946 |
In this paper, we propose to define the concept of a family of local atoms, and we then generalize this concept to atomic systems for operators in Banach spaces by using a semi-inner product. We also give a characterization of atomic systems, leading to new frames. In addition, a reconstruction formula is obtained. Next, some new results are established. The characterization of atomic systems allows us to state some results for sampling theory in semi-inner-product reproducing kernel Banach spaces. Finally, using the frame operator in Banach spaces, new perturbation results are established. | arxiv:1506.07691 |
Security properties are often focused on the technological side of the system. One implicitly assumes that the users will behave in the right way to preserve the property at hand. In real life, this cannot be taken for granted. In particular, security mechanisms that are difficult and costly to use are often ignored by the users and do not really defend the system against possible attacks. Here, we propose a graded notion of security based on the complexity of the user's strategic behavior. More precisely, we suggest that the level to which a security property $\varphi$ is satisfied can be defined in terms of (a) the complexity of the strategy that the voter needs to execute to make $\varphi$ true, and (b) the resources that the user must employ on the way. The simpler and cheaper it is to obtain $\varphi$, the higher the degree of security. We demonstrate how the idea works in a case study based on an electronic voting scenario. To this end, we model the vVote implementation of the Prêt à Voter protocol for coercion-resistant and voter-verifiable elections. Then, we identify "natural" strategies for the voter to obtain receipt-freeness and measure the voter's effort that they require. We also look at how hard it is for the coercer to compromise the election through a randomization attack. | arxiv:2007.12424 |
The networked nature of supply chains makes them susceptible to systemic risk, where local firm failures can propagate through firm interdependencies and lead to cascading supply chain disruptions. The systemic risk of supply chains can be quantified and is closely related to the topology and dynamics of supply chain networks (SCNs). How different network properties contribute to this risk remains unclear. Here, we ask whether systemic risk can be significantly reduced by strategically rewiring supplier-customer links. In doing so, we seek to understand the role of specific endogenously emerged network structures and to what extent the observed systemic risk is a result of fundamental properties of the dynamical system. We minimize systemic risk through rewiring by employing a method from statistical physics that respects firm-level constraints on production. Analyzing six specific subnetworks of the national SCNs of Ecuador and Hungary, we demonstrate that systemic risk can be considerably mitigated, by 16-50%, without reducing the production output of firms. A comparison of network properties before and after rewiring reveals that this risk reduction is achieved by changing the connectivity in non-trivial ways. These results suggest that actual SCN topologies carry unnecessarily high levels of systemic risk. We discuss the possibility of devising policies to reduce systemic risk through minimal, targeted interventions in supply chain networks via market-based incentives. | arxiv:2504.12955 |
Misdiagnosis rates are one of the leading causes of medical errors in hospitals, affecting over 12 million adults across the US. To address the high rate of misdiagnosis, this study utilizes four NLP-based algorithms to determine the appropriate health condition based on an unstructured transcription report. Among the logistic regression, random forest, LSTM, and CNN-LSTM models, the CNN-LSTM model performed best with an accuracy of 97.89%. We packaged this model into an authenticated web platform for accessible assistance to clinicians. Overall, by standardizing health care diagnosis and structuring transcription reports, our NLP platform drastically improves the clinical efficiency and accuracy of hospitals worldwide. | arxiv:2206.13516 |
The phenomenon of CP violation is crucial to understanding the asymmetry between matter and antimatter that exists in the universe. Dramatic experimental progress has been made, in particular in measurements of the behaviour of particles containing the b quark, where CP violation effects are predicted by the Kobayashi-Maskawa mechanism that is embedded in the Standard Model. The status of these measurements and future prospects for an understanding of CP violation beyond the Standard Model are reviewed. | arxiv:1607.06746 |
During an accelerated expansion of the universe, quantum fluctuations of sub-Planckian size can be stretched outside the horizon and regarded as effectively classical. Recently, it has been conjectured that such horizon-crossing of trans-Planckian modes never happens in theories of quantum gravity (the trans-Planckian censorship conjecture, TCC). We point out several conceptual problems with this conjecture, which is in itself formulated as a restriction on possible scenarios within a theory; by contrast, a standard swampland conjecture is a restriction on possible theories in the landscape of quantum gravity. We emphasize the concept of swampland universality, i.e., that a swampland conjecture constrains any possible scenario in a given effective field theory. In order to illustrate the problems clearly, we introduce several versions of the conjecture, in which the TCC condition is imposed differently on scenarios realizable in a given theory. We point out that these different versions of the conjecture lead to observable differences: a TCC violation in another universe can exclude a theory, and such a reduction of the landscape restricts possible predictions in our universe. Our analysis raises the question of whether or not the trans-Planckian censorship conjecture can be regarded as a swampland conjecture concerning the existence of UV completion. | arxiv:1911.10445 |
This text is the preprint version of the concluding chapter of the book New Directions in Locally Compact Groups, published by Cambridge University Press in the series Lecture Notes of the LMS. The recent progress on locally compact groups surveyed in that volume also reveals the considerable extent of the unexplored territories. Therefore, we wish to conclude it by mentioning a few open problems related to the material covered in the book that we consider important at the time of this writing. | arxiv:1801.05187 |
We propose a way to optimize chain-of-thought with reinforcement learning, but without an external reward function. Our algorithm relies on viewing the chain-of-thought as a latent variable in a probabilistic inference problem. In contrast to the full evidence lower bound, we propose to apply a much simpler Jensen's lower bound, which yields tractable objectives with simple algorithmic components (e.g., without the need for a parametric approximate posterior), making it more conducive to modern large-scale training. The lower-bound approach naturally interpolates other methods such as supervised fine-tuning and online reinforcement learning, whose practical trade-offs we illustrate. Finally, we show that on mathematical reasoning problems, optimizing with Jensen's lower bound is as effective as policy gradient with an external reward. Taken together, our results serve as a proof of concept of this new algorithmic paradigm's potential for more generic applications. | arxiv:2503.19618 |
We explore some new off-shell and on-shell conserved quantities for a scalar field in Minkowski space, using an integrability condition. The off-shell conserved tensors are related to the kinematics of the field, while a linear combination of the off-shell and on-shell conserved tensors yields the energy-momentum tensor for the scalar field. In a curved background, using the Ricci and Bianchi identities, Brans-Dicke-type field equations emerge without requiring the principle of equivalence. Further, starting from the curvature scalar and using these identities, the field equations for modified gravity (the Einstein-Hilbert action in the presence of higher-order terms) follow. | arxiv:2105.13529 |
This article considers systems of interacting particles on networks with adaptively coupled dynamics. Such processes appear frequently in natural systems and applications. Relying on the notion of graph convergence, we prove that for large systems the dynamics can be approximated by the corresponding continuum limit. Well-posedness of the latter is also established. | arxiv:2308.15433 |
I review the definition of n-point functions in loop quantum gravity, discussing what has been done and the main open issues. Particular attention is dedicated to gauge aspects and renormalization. | arxiv:0810.1978 |
The paper presents an exponential pheromone deposition approach to improve the performance of the classical ant system algorithm, which employs a uniform deposition rule. A simplified analysis using differential equations is carried out to study the stability of the basic ant system dynamics with both exponential and constant deposition rules. A roadmap of connected cities, where the shortest path between two specified cities is to be found, is taken as a platform to compare the max-min ant system model (an improved and popular variant of the ant system algorithm) with exponential and constant deposition rules. Extensive simulations are performed to find the best parameter settings for the non-uniform deposition approach, and experiments with these parameter settings reveal that the above approach outstrips the traditional one by a large margin in terms of both solution quality and convergence time. | arxiv:0811.0136 |
The Houma Alliance Book is one of the national treasures held in the Shanxi Museum in China. It has great historical significance for research on ancient history. To date, research on the Houma Alliance Book has remained at the stage of identifying paper documents, which is inefficient and makes the material difficult to display, study, and publicize. Therefore, digitizing the recognized ancient characters of the Houma Alliance Book can effectively improve the efficiency of recognizing ancient characters and provide more reliable technical support and text data. This paper proposes a new database of Houma Alliance Book ancient handwritten characters and a multi-modal fusion method to recognize them. In the database, 297 classes and 3,547 samples of Houma Alliance ancient handwritten characters are collected from the original book collection and by human imitative writing. Furthermore, a decision-level classifier fusion strategy is applied to fuse three well-known deep neural network architectures for ancient handwritten character recognition. Experiments are performed on our new database. The experimental results first provide a baseline for the new database to the research community and then demonstrate the efficiency of our proposed method. | arxiv:2207.05993 |
In this paper, we study a strong inverse approximation theorem and the saturation order for the family of Kantorovich exponential sampling operators. The class of log-uniformly continuous and bounded functions and the class of log-Hölderian functions are considered to derive these results. We also prove some auxiliary results, including a Voronovskaya-type theorem and a relation between the Kantorovich exponential sampling series and the generalized exponential sampling series, to achieve the desired plan. Moreover, some examples of kernels satisfying the conditions assumed in the hypotheses of our theorems are discussed. | arxiv:2212.06006 |
We consider two-player games played on weighted directed graphs with mean-payoff and total-payoff objectives, two classical quantitative objectives. While for single-dimensional games the complexity and memory bounds for both objectives coincide, we show that, in contrast to multi-dimensional mean-payoff games, which are known to be coNP-complete, multi-dimensional total-payoff games are undecidable. We introduce conservative approximations of these objectives, where the payoff is considered over a local finite window sliding along a play, instead of the whole play. For a single dimension, we show that (i) if the window size is polynomial, deciding the winner takes polynomial time, and (ii) the existence of a bounded window can be decided in NP $\cap$ coNP and is at least as hard as solving mean-payoff games. For multiple dimensions, we show that (i) the problem with fixed window size is EXPTIME-complete, and (ii) there is no primitive-recursive algorithm to decide the existence of a bounded window. | arxiv:1302.4248 |
In this paper, we consider the first negative eigenvalue of eigenforms of half-integral weight $k+1/2$ and obtain an almost type bound. | arxiv:2003.09139 |
Deep convolutional neural networks (CNNs) have demonstrated remarkable success in computer vision by learning strong visual feature representations in a supervised manner. However, training CNNs relies heavily on the availability of exhaustive training data annotations, significantly limiting their deployment and scalability in many application scenarios. In this work, we introduce a generic unsupervised deep learning approach to training deep models without the need for any manual label supervision. Specifically, we progressively discover sample-anchored/centred neighbourhoods to reason about and learn the underlying class decision boundaries iteratively and accumulatively. Every single neighbourhood is specially formulated so that all the member samples share the same unseen class label with high probability, facilitating the extraction of class-discriminative feature representations during training. Experiments on image classification show the performance advantages of the proposed method over state-of-the-art unsupervised learning models on six benchmarks, including both coarse-grained and fine-grained object image categorisation. | arxiv:1904.11567 |
We classify, up to equivalence, the gradings on Hurwitz superalgebras and on symmetric composition superalgebras over any field. Classifications up to isomorphism are also given in the case where the field is algebraically closed. By grading, we here mean group grading. | arxiv:1312.1259 |
In this paper, we propose computationally efficient and high-quality methods for statistical voice conversion (VC) with direct waveform modification based on spectral differentials. The conventional method with a minimum-phase filter achieves high-quality conversion but requires heavy computation for filtering. This is because the minimum phase obtained using a fixed lifter of the Hilbert transform often results in a long-tap filter. One of our methods is a data-driven method for lifter training. Since this method takes filter truncation into account during training, it can shorten the tap length of the filter while preserving conversion accuracy. Our other method is sub-band processing for extending the conventional method from narrow-band (16 kHz) to full-band (48 kHz) VC, which can convert a full-band waveform with higher converted-speech quality. Experimental results indicate that 1) the proposed lifter-training method for narrow-band VC can shorten the tap length to 1/16 without degrading the converted-speech quality, and 2) the proposed sub-band processing method for full-band VC improves the converted-speech quality compared with the conventional method. | arxiv:2002.06778 |
Probing complex language models has recently revealed several insights into linguistic and semantic patterns found in the learned representations. In this article, we probe BERT specifically to understand and measure the relational knowledge it captures in its parametric memory. While probing for linguistic understanding is commonly applied to all layers of BERT as well as fine-tuned models, this has not been done for factual knowledge. We utilize existing knowledge base completion tasks (LAMA) to probe every layer of pre-trained as well as fine-tuned BERT models (ranking, question answering, NER). Our findings show that knowledge is not only contained in BERT's final layers. Intermediate layers contribute a significant amount (17-60%) of the total knowledge found. Probing intermediate layers also reveals how different types of knowledge emerge at varying rates. When BERT is fine-tuned, relational knowledge is forgotten. The extent of forgetting is impacted by the fine-tuning objective and the training data. We found that ranking models forget the least and retain more knowledge in their final layer compared to masked language modeling and question answering. However, masked language modeling performed best at acquiring new knowledge from the training data. When it comes to learning facts, we found that capacity and fact density are key factors. We hope this initial work will spur further research into understanding the parametric memory of language models and the effect of training objectives on factual knowledge. The code to repeat the experiments is publicly available on GitHub. | arxiv:2106.02902 |
For an arbitrary integer $r \geq 1$, we compute $r$-framed motivic PT and DT invariants of small crepant resolutions of toric Calabi-Yau $3$-folds, establishing a "higher rank" version of the motivic DT/PT wall-crossing formula. This generalises the work of Morrison and Nagao. Our formulae, in particular their relationship with the $r=1$ theory, fit nicely into the current development of higher-rank refined DT invariants. | arxiv:2004.07837 |
X-ray binary systems are very popular objects for astrophysical investigation. The compact objects in these systems are neutron stars, white dwarfs, and black holes. Neutron stars and white dwarfs can have intrinsic magnetic fields, while a well-known theorem establishes the absence of intrinsic magnetic fields for black holes. A magnetic field can, however, exist in the accretion disk around a black hole. We present here realistic estimates of the magnetic field strength at the radius of the innermost stable orbit in the accretion disk of stellar-mass black holes. | arxiv:1409.2283 |
Due to the immense potential of quantum computers and the significant computational overhead required in machine learning applications, the variational quantum classifier (VQC) has recently received a lot of interest for image classification. The performance of VQC is jeopardized by noise in noisy intermediate-scale quantum (NISQ) computers, which is a significant hurdle. It is crucial to remember that large error rates occur in quantum algorithms due to quantum decoherence and the imprecision of quantum gates. Previous studies have looked towards using ensemble learning in conventional computing to reduce quantum noise. We also point out that simple average aggregation in classical ensemble learning may not work well for NISQ computers due to the unbalanced confidence distribution in VQC. Therefore, in this study, we suggest that ensemble quantum classifiers be optimized with plurality voting. Experiments are carried out on the MNIST dataset and IBM quantum computers. The results show that the suggested method can outperform the state of the art on two- and four-class classification by up to 16.0% and 6.1%, respectively. | arxiv:2210.01656 |
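The contrast drawn above between average aggregation and plurality voting can be sketched with toy numbers. The confidence vectors below are hypothetical, chosen so that one mis-calibrated ensemble member dominates the average while the majority vote is unaffected; this is an illustration of the aggregation rules, not the paper's quantum classifiers:

```python
from collections import Counter

def plurality_vote(predictions):
    """Aggregate hard class predictions from an ensemble by majority vote."""
    return Counter(predictions).most_common(1)[0][0]

def average_aggregate(prob_vectors):
    """Average per-class confidence vectors, then pick the argmax class."""
    n, k = len(prob_vectors), len(prob_vectors[0])
    mean = [sum(v[c] for v in prob_vectors) / n for c in range(k)]
    return max(range(k), key=mean.__getitem__)

# Three noisy binary classifiers: two favour class 0 with modest confidence,
# one favours class 1 with extreme (unbalanced) confidence.
probs = [[0.6, 0.4], [0.55, 0.45], [0.01, 0.99]]
labels = [max(range(2), key=v.__getitem__) for v in probs]  # [0, 0, 1]

assert plurality_vote(labels) == 0    # the majority decides
assert average_aggregate(probs) == 1  # one overconfident member dominates
```

The divergence of the two rules on the same ensemble is exactly the failure mode the abstract attributes to simple averaging under unbalanced confidences.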
We show that a single femtosecond optical frequency comb may be used to induce two-photon transitions between molecular vibrational levels to form ultracold molecules, e.g., KRb. The phase across an individual pulse in the pulse train is sinusoidally modulated with a carefully chosen modulation amplitude and frequency. Piecewise adiabatic population transfer to the final state is accomplished by each pulse in the applied pulse train, providing controlled population accumulation in the final state. Detuning the pulse-train carrier and modulation frequency from one-photon resonances changes the time scale of the molecular dynamics but leads to the same complete population transfer to the ultracold state. A standard optical frequency comb with no modulation is shown to induce similar dynamics, leading to rovibrational cooling. | arxiv:1001.3183 |
For module algebras and module coalgebras over an arbitrary bialgebra, we define two types of bivariant cyclic cohomology groups called bivariant Hopf cyclic cohomology and bivariant equivariant cyclic cohomology. These groups are defined through an extension of Connes' cyclic category $\Lambda$. We show that, in the case of module coalgebras, bivariant Hopf cyclic cohomology specializes to the Hopf cyclic cohomology of Connes and Moscovici and its dual version by fixing either one of the variables as the ground field. We also prove an appropriate version of Morita invariance for both of these theories. | arxiv:math/0606341 |
The durability and quality of software contributions are critical factors in the long-term maintainability of a codebase. This paper introduces the Time to Modification (TTM) theory, a novel approach for quantifying code quality by measuring the time interval between a code segment's introduction and its first modification. TTM serves as a proxy for code durability, with longer intervals suggesting higher-quality, more stable contributions. This work builds on previous research, including the "Time-Delta Method for Measuring Software Development Contribution Rates" dissertation, from which it heavily borrows concepts and methodologies. By leveraging version control systems such as Git, TTM provides granular insights into the temporal stability of code at various levels, ranging from individual lines to entire repositories. TTM theory contributes to the software engineering field by offering a dynamic metric that captures the evolution of a codebase over time, complementing traditional metrics like code churn and cyclomatic complexity. This metric is particularly useful for predicting maintenance needs, optimizing developer performance assessments, and improving the sustainability of software systems. Integrating TTM into continuous integration pipelines enables real-time monitoring of code stability, helping teams identify areas of instability and reduce technical debt. | arxiv:2410.11768 |
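The core measurement described above is simple to state in code. The sketch below assumes a hypothetical record format of (introduction timestamp, first-modification timestamp) pairs, such as could be mined from a Git history; the paper's own extraction pipeline is not reproduced here. Segments that have never been modified are treated as right-censored at the observation date:

```python
from datetime import datetime, timedelta

def time_to_modification(introduced, first_modified, observed_until):
    """Return the TTM interval for one code segment.

    Segments never modified by `observed_until` are right-censored: we
    report the elapsed time so far, a lower bound on their true TTM.
    """
    end = first_modified if first_modified is not None else observed_until
    return end - introduced

# Hypothetical per-line records mined from a version-control history.
now = datetime(2024, 6, 1)
records = [
    (datetime(2024, 1, 1), datetime(2024, 1, 8)),  # churned within a week
    (datetime(2024, 1, 1), None),                  # still unmodified
]
ttms = [time_to_modification(i, m, now) for i, m in records]
assert ttms[0] == timedelta(days=7)
assert ttms[1] == timedelta(days=152)  # censored at the observation end
```

Aggregating such intervals per file or per author gives the repository-level views of stability the abstract describes.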
We show that two cookie-cutter Cantor sets with the same symbolic coding are differentiably equivalent if and only if their Hausdorff dimensions are equal. | arxiv:1809.07414 |
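For the linear (self-similar) special case of a cookie-cutter set, the Hausdorff dimension in question is the root of the Moran equation $r_1^d + r_2^d = 1$, where $r_1, r_2$ are the contraction ratios of the two branches; general cookie-cutters need thermodynamic formalism instead. A minimal numerical sketch, solving the Moran equation by bisection:

```python
import math

def moran_dimension(r1, r2, tol=1e-12):
    """Solve r1**d + r2**d = 1 for d in (0, 1] by bisection.

    Assumes 0 < r1, r2 < 1 and r1 + r2 <= 1, so the self-similar set
    satisfies the open set condition and d is its Hausdorff dimension.
    """
    f = lambda d: r1**d + r2**d - 1.0
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) > 0:   # sum still too large -> dimension must be larger
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Middle-thirds Cantor set: both ratios 1/3, dimension log 2 / log 3.
d = moran_dimension(1/3, 1/3)
assert abs(d - math.log(2) / math.log(3)) < 1e-9
```

Comparing two sets then reduces, in this linear case, to comparing the roots of their respective Moran equations.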
Within supersymmetric SU(5) grand unified theory, we present several new scenarios with an anomaly-free flavor symmetry U(1)_F. Within each scenario, a variety of cases offer many possibilities for phenomenologically interesting model building. We present three concrete and economical models with anomaly-free U(1)_F leading to a natural understanding of the observed hierarchies between charged fermion masses and CKM mixing angles. | arxiv:1109.2642 |
Our goal is to study the effects of UV radiation from the first stars, quasars, and decays of hypothetical superheavy dark matter (SHDM) particles on the formation of primordial bound objects in the universe. We trace the evolution of a spherically symmetric density perturbation in the Lambda cold dark matter ($\Lambda$CDM) and MOND models, solving the frequency-dependent radiative transfer equation, non-equilibrium chemistry, and one-dimensional gas hydrodynamics. We concentrate on the destruction and formation processes of the H$_2$ molecule, which is the main coolant in primordial objects. | arxiv:astro-ph/0612689 |
Context. Acetone (CH3COCH3) is a carbonyl-bearing complex organic molecule, yet interstellar observations of acetone remain limited. Studying the formation and distribution of CH3COCH3 in the interstellar medium can provide valuable insights into prebiotic chemistry and the evolution of interstellar molecules. Aims. We explore the spatial distribution of CH3COCH3 and its correlation with the O-bearing molecules acetaldehyde (CH3CHO) and methanol (CH3OH), as well as the N-bearing molecule ethyl cyanide (C2H5CN), in massive protostellar clumps. Methods. We observed 11 massive protostellar clumps using ALMA at 345 GHz, with an angular resolution of 0.7''-1.0''. Spectral line transitions were identified using the eXtended CASA Line Analysis Software Suite. We constructed integrated intensity maps of CH3COCH3, CH3CHO, CH3OH, and C2H5CN and derived their rotation temperatures, column densities, and abundances under the assumption of local thermodynamic equilibrium. Results. CH3COCH3 is detected in 16 line-rich cores from 9 massive protostellar clumps: 12 high-mass cores, 3 intermediate-mass cores, and 1 low-mass core. CH3CHO and CH3OH are also detected in all 16 cores, while C2H5CN is detected in 15. The integrated intensity maps reveal similar spatial distributions for CH3COCH3, CH3CHO, CH3OH, and C2H5CN. The line emission peaks of all four molecules coincide with the continuum emission peaks in regions without ultracompact HII regions. Significant correlations are observed in the abundances of these molecules, which also exhibit similar average temperatures. Conclusions. Our observational results, supported by chemical models, suggest that CH3COCH3, CH3CHO, and CH3OH originate from the same gas. The observed temperatures and abundances of CH3COCH3 are consistent with model predictions involving grain surface chemistry. | arxiv:2503.07052 |
We discuss the morphology, photometry, and kinematics of the bars which have formed in three N-body simulations. These initially have the same disc and the same halo-to-disc mass ratio, but their haloes have very different central concentrations. The third model also includes a bulge. The bar in the model with the centrally concentrated halo (model MH) is much stronger, longer, and thinner than the bar in the model with the less centrally concentrated halo (model MD). Its shape, when viewed side-on, evolves from boxy to peanut and then to X-shaped, as opposed to that of model MD, which stays boxy. The projected density profiles obtained from cuts along the bar major axis, both for the face-on and edge-on views, show a flat part, as opposed to those of model MD, which fall rapidly. A Fourier analysis of the face-on density distribution of model MH shows very large m = 2, 4, 6, and 8 components. Contrary to this, for model MD the components m = 6 and 8 are negligible. The velocity field of model MH shows strong deviations from axial symmetry, and in particular has wavy isovelocities near the end of the bar when viewed along the bar minor axis. When viewed edge-on, it shows cylindrical rotation, which the MD model does not. The properties of the bar of the model with a bulge and a non-centrally concentrated halo (MDB) are intermediate between those of the bars of the other two models. All three models exhibit considerable inflow of disc material during their evolution, so that by the end of the simulations the disc dominates over the halo in the inner parts, even for model MH, for which the halo and disc contributions were initially comparable in that region. | arxiv:astro-ph/0111449 |
The analysis of ancient coins, and especially the identification of those struck with the same die, provides invaluable information for archaeologists and historians. Nowadays, these die links are identified manually, which makes the process laborious, if not impossible, when large hoards are discovered, as the number of comparisons is too great. This study introduces advances that promise to streamline and enhance archaeological coin analysis. Our contributions include: 1) the first publicly accessible labeled dataset of coin pictures (329 images) for die link detection, facilitating method benchmarking; 2) a novel SSIM-based scoring method for rapid and accurate discrimination of coin pairs, outperforming current techniques used in this research field; 3) an evaluation of clustering techniques using our score, demonstrating near-perfect die link identification. We provide the datasets to foster future research and the development of even more powerful tools for archaeology, and more particularly for numismatics. | arxiv:2502.01186 |
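The SSIM statistic underlying the scoring method above can be sketched in a few lines. This is the single-window (global) form of SSIM on flattened toy "images"; the standard definition averages the same statistic over local sliding windows, and the paper's exact scoring pipeline is not reproduced here:

```python
def ssim_global(x, y, c1=0.01**2, c2=0.03**2):
    """Single-window SSIM between two equal-size grayscale images
    (flattened, intensities in [0, 1]). Identical images score 1.0."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))

# Toy flattened "coin images" (hypothetical pixel values).
coin_a = [0.1, 0.8, 0.3, 0.9, 0.2, 0.7]
coin_b = [0.1, 0.8, 0.3, 0.9, 0.2, 0.7]   # same die: identical pattern
coin_c = [0.9, 0.1, 0.8, 0.2, 0.7, 0.3]   # different die

assert abs(ssim_global(coin_a, coin_b) - 1.0) < 1e-12
assert ssim_global(coin_a, coin_c) < 0.5  # dissimilar pair scores much lower
```

Thresholding or clustering such pairwise scores is then what turns a similarity statistic into candidate die links.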
We propose a new class of goodness-of-fit tests for the logistic distribution based on a characterisation related to the density approach in the context of Stein's method. This characterisation-based test is the first of its kind for the logistic distribution. The asymptotic null distribution of the test statistic is derived, and it is shown that the test is consistent against fixed alternatives. The finite-sample power performance of the newly proposed class of tests is compared to various existing tests by means of a Monte Carlo study. It is found that this new class of tests is especially powerful when the alternative distributions are heavy-tailed, like Student's t and Cauchy, or skewed, such as the log-normal, gamma, and chi-square distributions. | arxiv:2108.07036 |
In a paper entitled "Binary Lambda Calculus and Combinatory Logic", John Tromp presents a simple way of encoding lambda calculus terms as binary sequences. In what follows, we study the number of binary strings of a given size that represent lambda terms and derive results from their generating functions, in particular that the number of terms of size n grows roughly like 1.963447954...^n. In a second part, we use this approach to generate random lambda terms using Boltzmann samplers. | arxiv:1511.05334 |
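The counting problem above admits a direct recurrence once Tromp's size function is written out: a de Bruijn variable $i$ is encoded as $1^i0$ (size $i+1$), an abstraction as $00$ plus its body, and an application as $01$ plus both subterms. The memoized sketch below is an independent reconstruction from that encoding, not the paper's generating-function machinery:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def terms(n, m):
    """Number of binary strings of length n encoding lambda terms whose
    free de Bruijn indices are at most m (Tromp's encoding: variable i
    is 1^i 0, abstraction is 00<body>, application is 01<t1><t2>)."""
    if n < 2:
        return 0
    total = 1 if n - 1 <= m else 0             # a single variable of index n-1
    total += terms(n - 2, m + 1)               # abstraction binds one more index
    total += sum(terms(k, m) * terms(n - 2 - k, m)   # application: split body
                 for k in range(2, n - 3))
    return total

# Closed terms (m = 0): the smallest is \x.x (0010, 4 bits), then \x.\y.y.
assert [terms(n, 0) for n in range(4, 8)] == [1, 0, 1, 1]
# The ratio of consecutive counts approaches the growth rate ~1.9634 quoted above.
assert 1.85 < terms(60, 0) / terms(59, 0) < 2.05
```

The same table of counts is exactly what a Boltzmann sampler needs to draw random terms with size-proportional probabilities.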
A new and efficient hybrid neural-network and finite-difference method is developed for solving the Poisson equation in a regular domain with jump discontinuities on embedded irregular interfaces. Since the solution has low regularity across the interface, when applying finite-difference discretization to this problem, an additional treatment accounting for the jump discontinuities must be employed. Here, we aim to ease the implementation of such extra effort via machine learning methodology. The key idea is to decompose the solution into singular and regular parts. The neural-network learning machinery, incorporating the given jump conditions, finds the singular solution, while the standard five-point Laplacian discretization is used to obtain the regular solution with associated boundary conditions. Regardless of the interface geometry, these two tasks require only supervised learning for function approximation and a fast direct solver for the Poisson equation, making the hybrid method easy to implement and efficient. Two- and three-dimensional numerical results show that the present hybrid method preserves second-order accuracy for the solution and its derivatives, and is comparable with the traditional immersed interface method in the literature. As an application, we solve the Stokes equations with singular forces to demonstrate the robustness of the present method. | arxiv:2210.05523 |
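The "regular part" of the method above is a standard finite-difference Poisson solve. As a minimal, self-contained analogue (1D rather than the paper's 2D/3D five-point stencil, and with no interface), the sketch below solves $-u'' = f$ with the second-order three-point stencil and the Thomas tridiagonal algorithm, and checks second-order accuracy against a manufactured solution:

```python
import math

def solve_poisson_1d(f, a, b, ua, ub, n):
    """Solve -u'' = f on (a, b) with u(a)=ua, u(b)=ub on n interior points,
    using the three-point stencil and the Thomas tridiagonal algorithm."""
    h = (b - a) / (n + 1)
    x = [a + (i + 1) * h for i in range(n)]
    rhs = [f(xi) * h * h for xi in x]
    rhs[0] += ua            # fold Dirichlet boundary values into the RHS
    rhs[-1] += ub
    # Forward sweep for the tridiagonal system with bands (-1, 2, -1).
    c, d = [0.0] * n, [0.0] * n
    c[0], d[0] = -0.5, rhs[0] / 2.0
    for i in range(1, n):
        denom = 2.0 + c[i - 1]
        c[i] = -1.0 / denom
        d[i] = (rhs[i] + d[i - 1]) / denom
    u = [0.0] * n           # back substitution
    u[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        u[i] = d[i] - c[i] * u[i + 1]
    return x, u

# Manufactured solution u(x) = sin(pi x), so f = pi^2 sin(pi x).
x, u = solve_poisson_1d(lambda t: math.pi**2 * math.sin(math.pi * t),
                        0.0, 1.0, 0.0, 0.0, 99)
err = max(abs(ui - math.sin(math.pi * xi)) for xi, ui in zip(x, u))
assert err < 1e-3  # O(h^2) accuracy at h = 1/100
```

In the hybrid method, the learned singular part supplies corrected jump data so that a fast solver of exactly this kind can be applied unchanged.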
Grand unified theories (GUTs) predict proton decay as well as the formation of cosmic strings, which can generate gravitational waves. We determine which non-supersymmetric $SO(10)$ breaking chains provide gauge unification in addition to a gravitational signal from cosmic string decay. We calculate the GUT and intermediate scales for these $SO(10)$ breaking chains by solving the renormalisation group equations at the two-loop level. This analysis predicts the GUT scale, and hence the proton lifetime, in addition to the scale of cosmic string generation and thus the associated gravitational wave signal. We determine which $SO(10)$ breaking chains survive in the event of null results from the next generation of gravitational wave and proton decay searches, and determine the correlations between the proton decay and gravitational wave scales if these observables are measured. | arxiv:2106.15634 |
In online facility assignment on a line (OFAL) with a set $S$ of $k$ servers and a capacity $c: S \to \mathbb{N}$, each server $s \in S$ with capacity $c(s)$ is placed on a line, and requests arrive on the line one by one. The task of an online algorithm is to irrevocably assign the current request to one of the servers with vacancies before the next request arrives. An algorithm can assign up to $c(s)$ requests to each server $s \in S$. In this paper, we show that the competitive ratio of the permutation algorithm is at least $k+1$ for OFAL where the servers are evenly placed on the line. This disproves the claim of Ahmed et al. that the permutation algorithm is $k$-competitive. | arxiv:2402.12734 |
We consider linear slices of the space of Kleinian once-punctured torus groups; a linear slice is obtained by fixing the value of the trace of one of the generators. The linear slice for trace 2 is called the Maskit slice. We show that if traces converge "horocyclically" to 2, then the associated linear slices converge to the Maskit slice, whereas if the traces converge "tangentially" to 2, the linear slices converge to a proper subset of the Maskit slice. This result is also rephrased in terms of complex Fenchel-Nielsen coordinates. In addition, we show that there is a linear slice which is not locally connected. | arxiv:1303.7324 |
We study the relationship between the loop problem of a semigroup and that of a Rees matrix construction (with or without zero) over the semigroup. This allows us to characterize exactly those completely zero-simple semigroups for which the loop problem is context-free. We also establish some results concerning loop problems for subsemigroups and Rees quotients. | arxiv:math/0702691 |
The generation of synthetic natural gas from renewable electricity enables long-term energy storage and provides clean fuels for transportation. In this article, we analyze fully renewable power-to-methane systems using a high-resolution energy system optimization model applied to two regions within Europe. The optimal system layout and operation depend on the availability of natural resources, which varies between locations and years. We find that much more wind than solar power is used, while the use of an intermediate battery electric storage system has little effect. The resulting levelized costs of methane vary between 0.24 and 0.30 euro/kWh, and the economically optimal utilization rate between 63% and 78%. We further discuss how the economic competitiveness of power-to-methane systems can be improved by technical developments and by the use of co-products such as oxygen and curtailed electricity. A sensitivity analysis reveals that the interest rate has the highest influence on the levelized costs, followed by the investment costs for wind power and the electrolyzer stack. | arxiv:2002.06007 |
The randomized Kaczmarz algorithm is a randomized method which aims at solving a consistent system of overdetermined linear equations. This note discusses how to find an optimized randomization scheme for this algorithm, which is related to the question raised by \cite{c2}. Illustrative experiments are conducted to support the findings. | arxiv:1402.2863 |
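For reference, the baseline randomization scheme being optimized is the classical Strohmer-Vershynin one: sample each row with probability proportional to its squared norm, then project onto that row's hyperplane. A minimal sketch (the note's own optimized scheme is not reproduced here):

```python
import random

def randomized_kaczmarz(A, b, iters=3000, seed=0):
    """Strohmer-Vershynin randomized Kaczmarz for a consistent system Ax = b:
    sample row i with probability ||a_i||^2 / ||A||_F^2, then project the
    current iterate onto the hyperplane <a_i, x> = b_i."""
    rng = random.Random(seed)
    m, n = len(A), len(A[0])
    norms2 = [sum(v * v for v in row) for row in A]
    x = [0.0] * n
    for _ in range(iters):
        i = rng.choices(range(m), weights=norms2)[0]
        resid = b[i] - sum(A[i][j] * x[j] for j in range(n))
        scale = resid / norms2[i]
        for j in range(n):
            x[j] += scale * A[i][j]
    return x

# Consistent overdetermined 4x2 system with exact solution (1, -2).
A = [[2.0, 1.0], [1.0, 3.0], [1.0, -1.0], [3.0, 1.0]]
x_true = [1.0, -2.0]
b = [sum(aij * xj for aij, xj in zip(row, x_true)) for row in A]
x = randomized_kaczmarz(A, b)
assert max(abs(xi - ti) for xi, ti in zip(x, x_true)) < 1e-6
```

Choosing different sampling weights changes the expected convergence rate, which is precisely the degree of freedom the note studies.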
We report improved measurements of the branching fraction and decay constant $f_{D^+}$ in $D^+ \to \mu^+\nu$ using 281 pb$^{-1}$ of data taken on the $\psi(3770)$ resonance with the CLEO-c detector. We extract a relatively precise value for the decay constant of the $D^+$ meson by measuring BR$(D^+ \to \mu^+\nu) = (4.40 \pm 0.66\,^{+0.09}_{-0.12}) \times 10^{-4}$ and find $f_{D^+} = (222.6 \pm 16.7\,^{+2.8}_{-3.4})$ MeV. We also set a 90% confidence upper limit BR$(D^+ \to e^+\nu) < 2.4 \times 10^{-5}$, which limits contributions from non-standard-model physics. | arxiv:hep-ex/0512047 |
Graph computational tasks are inherently challenging and often demand the development of advanced algorithms for effective solutions. With the emergence of large language models (LLMs), researchers have begun investigating their potential to address these tasks. However, existing approaches are constrained by LLMs' limited capability to comprehend complex graph structures and by their high inference costs, rendering them impractical for handling large-scale graphs. Inspired by human approaches to graph problems, we introduce a novel framework, PIE (Pseudocode-Injection-Enhanced LLM Reasoning for Graph Computational Tasks), which consists of three key steps: problem understanding, prompt design, and code generation. In this framework, LLMs are tasked with understanding the problem and extracting relevant information to generate correct code. The responsibility for analyzing the graph structure and executing the code is delegated to the interpreter. We inject task-related pseudocode into the prompts to further assist the LLMs in generating efficient code. We also employ cost-effective trial-and-error techniques to ensure that the LLM-generated code executes correctly. Unlike other methods that require invoking LLMs for each individual test case, PIE only calls the LLM during the code generation phase, allowing the generated code to be reused and significantly reducing inference costs. Extensive experiments demonstrate that PIE outperforms existing baselines in terms of both accuracy and computational efficiency. | arxiv:2501.13731 |
We study the effect of two modified gravity (MG) theories, $f(R)$ and nDGP, on three probes of large-scale structure, the real-space power spectrum estimator $Q_0$, the bispectrum, and voids, and validate fast approximate COLA simulations against full $N$-body simulations for the prediction of these probes. We find that using the first three even multipoles of the redshift-space power spectrum to estimate $Q_0$ is enough to reproduce the MG boost factors of the real-space power spectrum for both halo and galaxy catalogues. By analysing the bispectrum and reduced bispectrum of dark matter (DM), we show that the strong MG signal present in the DM bispectrum is mainly due to the enhanced power spectrum. We warn against adopting screening approximations in simulations, as this neglects non-linear contributions that can source a significant component of the MG bispectrum signal at the DM level, but we argue that this is not a problem for the bispectrum of galaxies in redshift space, where the signal is dominated by the non-linear galaxy bias. Finally, we perform void-finding on our galaxy mock catalogues with the ZOBOV watershed algorithm. To apply a linear model for redshift-space distortions (RSD) in the void-galaxy cross-correlation function, we first examine the effects of MG on the void profiles entering the RSD model. We find relevant MG signals in the integrated-density, velocity-dispersion, and radial-velocity profiles in the nDGP theory. Fitting the RSD model for the linear growth rate, we recover the linear-theory prediction in the nDGP model, which is larger than the $\Lambda$CDM prediction at the $3\sigma$ level. In $f(R)$ theory we cannot naively compare the results of the fit with the linear-theory prediction, as this is scale-dependent, but we obtain results that are consistent with the $\Lambda$CDM prediction. | arxiv:2208.01345 |
Protein structure similarity search (PSSS), which aims to find proteins with similar structures, plays a crucial role across diverse domains, from drug design to protein function prediction and molecular evolution. Traditional alignment-based PSSS methods, which directly compute alignments on the protein structures, are highly time-consuming with high memory cost. Recently, alignment-free methods, which represent protein structures as fixed-length real-valued vectors, have been proposed for PSSS. Although these methods have lower time and memory cost than alignment-based methods, their time and memory cost is still too high for large-scale PSSS, and their accuracy is unsatisfactory. In this paper, we propose a novel method, called $\underline{\text{P}}$r$\underline{\text{o}}$tein $\underline{\text{S}}$tructure $\underline{\text{H}}$ashing (POSH), for PSSS. POSH learns a binary vector representation for each protein structure, which can dramatically reduce the time and memory cost of PSSS compared with methods based on real-valued vector representations. Furthermore, in POSH we also propose expressive hand-crafted features and a structure encoder to model both node and edge interactions in proteins. Experimental results on real datasets show that POSH can outperform other methods, achieving state-of-the-art accuracy. Furthermore, POSH achieves a memory saving of more than six times and a speed improvement of more than four times compared with other methods. | arxiv:2411.08286 |
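The speed and memory argument above rests on a generic property of binary codes: distances become XOR plus popcount on packed integers instead of floating-point arithmetic on long real vectors. A minimal sketch with hypothetical 8-bit codes (real systems learn far longer codes, and POSH's encoder is not reproduced here):

```python
def hamming(a, b):
    """Hamming distance between two equal-length binary codes packed as ints."""
    return bin(a ^ b).count("1")

def nearest(query, codes):
    """Linear scan over packed binary codes; XOR + popcount replaces the
    floating-point distance computations needed for real-valued vectors."""
    return min(range(len(codes)), key=lambda i: hamming(query, codes[i]))

# Hypothetical 8-bit codes for four database proteins and one query.
db = [0b10110100, 0b10110011, 0b01001010, 0b11111111]
query = 0b10110101
assert nearest(query, db) == 0  # one bit from db[0], two bits from db[1]
```

Because each code occupies a few machine words, both the six-fold memory saving and the multi-fold speedup reported above are plausible consequences of this representation alone.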
the belief propagation ( bp ) algorithm is widely applied to perform approximate inference on arbitrary graphical models, in part due to its excellent empirical properties and performance. however, little is known theoretically about when this algorithm will perform well. using recent analysis of convergence and stability properties in bp and new results on approximations in binary systems, we derive a bound on the error in bp ' s estimates for pairwise markov random fields over discrete valued random variables. our bound is relatively simple to compute, and compares favorably with a previous method of bounding the accuracy of bp. | arxiv:1206.5277 |
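The abstract above concerns error bounds for bp rather than the algorithm itself, but a minimal runnable sketch of sum-product bp on a pairwise mrf may clarify what is being bounded. The three-node chain, the potentials, and the iteration count below are illustrative assumptions, not taken from the paper; on a tree like this one, bp's beliefs coincide with the exact marginals, which the brute-force check confirms.

```python
import numpy as np
from itertools import product

# Toy pairwise MRF: a 3-node chain of binary variables with unary potentials
# `phi` and pairwise potentials `psi[(i, j)][x_i, x_j]`.
phi = [np.array([0.7, 0.3]), np.array([0.5, 0.5]), np.array([0.2, 0.8])]
edges = [(0, 1), (1, 2)]
psi = {e: np.array([[1.2, 0.5], [0.5, 1.2]]) for e in edges}

def loopy_bp(phi, edges, psi, iters=50):
    # msgs[(i, j)] is the message from node i to node j (a length-2 vector)
    msgs = {(i, j): np.ones(2) for a, b in edges for i, j in [(a, b), (b, a)]}
    nbrs = {n: [] for n in range(len(phi))}
    for a, b in edges:
        nbrs[a].append(b)
        nbrs[b].append(a)
    for _ in range(iters):
        new = {}
        for (i, j) in msgs:
            # product of node i's unary potential and incoming messages, excluding j's
            incoming = phi[i].copy()
            for k in nbrs[i]:
                if k != j:
                    incoming *= msgs[(k, i)]
            pairwise = psi[(i, j)] if (i, j) in psi else psi[(j, i)].T
            m = incoming @ pairwise        # sum over x_i of incoming * psi(x_i, x_j)
            new[(i, j)] = m / m.sum()
        msgs = new
    beliefs = []
    for i in range(len(phi)):
        b = phi[i].copy()
        for k in nbrs[i]:
            b *= msgs[(k, i)]
        beliefs.append(b / b.sum())
    return beliefs

def exact_marginals(phi, edges, psi):
    # brute force over all joint configurations, for validation only
    n = len(phi)
    p = np.zeros((2,) * n)
    for x in product([0, 1], repeat=n):
        w = np.prod([phi[i][x[i]] for i in range(n)])
        for (a, b) in edges:
            w *= psi[(a, b)][x[a], x[b]]
        p[x] = w
    p /= p.sum()
    return [p.sum(axis=tuple(j for j in range(n) if j != i)) for i in range(n)]

bp = loopy_bp(phi, edges, psi)
ex = exact_marginals(phi, edges, psi)
```

For graphs with cycles the same update rule yields only approximate beliefs, which is exactly the regime the paper's bound addresses.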
phase - flip bifurcation plays an important role in the transition to the synchronization state in unidirectionally coupled parametrically excited pendula. in coupled identical systems it is the cause of complete synchronization, whereas in the case of coupled non - identical systems it causes desynchronization. in coupled identical systems, negativity of the conditional lyapunov exponent is not always sufficient for complete synchronization. in the complete synchronization state, the largest conditional lyapunov exponent and the second largest lyapunov exponent are equal in magnitude and slope. | arxiv:1801.05572 |
in this work we re - examine the opacity of the cosmic background radiation to the propagation of extremely high energy cosmic rays. we use the continuous energy loss approximation to provide spectral modification factors for several hypothesized cosmic ray sources. earlier problems with this approximation are resolved including the effects of resonances other than the $ \ delta $. | arxiv:hep-ph/9704387 |
the general method is proposed for constructing a family of martingale measures for a wide class of evolution of risky assets. the sufficient conditions are formulated for the evolution of risky assets under which the family of equivalent martingale measures to the original measure is a non - empty set. the set of martingale measures is constructed from a set of strictly nonnegative random variables satisfying certain conditions. the inequalities are obtained for the non - negative random variables satisfying certain conditions. using these inequalities, a new simple proof of the optional decomposition theorem for the nonnegative super - martingale is proposed. the family of spot measures is introduced and the representation is found for them. the conditions are found under which each martingale measure is an integral over the set of spot measures. on the basis of nonlinear processes such as arch and garch, the parametric family of random processes is introduced for which the interval of non - arbitrage prices is found. the formula is obtained for the fair price of the contract with an option of european type for the considered parametric processes. the parameters of the introduced random processes are estimated and the estimate is found at which the fair price of the contract with the option is the least. | arxiv:2010.13630 |
quantum machine learning ( qml ) has been identified as one of the key fields that could reap advantages from near - term quantum devices, next to optimization and quantum chemistry. research in this area has focused primarily on variational quantum algorithms ( vqas ), and several proposals to enhance supervised, unsupervised and reinforcement learning ( rl ) algorithms with vqas have been put forward. out of the three, rl is the least studied and it is still an open question whether vqas can be competitive with state - of - the - art classical algorithms based on neural networks ( nns ) even on simple benchmark tasks. in this work, we introduce a training method for parametrized quantum circuits ( pqcs ) that can be used to solve rl tasks for discrete and continuous state spaces based on the deep q - learning algorithm. we investigate which architectural choices for quantum q - learning agents are most important for successfully solving certain types of environments by performing ablation studies for a number of different data encoding and readout strategies. we provide insight into why the performance of a vqa - based q - learning algorithm crucially depends on the observables of the quantum model and show how to choose suitable observables based on the learning task at hand. to compare our model against the classical dqn algorithm, we perform an extensive hyperparameter search of pqcs and nns with varying numbers of parameters. we confirm that similar to results in classical literature, the architectural choices and hyperparameters contribute more to the agents ' success in a rl setting than the number of parameters used in the model. finally, we show when recent separation results between classical and quantum agents for policy gradient rl can be extended to inferring optimal q - values in restricted families of environments. | arxiv:2103.15084 |
a global diabatization scheme, based on the ` ` valence - hole ' ' concept, has been previously applied to model webs of avoided - crossings that exist in four electronic - state symmetry manifolds of c $ _ 2 $ ( $ ^ 1 \ pi _ g $, $ ^ 3 \ pi _ g $, $ ^ 1 \ sigma _ u ^ + $, $ ^ 3 \ sigma _ u ^ + $ ). here, this model is extended to the electronically excited states of four more molecules : cn ( $ ^ 2 \ sigma ^ + $ ), n $ _ 2 $ ( $ ^ 3 \ pi _ u $ ), sic ( $ ^ 3 \ pi $ ), and si $ _ 2 $ ( $ ^ 3 \ pi _ g $ ). many peculiarities in the spectroscopic observations ( e. g., energy level structure, predissociation linewidths, and radiative lifetimes ) for all four electronic state systems discussed here are accounted for by this $ unified $ model. the key concept of the model is valence - hole electron configurations : $ 3 \ sigma ^ 2 4 \ sigma 1 \ pi ^ 4 5 \ sigma ^ 2 $ in cn, $ 2 \ sigma _ g ^ 2 2 \ sigma _ u ^ 1 1 \ pi _ { u } ^ 4 3 \ sigma _ g ^ 2 1 \ pi _ { g } ^ 1 $ in n $ _ 2 $, $ 5 \ sigma ^ 2 6 \ sigma 7 \ sigma ^ 2 2 \ pi ^ 3 $ in sic, and $ 4 \ sigma _ g ^ 2 4 \ sigma _ u ^ 1 5 \ sigma _ g ^ 2 2 \ pi _ { u } ^ 3 $ in si $ _ 2 $. these valence - hole configurations have a nominal bond order of three or higher, and correlate with high - energy separated - atom limits with an np $ \ leftarrow $ ns ( n = 2, 3 ) promotion in $ one $ of the atomic constituents. this promotion results in a triply - occupied ` ` valence - core ' ' ( i. e., $ 2 \ sigma _ g ^ 2 2 \ sigma _ u ^ 1 $ or the equivalent ). on its way to dissociation, the strongly - bound diabatic valence - hole state crosses multiple weakly - bound or repulsive states, which belong to electron configurations with a completely - filled valence - core. these curve - crossings between diabatic potentials result in a network of many | arxiv:2401.08122 |
our recent theoretical developments related to the nonrelativistic and relativistic fluctuation - electromagnetic interactions of bodies with different temperatures moving translationally and ( or ) rotationally relative to each other are briefly summarized. three basic geometrical configurations and physical systems are discussed : " a small particle and a thick plate " ( i ), " a small particle in radiation vacuum background " ( ii ), and " two thick plates in relative motion " ( iii ) - the classical casimir - lifshitz configuration with allowance for relative motion and different temperatures of the plates. for the third configuration, it is shown that the theory of friction and heat exchange by levin - polevoi - rytov proves to be quite adequate, contrary to the settled point of view of many authors. | arxiv:1912.04678 |
when considering perceptions, the observation scale and resolution are closely related properties. there is consensus in considering resolution as the density of elementary pieces of information in a specified information space. in contrast, for the concept of scale, several conceptions compete for a consistent meaning. scale is typically regarded as a way to indicate the degree of detail in which an observation is performed. but surprisingly, there is not a unified definition of scale as a description ' s property. this paper offers a precise definition of scale, and a method to quantify it as a property associated with the interpretation of a description. to complete the parameters needed to describe the perception of a description, the concepts of scope and resolution are also given an exact meaning. a model describing the recursive process of interpretation, based on evolving steps of scale, scope and resolution, is introduced. the model relies on the conception of observation scale and its association with the selection of symbols. five experiments illustrate the application of these concepts, showing that resolution, scale and scope together constitute the set of properties needed to define any point of view from which an observation is performed and interpreted. | arxiv:1701.09040 |
we give a new, elementary proof of a key inequality used by rudelson in the derivation of his well - known bound for random sums of rank - one operators. our approach is based on ahlswede and winter ' s technique for proving operator chernoff bounds. we also prove a concentration inequality for sums of random matrices of rank one with explicit constants. | arxiv:1004.3821 |
in the nineties immerman and medina initiated the search for syntactic tools to prove np - completeness. in their work, amongst several results, they conjecture that the np - completeness of a problem defined by the conjunction of a sentence in existential second order logic with a first order sentence necessarily implies the np - completeness of the problem defined by the existential second order sentence alone. this is interesting because if true it would justify the restriction heuristic proposed by garey and johnson in their classical book on np - completeness, which roughly says that in some cases one can prove a problem a to be np - complete by proving np - completeness of a problem b contained in a. borges and bonet extend some results from immerman and medina and they also prove for a host of complexity classes that the immerman - medina conjecture is true when the first order sentence in the conjunction is universal. our work extends that result to the second level of the polynomial - time hierarchy. | arxiv:1707.09327 |
by the classical li - yau inequality, an immersion of a closed surface in $ \ mathbb { r } ^ n $ with willmore energy below $ 8 \ pi $ has to be embedded. we discuss analogous results for curves in $ \ mathbb { r } ^ 2 $, involving euler ' s elastic energy and other possible curvature functionals. additionally, we provide applications to associated gradient flows. | arxiv:2101.08509 |
crush curves are of fundamental importance to numerical modeling of small and porous astrophysical bodies. the empirical literature often measures them for silica grains, and different studies have used various methods, sizes, textures, and pressure conditions. here we review past studies and supplement further experiments in order to develop a full and overarching understanding of the silica crush curve behavior. we suggest a new power - law function that can be used in impact simulations of analog materials similar to micro - granular silica. we perform a benchmarking study to compare this new crush curve to the parametric quadratic crush curve often used in other studies, based on the study case of the dart impact onto the asteroid dimorphos. we find that the typical quadratic crush curve parameters do not closely follow the silica crushing experiments, and as a consequence they under ( over ) estimate compression close ( far ) from the impact site. the new crush curve presented here, applicable to pressures between a few hundred pa and up to 1. 1 gpa, might therefore be more precise. additionally, it is not calibrated by case - specific parameters, and can be used universally for comet - or asteroid - like bodies, given an assumed composition similar to micro - granular silica. | arxiv:2408.04014 |
we study reinforcement learning ( rl ) in the setting of continuous time and space, for an infinite horizon with a discounted objective and the underlying dynamics driven by a stochastic differential equation. built upon recent advances in the continuous approach to rl, we develop a notion of occupation time ( specifically for a discounted objective ), and show how it can be effectively used to derive performance - difference and local - approximation formulas. we further extend these results to illustrate their applications in the pg ( policy gradient ) and trpo / ppo ( trust region policy optimization / proximal policy optimization ) methods, which have been familiar and powerful tools in the discrete rl setting but under - developed in continuous rl. through numerical experiments, we demonstrate the effectiveness and advantages of our approach. | arxiv:2305.18901 |
during thunderstorms, the atmospheric electric field can increase above hundreds of kv / m, causing an acceleration in the charged particles of secondary cosmic rays. such an acceleration causes avalanche processes in the atmosphere, enhancing / reducing the particle flux at ground level depending on the strength / polarity of the electric field. we present the design and implementation of a self - triggered and fast - recording lightning monitoring system used to study the effect of transient atmospheric electric fields on the secondary particle flux above cosmic ray observatories. the acquisition device records the lightning electric field at 10 $ \ mu $ s resolution ( during 1. 2 s per event ), covering a detection range of up to 200 km ( $ i _ { peak } > $ 100 ka ). | arxiv:2205.00311 |
an unsupervised, iterative point - set registration algorithm for an unlabeled ( i. e. correspondence between points is unknown ) n - dimensional euclidean point - cloud is proposed. it is based on linear least squares, and considers all possible point pairings and iteratively aligns the two sets until the number of point pairs does not exceed the maximum number of allowable one - to - one pairings. | arxiv:1908.04384 |
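As a hedged illustration of iterative least-squares alignment of unlabeled point clouds, the sketch below uses the standard nearest-neighbour plus Kabsch (icp-style) variant rather than the all-pairings scheme of the paper; the grid data, the iteration count, and the noise-free setting are our own assumptions.

```python
import numpy as np

def best_rigid(A, B):
    """Least-squares rotation R and translation t mapping the rows of A onto B."""
    ca, cb = A.mean(0), B.mean(0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0] * (A.shape[1] - 1) + [d])   # guard against reflections
    R = Vt.T @ D @ U.T
    t = cb - R @ ca
    return R, t

def icp(src, dst, iters=30):
    cur = src.copy()
    for _ in range(iters):
        # correspondence step: nearest neighbour in dst for each current point
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # alignment step: best rigid transform for the current pairing
        R, t = best_rigid(cur, matched)
        cur = cur @ R.T + t
    return cur

# well-separated 5x5 grid, then a small rotation + shift with shuffled rows,
# so the correspondence between the two sets is unknown to the algorithm
xs = np.arange(5.0)
dst = np.array([[x, y] for x in xs for y in xs])
theta = 0.05
R0 = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
rng = np.random.default_rng(0)
src = (dst @ R0.T + np.array([0.1, -0.05]))[rng.permutation(len(dst))]
aligned = icp(src, dst)
```

Because the perturbation is small relative to the grid spacing, the very first nearest-neighbour pairing is already correct and the least-squares step recovers the transform exactly; larger misalignments can trap this variant in local minima.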
learning domain - invariant semantic representations is crucial for achieving domain generalization ( dg ), where a model is required to perform well on unseen target domains. one critical challenge is that standard training often results in entangled semantic and domain - specific features. previous works suggest formulating the problem from a causal perspective and solving the entanglement problem by enforcing marginal independence between the causal ( \ ie semantic ) and non - causal ( \ ie domain - specific ) features. despite its simplicity, the basic marginal independent - based idea alone may be insufficient to identify the causal feature. by d - separation, we observe that the causal feature can be further characterized by being independent of the domain conditioned on the object, and we propose the following two strategies as complements for the basic framework. first, the observation implicitly implies that for the same object, the causal feature should not be associated with the non - causal feature, revealing that the common practice of obtaining the two features with a shared base feature extractor and two lightweight prediction heads might be inappropriate. to meet the constraint, we propose a simple early - branching structure, where the causal and non - causal feature obtaining branches share the first few blocks while diverging thereafter, for better structure design ; second, the observation implies that the causal feature remains invariant across different domains for the same object. to this end, we suggest that augmentation should be incorporated into the framework to better characterize the causal feature, and we further suggest an effective random domain sampling scheme to fulfill the task. theoretical and experimental results show that the two strategies are beneficial for the basic marginal independent - based framework. code is available at \ url { https : / / github. com / liangchen527 / causeb }. | arxiv:2403.08649 |
dataset pruning aims to select a subset of a dataset for efficient model training. while data efficiency in natural language processing has primarily focused on within - corpus scenarios during model pre - training, efficient dataset pruning for task - specific fine - tuning across diverse datasets remains challenging due to variability in dataset sizes, data distributions, class imbalance and label spaces. current cross - dataset pruning techniques for fine - tuning often rely on computationally expensive sample ranking processes, typically requiring full dataset training or reference models. we address this gap by proposing swift cross - dataset pruning ( scdp ). specifically, our approach uses tf - idf embeddings with geometric median to rapidly evaluate sample importance. we then apply dataset size - adaptive pruning to ensure diversity : for smaller datasets, we retain samples far from the geometric median, while for larger ones, we employ distance - based stratified pruning. experimental results on six diverse datasets demonstrate the effectiveness of our method, spanning various tasks and scales while significantly reducing computational resources. source code is available at : https : / / github. com / he - y / nlp - dataset - pruning | arxiv:2501.02432 |
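A toy sketch of the pipeline the abstract describes: tf-idf embeddings, a geometric median computed with Weiszfeld's fixed-point iteration, and distance-based selection. The tiny corpus, the "keep the farthest" rule for this small-dataset case, and all names are illustrative assumptions, not the released scdp code.

```python
import numpy as np

def tfidf(docs):
    # minimal TF-IDF: term frequency normalized per document, times log-idf
    vocab = sorted({w for d in docs for w in d.split()})
    idx = {w: i for i, w in enumerate(vocab)}
    tf = np.zeros((len(docs), len(vocab)))
    for r, d in enumerate(docs):
        for w in d.split():
            tf[r, idx[w]] += 1
    tf /= tf.sum(axis=1, keepdims=True)
    df = (tf > 0).sum(axis=0)
    return tf * np.log(len(docs) / df)

def geometric_median(X, iters=100):
    # Weiszfeld's fixed-point iteration, started from the mean
    y = X.mean(axis=0)
    for _ in range(iters):
        d = np.maximum(np.linalg.norm(X - y, axis=1), 1e-12)
        y = (X / d[:, None]).sum(axis=0) / (1.0 / d).sum()
    return y

def prune_far_from_median(docs, keep):
    X = tfidf(docs)
    dist = np.linalg.norm(X - geometric_median(X), axis=1)
    order = np.argsort(-dist)      # farthest first (the small-dataset rule above)
    return sorted(order[:keep].tolist())

docs = ["the cat sat", "the cat sat", "the cat sat",
        "quantum gravity paper", "stock market report"]
kept = prune_far_from_median(docs, keep=2)
```

With three duplicate documents, the geometric median sits on the duplicated embedding, so the two distinctive documents are the ones retained.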
based on the slave - boson theory of the two - dimensional $ t - t ' - j $ model, we calculate the superconducting condensation energy for optimally doped and overdoped high $ t _ { c } $ cuprates at finite temperatures using a renormalized random phase approximation. the contributions come from the mean - field part and the antiferromagnetic spin fluctuations. in the presence of neutron resonance peak at $ ( \ pi, \ pi ) $, the latter is shown to have similar temperature and doping dependences as the difference in antiferromagnetic ( af ) exchange energy between the normal and the superconducting state. this difference has been proposed to be related to the superconducting condensation energy by scalapino and white. the total condensation energy, however, is about 1 / 2 smaller than the proposed af exchange energy difference and shows a more rapid decrease as the temperature rises. the doping dependence of the condensation energy is found to be consistent with experiments. in particular, near zero temperature, our result shows a good quantitative agreement with experiments. | arxiv:cond-mat/0102448 |
puncturable encryption ( pe ), proposed by green and miers at ieee s & p 2015, is a kind of public key encryption that allows recipients to revoke individual messages by repeatedly updating decryption keys without communicating with senders. pe is an essential tool for constructing many interesting applications, such as asynchronous messaging systems, forward - secret zero round - trip time protocols, public - key watermarking schemes and forward - secret proxy re - encryptions. this paper revisits pes from the observation that the puncturing property can be implemented as efficiently computable functions. from this view, we propose a generic pe construction from fully key - homomorphic encryption augmented with a key delegation mechanism ( dfkhe ), from boneh et al. at eurocrypt 2014. we show that our pe construction enjoys selective security under chosen plaintext attacks ( which can be converted into adaptive security with some efficiency loss ), inherited from that of dfkhe, in the standard model. based on this framework, we obtain the first post - quantum secure pe instantiation, based on the learning with errors problem, that is selectively secure under chosen plaintext attacks ( cpa ) in the standard model. we also discuss the possibility of modifying our framework to support an unbounded number of ciphertext tags, inspired by the work of brakerski and vaikuntanathan at crypto 2016. | arxiv:2007.06353 |
we define the dyson diffusion process on a curved smooth closed contour in the plane and derive the fokker - planck equation for probability density. its stationary solution is shown to be the boltzmann weight for the logarithmic gas confined on the contour. | arxiv:2211.02339 |
we report transport studies on the 5 nm thick bi2se3 topological insulator films which are grown via molecular beam epitaxy technique. the angle - resolved photoemission spectroscopy data show that the fermi level of the system lies in the bulk conduction band above the dirac point, suggesting important contribution of bulk states to the transport results. in particular, the crossover from weak antilocalization to weak localization in the bulk states is observed in the parallel magnetic field measurements up to 50 tesla. the measured magneto - resistance exhibits interesting anisotropy with respect to the orientation of b / / and i, signifying intrinsic spin - orbit coupling in the bi2se3 films. our work directly shows the crossover of quantum interference effect in the bulk states from weak antilocalization to weak localization. it presents an important step toward a better understanding of the existing three - dimensional topological insulators and the potential applications of nano - scale topological insulator devices. | arxiv:1310.5194 |
in this paper, we consider a solar - powered access point ( ap ) that is tasked with supporting both non - energy harvesting or legacy data users such as laptops, and devices with radio frequency ( rf ) - energy harvesting and sensing capabilities. we propose two solutions that enable the ap to manage its harvested energy via transmit power control and also ensure devices perform sensing tasks frequently. advantageously, our solutions are suitable for current wireless networks and do not require perfect channel gain information or non - causal energy arrival at devices. the first solution uses a deep q - network ( dqn ) whilst the second solution uses model predictive control ( mpc ) to control the ap ' s transmit power. our results show that our dqn and mpc solutions improve energy efficiency and user satisfaction by respectively 16 % to 35 %, and 10 % to 42 % as compared to competing algorithms. | arxiv:2005.12022 |
we give an explicit upper bound on the volume of lattice simplices with fixed positive number of interior lattice points. the bound differs from the conjectural sharp upper bound only by a linear factor in the dimension. this improves significantly upon the previously best results by pikhurko from 2001. | arxiv:1710.08646 |
we cross - match the two currently largest all - sky photometric catalogs, mid - infrared wise and supercosmos scans of ukst / poss - ii photographic plates, to obtain a new galaxy sample that covers 3pi steradians. in order to characterize and purify the extragalactic dataset, we use external gama and sdss spectroscopic information to define quasar and star loci in multicolor space, aiding the removal of contamination from our extended - source catalog. after appropriate data cleaning we obtain a deep wide - angle galaxy sample that is approximately 95 % pure and 90 % complete at high galactic latitudes. the catalog contains close to 20 million galaxies over almost 70 % of the sky, outside the zone of avoidance and other confused regions, with a mean surface density of over 650 sources per square degree. using multiwavelength information from two optical and two mid - ir photometric bands, we derive photometric redshifts for all the galaxies in the catalog, using the annz framework trained on the final gama - ii spectroscopic data. our sample has a median redshift of z _ { med } = 0. 2 but with a broad dn / dz reaching up to z > 0. 4. the photometric redshifts have a mean bias of | delta _ z | ~ 10 ^ { - 3 }, normalized scatter of sigma _ z = 0. 033 and less than 3 % outliers beyond 3sigma _ z. comparison with external datasets shows no significant variation of photo - z quality with sky position. together with the overall statistics, we also provide a more detailed analysis of photometric redshift accuracy as a function of magnitudes and colors. the final catalog is appropriate for ` all - sky ' 3d cosmology to unprecedented depths, in particular through cross - correlations with other large - area surveys. it should also be useful for source pre - selection and identification in forthcoming surveys such as taipan or wallaby. | arxiv:1607.01182 |
field theories on canonical noncommutative spacetimes, which are being studied also in connection with string theory, and on $ \ kappa $ - minkowski spacetime, which is a popular example of lie - algebra noncommutative spacetime, can be naturally constructed by introducing a suitable generating functional for green functions in energy - momentum space. direct reference to a star product is not necessary. it is sufficient to make use of the simple properties that the fourier transform preserves in these spacetimes and establish the rules for products of wave exponentials that are dictated by the non - commutativity of the coordinates. the approach also provides an elementary description of " planar " and " non - planar " feynman diagrams. we also comment on the rich phenomenology emerging from the analysis of these theories. | arxiv:hep-th/0205047 |
in this paper, we introduce a novel framework for combining scientific knowledge within physics - based models and recurrent neural networks to advance scientific discovery in many dynamical systems. we will first describe the use of outputs from physics - based models in learning a hybrid - physics - data model. then, we further incorporate physical knowledge in real - world dynamical systems as additional constraints for training recurrent neural networks. we will apply this approach on modeling lake temperature and quality where we take into account the physical constraints along both the depth dimension and time dimension. by using scientific knowledge to guide the construction and learning the data - driven model, we demonstrate that this method can achieve better prediction accuracy as well as scientific consistency of results. | arxiv:1810.02880 |
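A minimal numpy sketch of the kind of physics-constrained training loss described above, for the lake-temperature example: a data-fit term plus a penalty whenever the predicted temperatures imply water density decreasing with depth, which is physically inconsistent. The empirical density polynomial is a standard fit for fresh water; the profiles and the weight `lam` are assumed for illustration.

```python
import numpy as np

def water_density(t_celsius):
    # standard empirical density of fresh water (kg/m^3) vs. temperature
    return 1000.0 * (1 - (t_celsius + 288.9414) * (t_celsius - 3.9863) ** 2
                     / (508929.2 * (t_celsius + 68.12963)))

def physics_guided_loss(pred_temps, true_temps, lam=1.0):
    """pred_temps: (n_depths,) predictions ordered from surface to bottom."""
    mse = np.mean((pred_temps - true_temps) ** 2)
    rho = water_density(pred_temps)
    # penalize any adjacent depth pair where density decreases downward
    violation = np.maximum(rho[:-1] - rho[1:], 0.0)
    return mse + lam * np.mean(violation)

true_profile = np.array([20.0, 15.0, 10.0, 6.0])   # surface -> bottom
good = true_profile + 0.5                          # consistent: warm over cold
bad = good[::-1]                                   # inverted: dense water on top
```

In training, this scalar would replace the plain mse objective, so that gradients also push the network toward density profiles that are monotone in depth.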
recent developments in analog quantum simulators based on cold atoms and trapped ions call for cross - validating the accuracy of quantum - simulation experiments with use of quantitative numerical methods ; however, it is particularly challenging for dynamics of systems with more than one spatial dimension. here we demonstrate that a tensor - network method running on classical computers is useful for this purpose. we specifically analyze real - time dynamics of the two - dimensional bose - hubbard model after a sudden quench starting from the mott insulator by means of the tensor - network method based on infinite projected entangled pair states. calculated single - particle correlation functions are found to be in good agreement with a recent experiment. by estimating the phase and group velocities from the single - particle and density - density correlation functions, we predict how these velocities vary in the moderate interaction region, which serves as a quantitative benchmark for future experiments and numerical simulations. | arxiv:2108.11051 |
the microscopic dynamics in liquid gallium ( l - ga ) at melting ( t = 315 k ) has been studied by inelastic x - ray scattering. we demonstrate the existence of collective acoustic - like modes up to wave - vectors above one half of the first maximum of the static structure factor, at variance with earlier results from inelastic neutron scattering data [ f. j. bermejo et al. phys. rev. e 49, 3133 ( 1994 ) ]. despite the structural ( an extremely rich polymorphism and rather complex phase diagram ) and electronic ( mixed valence ) peculiarities of l - ga, its collective dynamics is strikingly similar to that of van der waals and alkali - metal liquids. this result speaks in favor of the universality of the short time dynamics in monatomic liquids rather than of system - specific dynamics. | arxiv:cond-mat/0208594 |
we update previous frequentist analyses of the cmssm and nuhm1 parameter spaces to include the public results of searches for supersymmetric signals using ~ 1 / fb of lhc data recorded by atlas and cms and ~ 0. 3 / fb of data recorded by lhcb in addition to electroweak precision and b - physics observables. we also include the constraints imposed by the cosmological dark matter density and the xenon100 search for spin - independent dark matter scattering. the lhc data set includes atlas and cms searches for jets + missing et events and for the heavier mssm higgs bosons, and the upper limits on b _ s to mu ^ + mu ^ - from lhcb and cms. the absences of jets + missing et signals in the lhc data favour heavier mass spectra than in our previous analyses of the cmssm and nuhm1, which may be reconciled with ( g - 2 ) _ mu if tan beta ~ 40, a possibility that is however under pressure from heavy higgs searches and the upper limits on b _ s to mu ^ + mu ^ -. as a result, the p - value for the cmssm fit is reduced to ~ 15 ( 38 ) %, and that for the nuhm1 to ~ 16 ( 38 ) %, to be compared with ~ 9 ( 49 ) % for the standard model limit of the cmssm for the same set of observables ( dropping ( g - 2 ) _ mu ), ignoring the dark matter relic density in both cases. we discuss the sensitivities of the fits to the ( g - 2 ) _ mu and b to s gamma constraints, contrasting fits with and without the ( g - 2 ) _ mu constraint, and combining the theoretical and experimental errors for b to s gamma linearly or in quadrature. we present predictions for m _ gluino, b _ s to mu ^ + mu ^ -, m _ h and m _ a, and update predictions for spin - independent dark matter scattering, stressing again the importance of taking into account the uncertainty in the pi - nucleon sigma term, sigma _ { pi n }. finally, we present predictions based on our fits for the likely thresholds for sparticle pair production in e ^ + e ^ - collisions in the cmssm and nuhm1. | arxiv:1110.3568 |
we compute complete tree level matrix elements for $ gg, q \ bar q \ rightarrow b \ bar b w ^ + w ^ - $. we analyze the irreducible backgrounds to top signal at the tevatron and at the lhc. their contribution to the total cross section is about $ 5 \ % $ at the lhc, due to single resonant channels. several distributions with contributions from signal and backgrounds are presented. | arxiv:hep-ph/9607288 |
since the dawn of scanning probe microscopy ( spm ), tapping or intermittent contact mode has been one of the most widely used imaging modes. manual optimization of tapping mode not only takes a lot of instrument and operator time, but also often leads to frequent probe and sample damage, poor image quality and reproducibility issues for new types of samples or inexperienced users. despite wide use, optimization of tapping mode imaging is an extremely hard problem, ill - suited to either classical control methods or machine learning. here we introduce a reward - driven workflow to automate the optimization of spm in the tapping mode. the reward function is defined based on multiple channels with physical and empirical knowledge of good scans encoded, representing a sample - agnostic measure of image quality and imitating the decision - making logic employed by human operators. this automated workflow gives optimal scanning parameters for different probes and samples and gives high - quality spm images consistently in the attractive mode. this study broadens the application and accessibility of spm and opens the door for fully automated spm. | arxiv:2408.04055 |