We propose a seesaw scenario in which possible corrections to the tribimaximal pattern of lepton mixing are due to the small phase splitting of the right-handed neutrino mass matrix. We show that the small deviations can be expressed analytically in terms of two splitting parameters ($\delta_1$ and $\delta_2$) at leading order. The solar mixing angle $\theta_{12}$ favors a relatively smaller value compared to the zeroth-order value ($35.3^\circ$), and the Dirac-type CP phase $\delta$ favors a nearly maximal value. The two Majorana-type CP phases $\rho$ and $\sigma$ turn out to depend on each other nearly linearly. A normal-hierarchy neutrino mass spectrum is also favored, due to the stability of the perturbation calculations.
arxiv:0911.2670
We study a simple model that can give rise to isospin-violating interactions of Dirac-fermion asymmetric dark matter with protons and neutrons through the interference of a scalar and a U(1)$'$ gauge-boson contribution. The model can yield a large suppression of the elastic scattering cross section off xenon relative to silicon, thus reconciling the CDMS-Si and LUX results while being compatible with LHC findings on the 126 GeV Higgs, electroweak precision tests, and flavour constraints.
arxiv:1311.0022
The flow size distribution is a useful metric for traffic modeling and management. Its estimation based on sampled data, however, is problematic. Previous work has shown that flow sampling (FS) offers enormous statistical benefits over packet sampling, but its high resource requirements preclude its use in routers. We present dual sampling (DS), a two-parameter family which, to a large extent, provides FS-like statistical performance by approaching FS continuously, at just packet-sampling-like computational cost. Our work utilizes a Fisher-information-based approach recently used to evaluate a number of sampling schemes, excluding FS, for TCP flows. We revise and extend the approach to make rigorous and fair comparisons between FS, DS, and others. We show how DS significantly outperforms other packet-based methods, including Sample and Hold, the closest packet-sampling-based competitor to FS. We describe a packet-sampling-based implementation of DS and analyze its key computational costs to show that router implementation is feasible. Our approach offers insights into numerous issues, including the notion of 'flow quality' for understanding the relative performance of methods, and how and when employing sequence numbers is beneficial. Our work is theoretical, with some simulation support and case studies on Internet data.
arxiv:1106.3809
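The paper defines dual sampling precisely over TCP flows; as a loose illustration of a two-parameter scheme that trades off between packet-sampling cost and flow-sampling statistics, consider this toy sketch (the function, its parameters, and its mechanics are our own illustrative assumptions, not the paper's DS):

```python
import random
from collections import Counter

def dual_sample(packets, p_flow, p_pkt, seed=0):
    """Illustrative two-parameter sampler: when a flow is first seen it is
    kept with probability p_flow; packets of kept flows are then retained
    with probability p_pkt.  Setting p_pkt = 1 recovers pure flow sampling,
    while p_flow = 1 behaves like per-packet sampling within flows."""
    rng = random.Random(seed)
    decided, kept = {}, Counter()
    for flow_id in packets:  # packets is a stream of flow identifiers
        if flow_id not in decided:
            decided[flow_id] = rng.random() < p_flow
        if decided[flow_id] and rng.random() < p_pkt:
            kept[flow_id] += 1
    return kept
```

With both probabilities at 1, every packet of every flow is kept, which is the flow-sampling limit of this toy family.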
Discussion of "The Dantzig selector: statistical estimation when $p$ is much larger than $n$" [math/0506081]
arxiv:0803.3135
…spéciales". This is an extraordinarily strong concentration of mathematical education – up to 16 hours a week – in which elementary analytic geometry and mechanics, and recently infinitesimal calculus also, are thoroughly studied and are made into a securely mastered tool by means of many exercises. Sylvestre Lacroix was a gifted teacher and expositor. His book on descriptive geometry uses sections labelled "Problème" to exercise the reader's understanding. In 1816 he wrote essays on teaching in general, and on mathematics teaching in particular, which emphasized the need to exercise and test: "The examiner, obliged, in the short term, to multiply his questions enough to cover the subjects that he asks, to the greater part of the material taught, cannot be less thorough, since if, to abbreviate, he puts applications aside, he will not gain anything for the pupils' faculties this way." Andrew Warwick has drawn attention to the historical question of exercises: "The inclusion of illustrative exercises and problems at the end of chapters in textbooks of mathematical physics is now so commonplace as to seem unexceptional, but it is important to appreciate that this pedagogical device is of relatively recent origin and was introduced in a specific historical context." (p. 168) In reporting on the Mathematical Tripos examinations instituted at Cambridge University, he notes that such cumulative, competitive learning "was also accomplished more effectively by private tutors using individual tuition, specially prepared manuscripts, and graded examples and problems, than it was by college lecturers teaching large classes at the pace of the mediocre." (p. 79) Explaining the relationship of examination and exercise, he writes: "... by the 1830s it was the problems on examination papers, rather than exercises in textbooks, that defined the standard to which ambitious students aspired ..."
[Cambridge students] "not only expected to find their way through the merest sketch of an example, but were taught to regard such exercises as useful preparation for tackling difficult problems in examinations." (p. 152) Explaining how the reform took root, Warwick wrote: "It was widely believed in Cambridge that the best way of teaching mathematics, including the new analytical methods, was through practical examples and problems, and, by the mid-1830s, some of the first generation of young college fellows to have been taught higher analysis this way were beginning both to undertake their own research and to be appointed Tripos examiners." (p. 155) Warwick reports that in Germany, Franz Ernst Neumann at about the same time "developed a common system of graded exercises that introduced students to a hierarchy of essential mathematical skills".
https://en.wikipedia.org/wiki/Exercise_(mathematics)
This joint theoretical and experimental study establishes that the adsorption of polycyclic aromatic hydrocarbons (PAHs) onto the amorphous ice surface provokes a broadening and redshift of the "dangling" OH (dOH) ice spectral feature, the redshift increasing with PAH size up to $\sim$85 cm$^{-1}$. It also reveals that, in certain interaction configurations, adsorption induces substantial reorganisation of the hydrogen-bonding network at the ice surface. Comparison with experiments validates the novel theoretical methodology, relying on the density-functional-based tight-binding approach, which offers a compromise between system size and accuracy, enabling a wide sampling of surface structures. Applied in an astrophysical context, this study suggests that widening of the dOH feature by adsorption of aromatic molecules could explain its absence heretofore in observational ice spectra, offering hope that future missions with higher sensitivity will verify its presence or absence in dense regions.
arxiv:2003.02098
We propose a simple approach to measure the energy of the few-eV isomeric state in Th-229. To this end, U-229 nuclei are doped into VUV-transparent crystals, where they undergo alpha decay into Th-229 and, with a probability of 2%, populate the isomeric state. These Th-229m nuclei may decay into the nuclear ground state under emission of the sought-after VUV gamma, whose wavelength can be determined with a spectrometer. Based on measurements of the optical transmission of U:CaF2 crystals in the VUV range, we expect a signal at least 2 orders of magnitude larger than in current schemes using surface implantation of recoil nuclei. The signal background is dominated by Cherenkov radiation induced by beta decays of the thorium decay chain. We estimate that, even if the isomer undergoes radiative de-excitation with a probability of only 0.1%, the VUV gamma can be detected within a reasonable measurement time.
arxiv:1511.07187
Transformer-based sequential recommenders are very powerful for capturing both short-term and long-term sequential item dependencies. This is mainly attributed to their unique self-attention networks, which exploit pairwise item-item interactions within the sequence. However, real-world item sequences are often noisy, which is particularly true for implicit feedback. For example, a large portion of clicks do not align well with user preferences, and many products end up with negative reviews or being returned. As such, the current user action depends only on a subset of items, not on the entire sequence. Many existing transformer-based models use full attention distributions, which inevitably assign some credit to irrelevant items. This may lead to sub-optimal performance if transformers are not regularized properly.
arxiv:2212.04120
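One way to avoid assigning credit to irrelevant items, as discussed above, is to zero out their attention weights with a hard mask; a minimal sketch (hypothetical, not the specific regularization any particular model uses):

```python
import math

def masked_attention(scores, keep):
    """Softmax attention where irrelevant items are masked out:
    positions with keep[i] == False get exactly zero weight, so noisy
    items in the interaction sequence receive no credit."""
    m = max(s for s, k in zip(scores, keep) if k)  # for numerical stability
    exps = [math.exp(s - m) if k else 0.0 for s, k in zip(scores, keep)]
    z = sum(exps)
    return [e / z for e in exps]
```

Full attention would give the middle item below a nonzero weight; the mask forces it to zero while the remaining weights still sum to one.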
The characteristic length of mass-density resolution in dynamical reduction models is calculated utilizing the energy conservation law and a viable cosmological model with decreasing vacuum energy density (dark energy density). The value found, $\sim 10^{-5}$ cm, numerically coincides with the phenomenological spatial short-length cutoff parameter introduced in the Ghirardi-Rimini-Weber model. Our results appear to support the gravity-induced mechanism of dynamical reduction.
arxiv:0705.4247
A method is described for calculating the heat generated in a quantum computer due to loss of quantum phase information. Remarkably, this heat generation can take place at zero temperature, and it may explain why it is impossible to extract energy from vacuum fluctuations. Implications for optical computers and quantum cosmology are also briefly discussed.
arxiv:quant-ph/0310074
In this work we show that the ordering ambiguity on quantization depends on the representation choice. This property is then used to solve some particular systems unambiguously. Finally, we speculate on the consequences for more involved cases.
arxiv:0705.3247
One of the most important nonperturbative methods for solving QCD is quantization at fixed light-front time, $\tau = t + z/c$, Dirac's "front form". The eigenvalues of the light-front QCD Hamiltonian predict the hadron spectrum, and the eigensolutions provide the light-front wavefunctions which describe hadron structure. More generally, we show that the valence Fock-state wavefunctions of the light-front QCD Hamiltonian satisfy a single-variable relativistic equation of motion, analogous to the nonrelativistic radial Schrödinger equation, with an effective confining potential $U$ which systematically incorporates the effects of higher quark and gluon Fock states. We outline a method for computing the required potential from first principles in QCD. The holographic mapping of gravity in AdS space to QCD, quantized at fixed light-front time, yields the same light-front Schrödinger equation; in fact, the soft-wall AdS/QCD approach provides a model for the light-front potential which is color-confining and reproduces the light-hadron spectrum well. One also derives via light-front holography a precise relation between the bound-state amplitudes in the fifth dimension of AdS space and the boost-invariant light-front wavefunctions describing the internal structure of hadrons in physical space-time. The elastic and transition form factors of the pion and the nucleons are found to be well described in this framework. The light-front AdS/QCD holographic approach thus gives a frame-independent first approximation to the color-confining dynamics, spectroscopy, and excitation spectra of relativistic light-quark bound states in QCD.
arxiv:1208.3020
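The single-variable equation of motion referred to above takes, in the light-front holography literature (Brodsky and de Téramond), the following form; it is quoted here from that literature as context, with $\zeta$ the invariant transverse variable, $L$ the orbital angular momentum, and $\kappa$ the soft-wall scale:

```latex
% Light-front Schrödinger equation for the valence wavefunction \phi(\zeta):
\left( -\frac{d^2}{d\zeta^2} + \frac{4L^2 - 1}{4\zeta^2} + U(\zeta) \right) \phi(\zeta)
  = M^2 \, \phi(\zeta)

% Soft-wall AdS/QCD model for the confining potential (J = L + S):
U(\zeta) = \kappa^4 \zeta^2 + 2\kappa^2 (J - 1)
```

The harmonic form of $U(\zeta)$ is what yields the linear Regge trajectories $M^2 \propto n, L$ mentioned in connection with the light-hadron spectrum.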
Flare stacks play an important role in the treatment of waste gas and waste materials in petroleum and fossil-energy plants. Monitoring the efficiency of flame combustion is of great significance for environmental protection. The traditional method of monitoring with sensors is not only expensive but also easily damaged in harsh combustion environments. In this paper, we propose to monitor the quality of flames using only visual features, including the area ratio of flame to smoke, RGB information of the flames, the angle of the flames, and other features. We make comprehensive use of image segmentation, object detection, object tracking, principal component analysis, GPT-4, and other methods or tools to complete this task. In the end, real-time monitoring of the picture can be achieved, and when the combustion efficiency is low, measures such as adjusting the air-to-waste ratio can be taken in time. As far as we know, the method of this paper is relatively innovative and has industrial production value.
arxiv:2410.19823
We apply an alloying strategy to single-layer PtN$_2$ and PtP$_2$, aiming to obtain a single-layer Pt-P-N alloy with a relatively low formation energy with reference to its bulk structure. We perform a structure search based on a cluster-expansion method and predict single-layer and bulk PtPN consisting of pentagonal networks. The formation energy of single-layer PtPN is significantly lower than that of single-layer PtP$_2$. The predicted bulk structure of PtPN is similar to the pyrite structure. We also find that single-layer pentagonal PtPN, unlike PtN$_2$ and PtP$_2$, exhibits a sizable, direct PBE band gap of 0.84 eV. Furthermore, the band gap of single-layer pentagonal PtPN calculated with hybrid density functional theory is 1.60 eV, which is within the visible-light spectrum and promising for optoelectronics applications. In addition to predicting PtPN in 2D and 3D forms, we study the flexural rigidity and electronic structure of PtPN in nanotube form. We find that single-layer PtPN has flexural rigidity similar to that of single-layer carbon and boron nitride nanosheets, and that the band gaps of PtPN nanotubes depend on their radii. Our work sheds light on obtaining an isolated 2D planar, pentagonal PtPN nanosheet from its 3D counterpart and on obtaining 1D nanotubes with tunable band gaps.
arxiv:1905.03973
By analyzing the structure of possible proofs in the system, and showing through this analysis that it is impossible to prove a contradiction. This idea led to the study of proof theory. Moreover, Hilbert proposed that the analysis should be entirely concrete, using the term finitary to refer to the methods he would allow but not precisely defining them. This project, known as Hilbert's program, was seriously affected by Gödel's incompleteness theorems, which show that the consistency of formal theories of arithmetic cannot be established using methods formalizable in those theories. Gentzen showed that it is possible to produce a proof of the consistency of arithmetic in a finitary system augmented with axioms of transfinite induction, and the techniques he developed to do so were seminal in proof theory. A second thread in the history of foundations of mathematics involves nonclassical logics and constructive mathematics. The study of constructive mathematics includes many different programs with various definitions of constructive. At the most accommodating end, proofs in ZF set theory that do not use the axiom of choice are called constructive by many mathematicians. More limited versions of constructivism limit themselves to natural numbers, number-theoretic functions, and sets of natural numbers (which can be used to represent real numbers, facilitating the study of mathematical analysis). A common idea is that a concrete means of computing the values of a function must be known before the function itself can be said to exist. In the early 20th century, Luitzen Egbertus Jan Brouwer founded intuitionism as a part of the philosophy of mathematics. This philosophy, poorly understood at first, stated that in order for a mathematical statement to be true to a mathematician, that person must be able to intuit the statement, to not only believe its truth but understand the reason for its truth.
A consequence of this definition of truth was the rejection of the law of the excluded middle, for there are statements that, according to Brouwer, could not be claimed to be true while their negations also could not be claimed true. Brouwer's philosophy was influential, and the cause of bitter disputes among prominent mathematicians. Kleene and Kreisel would later study formalized versions of intuitionistic logic (Brouwer rejected formalization, and presented his work in unformalized natural language). With the advent of the BHK interpretation and Kripke models, intuitionism became easier to reconcile with classical mathematics.
https://en.wikipedia.org/wiki/Mathematical_logic
We study static brane configurations in the bulk background of topological black holes in asymptotically flat spacetime. We find that such configurations are possible even for a flat black hole horizon, unlike the AdS black hole case. We construct a brane-world model with an orbifold structure $S^1/Z_2$ in such a bulk background. We also study a massless bulk scalar field.
arxiv:hep-th/0107174
Stein's method is applied to obtain a general Cramér-type moderate deviation result for dependent random variables whose dependence is defined in terms of a Stein identity. A corollary for zero-bias coupling is deduced. The result is also applied to a combinatorial central limit theorem, a general system of binary codes, the anti-voter model on a complete graph, and the Curie-Weiss model. A general moderate deviation result for independent random variables is also proved.
arxiv:0911.5373
Neural conversation models are known to generate appropriate but non-informative responses in general. A scenario where informativeness can be significantly enhanced is conversing by reading (CbR), where conversations take place with respect to a given external document. In previous work, the external document is utilized by (1) creating a context-aware document memory that integrates information from the document and the conversational context, and then (2) generating responses referring to the memory. In this paper, we propose to create the document memory with some anticipated responses in mind. This is achieved using a teacher-student framework. The teacher is given the external document, the context, and the ground-truth response, and learns how to build a response-aware document memory from these three sources of information. The student learns to construct a response-anticipated document memory from the first two sources plus the teacher's insight on memory creation. Empirical results show that our model outperforms the previous state of the art on the CbR task.
arxiv:2005.06128
A data set of recorded single played tones of a concert grand piano is investigated using machine learning (ML) on psychoacoustic timbre features. The examined instrument was recorded at two stages: first, right after manufacture, and second, after being played in a concert hall for one year. A previous study [Plath 2019] revealed that listeners clearly distinguished the two stages, but no clear correlation with acoustics, signal-processing tools, or verbalizations of the perceived differences could be found. Using a self-organizing map (SOM), training single- as well as double-feature sets, it can be shown that spectral flux is able to perfectly cluster the two stages. Sound pressure level (SPL), roughness, and fractal correlation dimension (as a measure of initial-transient chaoticity) are furthermore able to order the keys with respect to high and low notes. Combining spectral flux with the three other features in double-feature training sets maintains stage clustering only for SPL and fractal dimension, showing sub-clusters for both stages. These sub-clusters point to a homogenization of SPL for stage 2 with respect to stage 1, and a pronounced ordering and sub-clustering of key regions with respect to initial-transient chaoticity.
arxiv:2112.03214
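The stage clustering described above can be illustrated with a degenerate (zero-neighborhood) SOM with two units on a single scalar feature, which amounts to online k-means; the feature values below are made up for illustration:

```python
def train_som(data, epochs=20, lr0=0.5):
    """Minimal 1-D self-organizing map with two units on a scalar feature.
    Units are initialized at the data extremes; the best-matching unit is
    pulled toward each sample with a decaying learning rate.  With zero
    neighborhood this degenerates to online k-means with two centroids."""
    w = [min(data), max(data)]
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)
        for x in data:
            bmu = min(range(2), key=lambda i: abs(w[i] - x))  # best-matching unit
            w[bmu] += lr * (x - w[bmu])
    return w

def cluster(w, x):
    """Assign a sample to its nearest trained unit."""
    return min(range(2), key=lambda i: abs(w[i] - x))

# two well-separated "stages" of a feature such as spectral flux (fabricated)
data = [0.10, 0.12, 0.09, 0.90, 0.88, 0.93]
w = train_som(data)
```

After training, samples from the two simulated stages land on different units, which is the perfect two-way clustering the abstract reports for spectral flux.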
Sharpening (a particular case of) a result of Szemerédi and Vu, and extending earlier results of Sárközy and ourselves, we find, subject to some technical restrictions, a sharp threshold for the number of integer sets needed for their sumset to contain a block of consecutive integers of length comparable with the lengths of the set summands. A corollary of our main result is as follows. Let $k, l \ge 1$ and $n \ge 3$ be integers, and suppose that $A_1, \ldots, A_k \subset [0, l]$ are integer sets of size at least $n$, none of which is contained in an arithmetic progression with difference greater than 1. If $k \ge 2 \lceil (l-1)/(n-2) \rceil$, then the sumset $A_1 + \cdots + A_k$ contains a block of consecutive integers of length $k(n-1)$.
arxiv:0806.4580
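The corollary can be checked numerically on a small instance. Here $l = 10$ and $n = 5$, so the threshold is $k \ge 2\lceil 9/3 \rceil = 6$, and the guaranteed block length is $k(n-1) = 24$:

```python
def sumset(A, B):
    """Minkowski sum of two integer sets."""
    return {a + b for a in A for b in B}

def longest_run(S):
    """Length of the longest block of consecutive integers in S."""
    S = sorted(S)
    best = cur = 1
    for x, y in zip(S, S[1:]):
        cur = cur + 1 if y == x + 1 else 1
        best = max(best, cur)
    return best

# l = 10, n = 5: the corollary requires k >= 2 * ceil((l-1)/(n-2)) = 6
A = {0, 1, 5, 7, 10}  # size 5; contains a difference of 1, so it lies in no AP with d > 1
S = {0}
for _ in range(6):    # k = 6 identical summands
    S = sumset(S, A)
print(longest_run(S))  # corollary guarantees a run of length >= k*(n-1) = 24
```

Using six copies of a single set is allowed, since the corollary only constrains each summand individually.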
Personalization is very powerful for improving the effectiveness of health interventions. Reinforcement learning (RL) algorithms are suitable for learning such tailored interventions from sequential data collected about individuals. However, learning can be very fragile. The time to learn intervention policies is limited, as disengagement from the user can occur quickly. Also, in e-health, intervention timing can be crucial before the optimal window passes. We present an approach that learns tailored personalization policies for groups of users by combining RL and clustering. The benefits are two-fold: speeding up learning to prevent disengagement, while maintaining a high level of personalization. Our clustering approach utilizes dynamic time warping to compare user trajectories consisting of states and rewards. We apply online and batch RL to learn policies over clusters of individuals, and introduce our self-developed and publicly available simulator for e-health interventions to evaluate our approach. We compare our methods with an e-health intervention benchmark. We demonstrate that batch learning outperforms online learning in our setting. Furthermore, our proposed clustering approach for RL finds near-optimal clusterings, which lead to significantly better policies in terms of cumulative reward compared to learning a policy per individual or learning one non-personalized policy across all individuals. Our findings also indicate that the learned policies accurately learn to send interventions at the right moments, and that the users work out more, and at the right times of day.
arxiv:1804.03592
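Dynamic time warping, used above to compare user trajectories, is a standard dynamic program; a minimal sketch with an absolute-difference local cost (the paper's state/reward trajectories would use a richer per-step cost):

```python
def dtw(a, b):
    """Classic O(len(a)*len(b)) dynamic-time-warping distance between two
    numeric trajectories.  D[i][j] is the cheapest alignment cost of the
    first i elements of a with the first j elements of b."""
    INF = float("inf")
    D = [[INF] * (len(b) + 1) for _ in range(len(a) + 1)]
    D[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[-1][-1]
```

Because DTW allows elastic alignment, trajectories that evolve at different speeds but follow the same shape get a small distance, which is what makes it suitable for grouping users before policy learning.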
We present infrared observations in search of a planet around the white dwarf GD 66. Time-series photometry of GD 66 shows a variation in the arrival time of stellar pulsations consistent with the presence of a planet with mass > 2.4 M_J. Any such planet is too close to the star to be resolved, but the planet's light can be directly detected as an excess flux at 4.5 $\mu$m. We observed GD 66 with the two shorter-wavelength channels of IRAC on Spitzer but did not find strong evidence of a companion, placing an upper limit of 5--7 M_J on the mass of the companion, assuming an age of 1.2--1.7 Gyr.
arxiv:0812.2951
We present and discuss several novel applications of deep learning for the physical layer. By interpreting a communications system as an autoencoder, we develop a fundamentally new way to think about communications system design as an end-to-end reconstruction task that seeks to jointly optimize transmitter and receiver components in a single process. We show how this idea can be extended to networks of multiple transmitters and receivers, and present the concept of radio transformer networks as a means to incorporate expert domain knowledge into the machine learning model. Lastly, we demonstrate the application of convolutional neural networks on raw IQ samples for modulation classification, which achieves competitive accuracy with respect to traditional schemes relying on expert features. The paper concludes with a discussion of open challenges and areas for future investigation.
arxiv:1702.00832
We have used the Wisconsin H-Alpha Mapper (WHAM) facility to measure the interstellar H-alpha emission toward the high-Galactic-latitude O star HD 93521 (l = 183.1, b = +62.2). Three emission components were detected, having radial velocities of -10 km s$^{-1}$, -51 km s$^{-1}$, and -90 km s$^{-1}$ with respect to the local standard of rest (LSR) and H-alpha intensities of 0.20 R, 0.15 R, and 0.023 R, respectively, corresponding to emission measures of 0.55 cm$^{-6}$ pc, 0.42 cm$^{-6}$ pc, and 0.06 cm$^{-6}$ pc. We have also detected an H-alpha emission component at -1 km s$^{-1}$ (LSR) with an intensity of 0.20 R (0.55 cm$^{-6}$ pc) toward the direction l = 148.5, b = +53.0, which lies in the region of exceptionally low H I column density known as the Lockman Window. In addition, we studied the direction l = 163.5, b = +53.5. Upper limits on the possible intensity of Galactic emission toward this direction are 0.11 R at the LSR and 0.06 R at -50 km s$^{-1}$. We also detected and characterized twelve faint (~0.03-0.15 R), unidentified atmospheric lines present in WHAM H-alpha spectra. Lastly, we have used WHAM to obtain [O I] 6300 spectra along the line of sight toward HD 93521. We place an upper limit of 0.060 R on the [O I] intensity of the -51 km s$^{-1}$ component. If the temperature of the gas is 10,000 K within the H-alpha emitting region, the hydrogen ionization fraction n(H+)/n(H_total) > 0.6.
arxiv:astro-ph/0112500
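The quoted intensity-to-emission-measure pairs are consistent with the standard conversion EM $\approx 2.75\,T_4^{0.9}\,I(\mathrm{R})$ cm$^{-6}$ pc, with $T_4$ the temperature in units of $10^4$ K; relating them via this formula is our inference, not a statement from the abstract. A quick check at $T_4 \approx 1$:

```python
# Hα intensity (rayleighs) -> emission measure (cm^-6 pc), EM ≈ 2.75 * T4^0.9 * I
# Pairs below are (intensity, quoted emission measure) from the abstract.
pairs = [(0.20, 0.55), (0.15, 0.42), (0.023, 0.06)]
for intensity, em_quoted in pairs:
    em = 2.75 * intensity          # T4 = 1, so the T4^0.9 factor is 1
    print(f"I = {intensity:6.3f} R  ->  EM = {em:.3f} (quoted {em_quoted})")
```

All three computed values agree with the quoted emission measures to within rounding (and a small temperature factor for the second component).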
The famous Pósa conjecture states that every graph of minimum degree at least $2n/3$ contains the square of a Hamilton cycle. This has been proved for large $n$ by Komlós, Sárközy and Szemerédi. Here we prove that if $p > n^{-1/2+\epsilon}$, then asymptotically almost surely, the binomial random graph $G_{n,p}$ contains the square of a Hamilton cycle. This provides an 'approximate threshold' for the property in the sense that the result fails to hold if $p < n^{-1/2}$.
arxiv:1203.6310
We explore the 't Hooft-Veneziano limit of Polyakov loop models at finite baryon chemical potential. Using methods developed by us earlier, we calculate the two- and $N$-point correlation functions of the Polyakov loops. This makes it possible to compute various potentials in the confinement phase and to derive the screening masses outside the confinement region. In particular, we establish the existence of complex masses and an oscillating decay of correlations in a certain range of parameters. Furthermore, it is shown that the calculation of the $N$-point correlation function in the confinement phase reduces to the geometric median problem. This leads to a large-$N$ analog of the $Y$ law for the baryon potential.
arxiv:2311.03907
Zero-shot domain adaptation for dialogue state tracking (DST) remains a challenging problem in task-oriented dialogue (TOD) systems, where models must generalize to target domains unseen at training time. Current large-language-model approaches to zero-shot domain adaptation rely on prompting to introduce knowledge pertaining to the target domains. However, their efficacy strongly depends on prompt engineering, as well as on the zero-shot ability of the underlying language model. In this work, we devise a novel data augmentation approach, schema augmentation, that improves the zero-shot domain adaptation of language models through fine-tuning. Schema augmentation is a simple but effective technique that enhances generalization by introducing variations of slot names within the schema provided in the prompt. Experiments on MultiWOZ and SpokenWOZ show that the proposed approach yields a substantial improvement over the baseline, in some experiments achieving over a twofold accuracy gain on unseen domains while maintaining equal or superior performance across all domains.
arxiv:2411.00150
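The slot-name variation idea can be sketched as follows; the variant-generation rules here (synonym lookup plus simple reformatting) are our own illustrative assumptions, not the paper's exact augmentation:

```python
import random

def augment_schema(slots, synonyms, rng=None):
    """For each canonical slot name, pick a random surface variant: the
    original name, a de-underscored form, or a synonym.  Fine-tuning on
    such variants discourages overfitting to exact slot strings."""
    rng = rng or random.Random(0)
    out = {}
    for slot in slots:
        variants = [slot, slot.replace("_", " ")] + synonyms.get(slot, [])
        out[slot] = rng.choice(variants)
    return out
```

At training time each prompt would be rendered with a fresh draw, so the same schema appears under many slot-name spellings.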
Bridge++ is a general-purpose code set for lattice QCD simulations aiming at readable, extensible, and portable code while maintaining practical high performance. The new version 2.0 employs machine-dependent optimization, enabling a flexible data layout in float/double precision, whereas previous versions had a fixed layout and supported only double precision. We report performance on the supercomputer Fugaku, with the Arm A64FX-SVE architecture by Fujitsu.
arxiv:2303.05883
We present prospects for the $\Theta^+$ pentaquark baryon search using the newly constructed LEPS2 facility at SPring-8. The LEPS2 detector system features significant improvements in acceptance for multi-particle final states compared to previous experiments. Our search employs two complementary strategies: direct production in the $\gamma n \to K^- \Theta^+$ reaction using a liquid-deuterium target with a photon beam up to 2.4 GeV, and $\bar{K}^{*0}$-associated $\Theta^+$ production using a liquid-hydrogen target with a photon beam up to 2.9 GeV. The extended acceptance covers both forward and large-angle regions, effectively spanning the kinematic regions explored by the previous LEPS and CLAS experiments. The large acceptance and improved resolution of LEPS2, combined with these complementary approaches, provide unprecedented sensitivity for establishing the existence of the $\Theta^+$ or placing definitive upper limits on its production.
arxiv:2503.02528
We compare the observational and theoretical spectra of the $\Delta v = 2$ CO bands in a range of M dwarfs. We investigate the dependence of the theoretical spectra on effective temperature as well as carbon abundance. In general, we find that the synthetic CO bands fit the observed data extremely well and are excellent diagnostics. In particular, the synthetic spectra reasonably match observations, and the best-fit temperatures are similar to those found by empirical methods. We also examine the $^{12}$C/$^{13}$C isotopic ratio. We find that the fundamental $^{13}$CO bands around 2.345 and 2.375 $\mu$m are good discriminators for the $^{12}$C/$^{13}$C ratio in M dwarfs. The 2.375 $\mu$m band is more useful because it does not suffer such serious contamination by water vapour transitions. Our current dataset does not quite have the wavelength coverage to perform a reliable determination of the $^{12}$C/$^{13}$C ratio in M dwarfs. For this we recommend observing the region 2.31--2.40 $\mu$m at a resolution of better than 1000. Alternatively, the observational problems of contamination by water vapour at 2.345 $\mu$m may be solved by observing at resolutions of around 50000. We also investigated the possibility of using the $\Delta v = 1$ CO bands around 4.5 $\mu$m. We find that the contamination due to water vapour is even more of a problem at these wavelengths.
arxiv:astro-ph/0210017
We propose an efficient evaluation protocol for large vision-language models (VLMs). Given their broad knowledge and reasoning capabilities, multiple benchmarks are needed for comprehensive assessment, making evaluation computationally expensive. To improve efficiency, we construct a subset that yields results comparable to full benchmark evaluations. Our benchmark classification experiments reveal that no single benchmark fully covers all challenges. We then introduce a subset construction method using farthest point sampling (FPS). Our experiments show that FPS-based benchmarks maintain a strong correlation (> 0.96) with full evaluations while using only ~1% of the data. Additionally, applying FPS to an existing benchmark improves correlation with the overall evaluation results, suggesting its potential to reduce unintended dataset biases.
arxiv:2504.09979
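Farthest point sampling itself is a standard greedy procedure; a minimal sketch over an abstract distance function follows (how the paper embeds benchmark items into a metric space is not shown here):

```python
def farthest_point_sample(points, k, dist):
    """Greedy farthest-point sampling: start from point 0, then repeatedly
    add the point whose distance to the already-chosen set is largest.
    d[j] tracks the distance from point j to its nearest chosen point."""
    chosen = [0]
    d = [dist(points[0], p) for p in points]
    while len(chosen) < k:
        i = max(range(len(points)), key=d.__getitem__)  # farthest remaining point
        chosen.append(i)
        d = [min(d[j], dist(points[i], p)) for j, p in enumerate(points)]
    return chosen
```

The greedy rule spreads the chosen subset across the data, which is why an FPS-selected subset can cover the diversity of a benchmark with a small fraction of its items.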
In this paper, we present the details of the Women in Computer Vision Workshop (WiCV 2023), organized alongside the hybrid CVPR 2023 in Vancouver, Canada. WiCV aims to amplify the voices of underrepresented women in the computer vision community, fostering increased visibility in both academia and industry. We believe that such events play a vital role in addressing gender imbalances within the field. The annual WiCV@CVPR workshop offers (a) an opportunity for collaboration between researchers from minority groups, (b) mentorship for female junior researchers, (c) financial support to presenters to alleviate financial burdens, and (d) a diverse array of role models who can inspire younger researchers at the outset of their careers. In this paper, we present a comprehensive report on the workshop program, historical trends from past WiCV@CVPR events, and a summary of statistics related to presenters, attendees, and sponsorship for the WiCV 2023 workshop.
arxiv:2309.12768
we estimate the black hole spin parameter in grs 1915 + 105 using the continuum - fitting method with revised mass and inclination constraints based on the very long baseline interferometric parallax measurement of the distance to this source. we fit rossi x - ray timing explorer observations selected to be accretion disk - dominated spectral states as described in mcclintock et al. ( 2006 ) and middleton et al. ( 2006 ), which previously gave discrepant spin estimates with this method. we find that, using the new system parameters, the spin in both datasets increased, providing a best - fit spin of $ a _ * = 0. 86 $ for the middleton et al. data and a poor fit for the mcclintock et al. dataset, which becomes pegged at the bhspec model limit of $ a _ * = 0. 99 $. we explore the impact of the uncertainties in the system parameters, showing that the best - fit spin ranges from $ a _ * = 0. 4 $ to 0. 99 for the middleton et al. dataset and allows reasonable fits to the mcclintock et al. dataset with near maximal spin for system distances greater than $ \ sim 10 $ kpc. we discuss the uncertainties and implications of these estimates.
arxiv:2101.11655
let $ f : a \ lo b $ be a ring homomorphism and let $ j $ be an ideal of $ b. $ in this paper, we investigate the transfer of the notions of elementary divisor ring, hermite ring and b \ ' ezout ring to the amalgamation $ a \ bowtie ^ fj. $ we provide necessary and sufficient conditions for $ a \ bowtie ^ fj $ to be an elementary divisor ring where $ a $ and $ b $ are integral domains. in this case it is shown that $ a \ bowtie ^ fj $ is an hermite ring if and only if it is a b \ ' ezout ring. in particular, we study the transfer of the previous notions to the amalgamated duplication of a ring $ a $ along an $ a - $ submodule $ e $ of $ q ( a ) $ such that $ e ^ 2 \ subseteq e. $
arxiv:1006.0159
the pressure function is a fundamental object in various areas of mathematics. its regularity is studied to derive insights into phase transitions in certain physical systems or to determine the hausdorff dimension of self - affine sets. in this paper, we prove the analyticity of the pressure function for products of non - invertible matrices satisfying irreducibility and contractivity assumptions. additionally, we establish a variational principle for the pressure function, thereby generalizing previous results.
arxiv:2501.03590
we consider inverse problems of determining coefficients or time independent factors of source terms in radiative transport equations by means of a carleman estimate. we establish global lipschitz stability results under an additional condition which requires some strict positivity of the initial value or the given factor of the source, but we do not need any extra conditions on the domains of velocities, which is the main achievement of this article compared with the existing work by machida and yamamoto ( { \ it inverse problems } { \ bf 30 } 035010, 2014 ). the proof relies on a carleman estimate with a piecewise linear weight function according to the partition of the velocity domain.
arxiv:2009.04277
while canonical quantization solves many problems there are some problems where it fails. a close examination of the classical / quantum connection leads to a new connection that permits quantum and classical realms to coexist, as is the case in the real world. for those problems for which conventional quantization works well, the new procedures yield identical results. however, we offer an example that fails when quantized conventionally but succeeds when quantized with the new procedures.
arxiv:1811.05328
this paper deals with the existence, multiplicity, minimal complexity and global structure of the subharmonic solutions to a class of planar hamiltonian systems with periodic coefficients, the classical predator - prey model of v. volterra being its most paradigmatic example. by means of a topological approach based on techniques from global bifurcation theory, the first part of the paper ascertains their nature, multiplicity and minimal complexity, as well as their global minimal structure, in terms of the configuration of the function coefficients in the setting of the model. the second part of the paper introduces a dynamical system approach based on the theory of topological horseshoes that permits detecting, besides subharmonic solutions, ` ` chaotic - type ' ' solutions. as a byproduct of our analysis, we find that even the simplest predator - prey prototype models in periodic environments can provoke chaotic dynamics. this cannot occur in cooperative and quasi - cooperative dynamics, as a consequence of the ordering imposed by the maximum principle.
arxiv:2212.11280
based on an analysis of data obtained with the wide field camera 3 ( wfc3 ) on the hubble space telescope ( hst ) we report the identification of two distinct stellar populations in the core of the giant hii region 30doradus in the large magellanic cloud. the most compact and richest component coincides with the center of r136 and is ~ 1 myr younger than a second more diffuse clump, located ~ 5. 4 pc toward the northeast. we note that published spectral types of massive stars in these two clumps lend support to the proposed age difference. the morphology and age difference between the two sub - clusters suggest that an ongoing merger may be occurring within the core of 30doradus. this finding is consistent with the predictions of models of hierarchical fragmentation of turbulent giant molecular clouds, according to which star clusters would be the final products of merging smaller sub - structures.
arxiv:1208.3103
, fluid surface interaction, etc. = = = biomechanics = = = biomechanics is the application of mechanical principles to biological systems, such as humans, animals, plants, organs, and cells. biomechanics also aids in creating prosthetic limbs and artificial organs for humans. biomechanics is closely related to engineering, because it often uses traditional engineering sciences to analyze biological systems. some simple applications of newtonian mechanics and / or materials sciences can supply correct approximations to the mechanics of many biological systems. in the past decade, reverse engineering of materials found in nature such as bone matter has gained funding in academia. the structure of bone matter is optimized for its purpose of bearing a large amount of compressive stress per unit weight. the goal is to replace crude steel with bio - material for structural design. over the past decade the finite element method ( fem ) has also entered the biomedical sector highlighting further engineering aspects of biomechanics. fem has since then established itself as an alternative to in vivo surgical assessment and gained the wide acceptance of academia. the main advantage of computational biomechanics lies in its ability to determine the endo - anatomical response of an anatomy, without being subject to ethical restrictions. this has led fe modelling to the point of becoming ubiquitous in several fields of biomechanics while several projects have even adopted an open source philosophy ( e. g. biospine ). = = = computational fluid dynamics = = = computational fluid dynamics, usually abbreviated as cfd, is a branch of fluid mechanics that uses numerical methods and algorithms to solve and analyze problems that involve fluid flows. computers are used to perform the calculations required to simulate the interaction of liquids and gases with surfaces defined by boundary conditions. with high - speed supercomputers, better solutions can be achieved. 
ongoing research yields software that improves the accuracy and speed of complex simulation scenarios such as turbulent flows. initial validation of such software is performed using a wind tunnel with the final validation coming in full - scale testing, e. g. flight tests. = = = acoustical engineering = = = acoustical engineering is one of many other sub - disciplines of mechanical engineering and is the application of acoustics. acoustical engineering is the study of sound and vibration. these engineers work effectively to reduce noise pollution in mechanical devices and in buildings by soundproofing or removing sources of unwanted noise. the study of acoustics can range from designing a more efficient hearing aid, microphone, headphone, or recording studio to enhancing
https://en.wikipedia.org/wiki/Mechanical_engineering
many features of a molecule which are of physical interest ( e. g. molecular conformations, reaction rates ) are described in terms of its dynamics in configuration space. this article deals with the projection of molecular dynamics in phase space onto configuration space. specifically, we study the situation in which the phase space dynamics is governed by a stochastic langevin equation and study its relation with the configurational smoluchowski equation in three different scaling regimes : firstly, the smoluchowski equations in non - cartesian geometries are derived from the overdamped limit of the langevin equation. secondly, transfer operator methods are used to describe the metastable behaviour of the system at hand, and an explicit small - time asymptotics is derived, from which the smoluchowski equation turns out to govern the dynamics of the position coordinate ( without any assumptions on the damping ). by using an adequate reduction technique, these considerations are then extended to one - dimensional reaction coordinates. thirdly, we sketch three different approaches to approximate the metastable dynamics based on time - local information only.
arxiv:1502.01191
the first year results of wmap tentatively indicate running of the spectral index as well as a deficit of power in the low multipoles in the cmb spectrum. the former can be rather easily understood in the noncommutative inflation model, and the latter, as we shall show in this paper, still appears to be an anomaly, even though the noncommutative inflation model already suppresses the low multipoles to a certain degree. by fitting the power spectrum, we determine the string scale to be $ l _ s \ sim 4 \ times 10 ^ { - 29 } $ cm.
arxiv:astro-ph/0308458
we report on the search for gamma ray bursts ( grbs ) in the energy range 1 - 100 gev in coincidence with the prompt emission detected by satellites using the astrophysical radiation with ground - based observatory at yangbajing ( argo - ybj ) air shower detector. thanks to its mountain location ( yangbajing, tibet, p. r. china, 4300 m a. s. l. ), active surface ( about 6700 m $ ^ 2 $ of resistive plate chambers ), and large field of view ( about 2 sr, limited only by the atmospheric absorption ), the argo - ybj air shower detector is particularly suitable for the detection of unpredictable and short duration events such as grbs. the search is carried out using the " single particle technique ", i. e. counting all the particles hitting the detector without measurement of the energy and arrival direction of the primary gamma rays. between 2004 december 17 and 2009 april 7, 81 grbs detected by satellites occurred within the field of view of argo - ybj ( zenith angle < 45 deg ). it was possible to examine 62 of these for a > 1 gev counterpart in the argo - ybj data, finding no statistically significant emission. given the lack of detected spectra in this energy range, fluence upper limits are valuable, especially when the redshift is known and the correction for extragalactic absorption can be applied. the obtained fluence upper limits reach values as low as $ 10 ^ { - 5 } $ erg cm $ ^ { - 2 } $ in the 1 - 100 gev energy region. besides this individual search for a higher energy counterpart, a statistical study of the stack of all the grbs, both in time and in phase, was made, looking for a common feature in the grb high energy emission. no significant signal has been detected.
arxiv:0905.1189
we demonstrate coupling of magnetically trapped ultracold $ ^ 87 $ rb ground state atoms to a coherently driven superconducting coplanar resonator on an integrated atom chip. we measure the microwave field strength in the cavity through observation of the ac shift of the hyperfine transition frequency when the cavity is driven off - resonance from the atomic transition. the measured shifts are used to reconstruct the field in the resonator, in close agreement with transmission measurements of the cavity, giving proof of the coupling between atoms and resonator. when driving the cavity in resonance with the atoms, we observe rabi oscillations between atomic hyperfine states, demonstrating coherent control of the atomic states through the cavity field. the observation of two - photon rabi oscillations using an additional external radio frequency enables the preparation of magnetically trapped coherent superposition states near the superconducting cavity, which are required for the implementation of an atomic quantum memory.
arxiv:1707.02730
relativistic free - motion time - of - arrival theory for massive spin - 1 / 2 particles is systematically developed. contrary to the nonrelativistic time - of - arrival operator studied thoroughly in the previous literature, the relativistic time - of - arrival operator possesses self - adjoint extensions because of the particle - antiparticle symmetry. the nonrelativistic limit of our theory is in agreement with the nonrelativistic time - of - arrival theory.
arxiv:quant-ph/0608031
the quest for a model that is able to explain, describe, analyze and simulate real - world complex networks is of utmost practical as well as theoretical interest. in this paper we introduce and study a network model that is based on a latent attribute structure : each node is characterized by a number of features and the probability of the existence of an edge between two nodes depends on the features they share. features are chosen according to a process of indian - buffet type but with an additional random " fitness " parameter attached to each node, which determines its ability to transmit its own features to other nodes. as a consequence, a node ' s connectivity does not depend on its age alone, so also " young " nodes are able to compete and succeed in acquiring links. one of the advantages of our model for the latent bipartite " node - attribute " network is that it depends on few parameters with a straightforward interpretation. we provide some theoretical, as well as experimental, results regarding the power - law behaviour of the model and the estimation of the parameters. using experimental data, we also show how the proposed model for the attribute structure naturally captures most local and global properties ( e. g., degree distributions, connectivity and distance distributions ) that real networks exhibit. keywords : complex network, social network, attribute matrix, indian buffet process
arxiv:1407.7729
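a toy version of such a latent - attribute edge model can be simulated in a few lines. this sketch replaces the indian - buffet feature process with plain bernoulli feature draws and uses an ad - hoc edge probability that grows with the number of shared features, weighted by the nodes' fitness; all parameter names and the functional form are illustrative assumptions, not the paper's model:

```python
import random

def sample_attribute_network(n, n_features, p_feat, fitness, seed=0):
    """Toy latent-attribute network: each node draws a random subset
    of features (Bernoulli(p_feat) per feature -- a simplification of
    the Indian-buffet process); an edge between two nodes appears with
    probability 1 - (1-q0)^shared, damped by the nodes' fitness."""
    rng = random.Random(seed)
    feats = [{f for f in range(n_features) if rng.random() < p_feat}
             for _ in range(n)]
    edges = set()
    q0 = 0.3  # illustrative per-shared-feature edge propensity
    for i in range(n):
        for j in range(i + 1, n):
            shared = len(feats[i] & feats[j])
            q = 1 - (1 - q0) ** shared  # zero shared features -> no edge
            if rng.random() < q * min(1.0, fitness[i] * fitness[j]):
                edges.add((i, j))
    return feats, edges
```

nodes sharing no features never connect, so degree is driven by feature overlap and fitness rather than by node age, mirroring the qualitative mechanism described above.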
we exhibit an explicit construction for the second cohomology group $ h ^ 2 ( l, a ) $ for a lie ring $ l $ and a trivial $ l $ - module $ a $. we show how the elements of $ h ^ 2 ( l, a ) $ correspond one - to - one to the equivalence classes of central extensions of $ l $ by $ a $, where $ a $ now is considered as an abelian lie ring. for a finite lie ring $ l $ we also show that $ h ^ 2 ( l, \ c ^ * ) \ cong m ( l ) $, where $ m ( l ) $ denotes the schur multiplier of $ l $. these results match precisely the analogue situation in group theory.
arxiv:1310.0503
speaker anonymization aims to protect the privacy of speakers while preserving spoken linguistic information from speech. current mainstream neural network speaker anonymization systems are complicated, containing an f0 extractor, speaker encoder, automatic speech recognition acoustic model ( asr am ), speech synthesis acoustic model and speech waveform generation model. moreover, as an asr am is language - dependent and trained on english data, it is hard to adapt it to another language. in this paper, we propose a simpler self - supervised learning ( ssl ) - based method for language - independent speaker anonymization without any explicit language - dependent model, which can be easily used for other languages. extensive experiments were conducted on the voiceprivacy challenge 2020 datasets in english and the aishell - 3 dataset in mandarin to demonstrate the effectiveness of our proposed ssl - based language - independent speaker anonymization method.
arxiv:2202.13097
in many unpaired image domain translation problems, e. g., style transfer or super - resolution, it is important to keep the translated image similar to its respective input image. we propose the extremal transport ( et ) which is a mathematical formalization of the theoretically best possible unpaired translation between a pair of domains w. r. t. the given similarity function. inspired by the recent advances in neural optimal transport ( ot ), we propose a scalable algorithm to approximate et maps as a limit of partial ot maps. we test our algorithm on toy examples and on the unpaired image - to - image translation task. the code is publicly available at https : / / github. com / milenagazdieva / extremalneuraloptimaltransport
arxiv:2301.12874
we prove a decomposition theorem of the quantum cohomology d - module of the blowup of a smooth projective variety x along a smooth subvariety z. the main tools we use are shift operators and fourier analysis for equivariant quantum cohomology.
arxiv:2307.13555
we investigate the statistical properties of adjacency relationships in a two - dimensional vicsek model. we define adjacent edges for all particles at every time step by ( a ) delaunay triangulation and ( b ) euclidean distance, and obtain cumulative distributions $ p ( \ tau ) $ of lifetime $ \ tau $ of the edges. we find that the shape of $ p ( \ tau ) $ changes from an exponential to a power law depending on the interaction radius, which is a parameter of the vicsek model. we discuss the emergence of the power - law distribution from the viewpoint of first passage time problem for a fractional brownian motion.
arxiv:1812.06395
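the lifetime statistic underlying $ p ( \ tau ) $ can be extracted from a sequence of per - timestep edge sets with a small helper; this is a generic sketch of the bookkeeping only, independent of how the adjacency ( delaunay or euclidean ) is defined, and not code from the paper:

```python
def edge_lifetimes(adjacency_per_step):
    """Given a list of edge sets (one per time step), return the
    lifetimes tau: the number of consecutive steps each edge persists
    before disappearing (edges alive at the end are right-censored)."""
    lifetimes = []
    active = {}  # edge -> birth step
    prev = set()
    for t, edges in enumerate(adjacency_per_step):
        for e in edges - prev:          # edges born at step t
            active[e] = t
        for e in prev - edges:          # edges that died at step t
            lifetimes.append(t - active.pop(e))
        prev = edges
    # edges still alive at the final step
    lifetimes.extend(len(adjacency_per_step) - b for b in active.values())
    return lifetimes

def cumulative_distribution(lifetimes):
    """P(tau) = fraction of lifetimes >= tau, for each observed tau."""
    n = len(lifetimes)
    return {tau: sum(1 for x in lifetimes if x >= tau) / n
            for tau in sorted(set(lifetimes))}
```

plotting the resulting $ p ( \ tau ) $ on log - log versus log - linear axes is the usual way to distinguish the power - law and exponential regimes discussed above.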
let $ p $ and $ q $ be locally h \ " { o } lder functions in $ \ rr ^ n $, $ p > 0 $ and $ q \ geq 0 $. we study the emden - fowler equation $ - \ delta u + q ( x ) | \ nabla u | ^ a = p ( x ) u ^ { - \ gamma } $ in $ \ rr ^ n $, where $ a $ and $ \ gamma $ are positive numbers. our main result establishes that the above equation has a unique positive solution decaying to zero at infinity. our proof is elementary and combines the maximum principle for elliptic equations with a theorem of crandall, rabinowitz and tartar.
arxiv:math/0511164
practical constructions of lossless distributed source codes ( for the slepian - wolf problem ) have been the subject of much investigation in the past decade. in particular, near - capacity achieving code designs based on ldpc codes have been presented for the case of two binary sources, with a binary - symmetric correlation. however, constructing practical codes for the case of non - binary sources with arbitrary correlation remains by and large open. from a practical perspective it is also interesting to consider coding schemes whose performance remains robust to uncertainties in the joint distribution of the sources. in this work we propose the usage of reed - solomon ( rs ) codes for the asymmetric version of this problem. we show that algebraic soft - decision decoding of rs codes can be used effectively under certain correlation structures. in addition, rs codes offer natural rate adaptivity and performance that remains constant across a family of correlation structures with the same conditional entropy. the performance of rs codes is compared with dedicated and rate adaptive multistage ldpc codes ( varodayan et al. ' 06 ), where each ldpc code is used to compress the individual bit planes. our simulations show that in classical slepian - wolf scenario, rs codes outperform both dedicated and rate - adaptive ldpc codes under $ q $ - ary symmetric correlation, and are better than rate - adaptive ldpc codes in the case of sparse correlation models, where the conditional distribution of the sources has only a few dominant entries. in a feedback scenario, the performance of rs codes is comparable with both designs of ldpc codes. our simulations also demonstrate that the performance of rs codes in the presence of inaccuracies in the joint distribution of the sources is much better as compared to multistage ldpc codes.
arxiv:1106.2792
in modern neural networks like transformers, linear layers require significant memory to store activations during backward pass. this study proposes a memory reduction approach to perform backpropagation through linear layers. since the gradients of linear layers are computed by matrix multiplications, we consider methods for randomized matrix multiplications and demonstrate that they require less memory with a moderate decrease of the test accuracy. also, we investigate the variance of the gradient estimate induced by the randomized matrix multiplication. we compare this variance with the variance coming from gradient estimation based on the batch of samples. we demonstrate the benefits of the proposed method on the fine - tuning of the pre - trained roberta model on glue tasks.
arxiv:2201.13195
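one standard randomized matrix multiplication estimator samples inner indices uniformly with replacement and rescales so the estimate is unbiased; the sketch below illustrates the idea in plain python and is not necessarily the exact variant used in the paper:

```python
import random

def randomized_matmul(A, B, k, seed=0):
    """Unbiased sampled estimate of A @ B: pick k of the n inner
    indices uniformly with replacement and rescale by n / k.
    A is m x n, B is n x p, both given as nested lists. Only the k
    sampled columns of A / rows of B are touched, which is the source
    of the memory saving in a backward pass."""
    m, n, p = len(A), len(A[0]), len(B[0])
    rng = random.Random(seed)
    idx = [rng.randrange(n) for _ in range(k)]
    scale = n / k  # makes the estimator unbiased under uniform sampling
    C = [[0.0] * p for _ in range(m)]
    for t in idx:
        for i in range(m):
            a = A[i][t] * scale
            for j in range(p):
                C[i][j] += a * B[t][j]
    return C
```

in a linear layer the weight gradient is an input - activation / output - gradient product of exactly this shape, so storing only the sampled inner indices trades gradient variance for activation memory, which is the trade - off the paper studies.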
in this paper we will modify the milnor - - thurston map, which maps a one dimensional mapping to a piecewise linear map of the same entropy, and study its properties. this will allow us to give a simple proof of monotonicity of topological entropy for real polynomials and to better understand when a one dimensional map can and cannot be approximated by hyperbolic maps of the same entropy. in particular, we will find maps of particular combinatorics which cannot be approximated by hyperbolic maps of the same entropy.
arxiv:1901.06906
autonomous robotic manipulation in clutter is challenging. a large variety of objects must be perceived in complex scenes, where they are partially occluded and embedded among many distractors, often in restricted spaces. to tackle these challenges, we developed a deep - learning approach that combines object detection and semantic segmentation. the manipulation scenes are captured with rgb - d cameras, for which we developed a depth fusion method. employing pretrained features makes learning from small annotated robotic data sets possible. we evaluate our approach on two challenging data sets : one captured for the amazon picking challenge 2016, where our team nimbro came in second in the stowing and third in the picking task, and one captured in disaster - response scenarios. the experiments show that object detection and semantic segmentation complement each other and can be combined to yield reliable object perception.
arxiv:1810.00818
the unified gas kinetic scheme ( ugks ) is a direct modeling method based on the gas dynamical model on the mesh size and time step scales. with the implementation of particle transport and collision in a time - dependent flux function, the ugks can recover multiple flow physics from the kinetic particle transport to the hydrodynamic wave propagation. in comparison with direct simulation monte carlo ( dsmc ), the equations - based ugks can use the implicit techniques in the updates of macroscopic conservative variables and microscopic distribution function. the implicit ugks significantly increases the convergence speed for steady flow computations, especially in the highly rarefied and near continuum regime. in order to further improve the computational efficiency, for the first time a geometric multigrid technique is introduced into the implicit ugks, where the prediction step for the equilibrium state and the evolution step for the distribution function are both treated with multigrid acceleration. the multigrid implicit ugks ( miugks ) is used in the non - equilibrium flow study, which includes microflow, such as lid - driven cavity flow and the flow passing through a finite - length flat plate, and high speed one, such as supersonic flow over a square cylinder. the miugks shows 5 to 9 times efficiency increase over the previous implicit scheme. for the low speed microflow, the efficiency of miugks is several orders of magnitude higher than the dsmc. even for the hypersonic flow at mach number 5 and knudsen number 0. 1, the miugks is still more than 100 times faster than the dsmc method for a convergent steady state solution.
arxiv:1704.03151
$ \ { y _ { ij } : i = 1, \ dots, n ; j = 1, \ dots, n \ } $, indexed by pairs of nodes $ ij $, where $ y _ { ij } = 1 $ if the nodes $ ( i, j ) $ are connected by an edge and $ y _ { ij } = 0 $ otherwise. the basic assumption of ergms is that the structure in an observed graph $ y $ can be explained by a given vector of sufficient statistics $ s ( y ) $ which are a function of the observed network and, in some cases, nodal attributes. the probability of a graph $ y \ in \ mathcal { y } $ in an ergm is defined by : $ p ( y = y | \ theta ) = \ exp ( \ theta ^ t s ( y ) ) / c ( \ theta ) $, where $ \ theta $ is a vector of model parameters associated with $ s ( y ) $ and $ c ( \ theta ) = \ sum _ { y ' \ in \ mathcal { y } } \ exp ( \ theta ^ t s ( y ' ) ) $ is a normalising constant. = = network analysis = = = = = social network analysis = = = social network analysis examines the structure of relationships between social entities. these entities are often persons, but may also be groups, organizations, nation states, web sites, or scholarly publications. since the 1970s, the empirical study of networks has played a central role in social science, and many of the mathematical and statistical tools used for studying networks have first been developed in sociology. amongst many other applications, social network analysis has been used to understand the diffusion of innovation, news and rumors. similarly, it has been used to examine the spread of both diseases and health - related behaviors. it has also been applied to the
https://en.wikipedia.org/wiki/Network_science
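for tiny graphs the ergm probability above can be evaluated exactly by brute - force enumeration of the normalising constant $ c ( \ theta ) $; the sketch below uses the edge count as the single sufficient statistic, a minimal illustrative choice ( real ergms use richer statistics such as triangle counts ):

```python
import math
from itertools import combinations, product

def ergm_edge_prob(adj, theta):
    """Probability of a labelled undirected graph under an ERGM whose
    only sufficient statistic s(y) is the edge count.  The normalising
    constant c(theta) is computed by enumerating all 2^(n(n-1)/2)
    graphs, so this is only feasible for very small n."""
    n = len(adj)
    pairs = list(combinations(range(n), 2))
    s_obs = sum(adj[i][j] for i, j in pairs)       # observed statistic
    log_num = theta * s_obs                        # theta^T s(y)
    log_c = math.log(sum(math.exp(theta * sum(bits))
                         for bits in product([0, 1], repeat=len(pairs))))
    return math.exp(log_num - log_c)
```

with this single statistic the model factorises into independent edges with probability $ e ^ \ theta / ( 1 + e ^ \ theta ) $; in particular $ \ theta = 0 $ makes all graphs equally likely, a useful sanity check.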
the theory of waves and instabilities in a differentially rotating disc containing a poloidal magnetic field is developed within the framework of ideal magnetohydrodynamics. a continuous spectrum, for which the eigenfunctions are localized on individual magnetic surfaces, is identified but is found not to contain any instabilities associated with differential rotation. the normal modes of a weakly magnetized thin disc are studied by extending the asymptotic methods used previously to describe the equilibria. waves propagate radially in the disc according to a dispersion relation which is determined by solving an eigenvalue problem at each radius. the dispersion relation for a hydrodynamic disc is re - examined and the modes are classified according to their behaviour in the limit of large wavenumber. the addition of a magnetic field introduces new, potentially unstable, modes and also breaks up the dispersion diagram by causing avoided crossings. the stability boundary to the magnetorotational instability in the parameter space of polytropic equilibria is located by solving directly for marginally stable equilibria. for a given vertical magnetic field in the disc, bending of the field lines has a stabilizing effect and it is shown that stable equilibria exist which are capable of launching a predominantly centrifugally driven wind.
arxiv:astro-ph/9801306
branching flow - - a phenomenon known from steady wave propagation in two - dimensional weakly correlated random potentials - - is also present in the time - dependent schr \ " odinger equation for a single particle in one dimension, moving in a fluctuating random potential. we explore the two - dimensional parameter space of this model using numerical simulations and identify its classical regions, where just one classical parameter is sufficient for its specification, and its quantum region, where such a simplification is not possible. we also identify the region of the parameter space where known analytical results of a classical white - noise model are relevant. the qualitative behavior of quantum and classical particle dynamics is discussed in terms of the branching time scale and a new time scale related to the particle ' s kinetic energy.
arxiv:2209.01439
the design of deep graph models still remains to be investigated, and the crucial part is how to explore and exploit the knowledge from different hops of neighbors in an efficient way. in this paper, we propose a novel rnn - like deep graph neural network architecture by incorporating adaboost into the computation of the network ; the proposed graph convolutional network, called adagcn ( adaboosting graph convolutional network ), has the ability to efficiently extract knowledge from high - order neighbors of current nodes and then integrates knowledge from different hops of neighbors into the network in an adaboost way. different from other graph neural networks that directly stack many graph convolution layers, adagcn shares the same base neural network architecture among all ` ` layers ' ' and is recursively optimized, which is similar to an rnn. besides, we also theoretically establish the connection between adagcn and existing graph convolutional methods, presenting the benefits of our proposal. finally, extensive experiments demonstrate consistent state - of - the - art prediction performance on graphs across different label rates and the computational advantage of our approach adagcn ( code is available at https : / / github. com / datake / adagcn ).
arxiv:1908.05081
quantum coherence with respect to orthonormal bases has been studied extensively in the past few years. recently, bischof et al. [ phys. rev. lett. 123, 110402 ( 2019 ) ] generalized it to the case of general positive operator - valued measure ( povm ) measurements. such povm - based coherence measures, including the block coherence as a special case, have significant operational interpretations in quantifying the advantage of quantum states in quantum information processing. in this work we first establish an alternative framework for quantifying the block coherence and provide several block coherence measures. we then present several coherence measures with respect to povm measurements, and prove a conjecture on the $ l _ { 1 } $ - norm related povm coherence measure.
arxiv:2007.09587
this paper presents novel methods to approximate the nonlinear ac optimal power flow ( opf ) by tractable linear / quadratic programming ( lp / qp ) based opf problems that can be used for power system planning and operation. we derive a linear power flow approximation and consider a convex reformulation of the power losses in the form of absolute value functions. we show four ways to incorporate this approximation into lp / qp based opf problems. in a comprehensive case study the usefulness of our opf methods is analyzed and compared with an existing opf relaxation and approximation method. as a result, the errors on voltage magnitudes and angles are reasonable, while near - optimal results are obtained for typical scenarios. we find that our methods significantly reduce the computational complexity compared to the nonlinear ac - opf, making them a good choice for planning purposes.
arxiv:1711.00317
the objective of pose slam or pose - graph optimization ( pgo ) is to estimate the trajectory of a robot given odometric and loop closing constraints. state - of - the - art iterative approaches typically involve the linearization of a non - convex objective function and then repeatedly solve a set of normal equations. furthermore, these methods may converge to a local minimum, yielding sub - optimal results. in this work, we present, to the best of our knowledge, the first deep reinforcement learning ( drl ) based environment and agent for 2d pose - graph optimization. we demonstrate that the pose - graph optimization problem can be modeled as a partially observable markov decision process and evaluate performance on real - world and synthetic datasets. the proposed agent outperforms the state - of - the - art solver g2o on challenging instances where traditional nonlinear least - squares techniques may fail or converge to unsatisfactory solutions. experimental results indicate that iterative - based solvers bootstrapped with the proposed approach allow for significantly higher quality estimates. we believe that reinforcement learning - based pgo is a promising avenue to further accelerate research towards globally optimal algorithms. thus, our work paves the way to new optimization strategies in the 2d pose slam domain.
arxiv:2202.13221
the linear and non - linear dynamic response to an oscillatory shear flow of giant wormlike micelles consisting of pb - peo block copolymers is studied by means of fourier transform rheology. experiments are performed in the vicinity of the isotropic - nematic phase transition concentration, where the location of isotropic - nematic phase transition lines is determined independently. strong shear - thinning behaviour is observed due to critical slowing down of orientational diffusion as a result of the vicinity of the isotropic - nematic spinodal. this severe shear - thinning behaviour is shown to result in gradient shear banding. time - resolved small angle neutron scattering experiments are used to obtain insight in the microscopic phenomena that underly the observed rheological response. an equation of motion for the order - parameter tensor and an expression of the stress tensor in terms of the order - parameter tensor are used to interpret the experimental data, both in the linear and non - linear regime. scaling of the dynamic behaviour of the orientational order parameter and the stress is found when critical slowing down due to the vicinity of the isotropic - nematic spinodal is accounted for.
arxiv:0808.4054
based on a semiclassical picture of dyons, we present a nonperturbative model of a pure yang - - mills theory at any temperature, for an arbitrary simple gauge group. we argue that at low temperatures dyons drive the yang - - mills system for all groups to a phase where the ` eigenphases ' of the polyakov line are, as a vector, proportional to the weyl vector, the half sum of positive roots. for most gauge groups this means confinement, in particular for ` quarks ' in any representation of nonzero n - ality of the su ( n ) gauge group. at a critical temperature there is a 1st order phase transition for all groups ( except su ( 2 ), where the transition is 2nd order ), characterized by a jump of polyakov lines, irrespective of whether the gauge group has a nontrivial center or not.
arxiv:1011.5636
research on digital nudging has become increasingly popular in the information systems ( is ) community. this paper presents an overview of the current progress, a critical reflection and an outlook on further research regarding digital nudging in is. for this purpose, we conducted a comprehensive literature review as well as interviews with markus weinmann from rotterdam school of management at erasmus university, one of the first scholars who introduced digital nudging to the is community, and alexey voinov, director of the centre on persuasive systems for wise adaptive living at university of technology sydney. the findings uncover a gap between what we know about what constitutes digital nudging and how the consequent requirements can actually be put into practice. in this context, the original concept of nudging bears inherent challenges, e. g. regarding the focus on the individuals ' welfare, which hence also apply to digital nudging. moreover, we need a better understanding of how nudging in digital choice environments differs from that in the offline world. to further distinguish itself from other disciplines that have already tested various nudges in many different domains, digital nudging research in is may benefit from a strong design science perspective, going beyond tests of effectiveness and providing specific design principles for the different types of digital nudges.
arxiv:1911.08202
semantic parsing that translates natural language queries to sparql is of great importance for knowledge graph question answering ( kgqa ) systems. although pre - trained language models like t5 have achieved significant success in the text - to - sparql task, their generated outputs still exhibit notable errors specific to the sparql language, such as triplet flips. to address this challenge and further improve the performance, we propose an additional pre - training stage with a new objective, triplet order correction ( toc ), along with the commonly used masked language modeling ( mlm ), to collectively enhance the model ' s sensitivity to triplet order and sparql syntax. our method achieves state - of - the - art performances on three widely - used benchmarks.
arxiv:2410.05731
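the paper above does not publish its pre - training pipeline in the abstract, so the following is only a hypothetical sketch of how a triplet - order - corruption input could be generated for a toc - style objective : flip some ( subject, predicate, object ) triples and let the model predict which were flipped. the function name and the exact objective are assumptions.

```python
import random

def corrupt_triplets(triplets, p=0.5, rng=None):
    """flip each (subject, predicate, object) triple to
    (object, predicate, subject) with probability p, returning the
    corrupted triples and binary labels (1 = flipped) that a model could
    be trained to predict. hypothetical sketch, not the paper's code."""
    rng = rng or random.Random()
    out, labels = [], []
    for s, pred, o in triplets:
        if rng.random() < p:
            out.append((o, pred, s))
            labels.append(1)
        else:
            out.append((s, pred, o))
            labels.append(0)
    return out, labels
```

in a real pipeline the corrupted triples would be serialized back into sparql text before being fed to the sequence model.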
prediction of vehicle lane change maneuvers has gained a lot of momentum in the last few years. some recent works focus on predicting a vehicle ' s intention by predicting its trajectory first. this is not enough, as it ignores the context of the scene and the state of the surrounding vehicles ( which might pose a risk to the target vehicle ). other works assessed the risk posed by the surrounding vehicles only by considering their existence around the target vehicle, or by considering the distance and relative velocities between them and the target vehicle as two separate numerical features. in this work, we propose a solution that leverages knowledge graphs ( kgs ) to anticipate lane changes based on linguistic contextual information in a way that goes well beyond the capabilities of current perception systems. our solution takes the time to collision ( ttc ) with surrounding vehicles as input to assess the risk on the target vehicle. moreover, our kg is trained on the highd dataset using the transe model to obtain the knowledge graph embeddings ( kge ). then, we apply bayesian inference on top of the kg using the embeddings learned during training. finally, the model can predict lane changes two seconds ahead with a 97. 95 % f1 - score, surpassing the state of the art, and three seconds before the lane change with a 93. 60 % f1 - score.
arxiv:2312.06336
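the time to collision ( ttc ) used above has a standard 1d longitudinal definition : gap divided by closing speed, infinite when the gap is not closing. the sketch below implements that, plus a hypothetical discretization into linguistic risk labels of the kind a knowledge graph could consume — the 2 s threshold is an assumption, not the paper's value.

```python
def time_to_collision(gap_m, v_follower, v_leader, eps=1e-9):
    """time to collision (s) for a follower approaching a leader.
    gap_m: bumper-to-bumper distance (m); speeds in m/s.
    returns float('inf') when the gap is not closing."""
    closing_speed = v_follower - v_leader
    if closing_speed <= eps:
        return float('inf')
    return gap_m / closing_speed

def risk_level(ttc, threshold_s=2.0):
    """hypothetical mapping of ttc to linguistic risk labels."""
    if ttc < threshold_s:
        return 'high'
    if ttc < 2 * threshold_s:
        return 'medium'
    return 'low'
```

for example, a 30 m gap closed at 10 m/s gives a ttc of 3 s, i.e. 'medium' risk under the assumed thresholds.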
hospital readmission has become a critical metric of quality and cost of healthcare. medicare anticipates that nearly $ 17 billion is paid out on the 20 % of patients who are readmitted within 30 days of discharge. although several interventions such as transition care management and discharge reengineering have been practiced in recent years, their effectiveness and sustainability depend on how well they can identify and target patients at high risk of rehospitalization. based on the literature, most current risk prediction models fail to reach an acceptable accuracy level ; none of them considers a patient ' s history of readmission and the impact of patient attribute changes over time ; and they often do not discriminate between planned and unnecessary readmissions. tackling such drawbacks, we develop a new readmission metric based on administrative data that can identify potentially avoidable readmissions among all other types of readmission. we further propose a tree - based classification method to estimate the predicted probability of readmission that can directly incorporate a patient ' s history of readmission and changes in risk factors over time. the proposed methods are validated with 2011 - 12 veterans health administration data from inpatients hospitalized for heart failure, acute myocardial infarction, pneumonia, or chronic obstructive pulmonary disease in the state of michigan. results show improved discrimination power compared to the literature ( c - statistics > 80 % ) and good calibration.
arxiv:1402.5991
neutrino oscillation experiments and direct bounds on absolute masses constrain neutrino mass differences to fall into the microwave energy range, for most of the allowed parameter space. as a consequence of these recent phenomenological advances, older constraints on radiative neutrino decays based on diffuse background radiations and assuming strongly hierarchical masses in the ev range are now outdated. we thus derive new bounds on the radiative neutrino lifetime using the high precision cosmic microwave background spectral data collected by the far infrared absolute spectrophotometer instrument on board the cosmic background explorer. the lower bound on the lifetime is between a few x 10 ^ 19 s and 5 x 10 ^ 20 s, depending on the neutrino mass ordering and on the absolute mass scale. however, due to phase space limitations, the upper bound in terms of the effective magnetic moment mediating the decay is not better than ~ 10 ^ - 8 bohr magnetons. we also comment on possible improvements of these limits by means of recent diffuse infrared photon background data. we compare these bounds with pre - existing limits coming from laboratory or astrophysical arguments. we emphasize the complementarity of our results with others available in the literature.
arxiv:0705.4667
we give lower bounds for the degree of the discriminant with respect to y of separable polynomials f in k [ x, y ] over an algebraically closed field of characteristic zero. depending on the invariants involved in the lower bound, we give a geometrical characterisation of those polynomials having minimal discriminant, and give an explicit construction of all such polynomials in many cases. in particular, we show that irreducible monic polynomials with minimal discriminant coincide with coordinate polynomials. we obtain analogous partial results for the case of nonmonic or reducible polynomials by studying their gl2 ( k [ x ] ) - orbit and by establishing some combinatorial constraints on their newton polytope. our results suggest some natural extensions of the embedding line theorem of abhyankar - moh and of the nagata - coolidge problem to the case of unicuspidal curves of p1 x p1.
arxiv:1507.01091
the complex numbers are an important part of quantum theory, but are difficult to motivate from a theoretical perspective. we describe a simple formal framework for theories of physics, and show that if a theory of physics presented in this manner satisfies certain completeness properties, then it necessarily includes the complex numbers as a mathematical ingredient. central to our approach are the techniques of category theory, and we introduce a new category - theoretical tool, called the dagger - limit, which governs the way in which systems can be combined to form larger systems. these dagger - limits can be used to characterize the dagger - functor on the category of finite - dimensional hilbert spaces, and so can be used as an equivalent definition of the inner product. one of our main results is that in a nontrivial monoidal dagger - category with all finite dagger - limits and a simple tensor unit, the semiring of scalars embeds into an involutive field of characteristic 0 and orderable fixed field.
arxiv:0807.2927
scene graph generation ( sgg ) is built on top of detected objects to predict object pairwise visual relations for describing the image content abstraction. existing works have revealed that if the links between objects are given as prior knowledge, the performance of sgg is significantly improved. inspired by this observation, in this article, we propose a relation regularized network ( r2 - net ), which can predict whether there is a relationship between two objects and encode this relation into object feature refinement and better sgg. specifically, we first construct an affinity matrix among detected objects to represent the probability of a relationship between two objects. graph convolution networks ( gcns ) over this relation affinity matrix are then used as object encoders, producing relation - regularized representations of objects. with these relation - regularized features, our r2 - net can effectively refine object labels and generate scene graphs. extensive experiments are conducted on the visual genome dataset for three sgg tasks ( i. e., predicate classification, scene graph classification, and scene graph detection ), demonstrating the effectiveness of our proposed method. ablation studies also verify the key roles of our proposed components in performance improvement.
arxiv:2202.10826
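the core operation above — graph convolution over a relation - affinity matrix — can be sketched with the standard kipf - welling normalization. the toy affinity values and identity features below are assumptions for illustration; the real r2 - net learns the weights and stacks several such layers with nonlinearities.

```python
import numpy as np

def gcn_layer(A, X, W):
    """one graph-convolution step H = D^{-1/2} (A + I) D^{-1/2} X W
    over a relation-affinity matrix A (kipf-welling normalization)."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W

# toy example: 3 detected objects; A[i, j] is a (hypothetical) predicted
# probability of a relationship between objects i and j
A = np.array([[0.0, 0.9, 0.1],
              [0.9, 0.0, 0.0],
              [0.1, 0.0, 0.0]])
X = np.eye(3)   # placeholder object features
W = np.eye(3)   # identity weights; a real layer would learn W and apply relu
H = gcn_layer(A, X, W)
```

with symmetric affinities, identity features and identity weights, the output is just the symmetrically normalized affinity matrix, which makes the smoothing effect of the layer easy to inspect.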
hertzberg has constructed a quantum oscillon that decays into pairs of relativistic mesons with a power much greater than the radiation from classical oscillon decay. this result is often construed as a proof that quantum oscillons decay quickly, and so are inconsequential. we apply a construction similar to hertzberg ' s to the quantum kink. again it leads to a rapid decay via the emission of relativistic mesons. however, we find that this is the decay of a squeezed kink state to a stable kink state, and so it does not imply that the quantum kink is unstable. we then consider a time - dependent solution, which may be an oscillon, and we see that the argument proceeds identically.
arxiv:2305.18056
for any non - zero finite module m of finite projective dimension over a noetherian local ring r with maximal ideal m and residue field k, it is proved that the natural map ext _ r ( k, m ) - - > ext _ r ( k, m / mm ) is non - zero when r is regular and is zero otherwise. a noteworthy aspect of the proof is the use of stable cohomology. applications include computations of bass series over certain local rings.
arxiv:1208.4458
solving for the lowest energy eigenstate of the many - body schrodinger equation is a cornerstone problem that hinders understanding of a variety of quantum phenomena. the difficulty arises from the exponential nature of the hilbert space which casts the governing equations as an eigenvalue problem of exponentially large, structured matrices. variational methods approach this problem by searching for the best approximation within a lower - dimensional variational manifold. in this work we use graph neural networks to define a structured variational manifold and optimize its parameters to find high quality approximations of the lowest energy solutions on a diverse set of heisenberg hamiltonians. using graph networks we learn distributed representations that by construction respect underlying physical symmetries of the problem and generalize to problems of larger size. our approach achieves state - of - the - art results on a set of quantum many - body benchmark problems and works well on problems whose solutions are not positive - definite. the discussed techniques hold promise of being a useful tool for studying quantum many - body systems and providing insights into optimization and implicit modeling of exponentially - sized objects.
arxiv:2110.06390
future wireless communication systems must simultaneously address multiple challenges : they must ensure accurate data detection, deliver high quality of service ( qos ), and enable high data rates with low - complexity system design. additionally, they need to reduce energy consumption and latency without increasing system complexity. although orthogonal frequency division multiplexing ( ofdm ) is a commonly used waveform in 4g and 5g systems, it has limitations in handling significant delay and doppler spread in high mobility scenarios. to overcome these weaknesses, a novel waveform named orthogonal time frequency space ( otfs ) has been proposed, which aims to improve upon ofdm by closely matching signals to channel behavior. in this study, we propose a novel strategy that enables operators to dynamically select the best waveform based on estimated mobile user parameters. we use an integrated radar sensing and communication ( isac ) system to estimate delay and doppler, as well as speed and range. this approach allows the base station to adapt to the mobile target, thereby enhancing the performance of wireless communication systems in high mobility scenarios at low complexity. simulation results demonstrate the effectiveness of our proposed approach and show that it outperforms existing methods.
arxiv:2408.03460
no. the title question was posed by d. kalyuzhnyi - verbovetskyi [ 1, problem 1. 3 ]. let \ mathcal { l ( h, k ) } denote the set of all bounded linear operators between a pair of hilbert spaces \ mathcal { h, k }, and let \ mathbb { d } ^ { n } and \ mathbb { t } ^ { n } denote the open unit polydisk, and the unit n - torus, respectively.
arxiv:1201.3286
we remark on a positivity - preserving extension of a linear functional defined on a subalgebra of the algebra of continuous functions on a subset of $ \ mathbb { r } ^ n $.
arxiv:1605.00368
we give a direct and simple proof of touchard ' s continued fraction, provide an extension of it, and transform it into similar expansions related to motzkin and schroeder numbers. another proof is then given that uses only induction. we use this machinery on two examples that appear in recent papers of josuat - verges ; with an additional parameter, these two can be treated simultaneously.
arxiv:1102.5186
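the motzkin and schroeder numbers mentioned above can be generated, independently of the continued - fraction machinery of the paper, by standard convolution recurrences ( a sketch for cross - checking small cases, not the paper's method ) : $ m_{n+1} = m_n + \sum_k m_k m_{n-1-k} $ and $ s_n = s_{n-1} + \sum_k s_k s_{n-1-k} $.

```python
def motzkin(n_max):
    """motzkin numbers via M_{n+1} = M_n + sum_k M_k * M_{n-1-k}."""
    M = [1]
    for n in range(n_max):
        M.append(M[n] + sum(M[k] * M[n - 1 - k] for k in range(n)))
    return M

def schroeder(n_max):
    """large schroeder numbers via S_n = S_{n-1} + sum_k S_k * S_{n-1-k}."""
    S = [1]
    for n in range(1, n_max + 1):
        S.append(S[n - 1] + sum(S[k] * S[n - 1 - k] for k in range(n)))
    return S
```

the first few values ( 1, 1, 2, 4, 9, 21, 51 and 1, 2, 6, 22, 90 ) match the sequences' known initial terms.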
we use the suita conjecture ( now a theorem ) to prove that for any domain $ \ omega \ subset \ mathbb { c } $ its bergman kernel $ k ( \ cdot, \ cdot ) $ satisfies $ k ( z _ 0, z _ 0 ) = \ hbox { volume } ( \ omega ) ^ { - 1 } $ for some $ z _ 0 \ in \ omega $ if and only if $ \ omega $ is either a disk minus a ( possibly empty ) closed polar set or $ \ mathbb { c } $ minus a ( possibly empty ) closed polar set. when $ \ omega $ is bounded with $ c ^ { \ infty } $ - boundary, we provide a simple proof of this using the zero set of the szeg \ " o kernel. finally, we show that this theorem fails to hold in $ \ mathbb { c } ^ n $ for $ n > 1 $ by constructing a bounded complete reinhardt domain ( with algebraic boundary ) which is strongly convex and not biholomorphic to the unit ball $ \ mathbb { b } ^ n \ subset \ mathbb { c } ^ n $.
arxiv:2001.01856
we prove that analogues of the hardy - littlewood generalised twin prime conjecture for almost primes hold on average. our main theorem establishes an asymptotic formula for the number of integers $ n = p _ 1p _ 2 \ leq x $ such that $ n + h $ is a product of exactly two primes which holds for almost all $ | h | \ leq h $ with $ \ log ^ { 19 + \ varepsilon } x \ leq h \ leq x ^ { 1 - \ varepsilon } $, under a restriction on the size of one of the prime factors of $ n $ and $ n + h $. additionally, we consider correlations $ n, n + h $ where $ n $ is a prime and $ n + h $ has exactly two prime factors, establishing an asymptotic formula which holds for almost all $ | h | \ leq h $ with $ x ^ { 1 / 6 + \ varepsilon } \ leq h \ leq x ^ { 1 - \ varepsilon } $.
arxiv:2102.12297
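the objects counted above — integers $ n = p_1 p_2 $ that are products of exactly two primes ( with multiplicity, i.e. $ \omega ( n ) = 2 $ ) — can be enumerated by brute force for small ranges, as a sanity check on the asymptotics rather than anything resembling the paper's analytic method :

```python
def is_e2(n):
    """true iff n is a product of exactly two primes counted with
    multiplicity, i.e. Omega(n) = 2 (so 4 = 2*2 qualifies)."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
            if count > 2:
                return False
        d += 1
    if n > 1:
        count += 1
    return count == 2

def count_pairs(x, h):
    """count n <= x with both n and n + h products of exactly two primes."""
    return sum(1 for n in range(4, x + 1) if is_e2(n) and is_e2(n + h))
```

for instance the products of two primes up to 30 are 4, 6, 9, 10, 14, 15, 21, 22, 25, 26, and ( 4, 6 ) is the only such pair at shift h = 2 in that range.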
the effect of phase space general noncommutativity on producing deformed coherent squeezed states is examined. a two - dimensional noncommutative quantum system supported by a deformed mathematical structure similar to that of hadamard billiards is obtained and the behavior of its components is monitored in time. it is assumed that the independent degrees of freedom are two \ emph { free } 1d harmonic oscillators ( ho ' s ), so the system hamiltonian does not contain interaction terms. through the noncommutative deformation, parameterized by a seiberg - witten transform on the original canonical variables, one gets the standard commutation relations for the new ones, such that the obtained hamiltonian then represents two \ emph { interacting } 1d ho ' s. by assuming that one ho is inverted relative to the other, we show that their effective interaction induces a squeezing dynamics for initial coherent states imaged in the phase space. a suitable pattern of logarithmic spirals is obtained and some relevant properties are discussed in terms of wigner functions, which are essential to put in evidence the effects of the noncommutativity.
arxiv:1501.03661
classically, the supersymmetric wilson loop on a null polygonal contour possesses all symmetries required to match it onto non - mhv amplitudes in maximally supersymmetric yang - mills theory. however, to define it quantum mechanically, one is forced to regularize it, since perturbative loop diagrams are not well - defined due to the presence of ultraviolet divergences stemming from integration in the vicinity of the cusps. a regularization adopted by practitioners, allowing one to use the spinor helicity formalism on the one hand and to systematically go to higher orders of perturbation theory on the other, is based on a version of dimensional regularization known as the four - dimensional helicity scheme. recently it was demonstrated that its use for the super wilson loop at one loop breaks both conformal symmetry and poincare supersymmetry. presently, we exhibit the origin of these effects and demonstrate how one can undo this breaking. the phenomenon is akin to the one emerging in renormalization group mixing of conformal operators in conformal theories when one uses dimensional regularization. the rotation matrix to the diagonal basis is found by means of computing the anomaly in the ward identity for the conformal boost. we then apply this ideology to the super wilson loop : we compute its one - loop conformal anomaly and find that the anomaly depends on its grassmann coordinates. by subtracting this anomalous contribution from the super wilson loop we restore its interpretation as a dual description for reduced non - mhv amplitudes, which are expressed in terms of superconformal invariants.
arxiv:1201.6073
distributionally robust reinforcement learning ( dr - rl ) aims to derive a policy optimizing the worst - case performance within a predefined uncertainty set. despite extensive research, previous dr - rl algorithms have predominantly favored model - based approaches, with limited availability of model - free methods offering convergence guarantees or sample complexities. this paper proposes a model - free dr - rl algorithm leveraging the multi - level monte carlo ( mlmc ) technique to close such a gap. our innovative approach integrates a threshold mechanism that ensures finite sample requirements for algorithmic implementation, a significant improvement over previous model - free algorithms. we develop algorithms for uncertainty sets defined by total variation, chi - square divergence, and kl divergence, and provide finite sample analyses under all three cases. remarkably, our algorithms represent the first model - free dr - rl approach featuring finite sample complexity for total variation and chi - square divergence uncertainty sets, while also offering an improved sample complexity and broader applicability compared to existing model - free dr - rl algorithms for the kl divergence model. the complexities of our method establish the tightest results for all three uncertainty models in model - free dr - rl, underscoring the effectiveness and efficiency of our algorithm, and highlighting its potential for practical applications.
arxiv:2406.17096
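the mlmc technique referred to above can be illustrated in its generic, randomized ( blanchet - glynn style ) form : an unbiased estimator of a nonlinear function g of an expectation, $ g ( \mathbb{e}[x] ) $, built from a random level with geometrically growing sample sizes. this is only a sketch of the general device, not the paper's dr - rl algorithm or its threshold mechanism.

```python
import random

def unbiased_mlmc(sample, g, r=2 ** -1.5, rng=random):
    """one draw of a randomized multi-level monte carlo estimator of
    g(E[X]) for smooth g. `sample()` draws one copy of X."""
    # random level L with P(L = l) = (1 - r) * r**l
    L = 0
    while rng.random() < r:
        L += 1
    p_L = (1 - r) * r ** L
    n = 2 ** L
    xs = [sample() for _ in range(2 * n)]
    m_all = sum(xs) / (2 * n)
    m_odd = sum(xs[0::2]) / n
    m_even = sum(xs[1::2]) / n
    # antithetic level-L correction; its expectations telescope so that
    # E[estimate] = g(E[X]) rather than the biased E[g(X)]
    delta = g(m_all) - 0.5 * (g(m_odd) + g(m_even))
    return g(sample()) + delta / p_L
```

averaging many draws for g(m) = m^2 and bernoulli(0.5) samples recovers g(E[X]) = 0.25, whereas the naive plug-in average of g(X) concentrates on the biased value E[X^2] = 0.5.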
we show how hot qcd equations of state can be adapted to make definite predictions for the quark - gluon plasma at rhic. we consider equations of state up to $ o ( g ^ 5 ) $ and $ o [ g ^ 6 ( ln ( 1 / g ) + \ delta ) ] $. our method involves the extraction of equilibrium distribution functions for gluons and quarks from these equations of state by capturing the interaction effects entirely in the effective chemical potentials. we further utilize these distribution functions to study the screening length in hot qcd and the dissociation phenomenon of heavy quarkonia states by combining this understanding with semi - classical transport theory.
arxiv:0805.3199
x - ray interferometry is an emerging imaging modality with a wide variety of potential clinical applications, including lung and breast imaging, as well as in non - destructive testing, such as additive manufacturing and porosimetry. a grating interferometer uses a diffraction grating to produce a periodic interference pattern and measures how a patient or sample perturbs the pattern, producing three unique images that highlight x - ray absorption, refraction, and small angle scattering, known as the transmission, differential - phase, and dark - field images, respectively. image artifacts that are unique to x - ray interferometry are introduced when assuming the fringe pattern is perfectly sinusoidal and the phase steps are evenly spaced. inaccuracies in grating position, coupled with multi - harmonic fringes, lead to remnant oscillations and phase wraparound artifacts. we have developed an image recovery algorithm that uses additional harmonics, direct relative phase fitting, and phase step corrections to prevent them. the direct relative phase fitting removes the phase wraparound artifact. correcting the phase step positions and introducing the additional harmonic removes the grating remnant artifact present in the transmission, differential - phase, and dark - field images. by modifying existing algorithms, the fit to the fringe pattern is greatly improved and artifacts are minimized, as we demonstrate with the imaging of several samples, including pmma microspheres, ex vivo formalin fixed mouse lungs, and porous alumina.
arxiv:2503.23148
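the baseline phase - stepping retrieval that the abstract says breaks down — assuming a perfectly sinusoidal fringe and evenly spaced steps — can be sketched as a first - harmonic fourier fit. the synthetic scan values below are illustrative assumptions, not measured data; the paper's contribution is precisely the corrections ( extra harmonics, step corrections ) that this idealized sketch omits.

```python
import numpy as np

def fit_fringe(intensities, steps):
    """fit I_k = a0 + a1*cos(x_k + phi) for evenly spaced phase steps x_k
    over one period, via the first discrete fourier component."""
    I = np.asarray(intensities, float)
    x = np.asarray(steps, float)
    a0 = I.mean()
    c = 2.0 / len(I) * np.sum(I * np.exp(-1j * x))   # c = a1 * exp(i*phi)
    return a0, np.abs(c), np.angle(c)

def interferometry_images(sample, reference, steps):
    """transmission, differential-phase and dark-field signals from sample
    and reference phase-stepping scans (ideal single-harmonic model)."""
    a0s, a1s, ps = fit_fringe(sample, steps)
    a0r, a1r, pr = fit_fringe(reference, steps)
    transmission = a0s / a0r
    dphase = (ps - pr + np.pi) % (2 * np.pi) - np.pi   # wrap to (-pi, pi]
    darkfield = (a1s / a0s) / (a1r / a0r)              # visibility ratio
    return transmission, dphase, darkfield

# synthetic 8-step scans following the ideal model (illustrative numbers)
steps = [2 * np.pi * k / 8 for k in range(8)]
reference = [100 * (1 + 0.30 * np.cos(x + 0.2)) for x in steps]
sample = [60 * (1 + 0.15 * np.cos(x + 0.7)) for x in steps]
T, dphi, DF = interferometry_images(sample, reference, steps)
```

on this ideal input the three signals are recovered exactly ( transmission 0.6, phase shift 0.5 rad, dark - field 0.5 ); real data with multi - harmonic fringes and mispositioned steps is where the artifacts discussed above appear.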
we show that any toric k \ " ahler cone with smooth compact cross - section admits a family of calabi - yau cone metrics with conical singularities along its toric divisors. the family is parametrized by the reeb cone and the angles are given explicitly in terms of the reeb vector field. the result is optimal, in the sense that any toric calabi - yau cone metric with conical singularities along the toric divisor ( and smooth elsewhere ) belongs to this family. we also provide examples and interpret our results in terms of sasaki - einstein metrics.
arxiv:2005.03502
in a recent paper ( chabrier et al. 2019 ), we have derived a new equation of state ( eos ) for dense hydrogen / helium mixtures which covers the temperature - density domain from solar - type stars to brown dwarfs and gaseous planets. this eos is based on the so - called additive volume law and thus does not take into account the interactions between the hydrogen and helium species. in the present paper, we go beyond these calculations by taking into account h / he interactions, derived from quantum molecular dynamics simulations. these interactions, which eventually lead to h / he phase separation, become important at low temperature and high density, in the domain of brown dwarfs and giant planets. the tables of this new eos are made publicly available.
arxiv:2107.04434
we construct a high - order adaptive time stepping scheme for vesicle suspensions with viscosity contrast. the high - order accuracy is achieved using a spectral deferred correction ( sdc ) method, and adaptivity is achieved by estimating the local truncation error with the numerical error of physically constant values. numerical examples demonstrate that our method can handle suspensions with vesicles that are tumbling, tank - treading, or both. moreover, we demonstrate that a user - prescribed tolerance can be automatically achieved for simulations with long time horizons.
arxiv:1409.0212
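adaptivity driven by a local - truncation - error estimate, as used above, can be illustrated with a generic step - doubling controller ( here wrapped around classical rk4 on a scalar ode — a hedged stand - in, not the paper's sdc scheme or its vesicle solver ) :

```python
def rk4_step(f, t, y, h):
    """one classical runge-kutta step (order 4)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def adaptive_integrate(f, t0, y0, t_end, tol, h=0.1):
    """integrate y' = f(t, y) with step-doubling adaptivity: estimate the
    local truncation error by comparing one step of size h against two
    steps of size h/2, then adjust h toward the error target."""
    t, y = t0, y0
    while t < t_end:
        h = min(h, t_end - t)
        y_big = rk4_step(f, t, y, h)
        y_half = rk4_step(f, t + h / 2, rk4_step(f, t, y, h / 2), h / 2)
        err = abs(y_half - y_big)
        if err <= tol:                      # accept the more accurate value
            t, y = t + h, y_half
        # shrink on rejection, grow (capped) otherwise; exponent 1/5 for rk4
        h *= min(4.0, max(0.1, 0.9 * (tol / max(err, 1e-16)) ** 0.2))
    return y
```

for y' = -y on [0, 1] with a per - step tolerance of 1e-8, the controller settles on a step size near the error target and reproduces exp(-1) to well within the tolerance budget.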
using molecular manipulation in a cryogenic scanning tunneling microscope, the structure and rearrangement of sexiphenyl molecules at the buried interface of the organic film with the cu ( 110 ) substrate surface have been revealed. it is shown that a reconstruction of the first monolayer of flat lying molecules occurs due to the van der waals pressure from subsequent layers. in this rearrangement, additional sexiphenyl molecules are forced into the established complete monolayer and adopt an edge - on configuration. incorporation of second layer molecules into the first layer is also demonstrated by purposely pushing sexiphenyl molecules with the stm tip. the results indicate that even chemisorbed organic layers at interfaces can be significantly influenced by external stress from van der waals forces of subsequent layers.
arxiv:1904.11722
recent advancements in quantum technologies have opened new horizons for exploring the physical world in ways once deemed impossible. central to these breakthroughs is the concept of quantum advantage, where quantum systems outperform their classical counterparts in solving specific tasks. while much attention has been devoted to computational speedups, quantum advantage in learning physical systems remains a largely untapped frontier. here, we present a photonic implementation of a quantum - enhanced protocol for learning the probability distribution of a multimode bosonic displacement process. by harnessing the unique properties of continuous - variable quantum entanglement, we obtain a massive advantage in sample complexity with respect to conventional methods without entangled resources. with approximately 5 db of two - mode squeezing - - corresponding to imperfect einstein - - podolsky - - rosen ( epr ) entanglement - - we learn a 100 - mode bosonic displacement process using 11. 8 orders of magnitude fewer samples than a conventional scheme. our results demonstrate that even with non - ideal, noisy entanglement, a significant quantum advantage can be realized in continuous - variable quantum systems. this marks an important step towards practical quantum - enhanced learning protocols with implications for quantum metrology, certification, and machine learning.
arxiv:2502.07770
we discuss the schr \ " odinger functional in lattice qcd with staggered fermions and relate it, in the classical continuum limit, to the schr \ " odinger functional regularized with wilson fermions. we compute the strong coupling constant defined via the schr \ " odinger functional with staggered fermions at one loop and show that it agrees with the continuum running coupling constant in the schr \ " odinger functional formalism. we compute this running coupling in the ` ` weak coupling phase ' ' of many flavor qcd numerically at several values of the bare coupling and for several system sizes from $ l / a = 4 $ to 12. the results indicate that the $ \ beta $ - function for 16 flavors has the opposite sign than for few flavor qcd, in agreement with a recent claim, and with the perturbative prediction.
arxiv:hep-lat/9709159
given a finite tensor category $ \ ca $, an exact indecomposable $ \ ca $ - module category $ \ mo $, and a tensor subcategory $ \ do \ subseteq \ ca ^ * _ \ mo $, we describe a way to produce \ textit { exact } commutative algebras in the center $ z ( \ ca ) $, measuring this inclusion. the construction of such algebras is done in an analogous way as presented by shimizu \ cite { sh2 }, but using instead the \ textit { relative ( co ) end }, a categorical tool developed in \ cite { bm } in the realm of representations of tensor categories. we provide some explicit computations.
arxiv:2212.07390
in this paper we offer a definition of monogenicity for functions defined on $ \ rr ^ { n + 1 } $ with values in the clifford algebra $ \ rr _ n $ following an idea inspired by the recent papers \ cite { gs }, \ cite { advances }. this new class of monogenic functions contains the polynomials ( and, more in general, power series ) with coefficients in the clifford algebra $ \ rr _ n $. we will prove a cauchy integral formula as well as some of its consequences. finally, we deal with the zeroes of some polynomials and power series.
arxiv:0708.3595
detr has set up a simple end - to - end pipeline for object detection by formulating this task as a set prediction problem, showing promising potential. despite its notable advancements, this paper identifies two key forms of misalignment within the model : classification - regression misalignment and cross - layer target misalignment. both issues impede detr ' s convergence and degrade its overall performance. to tackle both issues simultaneously, we introduce a novel loss function, termed as align loss, designed to resolve the discrepancy between the two tasks. align loss guides the optimization of detr through a joint quality metric, strengthening the connection between classification and regression. furthermore, it incorporates an exponential down - weighting term to facilitate a smooth transition from positive to negative samples. align - detr also employs many - to - one matching for supervision of intermediate layers, akin to the design of h - detr, which enhances robustness against instability. we conducted extensive experiments, yielding highly competitive results. notably, our method achieves a 49. 3 % ( + 0. 6 ) ap on the h - detr baseline with the resnet - 50 backbone. it also sets a new state - of - the - art performance, reaching 50. 5 % ap in the 1x setting and 51. 7 % ap in the 2x setting, surpassing several strong competitors. our code is available at https : / / github. com / felixcaae / aligndetr.
arxiv:2304.07527
Fifth generation (5G) communication scenarios, such as cellular networks and the emerging machine-type communications, will produce massive numbers of small packets. To support massive connectivity and avoid the signaling overhead caused by transmitting those small packets, this paper proposes a novel method to improve the transmission efficiency of the wireless uplink channel under massive connections. The proposed method jointly combines compressive sensing (CS) with power-domain NOMA; notably, neither scheduling nor centralized power allocation is necessary in the method. Both analysis and simulation show that the method can support up to two or three times overloading.
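As a toy illustration of the power-domain NOMA ingredient only (BPSK symbols, the fixed power split, and the function names are illustrative assumptions; the compressive-sensing detection that the paper combines with NOMA is omitted), two users can share one resource via superposition coding and successive interference cancellation (SIC):

```python
import math

def superpose(s1: int, s2: int, p1: float = 0.8, p2: float = 0.2) -> float:
    """Superposition coding: weight each user's BPSK symbol (+1/-1)
    by the square root of its power share (p1 + p2 = 1, p1 > p2)."""
    return math.sqrt(p1) * s1 + math.sqrt(p2) * s2

def sic_decode(y: float, p1: float = 0.8, p2: float = 0.2):
    """SIC receiver: decode the high-power user first, treating the other
    as interference, then subtract its contribution and decode the rest."""
    s1_hat = 1 if y >= 0 else -1
    residual = y - math.sqrt(p1) * s1_hat  # cancel user 1's signal
    s2_hat = 1 if residual >= 0 else -1
    return s1_hat, s2_hat
```

In this noiseless toy setting SIC recovers both users exactly; overloading beyond the number of orthogonal resources is what the CS-aided detection in the paper then exploits.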
arxiv:1704.05349
Biometric systems based on brain activity have been proposed as an alternative to passwords or as a complement to current authentication techniques. By leveraging the unique brainwave patterns of individuals, these systems offer the possibility of creating authentication solutions that are resistant to theft, hands-free, accessible, and potentially even revocable. However, despite the growing stream of research in this area, faster progress is hindered by reproducibility problems. Issues such as the lack of standard reporting schemes for performance results and system configuration, or the absence of common evaluation benchmarks, make comparability and proper assessment of different biometric solutions challenging. Further barriers are erected to future work when, as so often happens, source code is not published open access. To bridge this gap, we introduce NeuroIDBench, a flexible open-source tool to benchmark brainwave-based authentication models. It incorporates nine diverse datasets, implements a comprehensive set of pre-processing parameters and machine learning algorithms, enables testing under two common adversary models (known vs. unknown attacker), and allows researchers to generate full performance reports and visualizations. We use NeuroIDBench to investigate the shallow classifiers and deep learning-based approaches proposed in the literature, and to test robustness across multiple sessions. We observe a 37.6% reduction in Equal Error Rate (EER) for unknown-attacker scenarios (typically not tested in the literature), and we highlight the importance of session variability for brainwave authentication. All in all, our results demonstrate the viability and relevance of NeuroIDBench in streamlining fair comparisons of algorithms, thereby furthering the advancement of brainwave-based authentication through robust methodological practices.
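The EER reported above is a standard verification metric: the operating point where the false acceptance rate (FAR) equals the false rejection rate (FRR). A minimal threshold-sweep sketch (this is the generic definition, not NeuroIDBench's implementation; higher scores are assumed to mean "more genuine"):

```python
def equal_error_rate(genuine, impostor):
    """Estimate the EER from genuine and impostor similarity scores.

    Sweeps each observed score as an acceptance threshold and returns the
    mean of FAR and FRR at the threshold where they are closest.
    """
    best = None  # (threshold, far, frr) with the smallest |far - frr|
    for t in sorted(set(genuine) | set(impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)  # false accepts
        frr = sum(s < t for s in genuine) / len(genuine)     # false rejects
        if best is None or abs(far - frr) < abs(best[1] - best[2]):
            best = (t, far, frr)
    return (best[1] + best[2]) / 2.0
```

Perfectly separated score distributions yield an EER of 0, while fully overlapping ones approach 0.5 (chance level), which is why EER is a convenient single number for comparing authentication models.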
arxiv:2402.08656
Recent text-to-image diffusion models achieve impressive visual quality through extensive scaling of training data and model parameters, yet they often struggle with complex scenes and fine-grained details. Inspired by the self-reflection capabilities emergent in large language models, we propose ReflectionFlow, an inference-time framework enabling diffusion models to iteratively reflect upon and refine their outputs. ReflectionFlow introduces three complementary inference-time scaling axes: (1) noise-level scaling to optimize latent initialization; (2) prompt-level scaling for precise semantic guidance; and, most notably, (3) reflection-level scaling, which explicitly provides actionable reflections to iteratively assess and correct previous generations. To facilitate reflection-level scaling, we construct GenRef, a large-scale dataset comprising 1 million triplets, each containing a reflection, a flawed image, and an enhanced image. Leveraging this dataset, we efficiently perform reflection tuning on the state-of-the-art diffusion transformer FLUX.1-dev by jointly modeling multimodal inputs within a unified framework. Experimental results show that ReflectionFlow significantly outperforms naive noise-level scaling methods, offering a scalable and compute-efficient solution toward higher-quality image synthesis on challenging tasks.
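The reflection-level scaling loop can be caricatured as a generate-assess-refine cycle. In this sketch every callable is a hypothetical stand-in (`generate`, `assess`, and `refine` are placeholders, not the paper's components, and the string reflection is a trivial substitute for the learned feedback):

```python
def reflection_loop(generate, assess, refine, prompt, rounds=3):
    """Toy inference-time refinement loop: keep the best-scoring candidate,
    feed a reflection on it back into the refiner each round."""
    image = generate(prompt)
    best = (assess(image), image)  # (score, image) of best candidate so far
    for _ in range(rounds):
        reflection = f"score={best[0]:.2f}"  # stand-in for actionable feedback
        image = refine(prompt, best[1], reflection)
        score = assess(image)
        if score > best[0]:
            best = (score, image)
    return best
```

The key design point this mimics is that compute is spent at inference time: more rounds means more chances to detect and correct flaws in previous generations, without retraining the generator.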
arxiv:2504.16080