text | source |
|---|---|
we present a numerical study on the light transport properties and statistics of transmission channels in random media with inhomogeneous disorder. for the case of longitudinal inhomogeneity of disorder, we find that the statistics of the transmission channels is independent of the inhomogeneity and the system can be equivalent to a counterpart with homogeneous disorder strength, both of which have the same statistical distribution of the transmission channels. however, for the case of transverse inhomogeneity of disorder, such equivalence does not exist. moreover, the transmission eigenvalues are pushed to the two ends of the distribution and the distribution of the total transmission is broadened since the spatial structure gives rise to larger and smaller transmitted incident channels. | arxiv:1601.01120 |
the millimeter wavelength emission from grb 991208 is the second brightest ever detected, yielding a unique data set. we present here well - sampled spectra and light curves over more than two decades in frequency for a two - week period. this data set has allowed us for the first time to trace the evolution of the characteristic synchrotron self - absorption frequency nu _ a and peak frequency nu _ m, and the peak flux density f _ m : we obtain nu _ a \ propto t ^ { - 0. 15 + - 0. 12 }, nu _ m \ propto t ^ { - 1. 7 + - 0. 4 }, and f _ m \ propto t ^ { - 0. 47 + - 0. 11 }. from the radio data we find that models of homogeneous or wind - generated ambient media with a spherically symmetric outflow can be ruled out. a model in which the relativistic outflow is collimated ( a jet ) can account for the observed evolution of the synchrotron parameters, the rapid decay at optical wavelengths, and the observed radio to optical spectral flux distributions that we present here, provided that the jet transition has not been fully completed in the first two weeks after the event. these observations provide additional evidence that rapidly decaying optical / x - ray afterglows are due to jets and that such transitions either develop very slowly or perhaps never reach the predicted asymptotic decay f ( t ) \ propto t ^ { - p }. | arxiv:astro-ph/0006201 |
we quantitatively determine the effect and the uncertainty on solar neutrino production arising from the screening process. we present predictions for the solar neutrino fluxes and signals obtained with different screening models available in the literature and by using our stellar evolution code. we explain these numerical results in terms of simple laws relating the screening factors with the neutrino fluxes. furthermore, we explore a wider range of models for screening, obtained from the mitler model by introducing and varying two phenomenological parameters, taking into account effects not included in the mitler prescription. screening implies, with respect to a no - screening case, a central temperature reduction of 0. 5 %, a 2 % ( 8 % ) increase of the beryllium ( boron ) - neutrino flux and a 2 % ( 12 % ) increase of the gallium ( chlorine ) signal. we also find that uncertainties due to the screening effect are at the level of 1 % for the predicted beryllium - neutrino flux and gallium signal, not exceeding 3 % for the boron - neutrino flux and the chlorine signal. | arxiv:astro-ph/9411099 |
we study the fission yield of recently predicted thermally fissile neutron - rich uranium and thorium nuclei using a statistical model. the level density parameters needed for the study are evaluated from the excitation energies of the temperature - dependent relativistic mean field formalism. the excitation energy and the level density parameter for a given temperature are employed in the convolution integral method to obtain the probability of a particular fragmentation. as a representative case, we present the results for the binary fission yield of 250 u and 254 th. the relative yields are presented for three different temperatures t = 1, 2 and 3 mev. | arxiv:1704.08105 |
in this paper we consider a novel partitioned framework for distributed optimization in peer - to - peer networks. in several important applications the agents of a network have to solve an optimization problem with two key features : ( i ) the dimension of the decision variable depends on the network size, and ( ii ) cost function and constraints have a sparsity structure related to the communication graph. for this class of problems a straightforward application of existing consensus methods would show two inefficiencies : poor scalability and redundancy of shared information. we propose an asynchronous distributed algorithm, based on dual decomposition and coordinate methods, to solve partitioned optimization problems. we show that, by exploiting the problem structure, the solution can be partitioned among the nodes, so that each node just stores a local copy of a portion of the decision variable ( rather than a copy of the entire decision vector ) and solves a small - scale local problem. | arxiv:1805.08460 |
the thermodynamics of the deconfined phase of the su ( n ) gauge theory is studied. careful study is made of the approach to the continuum limit. the latent heat of the deconfinement transition is studied, for the theories with 3, 4 and 6 colors. continuum estimates of various thermodynamic quantities are studied, and the approach to conformality investigated. the bulk thermodynamic quantities at different n are compared, to investigate the validity of ' t hooft scaling at these values of n. | arxiv:1101.0043 |
we propose a multiple pulses phase - matching quantum key distribution protocol ( mppm - qkd ) to exceed the linear key rate bound and to achieve higher error tolerance. in our protocol, alice and bob first generate their own pulse trains ( each train should contain l pulses ) as well as random bit sequences, and encode each pulse of their trains with a randomized phase and a modulation phase. as the next step, both encoded trains are simultaneously sent to charlie, who performs an interference detection and may also be an eavesdropper. after a successful detection is announced by charlie, alice and bob open the randomized phase of each pulse and keep only those communications for which the sum of the differences of the randomized phases at the two successful detection time - stamps for alice and bob equals 0 or pi. thereafter, alice and bob compute the sifted key with the time - stamps. the above procedure is repeated until both alice and bob obtain sufficiently long sifted keys. we also show that the secret key rate of the proposed qkd protocol can beat the rate - loss limit of the qkd protocols known so far when the transmission distance is greater than 250 km. moreover, the proposed protocol has a higher error tolerance, approximately 24 %, when the transmission distance is 50 km and l = 128. the secret key rate and the transmission distance of our protocol are superior to those of the round - robin differential - phase - shift quantum key distribution protocol [ 6 ] and of the measurement - device - independent quantum key distribution protocol [ 4 ], and the secret key rate performance is better in both cases than that of phase - matching quantum key distribution when the bit train length is greater than 32. | arxiv:1905.10545 |
a classification up to automorphism of the inner ideals of the real finite - dimensional simple lie algebras is given, jointly with precise descriptions in the case of the exceptional lie algebras. | arxiv:2202.09351 |
in this paper, we prove newton - maclaurin type inequalities for functions obtained by linear combination of two neighboring primary symmetry functions, which is a generalization of the classical newton - maclaurin inequality. | arxiv:2205.00873 |
retrieval - augmented generation ( rag ) systems have gained widespread adoption by application builders because they leverage sources of truth to enable large language models ( llms ) to generate more factually sound responses. however, hallucinations, instances of llm responses that are unfaithful to the provided context, often prevent these systems from being deployed in production environments. current hallucination detection methods typically involve human evaluation or the use of closed - source models to review rag system outputs for hallucinations. both human evaluators and closed - source models suffer from scaling issues due to their high costs and slow inference speeds. in this work, we introduce a perturbed multi - hop qa dataset with induced hallucinations. via supervised fine - tuning on our dataset, we achieve better recall with a 7b model than gpt - 4o on the ragtruth hallucination detection benchmark and offer competitive performance on precision and accuracy, all while using a fraction of the parameters. code is released at our repository. | arxiv:2505.04844 |
we quantify the impact of galaxy formation on dark matter halo shapes using cosmological simulations at redshift $ z = 0 $. the haloes are drawn from the illustristng project, a suite of magneto - hydrodynamic simulations of galaxies. we focus on haloes of mass $ 10 ^ { 10 - 14 } m _ \ odot $ from the 50 - mpc ( tng50 ) and 100 - mpc ( tng100 ) boxes, and compare them to dark matter - only ( dmo ) analogues and other simulations e. g. nihao and eagle. we further quantify the prediction uncertainty by varying the baryonic feedback models in a series of smaller 25 mpc $ h ^ { - 1 } $ boxes. we find that : ( i ) galaxy formation results in rounder haloes compared to the dmo simulations, in qualitative agreement with past hydrodynamic models. haloes of mass $ \ approx 2 \ times 10 ^ { 12 } m _ \ odot $ are most spherical, with an average minor - to - major axis ratio of $ \ left < s \ right > \ approx 0. 75 $ in the inner halo, an increase of 40 per cent compared to their dmo counterparts. no significant change in halo shape is found for low - mass $ 10 ^ { 10 } m _ \ odot $ haloes ; ( ii ) stronger feedback, e. g. increasing galactic wind speed, reduces the impact of baryons ; ( iii ) the inner halo shape correlates with the stellar mass fraction, which can explain the dependence of halo shapes on different feedback models ; ( iv ) the fiducial and weaker feedback models are most consistent with observational estimates of the milky way halo shape. yet, at fixed halo mass, very diverse and possibly unrealistic feedback models all predict inner halo shapes that are closer to one another than to the dmo results. this implies that a larger observational sample would be required to statistically distinguish between different baryonic prescriptions due to large halo - to - halo variation in halo shapes. | arxiv:2109.00012 |
intra - operative recognition of surgical phases holds significant potential for enhancing real - time contextual awareness in the operating room. however, we argue that online recognition, while beneficial, primarily lends itself to post - operative video analysis due to its limited direct impact on the actual surgical decisions and actions during ongoing procedures. in contrast, we contend that the prediction and anticipation of surgical phases are inherently more valuable for intra - operative assistance, as they can meaningfully influence a surgeon ' s immediate and long - term planning by providing foresight into future steps. to address this gap, we propose a dual approach that simultaneously recognises the current surgical phase and predicts upcoming ones, thus offering comprehensive intra - operative assistance and guidance on the expected remaining workflow. our novel method, surgical phase recognition and anticipation ( supra ), leverages past and current information for accurate intra - operative phase recognition while using future segments for phase prediction. this unified approach challenges conventional frameworks that treat these objectives separately. we have validated supra on two reputed datasets, cholec80 and autolaparo21, where it demonstrated state - of - the - art performance with recognition accuracies of 91. 8 % and 79. 3 %, respectively. additionally, we introduce and evaluate our model using new segment - level evaluation metrics, namely edit and f1 overlap scores, for a more temporal assessment of segment classification. in conclusion, supra presents a new multi - task approach that paves the way for improved intra - operative assistance through surgical phase recognition and prediction of future events. | arxiv:2403.06200 |
the availability of technological means to enhance and repair human cognitive function raises questions about the perceived morality of their use. in this study, we administered a survey to the public in which subjects were asked to report how willing they would be to enhance and / or repair specific cognitive abilities. among 894 responders, we found that subjects were more willing to use technologies to repair other people than themselves, and especially to enhance or repair functions more " core " to authentic identity in others. subjects ' ratings of the moral acceptability of specific uses was related to their reported willingness to use brain stimulation. these findings suggest that the public endorses an altruistic approach to applying brain stimulation for cognitive gains. further, this study establishes a basis to guide moral psychological studies of cognitive modification and social processes that guide attitudes toward and uses of brain stimulation. | arxiv:1801.09024 |
we revisit the numerical evolution of ellis - bronnikov - morris - thorne wormholes, which are constructed with a massless real ghost scalar field. for our simulations, we have developed a new code based on the standard 3 + 1 foliation of spacetime. we confirm that, for the massless symmetric wormhole, a pulse of regular scalar field causes the wormhole throat to collapse and form an apparent horizon, while a pulse of ghost scalar field can cause the wormhole throat to expand. as a new result, we show that it is possible for a pulse of regular matter to travel through the wormhole and then to send a light signal back before the wormhole collapses. we also evolve pulses of matter traveling through massive asymmetric wormholes, which has not previously been simulated. | arxiv:2210.04905 |
we provide a rigorous mathematical derivation of the convergence in the long - wave transonic limit of the minimizing travelling waves for the two - dimensional gross - pitaevskii equation towards ground states for the kadomtsev - petviashvili equation ( kp i ). | arxiv:0806.1122 |
contrastive learning ( cl ) - based self - supervised learning models learn visual representations in a pairwise manner. although the prevailing cl model has achieved great progress, in this paper, we uncover a previously overlooked phenomenon : when the cl model is trained with full images, the performance tested in full images is better than that in foreground areas ; when the cl model is trained with foreground areas, the performance tested in full images is worse than that in foreground areas. this observation reveals that backgrounds in images may interfere with the model learning semantic information and their influence has not been fully eliminated. to tackle this issue, we build a structural causal model ( scm ) to model the background as a confounder. we propose a backdoor adjustment - based regularization method, namely interventional contrastive learning with meta semantic regularizer ( icl - msr ), to perform causal intervention towards the proposed scm. icl - msr can be incorporated into any existing cl methods to alleviate background distractions from representation learning. theoretically, we prove that icl - msr achieves a tighter error bound. empirically, our experiments on multiple benchmark datasets demonstrate that icl - msr is able to improve the performances of different state - of - the - art cl methods. | arxiv:2206.14702 |
we study ore localisation of differential graded algebras. further we define dg - prime rings, dg - semiprime rings, and study the dg - nil radical of dg - rings. then, we define dg - essential submodules, dg - uniform dimension, and apply all this to a dg - version of goldie ' s theorem on prime dg - rings. | arxiv:2311.16619 |
single image reflection removal ( sirr ) is a canonical blind source separation problem and refers to the issue of separating a reflection - contaminated image into a transmission and a reflection image. the core challenge lies in minimizing the commonalities among different sources. existing deep learning approaches either neglect the significance of feature interactions or rely on heuristically designed architectures. in this paper, we propose a novel deep exclusion unfolding network ( dexnet ), a lightweight, interpretable, and effective network architecture for sirr. dexnet is principally constructed by unfolding and parameterizing a simple iterative sparse and auxiliary feature update ( i - safu ) algorithm, which is specifically designed to solve a new model - based sirr optimization formulation incorporating a general exclusion prior. this general exclusion prior enables the unfolded safu module to inherently identify and penalize commonalities between the transmission and reflection features, ensuring more accurate separation. the principled design of dexnet not only enhances its interpretability but also significantly improves its performance. comprehensive experiments on four benchmark datasets demonstrate that dexnet achieves state - of - the - art visual and quantitative results while utilizing only approximately 8 \ % of the parameters required by leading methods. | arxiv:2503.01938 |
we revisit the tres - 4 system parameters based on high - precision harps - n radial - velocity measurements and new photometric light curves. a combined spectroscopic and photometric analysis allows us to determine a spectroscopic orbit with an amplitude $ k = 51 \ pm3 $ m s $ ^ { - 1 } $. the derived mass of tres - 4b is found to be $ m _ { \ rm p } = 0. 49 \ pm0. 04 \ rm m _ { jup } $, significantly lower than previously reported. combined with the large radius ( $ r _ { \ rm p } = 1. 84 _ { - 0. 09 } ^ { + 0. 08 } \ rm r _ { jup } $ ) inferred from our analysis, tres - 4b becomes the second - lowest density transiting hot jupiter known. we discuss several scenarios to explain the puzzling discrepancy in the mass of tres - 4b in the context of the exotic class of highly inflated transiting giant planets. | arxiv:1501.06403 |
machine learning ( ml ) solutions are prevalent. however, many challenges exist in making these solutions business - grade. one major challenge is to ensure that the ml solution provides its expected business value. in order to do that, one has to bridge the gap between the way ml model performance is measured and the solution requirements. in previous work ( barash et al, " bridging the gap... " ) we demonstrated the effectiveness of utilizing feature models in bridging this gap. whereas ml performance metrics, such as the accuracy or f1 - score of a classifier, typically measure the average ml performance, feature models shed light on explainable data slices that are too far from that average, and therefore might indicate unsatisfied requirements. for example, the overall accuracy of a bank text terms classifier may be very high, say $ 98 \ % \ pm 2 \ % $, yet it might perform poorly for terms that include short descriptions and originate from commercial accounts. a business requirement, which may be implicit in the training data, may be to perform well regardless of the type of account and length of the description. therefore, the under - performing data slice that includes short descriptions and commercial accounts suggests poorly - met requirements. in this paper we show the feasibility of automatically extracting feature models that result in explainable data slices over which the ml solution under - performs. our novel technique, ibm freaai aka freaai, extracts such slices from structured ml test data or any other labeled data. we demonstrate that freaai can automatically produce explainable and statistically - significant data slices over seven open datasets. | arxiv:2108.05620 |
a complete mapping of a group $ g $ is a bijection $ \ phi \ colon g \ to g $ such that $ x \ mapsto x \ phi ( x ) $ is also bijective. hall and paige conjectured in 1955 that a finite group $ g $ has a complete mapping whenever $ \ prod _ { x \ in g } x $ is the identity in the abelianization of $ g $. this was confirmed in 2009 by wilcox, evans, and bray with a proof using the classification of finite simple groups. in this paper, we give a combinatorial proof of a far - reaching generalisation of the hall - paige conjecture for large groups. we show that for random - like and equal - sized subsets $ a, b, c $ of a group $ g $, there exists a bijection $ \ phi \ colon a \ to b $ such that $ x \ mapsto x \ phi ( x ) $ is a bijection from $ a $ to $ c $ whenever $ \ prod _ { a \ in a } a \ prod _ { b \ in b } b = \ prod _ { c \ in c } c $ in the abelianization of $ g $. we use this statement as a black - box to settle the following old problems in combinatorial group theory for large groups. ( 1 ) we characterise sequenceable groups, that is, groups which admit a permutation $ \ pi $ of their elements such that the partial products $ \ pi _ 1 $, $ \ pi _ 1 \ pi _ 2 $, $ \ pi _ 1 \ pi _ 2 \ cdots \ pi _ n $ are all distinct. this resolves a problem of gordon from 1961 and confirms conjectures made by several authors, including keedwell ' s 1981 conjecture that all large non - abelian groups are sequenceable. we also characterise the related $ r $ - sequenceable groups, addressing a problem of ringel from 1974. ( 2 ) we confirm in a strong form a conjecture of snevily from 1999 by characterising large subsquares of multiplication tables of finite groups that admit transversals. previously, this characterisation was known only for abelian groups of odd order ( by a combination of papers by alon and dasgupta - károlyi - serra - szegedy and arsovski ). | arxiv:2204.09666 |
we apply the semi - discrete method, cf. n. halidias and i. s. stamatiou ( 2016 ), on the numerical solution of some non - linear stochastic differential equations using the semi - discrete method, computational methods in applied mathematics, 16 ( 1 ), to a class of non - colliding particle systems. the proposed numerical scheme preserves the non - colliding property and strongly converges to the exact solution. | arxiv:1807.08924 |
string theory suggests the simultaneous presence of many ultralight axions possibly populating each decade of mass down to the hubble scale 10 ^ - 33ev. conversely the presence of such a plenitude of axions ( an " axiverse " ) would be evidence for string theory, since it arises due to the topological complexity of the extra - dimensional manifold and is ad hoc in a theory with just the four familiar dimensions. we investigate how upcoming astrophysical experiments will explore the existence of such axions over a vast mass range from 10 ^ - 33ev to 10 ^ - 10ev. axions with masses between 10 ^ - 33ev to 10 ^ - 28ev cause a rotation of the cmb polarization that is constant throughout the sky. the predicted rotation angle is of order \ alpha ~ 1 / 137. axions in the mass range 10 ^ - 28ev to 10 ^ - 18ev give rise to multiple steps in the matter power spectrum, that will be probed by upcoming galaxy surveys. axions in the mass range 10 ^ - 22ev to 10 ^ - 10ev affect the dynamics and gravitational wave emission of rapidly rotating astrophysical black holes through the penrose superradiance process. when the axion compton wavelength is of order of the black hole size, the axions develop " superradiant " atomic bound states around the black hole " nucleus ". their occupation number grows exponentially by extracting rotational energy from the ergosphere, culminating in a rotating bose - einstein axion condensate emitting gravitational waves. this mechanism creates mass gaps in the spectrum of rapidly rotating black holes that diagnose the presence of axions. the rapidly rotating black hole in the x - ray binary lmc x - 1 implies an upper limit on the decay constant of the qcd axion f _ a < 2 * 10 ^ 17gev, much below the planck mass. this reach can be improved down to the grand unification scale f _ a < 2 * 10 ^ 16gev, by observing smaller stellar mass black holes. | arxiv:0905.4720 |
it is proved that for adjointable operators $ a $ and $ b $ between hilbert $ c ^ * $ - modules, certain majorization conditions are always equivalent without any assumptions on $ \ overline { \ mathcal { r } ( a ^ * ) } $, where $ a ^ * $ denotes the adjoint operator of $ a $ and $ \ overline { \ mathcal { r } ( a ^ * ) } $ is the norm closure of the range of $ a ^ * $. in the case that $ \ overline { { \ mathcal r } ( a ^ * ) } $ is not orthogonally complemented, it is proved that there always exists an adjointable operator $ b $ whose range is contained in that of $ a $, whereas the associated equation $ ax = b $ for adjointable operators is unsolvable. | arxiv:1711.02280 |
vision transformers require a huge amount of labeled data to outperform convolutional neural networks. however, labeling a huge dataset is a very expensive process. self - supervised learning techniques alleviate this problem by learning features similar to supervised learning in an unsupervised way. in this paper, we propose a self - supervised technique patchrot that is crafted for vision transformers. patchrot rotates images and image patches and trains the network to predict the rotation angles. the network learns to extract both global and local features from an image. our extensive experiments on different datasets show that patchrot training learns rich features, which outperform supervised learning and the compared baselines. | arxiv:2210.15722 |
visible watermarks are widely used in images to protect copyright ownership. analyzing watermark removal helps to reinforce the anti - attack techniques in an adversarial way. current removal methods normally leverage image - to - image translation techniques. nevertheless, the uncertainty of the size, shape, color and transparency of the watermarks sets a huge barrier for these methods. to combat this, we combine traditional watermarked image decomposition into a two - stage generator, called watermark - decomposition network ( wdnet ), where the first stage predicts a rough decomposition from the whole watermarked image and the second stage specifically centers on the watermarked area to refine the removal results. the decomposition formulation enables wdnet to separate watermarks from the images rather than simply removing them. we further show that these separated watermarks can serve as extra nutrients for building a larger training dataset and further improving removal performance. besides, we construct a large - scale dataset named clwd, which mainly contains colored watermarks, to fill the vacuum of colored watermark removal datasets. extensive experiments on the public gray - scale dataset lvw and clwd consistently show that the proposed wdnet outperforms the state - of - the - art approaches both in accuracy and efficiency. the code and clwd dataset are publicly available at https://github.com/mruil/wdnet. | arxiv:2012.07616 |
we show efficient electro - optic modulation in a subwavelength gap - plasmon waveguide ( gpw ) formed by an electro - optic polymer with metal coatings. the proposed device is studied in the attenuated total reflection and end - fire configurations. in dealing with the end - fire configuration we used a taper from a micron sized guide to the gpw. the structure is shown to exhibit large phase accumulation over short distances, controllable by the applied modulating voltage. | arxiv:1003.0497 |
this paper gives a comprehensive treatment of the convergence rates of penalized spline estimators for simultaneously estimating several leading principal component functions, when the functional data is sparsely observed. the penalized spline estimators are defined as the solution of a penalized empirical risk minimization problem, where the loss function belongs to a general class of loss functions motivated by the matrix bregman divergence, and the penalty term is the integrated squared derivative. the theory reveals that the asymptotic behavior of penalized spline estimators depends on the interesting interplay between several factors, i. e., the smoothness of the unknown functions, the spline degree, the spline knot number, the penalty order, and the penalty parameter. the theory also classifies the asymptotic behavior into seven scenarios and characterizes whether and how the minimax optimal rates of convergence are achievable in each scenario. | arxiv:2402.05438 |
large - scale compressive slow - mode - like fluctuations can cause variations in the density, temperature, and magnetic - field magnitude in the solar wind. in addition, they also lead to fluctuations in the differential flow $ u _ { \ rm p \ alpha } $ between $ \ alpha $ - particles and protons ( $ p $ ), which is a common source of free energy for the driving of ion - scale instabilities. if the amplitude of the compressive fluctuations is sufficiently large, the fluctuating $ u _ { \ rm p \ alpha } $ intermittently drives the plasma across the instability threshold, leading to the excitation of ion - scale instabilities and thus the growth of corresponding ion - scale waves. the unstable waves scatter particles and reduce the average value of $ u _ { \ rm p \ alpha } $. we propose that this " fluctuating - beam effect " maintains the average value of $ u _ { \ rm p \ alpha } $ well below the marginal instability threshold. we model the large - scale compressive fluctuations in the solar wind as long - wavelength slow - mode waves using a multi - fluid model. we numerically quantify the fluctuating - beam effect for the alfvén / ion - cyclotron ( a / ic ) and fast - magnetosonic / whistler ( fm / w ) instabilities. we show that measurements of the proton - $ \ alpha $ differential flow and compressive fluctuations from the wind spacecraft are consistent with our predictions for the fluctuating - beam effect. this effect creates a new channel for a direct cross - scale energy transfer from large - scale compressions to ion - scale fluctuations. | arxiv:2308.02036 |
in this paper, we will analyze the effects of thermal fluctuations on the stability of a black saturn. the entropy of the black saturn will get corrected due to these thermal fluctuations. we will demonstrate that the correction term generated by these thermal fluctuations is a logarithmic term. then we will use this corrected value of the entropy to obtain bounds for various parameters of the black saturn. we will also analyze the thermodynamical stability of the black saturn in presence of thermal fluctuations, using this corrected value of the entropy. | arxiv:1505.02373 |
stellar evolution calculations predict the flux - weighted gravity g / teff ^ 4 and absolute bolometric magnitude of blue supergiants to be strongly correlated. we use medium resolution multi - object spectroscopy of late b and early a supergiants in two spiral galaxies, ngc 300 and ngc 3621, to demonstrate the existence of such a relationship, which proves to be surprisingly tight. an analysis of high resolution spectra of blue supergiants in local group galaxies confirms this detection. we discuss the application of the relationship for extragalactic distance determinations and conservatively conclude that once properly calibrated it has the potential to allow for measurements of distance moduli out to 30. 5 mag with an accuracy of 0. 1 mag or better. | arxiv:astro-ph/0212042 |
this is the second paper in a series of studies of the coma cluster using the srg / erosita x - ray data obtained during the calibration and performance verification phase of the mission. here, we focus on the region adjacent to the radio source 1253 + 275 ( radio relic, rr, hereafter ). we show that the x - ray surface brightness exhibits its steepest gradient at $ \ sim 79 ' $ ( $ \ sim 2. 2 \, { \ rm mpc } \ approx r _ { 200c } $ ), which is almost co - spatial to the outer edge of the rr. as in the case of several other relics, the mach number of the shock derived from the x - ray surface brightness profile ( $ m _ x \ approx 1. 9 $ ) appears to be lower than needed to explain the slope of the integrated radio spectrum in the diffusive shock acceleration ( dsa ) model ( $ m _ r \ approx 3. 5 $ ) if the magnetic field is uniform and the radiative losses are fast. however, the shock geometry is plausibly much more complicated than a spherical wedge centered on the cluster, given the non - trivial correlation between radio, x - ray, and sz images. while the complicated shock geometry alone might cause a negative bias in $ m _ x $, we speculate on a few other possibilities that may affect the $ m _ x $ - $ m _ r $ relation, including the shock substructure that might be modified by the presence of non - thermal filaments stretching across the shock and the propagation of relativistic electrons along the non - thermal filaments with a strong magnetic field. we also discuss the " history " of the radio galaxy ngc4789, which is located ahead of the relic in the context of the coma - ngc4839 merger scenario. | arxiv:2205.07511 |
in 1999 lyngsø and pedersen proposed a conjecture stating that every binary circular word of length $ n $ with an equal number of zeros and ones has an antipalindromic linear subsequence of length at least $ \ frac { 2 } { 3 } n $. no progress over a trivial $ \ frac { 1 } { 2 } n $ bound has been achieved since then. we suggest a palindromic counterpart to this conjecture and provide a non - trivial infinite series of circular words which prove the upper bound of $ \ frac { 2 } { 3 } n $ for both conjectures at the same time. the construction also works for words over an alphabet of size $ k $ and gives rise to a generalization of the conjecture by lyngsø and pedersen. moreover, we discuss some possible strengthenings and weakenings of the named conjectures. we also propose two similar conjectures for linear words and provide some evidence for them. | arxiv:1901.07502 |
an extreme learning machine ( elm ) is a three - layered feed - forward neural network having untrained parameters, which are randomly determined before training. inspired by the idea of elm, a probabilistic untrained layer called a probabilistic - elm ( pelm ) layer is proposed, and it is combined with a discriminative restricted boltzmann machine ( drbm ), which is a probabilistic three - layered neural network for solving classification problems. the proposed model is obtained by stacking drbm on the pelm layer. the resultant model ( i. e., multi - layered drbm ( mdrbm ) ) forms a probabilistic four - layered neural network. in mdrbm, the parameters in the pelm layer can be determined using gaussian - bernoulli restricted boltzmann machine. owing to the pelm layer, mdrbm obtains a strong immunity against noise in inputs, which is one of the most important advantages of mdrbm. numerical experiments using some benchmark datasets, mnist, fashion - mnist, urban land cover, and cifar - 10, demonstrate that mdrbm is superior to other existing models, particularly, in terms of the noise - robustness property ( or, in other words, the generalization property ). | arxiv:2210.15434 |
we investigate the radiation of surface polaritons by an annular beam that coaxially encloses a cylindrical waveguide surrounded by a homogeneous medium. by using the green dyadic, the electromagnetic potentials and the electric and magnetic fields are found inside and outside the waveguide. the expression for the energy losses is derived for the general case of the dispersion for dielectric permittivities inside and outside the cylinder. a comprehensive analysis is presented in the spectral range corresponding to the radiation of surface polaritons. the highest peaks in the spectral distribution are obtained for intermediate values of the beam velocity. in the limit of transparent medium the spectrum of radiated surface polaritons is discrete and the corresponding frequencies are determined by the eigenvalue equation for the cylindrical waveguide. numerical examples are presented for the drude model of dispersion. | arxiv:2412.20561 |
when tackling binary optimization problems using quantum algorithms, the conventional ising representation and quantum approximate optimization algorithm ( qaoa ) encounter difficulties in efficiently handling errors for large - scale problems involving multiple constraints. to address these challenges, this paper presents a hybrid framework that combines the use of standard ising hamiltonians to solve a subset of the constraints, while employing non - ising formulations to represent and address the remaining constraints. the resolution of these non - ising constraints is achieved through either penalty dephasing or the quantum zeno effect. this innovative approach leads to a collection of quantum circuits with adaptable structures, depending on the chosen representation for each constraint. furthermore, this paper introduces a novel technique that utilizes the quantum zeno effect by frequently measuring the constraint flag, enabling the resolution of any optimization constraint. theoretical properties of these algorithms are discussed, and their performance in addressing practical aircraft loading problems is highly promising, showcasing significant potential for a wide range of industrial applications. | arxiv:2305.08056 |
we determine the corrections to the schwarzschild geometry arising from including the goroff - sagnotti counterterm in the gravitational dynamics. we find that static, asymptotically flat, and spherically symmetric geometries are completely characterized by their asymptotic mass and the coupling associated with the counterterm. the latter induces distinct corrections at sixth order of the parameterized post - newtonian expansion. the resulting spacetime geometries still exhibit an event horizon. in the parameter space accessible to numerical integration, the horizon area is smaller than its schwarzschild counterpart, leading to an increase in the hawking temperature. corrections to the shadow size can be determined analytically and are used to give a first bound on the new coupling. while it is difficult to access the geometry inside of the event horizon, our analysis also provides evidence that the counterterm could resolve the curvature singularity appearing in the schwarzschild geometry. | arxiv:2311.15739 |
current neural radiance fields ( nerf ) can generate photorealistic novel views. for editing 3d scenes represented by nerf, with the advent of generative models, this paper proposes inpaint4dnerf to capitalize on state - of - the - art stable diffusion models ( e. g., controlnet ) for direct generation of the underlying completed background content, regardless of whether the scene is static or dynamic. the key advantages of this generative approach for nerf inpainting are twofold. first, after rough mask propagation, to complete or fill in previously occluded content, we can individually generate a small subset of completed images with plausible content, called seed images, from which simple 3d geometry proxies can be derived. second, the remaining problem is 3d multiview consistency among all completed images, now guided by the seed images and their 3d proxies. without other bells and whistles, our generative inpaint4dnerf baseline framework is general and can be readily extended to 4d dynamic nerfs, where temporal consistency can be naturally handled in a similar way as our multiview consistency. | arxiv:2401.00208 |
in this note, we describe an $ \ alpha _ { gw } + \ tilde { \ omega } ( 1 / d ^ 2 ) $ - factor approximation algorithm for max - cut on weighted graphs of degree $ \ leq d $. here, $ \ alpha _ { gw } \ approx 0. 878 $ is the worst - case approximation ratio of the goemans - williamson rounding for max - cut. this improves on previous results for unweighted graphs by feige, karpinski, and langberg and florén. our guarantee is obtained by a tighter analysis of the solution obtained by applying a natural local improvement procedure to the goemans - williamson rounding of the basic sdp strengthened with triangle inequalities. | arxiv:2206.09204 |
integral representations play a prominent role in the analysis of entire functions. the representations of generalized mittag - leffler type functions and their asymptotics have been ( and still are ) investigated by many authors in various conditions and cases. the present paper explores the integral representations of a special function extending the two - parametric mittag - leffler type function to two variables. integral representations of this function within different variation ranges of its arguments for certain values of the parameters are thus obtained. asymptotic expansion formulas and asymptotic properties of this function are also established for large values of the variables. this yields corresponding theorems providing integral representations as well as expansion formulas. | arxiv:1710.10839 |
we provide a thorough treatment of one - class classification with hyperparameter optimisation for five data descriptors : support vector machine ( svm ), nearest neighbour distance ( nnd ), localised nearest neighbour distance ( lnnd ), local outlier factor ( lof ) and average localised proximity ( alp ). the hyperparameters of svm and lof have to be optimised through cross - validation, while nnd, lnnd and alp allow an efficient form of leave - one - out validation and the reuse of a single nearest - neighbour query. we experimentally evaluate the effect of hyperparameter optimisation with 246 classification problems drawn from 50 datasets. from a selection of optimisation algorithms, the recent malherbe - powell proposal optimises the hyperparameters of all data descriptors most efficiently. we calculate the increase in test auroc and the amount of overfitting as a function of the number of hyperparameter evaluations. after 50 evaluations, alp and svm significantly outperform lof, nnd and lnnd, and lof and nnd outperform lnnd. the performance of alp and svm is comparable, but alp can be optimised more efficiently so constitutes a good default choice. alternatively, using validation auroc as a selection criterion between alp or svm gives the best overall result, and nnd is the least computationally demanding option. we thus end up with a clear trade - off between three choices, allowing practitioners to make an informed decision. | arxiv:2102.02618 |
in recent years, large - scale pre - trained speech language models ( slms ) have demonstrated remarkable advancements in various generative speech modeling applications, such as text - to - speech synthesis, voice conversion, and speech enhancement. these applications typically involve mapping text or speech inputs to pre - trained slm representations, from which target speech is decoded. this paper introduces a new approach, slmgan, to leverage slm representations for discriminative tasks within the generative adversarial network ( gan ) framework, specifically for voice conversion. building upon starganv2 - vc, we add our novel slm - based wavlm discriminators on top of the mel - based discriminators along with our newly designed slm feature matching loss function, resulting in an unsupervised zero - shot voice conversion system that does not require text labels during training. subjective evaluation results show that slmgan outperforms existing state - of - the - art zero - shot voice conversion models in terms of naturalness and achieves comparable similarity, highlighting the potential of slm - based discriminators for related applications. | arxiv:2307.09435 |
we define a class of finite - dimensional jacobian algebras, which are called ( simple ) polygon - tree algebras, as a generalization of cluster - tilted algebras of type $ d $. they are $ 2 $ - cy - tilted algebras. using a suitable process of mutations of quivers with potentials ( which are also bb - mutations ) inducing derived equivalences, and one - pointed ( co ) extensions which preserve singularity equivalences, we find a connected selfinjective nakayama algebra whose stable category is equivalent to the singularity category of a simple polygon - tree algebra. furthermore, we also give a classification of algebras of this kind up to representation type. | arxiv:1509.05511 |
we show that with glast there will be the possibility to detect, within the uhecr skimming at the atmosphere edges, the showers generated by very high energy upward and horizontal taus. the effective area, thanks to the large area covered by the showers at 550 km, is less than that of auger, but its efficiency is comparable because of the lower detection threshold, and the consequent event rate may lead to a few eev and / or a few glashow resonant signals within a decade. | arxiv:0806.2046 |
recently, liang and partington \ cite { yp } show that kernels of finite - rank perturbations of toeplitz operators are nearly invariant with finite defect under the backward shift operator acting on the scalar - valued hardy space. in this article we provide a vectorial generalization of a result of liang and partington. as an immediate application we identify the kernel of perturbed toeplitz operator in terms of backward shift - invariant subspaces in various important cases by applying the recent theorem ( \ cite { cdp, or } ) in connection with nearly invariant subspaces of finite defect for the backward shift operator acting on the vector - valued hardy space. | arxiv:2005.02255 |
it was shown in the work \ cite { vergeles2021note } that in the theory of gravity coupled with the dirac field, each state $ | \ lambda \ rangle $ has its own twin $ | \ lambda ; pt \ rangle $, which is obtained by a discrete pt transformation. if in the state $ | \ lambda \ rangle $ the dirac sea is filled, then in the state $ | \ lambda ; pt \ rangle $ there is an " anti - dirac " filling ( in terms of the state $ | \ lambda \ rangle $ ). it is important that the energies of these states are the same. therefore, there may be domains with different filling of the dirac sea. here we study a domain wall connecting two such adjacent domains. | arxiv:2202.12944 |
the paper is concerned with a posteriori error bounds for a wide class of numerical schemes, for $ n \ times n $ hyperbolic conservation laws in one space dimension. these estimates are achieved by a " post - processing algorithm ", checking that the numerical solution retains small total variation, and computing its oscillation on suitable subdomains. the results apply, in particular, to solutions obtained by the godunov or the lax - friedrichs scheme, backward euler approximations, and the method of periodic smoothing. some numerical implementations are presented. | arxiv:2010.00428 |
we investigate the effect of the dzyaloshinskii - moriya interaction ( dmi ) on magnetic domain nucleation in a ferromagnetic thin film with perpendicular magnetic anisotropy. we propose an extended droplet model to determine the nucleation field as a function of the in - plane field. the model can explain the experimentally observed nucleation in a coni microstrip with the interfacial dmi. the results are also reproduced by micromagnetic simulation based on the string model. the electrical measurement method proposed in this study can be widely used to quantitatively determine the dmi energy density. | arxiv:1702.07078 |
higher - dimensional sliding puzzles are constructed on the vertices of a $ d $ - dimensional hypercube, where $ 2 ^ d - l $ vertices are distinctly coloured. rings with the same colours are initially set randomly on the vertices of the hypercube. the goal of the puzzle is to move each of the $ 2 ^ d - l $ rings to pre - defined target vertices on the cube. in this setting, the $ k $ - rule constraint represents a generalisation of edge collision for the movement of colours between vertices, allowing movement only when a hypercube face of dimension $ k $ containing a ring is completely free of other rings. starting from an initial configuration, what is the minimum number of moves needed to make ring colours match the vertex colours? an algorithm that provides us with such a number is called god ' s algorithm. when such an algorithm exists, it does not have a polynomial time complexity, at least in the case of the 15 - puzzle corresponding to $ k = 1 $ in the cubical puzzle. this paper presents a comprehensive computational study of different scenarios of the higher - dimensional puzzle. a benchmark of three computational techniques, an exact algorithm ( the a * search ) and two approximately optimal search techniques ( an evolutionary algorithm ( ea ) and reinforcement learning ( rl ) ) is presented in this work. the experiments show that all three methods can successfully solve the puzzle of dimension three for different face dimensions and across various difficulty levels. when the dimension increases, the a * search fails, and rl and ea methods can still provide a generally acceptable solution, i. e. a distribution of a number of moves with a median value of less than $ 30 $. overall, the ea method consistently requires less computational time, while failing in most cases to minimise the number of moves for the puzzle dimensions $ d = 4 $ and $ d = 5 $. | arxiv:2412.01937 |
we show that spin - orbit coupling in a quantum dot molecule allows for coherent manipulation of two electron spin states using raman transitions. such two - electron spin states defined by the singlet and triplet states of two exchange coupled quantum dots can have favorable coherence properties. in addition, two of the four metastable ground states in this system can be used as auxiliary states that could facilitate implementation of tasks such as mapping of spin states to that of a single propagating photon. we find that even weak spin - orbit effects - - manifesting themselves as slightly different g - factors for the electron and the hole - - would allow for the coherent raman coupling of the singlet - triplet states. we also discuss the possibilities for implementing quantum optical techniques for spin preparation and manipulation. | arxiv:cond-mat/0611469 |
the field of object detection using deep learning ( dl ) is constantly evolving with many new techniques and models being proposed. yolov7 is a state - of - the - art object detector based on the yolo family of models which have become popular for industrial applications. one such possible application domain can be semiconductor defect inspection. the performance of any machine learning model depends on its hyperparameters. furthermore, combining predictions of one or more models in different ways can also affect performance. in this research, we experiment with yolov7, a recently proposed, state - of - the - art object detector, by training and evaluating models with different hyperparameters to investigate which ones improve performance in terms of detection precision for semiconductor line space pattern defects. the base yolov7 model with default hyperparameters and non - maximum suppression ( nms ) prediction combining outperforms all retinanet models from previous work in terms of mean average precision ( map ). we find that vertically flipping images randomly during training yields a 3 % improvement in the mean ap of all defect classes. other hyperparameter values improved ap only for certain classes compared to the default model. combining models that achieve the best ap for different defect classes was found to be an effective ensembling strategy. combining predictions from ensembles using weighted box fusion ( wbf ) prediction gave the best performance. the best ensemble with wbf improved on the map of the default model by 10 %. | arxiv:2302.09565 |
charles peirce develops a scheme for classifying different kinds of monadic, dyadic and triadic relations. his account of these different classes of relations figures prominently in the development of his algebraic and diagrammatic systems of mathematical logic. our aim in this essay is to reconstruct and examine central features of the classificatory system that he develops. given the complexity of the system, we will focus our attention on the classification and explanation of degenerate and genuine dyadic relations, and we will take up the discussion of triadic relations elsewhere. one of our reasons for wanting to explore this account of relations is to better understand how it informed the later development of relations as they figure in the history of mathematical logic. the earlier work of peirce on dyadic relations influenced the development of the account of dyadic logical relations in the works of ernst schroder, leopold lowenheim, thoralf skolem and alfred tarski. as such, our primary aim in this essay is to trace the early development of these ideas about the formal relation of the dyad for the sake of better understanding how it might have influenced these later developments. | arxiv:1709.05722 |
the structural and magnetic properties of the hexagonal four - layer form of srmno $ _ 3 $ have been investigated by combining magnetization measurements, electron diffraction and high - resolution synchrotron x - ray and neutron powder diffraction. below 350k, there is a subtle structural phase transition from hexagonal symmetry ( space group $ p6 _ 3 / mmc $ ) to orthorhombic symmetry ( space group $ c222 _ 1 $ ) where the hexagonal metric is preserved. the second - order phase transition involves a slight tilting of the corner - sharing mn $ _ { 2 } $ o $ _ { 9 } $ units composed of 2 face - sharing mno $ _ 6 $ octahedra and the associated displacement of sr $ ^ { 2 + } $ cations. the phase transition is described in terms of symmetry - adapted displacement modes of the high symmetry phase. upon further cooling, long range magnetic order with propagation vector $ \ mathbf { k } = ( 0, 0, 0 ) $ sets in below 300k. the antiferromagnetic structure, analyzed using representation theory, shows a considerably reduced magnetic moment indicating the crucial role played by direct exchange between mn centers of the mn $ _ { 2 } $ o $ _ { 9 } $ units. | arxiv:cond-mat/0609235 |
non - parametric lensing methods are a useful way of reconstructing the lensing mass of a cluster without making assumptions about the way the mass is distributed in the cluster. these methods are particularly powerful in the case of galaxy clusters with a large number of constraints. the advantage of not assuming implicitly that the luminous matter follows the dark matter is particularly interesting in those cases where the cluster is in a non - relaxed dynamical state. on the other hand, non - parametric methods have several limitations that should be taken into account carefully. we explore some of these limitations and focus on their implications for the possible ring of dark matter around the galaxy cluster cl0024 + 17. we project three background galaxies through a mock cluster of known radial density profile and obtain a map for the arcs ( $ \ theta $ map ). we also calculate the shear field associated with the mock cluster across the whole field of view ( 3. 3 arcmin ). combining the positions of the arcs and the two - direction shear, we perform an inversion of the lens equation using two separate methods, the biconjugate gradient and quadratic programming ( qadp ), to reconstruct the convergence map of the mock cluster. we explore the space of the solutions of the convergence map and compare the radial density profiles to the density profile of the mock cluster. when the matrix inversion algorithms are forced to find the exact solution, we encounter systematic effects resembling ring structures that clearly depart from the original convergence map. overfitting lensing data with a non - parametric method can produce ring - like structures similar to the alleged one in cl0024. | arxiv:1110.3979 |
a link stream is a sequence of pairs of the form $ ( t, \ { u, v \ } ) $, where $ t \ in \ mathbb n $ represents a time instant and $ u \ neq v $. given an integer $ \ gamma $, the $ \ gamma $ - edge between vertices $ u $ and $ v $, starting at time $ t $, is the set of temporally consecutive edges defined by $ \ { ( t ', \ { u, v \ } ) | t ' \ in [ t, t + \ gamma - 1 ] \ } $. we introduce the notion of temporal matching of a link stream to be an independent $ \ gamma $ - edge set belonging to the link stream. we show that the problem of computing a temporal matching of maximum size is np - hard as soon as $ \ gamma > 1 $. we depict a kernelization algorithm parameterized by the solution size for the problem. as a byproduct we also give a $ 2 $ - approximation algorithm. both our $ 2 $ - approximation and kernelization algorithms are implemented and confronted to link streams collected from real world graph data. we observe that finding temporal matchings is a sensitive question when mining our data from such a perspective as : managing peer - working when any pair of peers $ x $ and $ y $ are to collaborate over a period of one month, at an average rate of at least two email exchanges every week. we furthermore design a link stream generating process by mimicking the behaviour of a random moving group of particles under natural simulation, and confront our algorithms to these generated instances of link streams. all the implementations are open source. | arxiv:1812.08615 |
we examine the validity of the generalized second law of thermodynamics in a non - flat universe in the presence of viscous dark energy. at first we assume that the universe is filled only with viscous dark energy. then, we extend our study to the case where there is an interaction between viscous dark energy and pressureless dark matter. we examine the time evolution of the total entropy, including the entropy associated with the apparent horizon and the entropy of the viscous dark energy inside the apparent horizon. our study shows that the generalized second law of thermodynamics is always protected in a universe filled with interacting viscous dark energy and dark matter in a region enclosed by the apparent horizon. finally, we show that the generalized second law of thermodynamics is fulfilled for a universe filled with interacting viscous dark energy and dark matter when we take into account the casimir effect. | arxiv:1103.1067 |
motivated by the recent interest in cyber - physical and autonomous robotic systems, we study the problem of dynamically coupled multi - agent systems under a set of signal temporal logic tasks. in particular, the satisfaction of each of these signal temporal logic tasks depends on the behavior of a distinct set of agents. instead of abstracting the agent dynamics and the temporal logic tasks into a discrete domain and solving the problem therein or using optimization - based methods, we derive collaborative feedback control laws. these control laws are based on a decentralized control barrier function condition that results in discontinuous control laws, as opposed to a centralized condition resembling the single - agent case. the benefits of our approach are inherent robustness properties typically present in feedback control as well as satisfaction guarantees for continuous - time multi - agent systems. more specifically, time - varying control barrier functions are used that account for the semantics of the signal temporal logic tasks at hand. for a certain fragment of signal temporal logic tasks, we further propose a systematic way to construct such control barrier functions. finally, we show the efficacy and robustness of our framework in an experiment including a group of three omnidirectional robots. | arxiv:2102.02609 |
the creation of a quality summarization dataset is an expensive, time - consuming effort, requiring the production and evaluation of summaries by both trained humans and machines. if such effort is made in one language, it would be beneficial to be able to use it in other languages without repeating human annotations. to investigate how much we can trust machine translation of such a dataset, we translate the english dataset summeval to seven languages and compare performance across automatic evaluation measures. we explore equivalence testing as the appropriate statistical paradigm for evaluating correlations between human and automated scoring of summaries. while we find some potential for dataset reuse in languages similar to the source, most summary evaluation methods are not found to be statistically equivalent across translations. | arxiv:2109.08129 |
weyl semimetals exhibiting topologically nontrivial touching points in the electronic band dispersion of solids pave the way to novel electronic devices and functionalities. here, we demonstrate the signature of topologically nontrivial weyl points ( wps ) in the phonon dispersion of solids through first - principles investigations of noncentrosymmetric wurtzite cui ( cuprous iodide at high temperature ). the type - ii phononic wps are manifested in the phonon dispersion of wurtzite cui by six pairs of nontrivial touching points in the k _ z = 0. 0 plane. the ideal wps in the phonon dispersion are completely isolated from the bulk phonon continuum, distinct from many type - ii wps in the electronic band dispersion of solids that are associated with overlapping band states. the opposite chirality of weyl phonon nodes with quantized berry curvature produces a weyl phonon hall effect, in analogy with the valley hall effect of electrons. such an ideal type - ii weyl phonon phase is readily observable in experiment, and could provide a unique platform to study novel thermal transport properties distinct from those in the type - i weyl phonon phase. | arxiv:1904.12466 |
the thesis considers aspects of su ( 2 ) yang - mills thermodynamics in its deconfining high - temperature phase. we calculate the two - point correlation function of the energy density of the photon in a thermalized gas, at first in the conventional u ( 1 ) gauge theory, followed by a calculation where the photon is identified with the massless gauge mode in deconfining su ( 2 ) yang - mills thermodynamics. apart from the fact that this calculation is interesting from a technical point of view, we can consider several aspects of phenomenological relevance. since we interpret the two - point correlator of energy density as a measure for the energy transfer, and thus for the electromagnetic interaction of microscopic objects, such as atoms immersed in a photon gas, we are able to give an explanation for the unexpected stability of cold, innergalactic clouds consisting of atomic hydrogen. subsequently, we evaluate the spatial string tension in deconfining su ( 2 ) yang - mills thermodynamics, which can be regarded as a measure of the magnetic flux through the area enclosed by the associated wilson loop. on the level of on - shell polarization effects for the massless mode we observe a perimeter - law, and we speculate that the lattice - obtained area - law is induced by off - shell contributions to the polarization tensor. moreover, we discuss an interesting two - loop result for the pressure which seems to be associated with the presence of screened magnetic monopoles being responsible for an area - law. | arxiv:0801.3961 |
we present the $ \ beta $ - expansion of the helmholtz free energy of the classical $ xyz $ model, with a single - ion anisotropy term and in the presence of an external magnetic field, up to order $ \ beta ^ { 12 } $. we compare our results to the numerical solution of joyce ' s [ phys. rev. lett. 19, 581 ( 1967 ) ] expression for the thermodynamics of the $ xxz $ classical model, with neither single - ion anisotropy term nor external magnetic field. this comparison shows that the derived analytical expansion is valid for intermediate temperatures such as $ kt / j _ x \ approx 0. 5 $. we show that the specific heat and magnetic susceptibility of the spin - 2 antiferromagnetic chain can be approximated by their respective classical results, up to $ kt / j \ approx 0. 8 $, within an error of 2. 5 %. in the absence of an external magnetic field, the ferromagnetic and antiferromagnetic chains have the same classical helmholtz free energy. we show how these two types of media react to the presence of an external magnetic field. | arxiv:cond-mat/0601615 |
quantum devices made from van der waals ( vdw ) heterostructures of two dimensional ( 2d ) materials may herald a new frontier in designer materials that exhibit novel electronic properties and unusual electronic phases. however, due to the complexity of layered atomic structures and the physics that emerges, experimental realization of devices with tailored physical properties will require comprehensive measurements across a large domain of material and device parameters. such multi - parameter measurements require new strategies that combine data - intensive techniques - often applied in astronomy and high energy physics - with the experimental tools of solid state physics and materials science. we discuss the challenges of comprehensive experimental science and present a technique, called multi - parameter dynamic photoresponse microscopy ( mpdpm ), that utilizes ultrafast lasers, diffraction limited scanning beam optics, and hardware automation to characterize the photoresponse of 2d heterostructures in a time efficient manner. using comprehensive methods on vdw heterostructures results in large and complicated data sets ; in the case of mpdpm, we measure a large set of images requiring advanced image analysis to extract the underlying physics. we discuss how to approach such data sets in general, and in the specific case of a graphene - boron nitride - graphite heterostructure photocell. | arxiv:1812.03232 |
the neutrino fluxes calculated from 14 standard solar models published recently in refereed journals are inconsistent with the results of the 4 pioneering solar neutrino experiments if nothing happens to the neutrinos after they are created in the solar interior. the sound speeds calculated from standard solar models are in excellent agreement with helioseismological measurements of sound speeds. some statements made by dar at neutrino 96 are answered here. | arxiv:hep-ph/9610542 |
a new entanglement measure, which is called d - concurrence, is proposed. then the upper and lower bounds for d - concurrence are obtained and the relationship between d - concurrence and the usual concurrence of wootters is established. in addition, compared with the usual concurrence, d - concurrence has some special merits. | arxiv:0910.5769 |
membership inference attacks ( mias ) aim to identify specific data samples within the private training dataset of machine learning models, leading to serious privacy violations and other sophisticated threats. many practical black - box mias require query access to the data distribution ( the same distribution where the private data is drawn ) to train shadow models. by doing so, the adversary obtains models trained " with " or " without " samples drawn from the distribution, and analyzes the characteristics of the samples under consideration. the adversary is often required to train more than hundreds of shadow models to extract the signals needed for mias ; this becomes the computational overhead of mias. in this paper, we propose that by strategically choosing the samples, mi adversaries can maximize their attack success while minimizing the number of shadow models. first, our motivational experiments suggest memorization as the key property explaining disparate sample vulnerability to mias. we formalize this through a theoretical bound that connects mi advantage with memorization. second, we show sample complexity bounds that connect the number of shadow models needed for mias with memorization. lastly, we confirm our theoretical arguments with comprehensive experiments ; by utilizing samples with high memorization scores, the adversary can ( a ) significantly improve its efficacy regardless of the mia used, and ( b ) reduce the number of shadow models by nearly two orders of magnitude compared to state - of - the - art approaches. | arxiv:2310.08015 |
we use ab initio calculations to examine thermodynamic factors that could promote the formation of recently proposed unique op10 - feb4 and op12 - feb2 compounds. we demonstrate that these compact boron - rich phases are stabilized further under pressure. we also show that chromium tetraboride is more stable in the new op10 rather than the reported oi10 structure which opens up the possibility of realizing an op10 - fe ( x ) cr ( 1 - x ) b4 pseudobinary material. in addition to exhibiting remarkable electronic features, op10 - feb4 and op12 - feb2 are expected to be harder than the known fe - b compounds commonly used for hard coating applications. | arxiv:1104.2136 |
the engineer title. in this case the master ' s degree is obtained after 1 year of studies. only people with an engineer title can be employed as engineers. still, some with competence and experience in an engineering field that do not have such a title, can still be employed to perform engineering tasks as " specialist ", " assistant ", " technologist " or " technician ". but, only engineers can take legal responsibility and provide guarantee upon the work done by a team in their area of expertise. sometimes a company working in this area, which temporarily does not have any employees with an engineer title must pay for an external service of an engineering audit to provide legal guarantee for their products or services. = = = russia = = = moscow school of mathematics and navigation was the first russian educational institution founded by peter the great in 1701. it provided russians with technical education for the first time and much of its curriculum was devoted to producing sailors, engineers, cartographers and bombardiers to support russia ' s expanding navy and army. then in 1810, the saint petersburg military engineering - technical university became the first engineering higher learning institution in the russian empire, after the addition of officers classes and the application of a five - year term of teaching. so initially more rigorous standards and teaching terms became the traditional historical feature of russian engineering education in the 19th century. = = = slovakia = = = in slovakia, an engineer ( inzinier ) is considered to be a person holding a master ' s degree in technical sciences or economics. several technical and economic universities offer 4 - 5 - year master study in the fields of chemistry, agriculture, material technology, computer science, electrical and mechanical engineering, nuclear physics and technology or economics. a bachelor ' s degree in a similar field is a prerequisite. graduates are awarded the ing. title, always put in front of one ' s name ; eventual follow - up doctoral study is offered both by universities and some institutes of the slovak academy of sciences. = = = spain = = = in spain, the engineering degree is delivered by universities in engineering schools, called " escuelas de ingenieria ". like with any other degree in spain, students need to pass a series of examinations based on bachillerato ' s subjects ( selectividad ), select their bachelor ' s degree, and their marks determine whether they can access the degree they want or not. students receive first a grado degree ( 4 years of studies ) followed by a master ' s degree ( 1 – 2 years of studies ) according | https://en.wikipedia.org/wiki/Engineering_education |
to characterize circumstellar systems in high contrast imaging, the fundamental step is to construct a best point spread function ( psf ) template for the non - circumstellar signals ( i. e., star light and speckles ) and separate it from the observation. with existing psf construction methods, the circumstellar signals ( e. g., planets, circumstellar disks ) are unavoidably altered by over - fitting and / or self - subtraction, making forward modeling a necessity to recover these signals. we present a forward modeling - - free solution to these problems with data imputation using sequential non - negative matrix factorization ( di - snmf ). di - snmf first converts this signal separation problem to a " missing data " problem in statistics by flagging the regions which host circumstellar signals as missing data, then attributes psf signals to these regions. we mathematically prove it to have negligible alteration to circumstellar signals when the imputation region is relatively small, which thus enables precise measurement for these circumstellar objects. we apply it to simulated point source and circumstellar disk observations to demonstrate its proper recovery of them. we apply it to gemini planet imager ( gpi ) k1 - band observations of the debris disk surrounding hr 4796a, finding a tentative trend that the dust is more forward scattering as the wavelength increases. we expect di - snmf to be applicable to other general scenarios where the separation of signals is needed. | arxiv:2001.00563 |
we prove a logarithmic improvement of the caffarelli - kohn - nirenberg partial regularity theorem for the navier - stokes equations. the key idea is to find a quantitative counterpart for the absolute continuity of the dissipation energy using the pigeonhole principle. based on the same method, for any suitable weak solution, we show the existence of intervals of regularity in one spatial direction with length depending exponentially on the natural local energies of the solution. then, we give two applications of the latter result in the axially symmetric case. the first one is a local quantitative regularity criterion for suitable weak solutions with small swirl. the second one is a slightly improved one - point ckn criterion which implies all known ( slightly supercritical ) type i regularity results in the literature. | arxiv:2210.01783 |
we prove the convergence of greedy and randomized versions of schwarz iterative methods for solving linear elliptic variational problems based on infinite space splittings of a hilbert space. for the greedy case, we show a squared error decay rate of $ o ( ( m + 1 ) ^ { - 1 } ) $ for elements of an approximation space $ \ mathcal { a } _ 1 $ related to the underlying splitting. for the randomized case, we show an expected squared error decay rate of $ o ( ( m + 1 ) ^ { - 1 } ) $ on a class $ \ mathcal { a } _ { \ infty } ^ { \ pi } \ subset \ mathcal { a } _ 1 $ depending on the probability distribution. | arxiv:1501.00938 |
let $ v $ be a continuous flow with arbitrary singularities on a compact surface. then we show that if $ v $ is non - wandering then $ v $ is topologically equivalent to a $ c ^ { \ infty } $ flow such that there are no exceptional orbits and $ \ mathrm { p } \ sqcup \ mathop { \ mathrm { sing } } ( v ) = \ { x \ in m \ mid \ omega ( x ) \ cup \ alpha ( x ) \ subseteq \ mathop { \ mathrm { sing } } ( v ) \ } $, where $ \ mathrm { p } $ is the union of non - closed proper orbits and $ \ sqcup $ is the disjoint union symbol. moreover, $ v $ is non - wandering if and only if $ \ overline { \ mathrm { ld } \ sqcup \ mathop { \ mathrm { per } } ( v ) } \ supseteq m - \ mathop { \ mathrm { sing } } ( v ) $, where $ \ mathrm { ld } $ is the union of locally dense orbits and $ \ overline { a } $ is the closure of a subset $ a \ subseteq m $. on the other hand, $ v $ is topologically transitive if and only if $ v $ is non - wandering such that $ \ mathop { \ mathrm { int } } ( \ mathop { \ mathrm { per } } ( v ) \ sqcup \ mathop { \ mathrm { sing } } ( v ) ) = \ emptyset $ and $ m - ( \ mathrm { p } \ sqcup \ mathop { \ mathrm { sing } } ( v ) ) $ is connected, where $ \ mathrm { int } { a } $ is the interior of a subset $ a \ subseteq m $. in addition, we construct a smooth flow on $ \ mathbb { t } ^ 2 $ with $ \ overline { \ mathrm { p } } = \ overline { \ mathrm { ld } } = \ mathbb { t } ^ 2 $. | arxiv:1210.7623 |
we generalize the hierarchy construction to generic 2 + 1d topological orders ( which can be non - abelian ) by condensing abelian anyons in one topological order to construct a new one. we show that such construction is reversible and leads to a new equivalence relation between topological orders. we refer to the corresponding equivalent class ( the orbit of the hierarchy construction ) as " the non - abelian family ". each non - abelian family has one or a few root topological orders with the smallest number of anyon types. all the abelian topological orders belong to the trivial non - abelian family whose root is the trivial topological order. we show that abelian anyons in root topological orders must be bosons or fermions with trivial mutual statistics between them. the classification of topological orders is then greatly simplified, by focusing on the roots of each family : those roots are given by non - abelian modular extensions of representation categories of abelian groups. | arxiv:1701.07820 |
federated learning ( fl ) is a distributed machine learning paradigm designed for privacy - sensitive applications that run on resource - constrained devices with non - identically and independently distributed ( iid ) data. traditional fl frameworks adopt the client - server model with a single - level aggregation ( agr ) process, where the server builds the global model by aggregating all trained local models received from client devices. however, this conventional approach encounters challenges, including susceptibility to model / data poisoning attacks. in recent years, advancements in the internet of things ( iot ) and edge computing have enabled the development of hierarchical fl systems with a two - level agr process running at edge and cloud servers. in this paper, we propose a secure hierarchical fl ( shfl ) framework to address poisoning attacks in hierarchical edge networks. by aggregating trained models at the edge, shfl employs two novel methods to address model / data poisoning attacks in the presence of client adversaries : 1 ) a client selection algorithm running at the edge for choosing iot devices to participate in training, and 2 ) a model agr method designed based on convex optimization theory to reduce the impact of edge models from networks with adversaries in the process of computing the global model ( at the cloud level ). the evaluation results reveal that compared to state - of - the - art methods, shfl significantly increases the maximum accuracy achieved by the global model in the presence of client adversaries applying model / data poisoning attacks. | arxiv:2409.15067 |
we review the calculation of fermi ' s golden rule for a system of $ n $ - body dipoles, magnetic or electric, weakly interacting with a blackbody radiation. by using the magnetic or electric field - field correlation function evaluated in the 1960s for the black body radiation, we deduce a general formula for the transition rates and study its limiting, fully coherent or fully incoherent, regimes. | arxiv:1606.08276 |
the sunyaev - zeldovich ( sz ) effect provides a powerful cosmological probe, which traditionally is approached independently as cluster number count ( cnc ) or power spectrum ( ps ) analysis. here, we devise a new method for analysing the $ y $ - map by introducing the survey completeness function, conventionally only used in the cnc analysis, in the $ yy $ - ps modeling. this provides a systematic method, based mainly on sz observables, for obtaining two complementary $ y $ - maps, one incorporating detected / resolved clusters and the other relying only on diffuse / unresolved sz contributions. we use the catalogue of clusters obtained in the \ planck cnc analysis to define the completeness function linking these two $ y $ - maps. the split depends on the chosen signal - to - noise detection threshold, which we vary in our discussion. we carefully propagate the effect of completeness cuts on the non - gaussian error contributions in the $ yy $ - ps analysis, highlighting the benefits of masking massive clusters. our analysis of the \ planck $ yy $ - ps for the unresolved component yields a mass bias of $ b = 0. 15 \ pm0. 04 $, consistent with the standard value ( $ b \ approx0. 2 $ ), in comparison to $ b = 0. 4 \ pm 0. 05 $ for the total $ yy $ - ps. we find indications for this drift being driven by the cib - tsz cross correlation, which dominantly originates from clusters in the resolved component of the $ y $ - map. another possible explanation is the presence of a mass - dependent bias, which has been theoretically motivated and can be quantified with our novel method. we furthermore find first hints for the presence of the 2 - halo terms in the $ yy $ - ps. finally, the proposed method provides a new framework for combining the complementary information of the cnc and ps analyses in upcoming sz surveys. | arxiv:2010.07797 |
we introduce the notion of families of n - marked smooth rational tropical curves over smooth tropical varieties and establish a one - to - one correspondence between ( equivalence classes of ) these families and morphisms from smooth tropical varieties into the moduli space of n - marked abstract rational tropical curves. | arxiv:1105.1674 |
the problem of action recognition involves locating the action in the video, both over time and spatially in the image. the dominant current approaches use supervised learning to solve this problem, and require large amounts of annotated training data, in the form of frame - level bounding box annotations around the region of interest. in this paper, we present a new approach based on continual learning that uses feature - level predictions for self - supervision. it does not require any training annotations in terms of frame - level bounding boxes. the approach is inspired by cognitive models of visual event perception that propose a prediction - based approach to event understanding. we use a stack of lstms coupled with a cnn encoder, along with novel attention mechanisms, to model the events in the video and use this model to predict high - level features for the future frames. the prediction errors are used to continuously learn the parameters of the models. this self - supervised framework is not as complicated as other approaches but is very effective in learning robust visual representations for both labeling and localization. it should be noted that the approach produces outputs in a streaming fashion, requiring only a single pass through the video, making it amenable to real - time processing. we demonstrate this on three datasets - ucf sports, jhmdb, and thumos ' 13 and show that the proposed approach outperforms weakly - supervised and unsupervised baselines and obtains competitive performance compared to fully supervised baselines. finally, we show that the proposed framework can generalize to egocentric videos and obtain state - of - the - art results in unsupervised gaze prediction. | arxiv:2003.12185 |
accenture plc is a global multinational professional services company originating in the united states and headquartered in dublin, ireland, that specializes in information technology ( it ) services and management consulting. it was founded in 1989. a fortune global 500 company, it reported revenues of $ 64. 9 billion in 2024. = = history = = = = = formation and early years = = = accenture began as the business and technology consulting division of accounting firm arthur andersen in the early 1950s. the division conducted a feasibility study for general electric to install a computer at appliance park in louisville, kentucky, which led to ge ' s installation of a univac i computer and printer, believed to be the first commercial use of a computer in the united states. = = = split from arthur andersen = = = in 1989, arthur andersen and andersen consulting became separate units of andersen worldwide societe cooperative ( awsc ). throughout the 1990s, tensions grew between the two units. andersen consulting was paying arthur andersen up to 15 % of its profits each year ( a provision of the 1989 split was that the more profitable unit – whether aa or ac, pay the other the 15 percent ), while at the same time arthur andersen was competing with andersen consulting through its own newly established business consulting service line called arthur andersen business consulting. this dispute came to a head in 1998, when andersen consulting put the 15 % transfer payment for that year and future years into escrow and issued a claim for breach of contract against awsc and arthur andersen. in 2000, as a result of arbitration, andersen consulting broke all contractual ties with awsc and arthur andersen. as part of the arbitration settlement, andersen consulting paid $ 1. 2 billion to arthur andersen. on 1 january 2001, andersen consulting adopted the name, " accenture ". the word " accenture " was derived from " accent on the future ". the name " accenture " was submitted by kim petersen, a danish employee from the company ' s oslo, norway office. petersen hoped that the name would not be offensive in any country in which accenture operates, because the word itself was meaningless. = = = incorporation and public listing = = = accenture was incorporated in bermuda in 2001. on 19 july 2001, accenture ' s initial public offering ( ipo ) was priced at $ 14. 50 per share, and the shares began trading on the new york stock exchange. because of the split from andersen, accenture avoided prosecution on june 16, 2002, when the u. s. securities and exchange commission prosecuted arthur | https://en.wikipedia.org/wiki/Accenture |
we study many - body localization in a hardcore boson model in the presence of random disorder on finite generation fractal lattices with different hausdorff dimensions and different local lattice structures. in particular, we consider the vicsek, t - shaped, sierpinski gasket, and modified koch - curve fractal lattices. in the single - particle case, these systems display anderson localization for arbitrary disorder strength if they are large enough. in the many - body case, the systems available to exact diagonalization exhibit a transition between a delocalized and localized regime, visible in the spectral and entanglement properties of these systems. the position of this transition depends on the hausdorff dimension of the given fractal, as well as on its local structure. | arxiv:2111.13516 |
lowered symmetry enables access to a wide set of responses not typically accessible in high symmetry materials. prime examples are time - reversal forbidden quantum geometric photocurrent responses ( e. g., linear injection and circular shift photocurrents ) that are thought to vanish in non - magnetic materials. here we argue that polariton - drag processes make it possible to unblock such quantum geometric photocurrents even in non - magnetic and centrosymmetric materials. strikingly, we uncover how a cooperative effect between finite q irradiation and the fermi surface position leads to a polariton selective photoexcitation ( psp ). psp makes it possible to directly address carriers within tight momentum resolved windows of the fermi surface to yield giant enhancements of quantum geometric photocurrents. this selectivity makes it possible to directly track momentum resolved quantum geometric quantities along the fermi surface, providing a new tool to interrogate the quantum geometry of high symmetry materials. | arxiv:2108.07823 |
results from several recent experiments provide indirect evidence in favor of the existence of a 4th generation neutrino. such a neutrino of mass about 50 gev is compatible with current physical and astrophysical constraints and well motivated in the framework of superstring phenomenology. if sufficiently stable, the existence of such a neutrino leads to a drastic change in higgs boson physics : for a wide range of higgs boson masses the dominant mode of higgs boson decay is invisible and the branching ratios for the most promising modes of higgs boson search are significantly reduced. the proper strategy of higgs boson searches in such a framework is discussed. it is shown that in the same framework the absence of a signal in the search for invisible higgs boson decay at lep means either that the higgs mass is greater than 113. 5 gev or that the difference between the higgs mass and twice the neutrino mass is small. | arxiv:hep-ph/0210153 |
non - orthogonal multiple access ( noma ) has shown potential for scalable multicast of video data. however, one key drawback for noma - based video multicast is the limited number of layers allowed by the embedded successive interference cancellation algorithm, failing to meet satisfaction of heterogeneous receivers. we propose a novel receiver - driven superposed video multicast ( supcast ) scheme by integrating softcast, an analog - like transmission scheme, into the noma - based system to achieve high bandwidth efficiency as well as gradual decoding quality proportional to channel conditions at receivers. although softcast allows gradual performance by directly transmitting power - scaled transformation coefficients of frames, it suffers performance degradation due to discarding coefficients under insufficient bandwidth and its power allocation strategy cannot be directly applied in noma due to interference. in supcast, coefficients are grouped into chunks, which are basic units for power allocation and superposition scheduling. by bisecting chunks into base - layer chunks and enhanced - layer chunks, the joint power allocation and chunk scheduling is formulated as a distortion minimization problem. a two - stage power allocation strategy and a near - optimal low - complexity algorithm for chunk scheduling based on the matching theory are proposed. simulation results have shown the advantage of supcast against softcast as well as the reference scheme in noma under various practical scenarios. | arxiv:1812.06713 |
just recently, the concept of augmented and virtual reality ( ar / vr ) over wireless has taken the entire 5g ecosystem by storm, spurring unprecedented interest from academia, industry and others. yet, the success of an immersive vr experience hinges on solving a plethora of grand challenges cutting across multiple disciplines. this article underscores the importance of vr technology as a disruptive use case of 5g ( and beyond ) harnessing the latest developments in storage / memory, fog / edge computing, computer vision, artificial intelligence and others. in particular, the main requirements of wireless interconnected vr are described, followed by a selection of key enablers ; then, research avenues and their underlying grand challenges are presented. furthermore, we examine three vr case studies and provide numerical results under various storage, computing and network configurations. finally, this article exposes the limitations of current networks and makes the case for more theory and innovations to spearhead vr for the masses. | arxiv:1611.05356 |
we solve for the statistics of the first detection of a quantum system in a particular desired state, when the system is subject to a projective measurement at independent identically distributed random time intervals. we present formulas for the probability of detection in the $ n $ th attempt. we also calculate the mean and mean square of both the number of the first successful detection attempt and the time till first detection. we present explicit results for a particle initially localized at a site on a ring of size $ l $, probed at some arbitrary given site, in the case when the detection intervals are distributed exponentially. we prove that, for all interval distributions and finite - dimensional hamiltonians, the mean detection time is equal to the mean attempt number times the mean time interval between attempts. we further prove that for the return problem, when the initial and target states are identical, the total detection probability is unity and the mean number of attempts till detection is an integer, which is the size of the hilbert space ( symmetrized about the target state ). we study an interpolation from the fixed time interval case to an exponential distribution of time intervals via the gamma distribution with constant mean and varying width. the mean arrival time as a function of the mean interval changes qualitatively as we tune the inter - arrival time distribution from very narrow ( delta peaked ) to exponential, as resonances are wiped out by the randomness of the sampling. | arxiv:2012.01763 |
at hadron colliders, the differential cross section for $ w $ production can be factorized and it is sensitive to transverse momentum dependent distributions ( tmd ) at low boson transverse momentum. while, often, the corresponding non - perturbative qcd contributions are extrapolated from $ z $ boson production, here we use an existing extraction ( based on the code artemide ) of tmd which includes data coming from drell - yan and semi - inclusive deep inelastic scattering, to provide checks and predictions for the $ w $ case. including fiducial cuts with different configurations and kinematical power corrections, we consider transverse momentum dependent cross sections within several intervals of the vector boson transverse mass. we perform the same study for the $ p _ t ^ { w ^ - } / p _ t ^ { w ^ + } $ and $ p _ t ^ z / p _ t ^ w $ distributions. we compare our predictions with recent extractions of these quantities at atlas and cms and results from tevatron. the results encourage broader experimental and phenomenological work, and a deeper study of tmd for the $ w $ case. | arxiv:2011.05351 |
let $ ( e, \ mathcal e, \ mu ) $ be a measure space and $ g \ colon e \ times e \ to [ 0, \ infty ] $ be measurable. moreover, let $ \ mathcal f \! _ { ui } $ denote the set of all $ q \ in \ mathcal e ^ + $ ( measurable numerical functions $ q \ ge 0 $ on $ e $ ) such that $ \ { g ( x, \ cdot ) q \ colon x \ in e \ } $ is uniformly integrable, and let $ \ mathcal f \! _ { co } $ denote the set of all $ q \ in \ mathcal e ^ + $ such that the mapping $ f \ mapsto g ( fq ) : = \ int g ( \ cdot, y ) f ( y ) q ( y ) \, d \ mu ( y ) $ is a compact operator on the space $ \ mathcal e _ b $ of bounded measurable functions on $ e $ ( equipped with the sup - norm ). it is shown that $ \ mathcal f \! _ { ui } = \ mathcal f \! _ { co } $ provided both $ \ mathcal f \! _ { ui } $ and $ \ mathcal f \! _ { co } $ contain strictly positive functions. | arxiv:2201.09080 |
introduced by mallows as a ranking model in statistics, mallows permutation model is a class of non - uniform probability distributions on the symmetric group $ s _ n $. the model depends on a distance metric on $ s _ n $ and a scale parameter $ \ beta $. in this paper, we take the distance metric to be the $ l ^ 1 $ distance ( also known as spearman ' s footrule in the statistics literature ), and investigate the cycle structure of random permutations drawn from mallows permutation model with the $ l ^ 1 $ distance. we focus on the parameter regime where $ \ beta > 0 $. we show that the expected length of the cycle containing a given point is of order $ \ min \ { \ max \ { \ beta ^ { - 2 }, 1 \ }, n \ } $, and the expected diameter of the cycle containing a given point is of order $ \ min \ { e ^ { - 2 \ beta } \ max \ { \ beta ^ { - 2 }, 1 \ }, n - 1 \ } $. moreover, when $ \ beta \ ll n ^ { - 1 \ slash 2 } $, the sorted cycle lengths ( in descending order ) normalized by $ n $ converge in distribution to the poisson - dirichlet law with parameter $ 1 $. the proofs of the results rely on the hit and run algorithm, a markov chain for sampling from the model. | arxiv:2312.15833 |
the fast and accessible verification of nonclassical resources is an indispensable step towards a broad utilization of continuous - variable quantum technologies. here, we use machine learning methods for the identification of nonclassicality of quantum states of light by processing experimental data obtained via homodyne detection. for this purpose, we train an artificial neural network to classify classical and nonclassical states from their quadrature - measurement distributions. we demonstrate that the network is able to correctly identify classical and nonclassical features from real experimental quadrature data for different states of light. furthermore, we show that nonclassicality of some states that were not used in the training phase is also recognized. circumventing the requirement of the large sample sizes needed to perform homodyne tomography, our approach presents a promising alternative for the identification of nonclassicality for small sample sizes, indicating applicability for fast sorting or direct monitoring of experimental data. | arxiv:2101.07112 |
knowledge graphs ( kgs ) are crucial in the field of artificial intelligence and are widely applied in downstream tasks, such as enhancing question answering ( qa ) systems. the construction of kgs typically requires significant effort from domain experts. recently, large language models ( llms ) have been used for knowledge graph construction ( kgc ), however, most existing approaches focus on a local perspective, extracting knowledge triplets from individual sentences or documents. in this work, we introduce graphusion, a zero - shot kgc framework from free text. the core fusion module provides a global view of triplets, incorporating entity merging, conflict resolution, and novel triplet discovery. we showcase how graphusion could be applied to the natural language processing ( nlp ) domain and validate it in the educational scenario. specifically, we introduce tutorqa, a new expert - verified benchmark for graph reasoning and qa, comprising six tasks and a total of 1, 200 qa pairs. our evaluation demonstrates that graphusion surpasses supervised baselines by up to 10 % in accuracy on link prediction. additionally, it achieves average scores of 2. 92 and 2. 37 out of 3 in human evaluations for concept entity extraction and relation recognition, respectively. | arxiv:2407.10794 |
the $ \ lambda $ cdm model of structure formation makes strong predictions on the concentration and shape of dm ( dark matter ) halos, which are determined by mass accretion processes. comparison between predicted shapes and observations provides a geometric test of the $ \ lambda $ cdm model. accurate and precise measurements need a full three - dimensional analysis of the cluster mass distribution. we accomplish this with a multi - probe 3d analysis of the x - ray regular clash ( cluster lensing and supernova survey with hubble ) clusters combining strong and weak lensing, x - ray photometry and spectroscopy, and the sunyaev - zel ' dovich effect. the cluster shapes and concentrations are consistent with $ \ lambda $ cdm predictions. the clash clusters are randomly oriented, as expected given the sample selection criteria. shapes agree with numerical results for dm - only halos, which hints that baryonic physics is not so effective in making halos rounder. | arxiv:1804.00667 |
this manual describes a set of utilities developed for lattice qcd computations. they are collectively called qcdutils. they comprise a set of python programs each of them with a specific function : download gauge ensembles from the public nersc repository, convert between formats, split files by time - slices, compile and run physics algorithms, generate visualizations in the form of vtk files, convert the visualizations into images, perform bootstrap analysis of results, fit the results of the analysis, and plot those results. these tools implement the typical workflow of most lattice qcd computations and automate it by enforcing filename conventions : the output of one tool is read by the next tool in the workflow. this manual is organized as a series of autonomous recipes which can be combined together. | arxiv:1202.4813 |
the logic of bunched implications ( bi ) freely combines additive and multiplicative connectives, including implications ; however, despite its well - studied proof theory, proof - search in bi has always been a difficult problem. the focusing principle is a restriction of the proof - search space that can capture various goal - directed proof - search procedures. in this paper, we show that focused proof - search is complete for bi by first reformulating the traditional bunched sequent calculus using the simpler data - structure of nested sequents, following with a polarised and focused variant that we show is sound and complete via a cut - elimination argument. this establishes an operational semantics for focused proof - search in the logic of bunched implications. | arxiv:2010.08352 |
chain event graphs are a family of probabilistic graphical models that generalise bayesian networks and have been successfully applied to a wide range of domains. unlike bayesian networks, these models can encode context - specific conditional independencies as well as asymmetric developments within the evolution of a process. more recently, new model classes belonging to the chain event graph family have been developed for modelling time - to - event data to study the temporal dynamics of a process. however, existing model selection algorithms for chain event graphs and its variants rely on all parameters having conjugate priors. this is unrealistic for many real - world applications. in this paper, we propose a mixture modelling approach to model selection in chain event graphs that does not rely on conjugacy. moreover, we also show that this methodology is more amenable to being robustly scaled than the existing model selection algorithms used for this family. we demonstrate our techniques on simulated datasets. | arxiv:2211.03427 |
we investigate a simple model for social learning with two agents : a teacher and a student. the teacher ' s goal is to teach the student the state of the world ; however, the teacher himself is not certain about the state of the world and needs to simultaneously learn this parameter and teach it to the student. we model the teacher ' s and student ' s uncertainties via noisy transmission channels, and employ two simple decoding strategies for the student. we focus on two teaching strategies : a " low - effort " strategy of simply forwarding information, and a " high - effort " strategy of communicating the teacher ' s current best estimate of the world at each time instant, based on his own cumulative learning. using tools from large deviation theory, we calculate the exact learning rates for these strategies and demonstrate regimes where the low - effort strategy outperforms the high - effort strategy. finally, we present a conjecture concerning the optimal learning rate for the student over all joint strategies between the student and the teacher. | arxiv:1901.07063 |
considering the lack of a unified framework for image description and deep cultural analysis at the subject level in the field of ancient chinese paintings ( acp ), this study utilized the beijing palace museum ' s acp collections to develop a semantic model integrating the iconological theory with a new workflow for term extraction and mapping. our findings underscore the model ' s effectiveness. sdm can be used to support further art - related knowledge organization and cultural exploration of acps. | arxiv:2501.08352 |
the distribution of galaxies and clusters of galaxies on the mega - parsec scale of the universe follows an intricate pattern now famously known as the large - scale structure or the cosmic web. to study the environments of this network, several techniques have been developed that are able to describe its properties and the properties of groups of galaxies as a function of their environment. in this work we analyze the previously introduced framework : 1 - dimensional recovery, extraction, and analysis of manifolds ( 1 - dream ) on n - body cosmological simulation data of the cosmic web. the 1 - dream toolbox consists of five machine learning methods, whose aim is the extraction and modelling of 1 - dimensional structures in astronomical big data settings. we show that 1 - dream can be used to extract structures of different density ranges within the cosmic web and to create probabilistic models of them. for demonstration, we construct a probabilistic model of an extracted filament and move through the structure to measure properties such as local density and velocity. we also compare our toolbox with a collection of methodologies which trace the cosmic web. we show that 1 - dream is able to split the network into its various environments with results comparable to the state - of - the - art methodologies. a detailed comparison is then made with the public code disperse, in which we find that 1 - dream is robust against changes in sample size making it suitable for analyzing sparse observational data, and finding faint and diffuse manifolds in low density regions. | arxiv:2302.03779 |
##heet, or using a framework for designing learning environments ) = = = artificial intelligence = = = the academic study and development of artificial intelligence can be dated to at least 1956 when cognitive scientists began to investigate thought and learning processes in humans and machines. the earliest uses of ai in education can be traced to the development of intelligent tutoring systems ( its ) and their application in enhancing educational experiences. they are designed to provide immediate and personalized feedback to students. the incentive to develop its comes from educational studies showing that individual tutoring is much more effective than group teaching, in addition to the need for promoting learning on a larger scale. over the years, a combination of cognitive science and data - driven techniques have enhanced the capabilities of its, allowing it to model a wide range of students ' characteristics, such as knowledge, affect, off - task behavior, and wheel spinning. there is ample evidence that its are highly effective in helping students learn. its can be used to keep students in the zone of proximal development ( zpd ) : the space wherein students may learn with guidance. such systems can guide students through tasks slightly above their ability level. generative artificial intelligence ( genai ) gained widespread public attention with the introduction of chatgpt in november 2022. this caused alarm among k - 12 and higher education institutions, with a few large school districts quickly banning genai, due to concerns about potential academic misconduct. however, as the debate developed, these bans were largely reversed within a few months. to combat academic misconduct, detection tools have been developed, but their accuracy is limited. there have been various use cases in education, including providing personalized feedback, brainstorming classroom activities, support for students with special needs, streamlining administrative tasks, and simplifying assessment processes. however, genai can output incorrect information, also known as hallucination. its outputs can also be biased, leading to calls for transparency regarding the data used to train genai models and their use. providing professional development for teachers and developing policies and regulations can help mitigate the ethical concerns of genai. and while ai systems can provide individualized instruction and adaptive feedback to students, they have the potential to impact students ' sense of classroom community. = = settings and sectors = = = = = preschool = = = various forms of electronic media can be a feature of preschool life. although parents report a positive experience, the impact of such use has not been systematically assessed. the age when a given child might start using a particular technology, such as a | https://en.wikipedia.org/wiki/Educational_technology |
this paper conceptualizes the deep weight spaces ( dws ) of neural architectures as hierarchical, fractal - like, coarse geometric structures observable at discrete integer scales through recursive dilation. we introduce a coarse group action termed the fractal transformation, $ t _ { r _ k } $, acting under the symmetry group $ g = ( \ mathbb { z }, + ) $, to analyze neural parameter matrices or tensors, by segmenting the underlying discrete grid $ \ omega $ into $ n ( r _ k ) $ fractals across varying observation scales $ r _ k $. this perspective adopts a box count technique, commonly used to assess the hierarchical and scale - related geometry of physical structures, which has been extensively formalized under the topic of fractal geometry. we assess the structural complexity of neural layers by estimating the hausdorff - besicovitch dimension of their layers and evaluating a degree of self - similarity. the fractal transformation features key algebraic properties such as linearity, identity, and asymptotic invertibility, which is a signature of coarse structures. we show that the coarse group action exhibits a set of symmetries such as discrete scale invariance ( dsi ) under recursive dilation, strong invariance followed by weak equivariance to permutations, alongside respecting the scaling equivariance of activation functions, defined by the intertwiner group relations. our framework targets large - scale structural properties of dws, deliberately overlooking minor inconsistencies to focus on significant geometric characteristics of neural networks. experiments on cifar - 10 using resnet - 18, vgg - 16, and a custom cnn validate our approach, demonstrating effective fractal segmentation and structural analysis. | arxiv:2503.14298 |
there is a strong correlation between linguistics and artificial intelligence ( ai ), best manifested by deep learning language models. this study provides a thorough scientometric analysis of this correlation, synthesizing the intellectual production during 51 years, from 1974 to 2024. it involves 5750 web of science - indexed articles published in 2124 journals, which are written by 20835 authors belonging to 13773 research centers in 794 countries. two powerful software tools, viz., citespace and vosviewer, were used to generate mapping visualizations of the intellectual landscape, trending issues and ( re ) emerging hotspots. the results indicate that in the 1980s and 1990s, linguistics and ai research was not robust, characterized by unstable publication over time. it has, however, witnessed a remarkable increase in publication since then, reaching 1478 articles in 2023, and 546 articles in the january - march timespan of 2024, involving emerging issues and hotspots, addressing new horizons, new topics, and launching new applications and powerful deep learning language models including chatgpt. | arxiv:2411.19858 |
conventional causal estimands, such as the average treatment effect ( ate ), reflect how the mean outcome in a population or subpopulation would change if all units received treatment versus control. real - world policy changes, however, are often incremental, changing the treatment status for only a small segment of the population who are at or near " the margin of participation. " to capture this notion, two parallel lines of inquiry have developed in economics and in statistics and epidemiology that define, identify, and estimate what we call interventional effects. in this article, we bridge these two strands of literature by defining interventional effect ( ie ) as the per capita effect of a treatment intervention on an outcome of interest, and marginal interventional effect ( mie ) as its limit when the size of the intervention approaches zero. the ie and mie can be viewed as the unconditional counterparts of the policy - relevant treatment effect ( prte ) and marginal prte ( mprte ) proposed in the economics literature. however, different from prte and mprte, ie and mie are defined without reference to a latent index model, and, as we show, can be identified either under unconfoundedness or through the use of instrumental variables. for both scenarios, we show that mies are typically identified without the strong positivity assumption required of the ate, highlight several " stylized interventions " that may be of particular interest in policy analysis, discuss several parametric and semiparametric estimation strategies, and illustrate the proposed methods with an empirical example. | arxiv:2206.10717 |