Large-scale manned space flight within the solar system is still confronted with the solution of two problems: 1. a propulsion system to transport large payloads with short transit times between different planetary orbits; 2. a cost-effective lifting of large payloads into Earth orbit. For the solution of the first problem, a deuterium fusion bomb propulsion system is proposed where a thermonuclear detonation wave is ignited in a small cylindrical assembly of deuterium with a gigavolt-multimegampere proton beam, drawn from the magnetically insulated spacecraft acting in the ultrahigh vacuum of space as a gigavolt capacitor. For the solution of the second problem, the ignition is done by argon ion lasers driven by high explosives, with the lasers destroyed in the fusion explosion and becoming part of the exhaust.
arxiv:0812.0397
For a hypersurface in $\mathbb{R}^3$, Willmore flow is defined as the $L^2$-gradient flow of the classical Willmore energy: the integral of the squared mean curvature. This geometric evolution law is of interest in differential geometry, image reconstruction and mathematical biology. In this paper, we propose novel numerical approximations for the Willmore flow of axisymmetric hypersurfaces. For the semidiscrete continuous-in-time variants we prove a stability result. We consider both closed surfaces, and surfaces with a boundary. In the latter case, we carefully derive weak formulations of suitable boundary conditions. Furthermore, we consider many generalizations of the classical Willmore energy, particularly those that play a role in the study of biomembranes. In the generalized models we include spontaneous curvature and area difference elasticity (ADE) effects, Gaussian curvature and line energy contributions. Several numerical experiments demonstrate the efficiency and robustness of our developed numerical methods.
arxiv:1911.01132
Cross-view geo-localization is a promising solution for large-scale localization problems, requiring the sequential execution of retrieval and metric localization tasks to achieve fine-grained predictions. However, existing methods typically focus on designing standalone models for these two tasks, resulting in inefficient collaboration and increased training overhead. In this paper, we propose UnifyGeo, a novel unified hierarchical geo-localization framework that integrates retrieval and metric localization tasks into a single network. Specifically, we first employ a unified learning strategy with shared parameters to jointly learn multi-granularity representations, facilitating mutual reinforcement between these two tasks. Subsequently, we design a re-ranking mechanism guided by a dedicated loss function, which enhances geo-localization performance by improving both retrieval accuracy and metric localization references. Extensive experiments demonstrate that UnifyGeo significantly outperforms the state-of-the-art in both task-isolated and task-associated settings. Remarkably, on the challenging VIGOR benchmark, which supports fine-grained localization evaluation, the 1-meter-level localization recall rate improves from 1.53\% to 39.64\% and from 0.43\% to 25.58\% under same-area and cross-area evaluations, respectively. Code will be made publicly available.
arxiv:2505.07622
This paper presents a novel algorithm named motion-encoded particle swarm optimization (MPSO) for finding a moving target with unmanned aerial vehicles (UAVs). From Bayesian theory, the search problem can be converted to the optimization of a cost function that represents the probability of detecting the target. Here, the proposed MPSO is developed to solve that problem by encoding the search trajectory as a series of UAV motion paths evolving over the generations of particles in a PSO algorithm. This motion-encoded approach allows for preserving important properties of the swarm, including cognitive and social coherence, thus resulting in better solutions. Results from extensive simulations with existing methods show that the proposed MPSO improves the detection performance by 24\% and time performance by 4.71 times compared to the original PSO, and moreover also outperforms other state-of-the-art metaheuristic optimization algorithms, including the artificial bee colony (ABC), ant colony optimization (ACO), genetic algorithm (GA), differential evolution (DE), and tree-seed algorithm (TSA), in most search scenarios. Experiments have been conducted with real UAVs searching for a dynamic target in different scenarios to demonstrate MPSO's merits in a practical application.
arxiv:2010.02039
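The abstract above builds on the standard PSO update, in which each particle is pulled toward its own best-known position and the swarm's global best. A rough sketch of that core loop follows; this is not the motion-encoded variant, the sphere function is a toy stand-in for the detection-probability cost, and all parameter values are illustrative assumptions.

```python
import random

def pso(cost, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization: each particle keeps a velocity,
    a personal best, and is attracted toward the swarm's global best."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive (personal-best) + social (global-best) terms
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = cost(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# Toy stand-in for the detection-probability cost: the 2-D sphere function.
best, best_val = pso(lambda x: sum(xi * xi for xi in x), dim=2)
```

MPSO's contribution, per the abstract, is to encode whole motion paths as the particle state rather than static positions; the update mechanics above stay the same.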
A key resource for quantum optics experiments is an on-demand source of single and multiple photon states at telecommunication wavelengths. This letter presents a heralded single photon source based on a hybrid technology approach, combining high-efficiency periodically poled lithium niobate waveguides, low-loss laser-inscribed circuits, and fast (>1 MHz) fibre-coupled electro-optic switches. Hybrid interfacing of different platforms is a promising route to exploiting the advantages of existing technology and has permitted the demonstration of the multiplexing of four identical sources of single photons to one output. Since this is an integrated technology, it provides scalability and can immediately leverage any improvements in transmission, detection and photon production efficiencies.
arxiv:1402.7202
It is known that a Markov basis of the binary graph model of a graph $G$ corresponds to a set of binomial generators of the cut ideal $I_{\widehat{G}}$ of the suspension $\widehat{G}$ of $G$. In this paper, we give another application of cut ideals to statistics. We show that a set of binomial generators of cut ideals is a Markov basis of some regular two-level fractional factorial design. As an application, we give a Markov basis of degree 2 for designs defined by at most two relations.
arxiv:1302.2882
We sketch some of the different roles played by Whitham times in connection with averaging, adiabatic invariants, soliton theory, Hamiltonian structures, Seiberg-Witten theory, isomonodromy problems, Hitchin systems, WDVV and Picard-Fuchs equations, renormalization, soft SUSY breaking, etc.
arxiv:math-ph/9905010
Comparative genetic studies of non-model organisms are transforming rapidly due to major advances in sequencing technology. A limiting factor in these studies has been the identification and screening of orthologous loci across an evolutionarily distant set of taxa. Here, we evaluate the efficacy of genomic markers targeting ultraconserved DNA elements (UCEs) for analyses at shallow evolutionary timescales. Using sequence capture and massively parallel sequencing to generate UCE data for five co-distributed Neotropical rainforest bird species, we recovered 776-1,516 UCE loci across the five species. Across species, 53-77 percent of the loci were polymorphic, containing between 2.0 and 3.2 variable sites per polymorphic locus, on average. We performed species tree construction, coalescent modeling, and species delimitation, and we found that the five co-distributed species exhibited discordant phylogeographic histories. We also found that species trees and divergence times estimated from UCEs were similar to those obtained from mtDNA. The species that inhabit the understory had older divergence times across barriers, contained a higher number of cryptic species, and exhibited larger effective population sizes relative to species inhabiting the canopy. Because orthologous UCEs can be obtained from a wide array of taxa, are polymorphic at shallow evolutionary timescales, and can be generated rapidly at low cost, they are effective genetic markers for studies investigating evolutionary patterns and processes at shallow timescales.
arxiv:1308.5342
Phylogenetic networks are becoming of increasing interest to evolutionary biologists due to their ability to capture complex non-treelike evolutionary processes. From a combinatorial point of view, such networks are certain types of rooted directed acyclic graphs whose leaves are labelled by, for example, species. A number of mathematically interesting classes of phylogenetic networks are known. These include the biologically relevant class of stable phylogenetic networks, whose members are defined via certain "fold-up" and "un-fold" operations that link them with concepts arising within the theory of, for example, graph fibrations. Despite this exciting link, the structural complexity of stable phylogenetic networks is still relatively poorly understood. Employing the popular tree-based, reticulation-visible, and tree-child properties, which allow one to gauge this complexity in one way or another, we provide novel characterizations for when a stable phylogenetic network satisfies any one of these three properties.
arxiv:1804.01841
With the advent of faster internet services and the growth of multimedia content, we observe a massive growth in the number of online videos. Users generate these video contents at an unprecedented rate, owing to the use of smartphones and other hand-held video capturing devices. This creates immense potential for advertising and marketing agencies to create personalized content for users. In this paper, we attempt to assist video editors in generating augmented video content by proposing candidate spaces in video frames. We propose and release a large-scale dataset of outdoor scenes, along with manually annotated maps for candidate spaces. We also benchmark several deep-learning-based semantic segmentation algorithms on this proposed dataset.
arxiv:1903.08943
Latent periodic elements in genomes play important roles in genomic functions. Many complex periodic elements in genomes are difficult to detect by commonly used digital signal processing (DSP). We present a novel method to compute the periodic power spectrum of a DNA sequence based on the nucleotide distributions at periodic positions of the sequence. The method directly calculates the full periodic spectrum of a DNA sequence, rather than the frequency spectrum obtained by Fourier transform. The magnitude of the periodic power spectrum reflects the strength of the periodicity signals; thus, the algorithm can capture all the latent periodicities in DNA sequences. We apply this method to the detection of latent periodicities in different genome elements, including exons and microsatellite DNA sequences. The results show that the method minimizes the impact of spectral leakage, captures a much broader range of latent periodicities in genomes, and outperforms the conventional Fourier transform.
arxiv:1504.02367
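One plausible reading of the positional-distribution idea in the abstract above: for a candidate period $p$, compare the nucleotide frequencies within each residue class modulo $p$ against the sequence's overall composition. The statistic below is an illustrative sketch under that assumption, not the paper's exact formula.

```python
from collections import Counter

def periodic_power(seq, period):
    """Strength of a candidate periodicity in a DNA sequence, measured as the
    squared deviation of the nucleotide frequencies at each position class
    (mod `period`) from the overall base composition."""
    n = len(seq)
    bg = {b: c / n for b, c in Counter(seq).items()}  # background composition
    power = 0.0
    for j in range(period):
        col = seq[j::period]          # all positions congruent to j mod period
        counts = Counter(col)
        m = len(col)
        for b in set(bg) | set(counts):
            f = counts.get(b, 0) / m
            power += (f - bg.get(b, 0.0)) ** 2
    return power / period

# A perfect period-3 repeat should score highest at (multiples of) 3.
seq = "ACG" * 60
spectrum = {p: periodic_power(seq, p) for p in range(2, 8)}
```

Because the statistic is computed directly from positional counts rather than via a Fourier transform, it has no spectral-leakage sidelobes, which matches the motivation stated in the abstract.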
By adding exiting layers to deep learning networks, early exit can terminate the inference earlier with accurate results. The passive decision-making of whether to exit or continue to the next layer has to go through every pre-placed exiting layer until it exits. In addition, it is also hard to adjust the configurations of the computing platform as the inference proceeds. By incorporating a low-cost prediction engine, we propose a predictive exit framework for computation- and energy-efficient deep learning applications. Predictive exit can forecast where the network will exit (i.e., establish the number of remaining layers needed to finish the inference), which effectively reduces the network computation cost by exiting on time without running every pre-placed exiting layer. Moreover, according to the number of remaining layers, proper computing configurations (i.e., frequency and voltage) are selected to execute the network to further save energy. Extensive experimental results demonstrate that predictive exit achieves up to 96.2% computation reduction and 72.9% energy saving compared with classic deep learning networks, and 12.8% computation reduction and 37.6% energy saving compared with early exit under state-of-the-art exiting strategies, given the same inference accuracy and latency.
arxiv:2206.04685
We characterize the wave front set $WF^p_\ast(u)$ with respect to the iterates of a linear partial differential operator with constant coefficients of a classical distribution $u \in \mathcal{D}'(\Omega)$, $\Omega$ an open subset of $\mathbb{R}^n$. We use recent Paley-Wiener theorems for generalized ultradifferentiable classes in the sense of Braun, Meise and Taylor. We also give several examples and applications to the regularity of operators with variable coefficients and constant strength. Finally, we construct a distribution with prescribed wave front set of this type.
arxiv:1412.4954
ex situ') which is often necessary if the climate is too cold. Factors influencing the duration of bioremediation include the extent of the contamination and environmental conditions, with timelines that can range from months to years.

=== Examples ===
Biofiltration, bioreactor, bioremediation, composting toilet, desalination, thermal depolymerization, pyrolysis.

== Sustainable energy ==
Concerns over pollution and greenhouse gases have spurred the search for sustainable alternatives to fossil fuel use. The global reduction of greenhouse gases requires the adoption of energy conservation as well as sustainable generation. That environmental harm reduction involves global changes such as: substantially reducing methane emissions from melting permafrost, animal husbandry, and pipeline and wellhead leakage; virtually eliminating fossil fuels for vehicles, heat, and electricity; carbon dioxide capture and sequestration at the point of combustion; widespread use of public transport, battery, and fuel cell vehicles; extensive implementation of wind/solar/water-generated electricity; and reducing peak demands with carbon taxes and time-of-use pricing. Since fuel used by industry and transportation accounts for the majority of world demand, by investing in conservation and efficiency (using less fuel), pollution and greenhouse gases from these two sectors can be reduced around the globe. Advanced energy-efficient electric motor (and electric generator) technologies that are cost-effective enough to encourage their application, such as variable-speed generators and efficient energy use, can reduce the amount of carbon dioxide (CO2) and sulfur dioxide (SO2) that would otherwise be introduced to the atmosphere if electricity were generated using fossil fuels. Some scholars have expressed concern that the implementation of new environmental technologies in highly developed national economies may cause economic and social disruption in less-developed economies.
=== Renewable energy ===
Renewable energy is energy that can be replenished easily. For years we have been using sources such as wood, sun and water as means of producing energy. Energy that can be produced from natural sources like the sun and wind is considered renewable. Technologies in use include wind power, hydropower, solar energy, geothermal energy, and biomass/bioenergy. Renewable energy refers to any form of energy that naturally regenerates over time and does not run out, and it is characterized by a low carbon footprint. Some of the most common types of renewable energy sources include solar power, wind power, hydroelectric power, and bioenergy, which is generated by burning
https://en.wikipedia.org/wiki/Environmental_technology
Online social networks have become primary means of communication. As they often exhibit undesirable effects such as hostility, polarisation or echo chambers, it is crucial to develop analytical tools that help us better understand them. In this paper, we are interested in the evolution of discord in social networks. Formally, we introduce a method to calculate the probability of discord between any two agents in the multi-state voter model with and without zealots. Our work applies to any directed, weighted graph with any finite number of possible opinions, allows for various update rates across agents, and does not involve any approximation. Under certain topological conditions, the agents' opinions are independent and the joint distribution can be decoupled. Otherwise, the evolution of discord probabilities is described by a linear system of ordinary differential equations. We prove the existence of a unique equilibrium solution, which can be computed via an iterative algorithm. The classical definition of active link density is generalized to take into account long-range, weighted interactions. We illustrate our findings on real-life and synthetic networks. In particular, we investigate the impact of clustering on discord, and uncover a rich landscape of varied behaviors in polarised networks. This sheds light on the evolution of discord between, and within, antagonistic communities.
arxiv:2203.02002
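As a minimal illustration of the model being analyzed (not of the paper's exact ODE machinery), the sketch below runs the asynchronous multi-state voter model with zealots on a toy graph and estimates the discord probability between two agents by Monte Carlo; the graph, opinion labels, and parameters are all invented for illustration.

```python
import random

def voter_step(state, neighbors, zealots, rng):
    """One asynchronous update of the voter model: a random non-zealot
    agent copies the opinion of a uniformly chosen neighbor."""
    agent = rng.choice([a for a in state if a not in zealots])
    state[agent] = state[rng.choice(neighbors[agent])]

def discord_probability(state0, neighbors, zealots, i, j, steps, runs, rng):
    """Monte Carlo estimate of P(opinion_i != opinion_j) after `steps` updates."""
    disagree = 0
    for _ in range(runs):
        state = dict(state0)
        for _ in range(steps):
            voter_step(state, neighbors, zealots, rng)
        disagree += state[i] != state[j]
    return disagree / runs

# Toy example: a 4-cycle with two opposing zealots at nodes 0 and 2,
# so the free agents 1 and 3 are each pulled toward both camps.
nbrs = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
state0 = {0: "red", 1: "red", 2: "blue", 3: "blue"}
rng = random.Random(7)
p = discord_probability(state0, nbrs, {0, 2}, 1, 3, steps=200, runs=500, rng=rng)
```

With opposing zealots, discord persists at equilibrium instead of vanishing into consensus, which is the regime the paper's linear ODE system describes exactly rather than by sampling.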
Let $\mathbb{A}$ and $\mathbb{B}$ be circular annuli in the complex plane and consider the Dirichlet energy integral of $j$-degree mappings between $\mathbb{A}$ and $\mathbb{B}$. We then minimize this energy integral. The minimizer is a $j$-degree harmonic mapping between the annuli $\mathbb{A}$ and $\mathbb{B}$ provided it exists. If such a harmonic mapping does not exist, then the minimizer is still a $j$-degree mapping which is harmonic in $\mathbb{A}' \subset \mathbb{A}$ and is a squeezing mapping in its complementary annulus $\mathbb{A}'' = \mathbb{A} \setminus \mathbb{A}'$. This result is an extension of a certain result of Astala, Iwaniec and Martin \cite{astala2010}.
arxiv:2405.08902
Neural circuits exhibit complex activity patterns, both spontaneously and evoked by external stimuli. Information encoding and learning in neural circuits depend on how well time-varying stimuli can control spontaneous network activity. We show that in firing-rate networks in the balanced state, external control of recurrent dynamics, i.e., the suppression of internally-generated chaotic variability, strongly depends on correlations in the input. A unique feature of balanced networks is that, because common external input is dynamically canceled by recurrent feedback, it is far easier to suppress chaos with independent inputs into each neuron than through common input. To study this phenomenon we develop a non-stationary dynamic mean-field theory that determines how the activity statistics and the largest Lyapunov exponent depend on the frequency and amplitude of the input, the recurrent coupling strength, and the network size, for both common and independent input. We also show that uncorrelated inputs facilitate learning in balanced networks.
arxiv:2201.09916
Using information theory and data for all (0.5 million) Norwegian firms, the national and regional innovation systems are decomposed into three subdynamics: (i) economic wealth generation, (ii) technological novelty production, and (iii) government interventions and administrative control. The mutual information in three dimensions can then be used as an indicator of potential synergy, that is, reduction of uncertainty. We aggregate the data at the NUTS3 level for 19 counties, the NUTS2 level for seven regions, and the single NUTS1 level for the nation. Measured as in-between group reduction of uncertainty, 11.7% of the synergy was found at the regional level, whereas only another 2.7% was added by aggregation at the national level. Using this Triple-Helix indicator, the counties along the west coast are indicated as more knowledge-based than the metropolitan area of Oslo or the geographical environment of the technical university in Trondheim. Foreign direct investment seems to have larger knowledge spill-overs in Norway (oil, gas, offshore, chemistry, and marine) than the institutional knowledge infrastructure in established universities. The northern part of the country, which receives large government subsidies, shows a deviant pattern.
arxiv:1109.6597
In this paper, we study a big bounce universe typified by a non-singular big bounce, as opposed to a singular big bang. This cosmological model can describe the radiation-dominated early universe and the matter-dominated late universe in the FRW model. The connections between thermodynamics and gravity are observed here. In the early stage of both cold and hot universes, we find there is only one geometry, containing a 4D de Sitter universe with a general state parameter. We also find the form of the apparent horizon in the early universe strongly depends on the extra dimension, which suggests that the influence of the extra dimension could in principle be found in the early universe. Moreover, we show that in the late stages of both cold and hot universes, the moment when the apparent horizon begins to bounce keeps essentially in step with the behavior of the cosmological scale factor.
arxiv:1412.2427
Approximating the trajectories of a stochastic process by the solution of some differential equation is widely used in the fields of probability, computer science and combinatorics. In this paper, the convergence of coupon collecting processes is studied via the differential equation techniques originally proposed in Wormald (1995) and modified in Warnke (2019). In other words, we give a novel approach to analyzing the classical coupon collector's problem and keep track of all coupons in the process of collecting.
arxiv:1912.02582
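For orientation, the classical baseline that such differential-equation methods approximate: the expected number of uniform draws needed to collect all $n$ coupon types is $n \cdot H_n$, with $H_n$ the $n$-th harmonic number. The sketch below checks a seeded simulation against that exact expectation (the values of $n$, the seed, and the trial count are arbitrary illustration choices, not from the paper).

```python
import random

def collect_all(n, rng):
    """Draw uniform coupons until all n types have been seen; return the draw count."""
    seen, draws = set(), 0
    while len(seen) < n:
        seen.add(rng.randrange(n))
        draws += 1
    return draws

def expected_draws(n):
    """Classical exact expectation: n * H_n (sum of geometric waiting times)."""
    return n * sum(1.0 / k for k in range(1, n + 1))

rng = random.Random(42)
n, trials = 50, 2000
avg = sum(collect_all(n, rng) for _ in range(trials)) / trials
exact = expected_draws(n)  # about 225 for n = 50
```

The Wormald-style analysis in the paper goes further: instead of one scalar expectation, it tracks the whole trajectory (counts of every coupon type) as the solution of a system of differential equations, with concentration guarantees.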
This paper investigates the Harnack inequality for nonnegative solutions to second-order parabolic equations in double divergence form. We impose conditions where the principal coefficients satisfy the Dini mean oscillation condition in $x$, while the drift and zeroth-order coefficients belong to specific Morrey classes. Our analysis contributes to advancing the theoretical foundations of parabolic equations in double divergence form, including Fokker-Planck-Kolmogorov equations for probability densities.
arxiv:2405.04482
Efficient mixing in high-speed compressible flows, crucial for scramjet operation, can be significantly enhanced by shock wave interactions. This study employs direct numerical simulations (DNS) to comprehensively examine the interaction between an oblique shock and a spatially developing turbulent mixing layer, contrasting inert and reacting (hydrogen-air combustion) cases. Utilizing streaming dynamic mode decomposition (SDMD), we analyze four configurations: inert and reacting shear layers, both with and without shock impingement (at $\mathrm{Ma}_c = 0.48$). We evaluate the temporal mode growth rates, the evolution of vorticity thickness, and the spatial structures of dominant DMD modes to elucidate how shocks and heat release synergistically influence flow stability, mixing, and the underlying coherent dynamics. Results reveal that the oblique shock significantly amplifies Kelvin-Helmholtz instabilities, excites a broader spectrum of unstable temporal modes, and accelerates the growth of the vorticity thickness. Combustion-induced heat release further modifies this response, leading to a redistribution of energy among the DMD modes and indicating a complex coupled effect with shock dynamics, particularly in the enhanced excitation of high-frequency modes and the alteration of spatial structures. The modal analysis identifies distinct frequency bands associated with shock and combustion effects and characterizes the dominant spatial patterns, offering refined insights for controlling and enhancing mixing in high-speed propulsion flows.
arxiv:2505.07636
We studied the two-qubit quantum Rabi model and found its dark state solutions with at most $N$ photons. One peculiar case occurs when $N = 3$, which has constant eigenenergy in the whole coupling regime and leads to level crossings within the same parity subspace. We also discovered asymptotic solutions with at most $N = 2i + 3$ ($i = 1, 2, 3, \dots$) photons, and constant eigenenergy $N\hbar\omega$ when the coupling $g$ becomes much larger than the photon frequency $\omega$. Although generally all photon number states are involved in the two-qubit quantum Rabi model, such $N$-photon solutions exist and may have applications in quantum information processing with ultrastrong couplings.
arxiv:2406.02418
This paper presents a new kind of self-balancing ternary search trie that uses a randomized balancing strategy adapted from Aragon and Seidel's randomized binary search trees ("treaps"). After any sequence of insertions and deletions of strings, the tree looks like a ternary search trie built by inserting strings in random order. As a result, the time cost of searching, inserting, or deleting a string of length k in a tree with n strings is at most O(k + log n) with high probability.
arxiv:1606.04042
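The Aragon-Seidel balancing idea the paper adapts: pair each key with a random priority, keep binary-search-tree order on keys and max-heap order on priorities, and rotate after insertion to restore the heap property. The sketch below shows this for a plain binary search tree on strings, not for the paper's ternary search trie, where the same rotations are applied to the middle-child structure.

```python
import random

class Node:
    __slots__ = ("key", "prio", "left", "right")
    def __init__(self, key, prio):
        self.key, self.prio = key, prio
        self.left = self.right = None

def insert(root, key, rng):
    """Treap insertion: BST order on keys, max-heap order on random priorities.
    The resulting shape is distributed as if keys arrived in random order."""
    if root is None:
        return Node(key, rng.random())
    if key < root.key:
        root.left = insert(root.left, key, rng)
        if root.left.prio > root.prio:            # restore heap: rotate right
            l = root.left
            root.left, l.right = l.right, root
            return l
    elif key > root.key:
        root.right = insert(root.right, key, rng)
        if root.right.prio > root.prio:           # restore heap: rotate left
            r = root.right
            root.right, r.left = r.left, root
            return r
    return root

def inorder(root):
    return [] if root is None else inorder(root.left) + [root.key] + inorder(root.right)

rng = random.Random(1)
root = None
for k in ["banana", "apple", "cherry", "date", "fig", "elderberry"]:
    root = insert(root, k, rng)
```

Because the priorities are random, the expected depth is O(log n) regardless of insertion order, which is what yields the O(k + log n) string-operation bound once the idea is carried over to ternary search tries.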
Intermediate-mass black holes (IMBHs) of mass $M_\bullet \approx 10^2 - 10^5$ solar masses, $M_\odot$, are the long-sought missing link between stellar black holes, born of supernovae, and massive black holes, tied to galaxy evolution by the empirical $M_\bullet/\sigma_\star$ correlation. We show that low-mass black hole seeds that accrete stars from locally dense environments in galaxies following a universal $M_\bullet/\sigma_\star$ relation grow over the age of the universe to be above $\mathcal{M}_0 \approx 3\times10^5\,M_\odot$ ($5\%$ lower limit), independent of the unknown seed masses and formation processes. The mass $\mathcal{M}_0$ depends weakly on the uncertain formation redshift, and sets a universal minimal mass scale for present-day black holes. This can explain why no IMBHs have yet been found, and it implies that present-day galaxies with $\sigma_\star < \mathcal{S}_0 \approx 40\,\mathrm{km\,s}^{-1}$ lack a central black hole, or formed it only recently. A dearth of IMBHs at low redshifts has observable implications for tidal disruptions and gravitational wave mergers.
arxiv:1701.00415
Ostwald ripening is a well-known physicochemical phenomenon in which smaller particles, characterized by high surface energy, dissolve and feed the bigger ones that are thermodynamically more stable. The effect is commonly observed in solid and liquid solutions, as well as in systems consisting of supported metal clusters or liquid droplets. Here, we provide the first evidence for the occurrence of Ostwald ripening in an oxide-on-metal system which, in our case, consists of ultrathin iron monoxide (FeO) islands grown on a Ru(0001) single-crystal support. The results reveal that the thermally-driven sintering of islands allows altering their fine structural characteristics, including size, perimeter length, defect density and stoichiometry, which are crucial, e.g., from the point of view of heterogeneous catalysis.
arxiv:2105.01229
In the pharmaceutical industry, where it is common to generate many QSAR models with large numbers of molecules and descriptors, the best QSAR methods are those that can generate the most accurate predictions but that are also insensitive to hyperparameters and computationally efficient. Here we compare Light Gradient Boosting Machine (LightGBM) to random forest, single-task deep neural nets, and eXtreme Gradient Boosting (XGBoost) on 30 in-house data sets. While any boosting algorithm has many adjustable hyperparameters, we can define a set of standard hyperparameters at which LightGBM makes predictions about as accurate as single-task deep neural nets, but is a factor of ~1000 faster than random forest and ~4-fold faster than XGBoost in terms of total computational time for the largest models. Another very useful feature of LightGBM is that it includes a native method for estimating prediction intervals.
arxiv:2105.08626
The effective mass $m^*$ and the Landé g-factor $g^*$ of the uniform 2-D electron fluid (2DEF) are calculated as a function of the spin polarization $\zeta$ and the density parameter $r_s$, using a non-perturbative analytic approach. Our theory is in good accord with the $m^* g^*$ data of Zhu et al. for $\zeta = 0$ for the GaAs 2DEF, and in striking agreement with the data of Shashkin et al. for the Si 2DEF. While $g^*$ is enhanced in GaAs, $m^*$ is enhanced in Si. The latter arises from singlet-pair excitations in the two valleys forming a coupled-valley state occurring at the critical density of $\sim 1\times 10^{11}$ e/cm$^2$.
arxiv:cond-mat/0307153
Collapse models describe the breakdown of the quantum superposition principle when moving from microscopic to macroscopic scales. They are among the possible solutions to the quantum measurement problem and thus describe the emergence of classical mechanics from quantum mechanics. Testing collapse models is equivalent to testing the limits of quantum mechanics. I will provide an overview of how one can test collapse models, and of the future theoretical and experimental challenges ahead.
arxiv:2303.05284
We present the results from a high-cadence, multi-wavelength observation campaign of AT 2016jbu (aka Gaia16cfr), an interacting transient. This dataset complements the current literature by adding higher cadence as well as extended coverage of the lightcurve evolution and late-time spectroscopic evolution. Photometric coverage reveals that AT 2016jbu underwent significant photometric variability followed by two luminous events, the latter of which reached an absolute magnitude of $M_V \sim -18.5$ mag. This is similar to the transient SN 2009ip, whose nature is still debated. Spectra are dominated by narrow emission lines and show a blue continuum during the peak of the second event. AT 2016jbu shows signatures of a complex, non-homogeneous circumstellar material (CSM). We see slowly evolving asymmetric hydrogen line profiles, with velocities of 500 km s$^{-1}$ seen in narrow emission features from a slow-moving CSM, and up to 10,000 km s$^{-1}$ seen in broad absorption from some high-velocity material. Late-time spectra ($\sim$+1 year) show a lack of the forbidden emission lines expected from a core-collapse supernova and are dominated by strong emission from H, He I and Ca II. Strong asymmetric emission features, a bumpy lightcurve, and continually evolving spectra suggest an inhibited nebular phase. We compare the evolution of H$\alpha$ among SN 2009ip-like transients and find possible evidence for orientation angle effects. The light-curve evolution of AT 2016jbu suggests similar, but not identical, circumstellar environments to other SN 2009ip-like transients.
arxiv:2102.09572
Motivated by the polarization anomaly in the B -> phi(1020) K*(892) decay, we extend our search for other K* final states in the decay B0 -> phi(1020) K*0 with the K*0 -> K+ pi- invariant mass above 1.6 GeV. The final states considered include the K*(1680)0, K3*(1780)0, K4*(2045)0, and a K pi spin-zero nonresonant component. We also search for the B0 -> phi Dbar0 decay with the same final state. The analysis is based on a sample of about 384 million BBbar pairs recorded with the BaBar detector. We place upper limits on the branching fractions BR(B0 -> phi K*(1680)0) < 3.5 * 10^-6, BR(B0 -> phi K3*(1780)0) < 2.7 * 10^-6, BR(B0 -> phi K4*(2045)0) < 15.3 * 10^-6, and BR(B0 -> phi Dbar0) < 11.7 * 10^-6 at 90% C.L. The nonresonant contribution is consistent with the measurements in the lower invariant mass range.
arxiv:0705.0398
Large-scale text-to-image diffusion models have made amazing advances. However, the status quo is to use text input alone, which can impede controllability. In this work, we propose GLIGEN, Grounded-Language-to-Image Generation, a novel approach that builds upon and extends the functionality of existing pre-trained text-to-image diffusion models by enabling them to also be conditioned on grounding inputs. To preserve the vast concept knowledge of the pre-trained model, we freeze all of its weights and inject the grounding information into new trainable layers via a gated mechanism. Our model achieves open-world grounded text2img generation with caption and bounding box condition inputs, and the grounding ability generalizes well to novel spatial configurations and concepts. GLIGEN's zero-shot performance on COCO and LVIS outperforms that of existing supervised layout-to-image baselines by a large margin.
arxiv:2301.07093
the starting point of this paper is the interplay between the construction principle of a sequence and the characters of the compact abelian group that underlies the construction. in the case of the halton sequence in base $\mathbf b = (b_1, \ldots, b_s)$ in the $s$-dimensional unit cube $[0, 1)^s$, which is an important type of digital sequence, this kind of duality principle leads to the so-called $\mathbf b$-adic function system and provides the basis for the $\mathbf b$-adic method, which we present in connection with hybrid sequences. this method employs structural properties of the compact group of $\mathbf b$-adic integers as well as $\mathbf b$-adic arithmetic to derive tools for the analysis of the uniform distribution of sequences in $[0, 1)^s$. we first clarify which function systems are needed to analyze digital sequences. then, we present the hybrid spectral test in terms of trigonometric, walsh, and $\mathbf b$-adic functions. various notions of diaphony as well as many figures of merit for rank-1 quadrature rules in quasi-monte carlo integration and for certain linear types of pseudo-random number generators are included in this measure of uniform distribution. further, discrepancy may be approximated arbitrarily closely by suitable versions of the spectral test.
arxiv:1306.3120
we show that generalized orbital varieties for mirkovic - vybornov slices can be indexed by semi - standard young tableaux. we also check that the mirkovic - vybornov isomorphism sends generalized orbital varieties to ( dense subsets of ) mirkovic - vilonen cycles, such that the ( combinatorial ) lusztig datum of a generalized orbital variety, which it inherits from its tableau, is equal to the ( geometric ) lusztig datum of its mv cycle.
arxiv:1905.08174
this paper describes the use of simple lattice models for studying the properties of structurally disordered systems like glasses and granulates. the models considered have crystalline states as ground states, finite connectivity, and are not subject to constrained evolution rules. after a short review of some of these models, the paper discusses how two particularly simple kinds of models, the potts model and the exclusion models, evolve after a quench at low temperature to glassy states rather than to crystalline states.
arxiv:cond-mat/0312653
given a knot $ k $ in a closed orientable manifold $ m $ we define the growth rate of the tunnel number of $ k $ to be $ gr _ t ( k ) = \ limsup _ { n \ to \ infty } \ frac { t ( nk ) - n t ( k ) } { n - 1 } $. as our main result we prove that the heegaard genus of $ m $ is strictly less than the heegaard genus of the knot exterior if and only if the growth rate is less than 1. in particular this shows that a non - trivial knot in $ s ^ 3 $ is never asymptotically super additive. the main result gives conditions that imply falsehood of morimoto ' s conjecture.
arxiv:math/0402025
articulatory trajectories like electromagnetic articulography ( ema ) provide a low - dimensional representation of the vocal tract filter and have been used as natural, grounded features for speech synthesis. differentiable digital signal processing ( ddsp ) is a parameter - efficient framework for audio synthesis. therefore, integrating low - dimensional ema features with ddsp can significantly enhance the computational efficiency of speech synthesis. in this paper, we propose a fast, high - quality, and parameter - efficient ddsp articulatory vocoder that can synthesize speech from ema, f0, and loudness. we incorporate several techniques to solve the harmonics / noise imbalance problem, and add a multi - resolution adversarial loss for better synthesis quality. our model achieves a transcription word error rate ( wer ) of 6. 67 % and a mean opinion score ( mos ) of 3. 74, with an improvement of 1. 63 % and 0. 16 compared to the state - of - the - art ( sota ) baseline. our ddsp vocoder is 4. 9x faster than the baseline on cpu during inference, and can generate speech of comparable quality with only 0. 4m parameters, in contrast to the 9m parameters required by the sota.
arxiv:2409.02451
metric $f(r)$ gravity theories are conformally equivalent to models of quintessence in which matter is coupled to dark energy. we derive a condition for a stable tracker solution for metric $f(r)$ gravity in the einstein frame. we find that tracker solutions with $-0.361 < \omega_{\varphi} < 1$ exist if $0 < \gamma < 0.217$ and $\frac{d}{dt} \ln f'(\tilde{r}) > 0$, where $\gamma = \frac{v_{\varphi\varphi} v}{v_{\varphi}^{2}}$ is a dimensionless function, $\omega_{\varphi}$ is the equation of state parameter of the scalar field, and $\tilde{r}$ refers to the jordan frame's curvature scalar. we also show that there exist $f(\tilde{r})$ gravity models that exhibit tracking behavior in the einstein frame, so that the curvature of spacetime decreases with time, while the corresponding jordan-frame solutions can have curvature increasing with time.
arxiv:0905.0247
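the tracker condition quoted above involves the dimensionless combination $\gamma = v_{\varphi\varphi} v / v_{\varphi}^{2}$. a minimal numerical sketch of evaluating it for a candidate potential; the exponential potential used here is an arbitrary illustration, not a model from the paper:

```python
import math

def gamma_param(V, phi, h=1e-4):
    """Estimate gamma = V'' * V / (V')^2 by central differences.

    V is any smooth potential of a single scalar field value phi.
    """
    V1 = (V(phi + h) - V(phi - h)) / (2 * h)              # V'
    V2 = (V(phi + h) - 2 * V(phi) + V(phi - h)) / h ** 2  # V''
    return V2 * V(phi) / V1 ** 2

# for V = exp(phi), gamma is exactly 1 analytically, which lies outside
# the quoted tracker window 0 < gamma < 0.217
g = gamma_param(math.exp, 0.5)
```

for a tracker in the abstract's sense one would look for potentials whose numerically estimated gamma falls inside the quoted window.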
two square matrices of (arbitrary) order n are introduced. they are defined in terms of n arbitrary numbers z_{n} and of an arbitrary additional parameter (a respectively q), and provide finite-dimensional representations of the two operators acting on a function f(z) as follows: [f(z + a) - f(z)]/a, respectively [f(qz) - f(z)]/[(q - 1)z]. these representations are exact (in a sense explained in the paper) when the function f(z) is a polynomial in z of degree less than n. this formalism allows one to transform difference equations valid in the space of polynomials of degree less than n into corresponding matrix-vector equations. as an application of this technique several remarkable square matrices of order n are identified, which feature explicitly n arbitrary numbers z_{n}, or the n zeros of polynomials belonging to the askey and q-askey schemes. several of these findings have a diophantine character.
arxiv:1411.3527
the dynamic nature of real-world information necessitates efficient knowledge editing (ke) in large language models (llms) for knowledge updating. however, current ke approaches, which typically operate on (subject, relation, object) triples, ignore contextual information and the relations among different pieces of knowledge. such editing methods can thus encounter an uncertain editing boundary, leaving much relevant knowledge in ambiguity: queries that could be answered pre-edit cannot be reliably answered afterward. in this work, we analyze this issue by introducing a theoretical framework for ke that highlights an overlooked set of knowledge that remains unchanged and aids in knowledge deduction during editing, which we name the deduction anchor. we further address this issue by proposing a novel task of event-based knowledge editing that pairs facts with event descriptions. this task not only more closely simulates real-world editing scenarios but also provides a more logically sound setting, implicitly defining the deduction anchor to address the issue of indeterminate editing boundaries. we empirically demonstrate the superiority of event-based editing over the existing setting in resolving uncertainty in edited models, and curate a new benchmark dataset, evedit, derived from the counterfact dataset. moreover, while we observe that the event-based setting is significantly challenging for existing approaches, we propose a novel approach, self-edit, that showcases stronger performance, achieving a 55.6% consistency improvement while maintaining the naturalness of generation.
arxiv:2402.11324
scholars. alumni in united states politics and public service include former chairman of the federal reserve ben bernanke, former ma-1 representative john olver, former ca-13 representative pete stark, ky-4 representative thomas massie, california senator alex padilla, former national economic council chairman lawrence h. summers, and former council of economic advisers chairman christina romer. mit alumni in international politics include foreign affairs minister of iran ali akbar salehi, education minister of nepal sumana shrestha, president of colombia virgilio barco vargas, former president of the european central bank mario draghi, former governor of the reserve bank of india raghuram rajan, former british foreign minister david miliband, former greek prime minister lucas papademos, former un secretary general kofi annan, former iraqi deputy prime minister ahmed chalabi, former minister of education and culture of the republic of indonesia yahya muhaimin, and former jordanian minister of education, higher education and scientific research and former jordanian minister of energy and mineral resources khaled toukan. alumni in sports have included olympic fencing champion johan harmenberg. mit alumni founded or co-founded many notable companies, such as intel, mcdonnell douglas, texas instruments, 3com, qualcomm, bose, raytheon, apotex, koch industries, rockwell international, genentech, dropbox, and campbell soup. according to the british newspaper the guardian, "a survey of living mit alumni found that they have formed 25,800 companies, employing more than three million people including about a quarter of the workforce of silicon valley. those firms collectively generate global revenues of about $1.9 trillion (£1.2 trillion) a year". if the companies founded by mit alumni were a country, they would have the 11th-highest gdp of any country in the world.
mit alumni have founded or co-founded many successful nonprofit organizations, such as khan academy. mit alumni have led prominent institutions of higher education, including the university of california system, harvard university, the new york institute of technology, johns hopkins university, carnegie mellon university, tufts university, rochester institute of technology, rhode island school of design (risd), uc berkeley college of environmental design, the new jersey institute of technology, northeastern university, tel aviv university, lahore university of management sciences, rensselaer polytechnic institute, tecnologico de monterrey, purdue university, virginia polytechnic institute, korea advanced institute of science and technology, and quaid-e-azam university.
https://en.wikipedia.org/wiki/Massachusetts_Institute_of_Technology
this paper investigates the feasibility of using pre-trained generative large language models (llms) to automate the assignment of icd-10 codes to historical causes of death. due to the complex narratives often found in historical causes of death, this task has traditionally been performed manually by coding experts. we evaluate the ability of the gpt-3.5, gpt-4, and llama 2 llms to accurately assign icd-10 codes on the hicad dataset, which contains causes of death recorded in the civil death register entries of 19,361 individuals from ipswich, kilmarnock, and the isle of skye in the uk between 1861 and 1901. our findings show that gpt-3.5, gpt-4, and llama 2 assign the correct code for 69%, 83%, and 40% of causes, respectively. however, we achieve a maximum accuracy of 89% with standard machine learning techniques. all llms performed better for causes of death that contained terms still in use today than for archaic terms, and better for short causes (1-2 words) than for longer ones. llms therefore do not currently perform well enough for historical icd-10 code assignment tasks. we suggest further fine-tuning or alternative frameworks to achieve adequate performance.
arxiv:2405.07560
we describe a dispersive unit consisting of cascaded volume-phase holographic gratings for spectroscopic applications. each of the gratings provides high diffraction efficiency in a relatively narrow wavelength range and transmits the rest of the radiation to the 0th order of diffraction. the spectral lines formed by different gratings are centered in the longitudinal direction and separated in the transverse direction due to tilt of the gratings around two axes. we consider a technique for the design and optimization of such a scheme. it allows one to define the modulation of the index of refraction and the thickness of the holographic layer for each of the gratings, as well as their fringe frequencies and inclination angles. at the first stage the grating parameters are found approximately using analytical expressions from kogelnik's coupled wave theory. then each of the gratings, starting from the longwave sub-range, is optimized separately by a numerical optimization procedure and rigorous coupled wave analysis to achieve a high diffraction efficiency profile with a steep shortwave edge. in parallel, such targets as ray aiming and linear dispersion maintenance are controlled by means of ray tracing. we demonstrate this technique on the example of a small-sized spectrograph for astronomical applications. it works in the range of 500-650 nm and uses three gratings covering 50 nm each. it has a spectral resolution of 6130-12548. we show that an asymmetrical efficiency curve can be obtained with dichromated gelatin and a photopolymer. changing the curve shape allows the filling coefficient for the target sub-range to be increased by up to 2.3 times.
arxiv:1705.01264
we consider the growth of the norms of transfer matrices of ergodic discrete schr\"odinger operators in one dimension. it is known that the set of energies at which the rate of exponential growth is slower than prescribed by the lyapunov exponent is residual in the part of the spectrum at which the lyapunov exponent is positive. on the other hand, this exceptional set is of vanishing hausdorff measure with respect to any gauge function $\rho(t)$ such that $\rho(t)/t$ is integrable at zero. here we show that this condition on $\rho(t)$ cannot in general be improved: for operators with independent, identically distributed potentials of sufficiently regular distribution, the set of energies at which the rate of exponential growth is arbitrarily slow has infinite hausdorff measure with respect to any gauge function $\rho(t)$ such that $\rho(t)/t$ is non-increasing and not integrable at zero. the main technical ingredient, possibly of independent interest, is a jarn\'ik-type theorem describing the hausdorff measure of the set of real numbers well approximated by the eigenvalues of the schr\"odinger operator. the proof of this result relies on the theory of anderson localisation and on the mass transference principle of beresnevich-velani.
arxiv:2112.14662
in this article, we show that conjugacy classes of classical weyl groups $ w ( b _ { n } ) $ and $ w ( d _ { n } ) $ are of $ \ textit { type d } $. consequently, we obtain that nichols algebras of irreducible yetter - drinfeld modules over the classical weyl groups $ \ mathbb w _ { n } $ ( $ n \ geq5 $ ) are infinite dimensional.
arxiv:2410.07743
in this paper, we propose new linearly convergent second - order methods for minimizing convex quartic polynomials. this framework is applied for designing optimization schemes, which can solve general convex problems satisfying a new condition of quartic regularity. it assumes positive definiteness and boundedness of the fourth derivative of the objective function. for such problems, an appropriate quartic regularization of damped newton method has global linear rate of convergence. we discuss several important consequences of this result. in particular, it can be used for constructing new second - order methods in the framework of high - order proximal - point schemes. these methods have convergence rate $ \ tilde o ( k ^ { - p } ) $, where $ k $ is the iteration counter, $ p $ is equal to 3, 4, or 5, and tilde indicates the presence of logarithmic factors in the complexity bounds for the auxiliary problems, which are solved at each iteration of the schemes.
arxiv:2201.04852
the combined algorithm selection and hyperparameter tuning ( cash ) problem is characterized by large hierarchical hyperparameter spaces. model - free hyperparameter tuning methods can explore such large spaces efficiently since they are highly parallelizable across multiple machines. when no prior knowledge or meta - data exists to boost their performance, these methods commonly sample random configurations following a uniform distribution. in this work, we propose a novel sampling distribution as an alternative to uniform sampling and prove theoretically that it has a better chance of finding the best configuration in a worst - case setting. in order to compare competing methods rigorously in an experimental setting, one must perform statistical hypothesis testing. we show that there is little - to - no agreement in the automated machine learning literature regarding which methods should be used. we contrast this disparity with the methods recommended by the broader statistics literature, and identify a suitable approach. we then select three popular model - free solutions to cash and evaluate their performance, with uniform sampling as well as the proposed sampling scheme, across 67 datasets from the openml platform. we investigate the trade - off between exploration and exploitation across the three algorithms, and verify empirically that the proposed sampling distribution improves performance in all cases.
arxiv:1909.07140
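the abstract above contrasts uniform sampling with a proposed alternative distribution; the alternative is not specified in the abstract, so the sketch below shows only the generic model-free search loop with a pluggable sampler, using uniform sampling. all names and the toy objective are hypothetical illustrations, not the paper's benchmark:

```python
import random

def random_search(sample_config, evaluate, budget, rng):
    """Model-free hyperparameter search: draw `budget` configurations from
    `sample_config` and keep the best according to `evaluate`."""
    best_cfg, best_score = None, float("-inf")
    for _ in range(budget):
        cfg = sample_config(rng)
        score = evaluate(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# uniform sampling over a toy 2-d configuration space
def uniform_sampler(rng):
    return (rng.uniform(0, 1), rng.uniform(0, 1))

# toy objective: negative squared distance to a hidden optimum at (0.3, 0.7)
def toy_objective(cfg):
    x, y = cfg
    return -((x - 0.3) ** 2 + (y - 0.7) ** 2)

cfg, score = random_search(uniform_sampler, toy_objective, 200, random.Random(0))
```

the proposed non-uniform scheme would slot in as a different `sample_config`, leaving the search loop unchanged, which is also what makes such methods trivially parallelizable.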
we compute temperate fundamental solutions of homogeneous differential operators with real - principal type symbols. via analytic continuation of meromorphic distributions, fundamental solutions for these non - elliptic operators can be constructed in terms of radial averages and invariant distributions on the unit sphere.
arxiv:0704.0801
we derive a sharp bound on the location of non - positive eigenvalues of schroedinger operators on the halfline with complex - valued potentials.
arxiv:0903.2053
we analyze the high energy scattering of hadrons in qcd in an effective theory model inspired from a gravity dual description. the nucleons are skyrmion-like solutions of a dbi action, and boosted nucleons give pion field shockwaves necessary for the saturation of the froissart bound. nuclei are analogs of bion crystals, with the dbi skyrmions forming a fluid with a fixed inter-nucleon distance. in shockwave collisions one creates scalar (pion field) "fireballs" with horizons of nonzero temperature, whose scaling with mass we calculate. they are analogous to the hydrodynamic "dumb holes," and their thermal horizons are places where the pion field becomes apparently singular. the information paradox then becomes a purely field-theoretic phenomenon, not directly related to quantum gravity (except via ads-cft).
arxiv:hep-th/0512171
field, including thomas bradbury ' s 2002 article, " research on relationships as a prelude to action " — an article focussed on the mechanisms for improvement of relationship research including better integration of research findings, more ethnically and culturally diverse sampling, and interdisciplinary, problem - centered approaches to research. reis argued the need for integrating and organizing theories, for paying more attention to non - romantic relationships ( the primary focus of the area ) in research and intervention development, and the use of his theory of perceived partner responsiveness to enable this progress. fast - forwarding to 2012, relationship researchers again heeded berscheid ' s advice of using relationships science to inform real - world issues. eli finkel, paul eastwick, benjamin karney, harry reis, and susan sprecher wrote an article discussing the impact of online dating on relationship formation and both its positive and negative implications for relationship outcomes compared to traditional offline dating. additionally, in 2018, emily impett and amy muise published their follow - up to berscheid ' s article, " the sexing of relationship science : impetus for the special issue on sex and relationships ". here, they called on the field to draw more attention to and place greater weight on the role of sexual satisfaction ; they identified this area of research as nascent but fertile territory to explore sexuality in relationships and establish it as an integral part of relationship science. = = types of relationships studied = = the field recognizes that, for two individuals to be in the most basic form of a social relationship, they must be interdependent — that is, have interconnected behaviors and mutual influence on one another. 
= = = personal relationships = = = a relationship is said to be personal when there is not only interdependence (the defining feature of all relationships), but when two people recognize each other as unique and unable to be replaced. personal relationships can include colleagues, acquaintances, family members, and others, so long as the criteria for the relationship are met. = = = close relationships = = = the definition of close relationships that is frequently referred back to is one from harold kelley and colleagues' 1983 book, close relationships. this asserts that a close relationship is "one of strong, frequent, and diverse interdependence that lasts over a considerable period of time". this definition indicates that not even all personal relationships may be considered close relationships. close relationships can include family relationships ( e. g., parent – child, siblings, grandparent –
https://en.wikipedia.org/wiki/Relationship_science
this paper is focused on language modelling for task-oriented domains and presents an accurate analysis of the utterances acquired by the dialogos spoken dialogue system. dialogos allows access to the italian railways timetable by using the telephone over the public network. the language modelling aspects of specificity and behaviour on rare events are studied. a technique for making a language model more robust, based on sentences generated by grammars, is presented. experimental results show the benefit of the proposed technique. the performance gain of language models created using grammars over standard ones is larger when the amount of training material is limited. this technique can therefore be an advantage especially for the development of language models in a new domain.
arxiv:cmp-lg/9711007
a sequence $ s $ is potentially $ k _ { p, 1, 1 } $ graphical if it has a realization containing a $ k _ { p, 1, 1 } $ as a subgraph, where $ k _ { p, 1, 1 } $ is a complete 3 - partite graph with partition sizes $ p, 1, 1 $. let $ \ sigma ( k _ { p, 1, 1 }, n ) $ denote the smallest degree sum such that every $ n $ - term graphical sequence $ s $ with $ \ sigma ( s ) \ geq \ sigma ( k _ { p, 1, 1 }, n ) $ is potentially $ k _ { p, 1, 1 } $ graphical. in this paper, we prove that $ \ sigma ( k _ { p, 1, 1 }, n ) \ geq 2 [ ( ( p + 1 ) ( n - 1 ) + 2 ) / 2 ] $ for $ n \ geq p + 2. $ we conjecture that equality holds for $ n \ geq 2p + 4. $ we prove that this conjecture is true for $ p = 3 $.
arxiv:math/0408292
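the lower bound in the abstract above can be tabulated directly. a minimal sketch, reading the bracket $[\cdot]$ as the floor function (an assumption about the notation, common in this literature):

```python
def sigma_lower_bound(p, n):
    """Evaluate 2 * floor(((p+1)*(n-1) + 2) / 2), the proved lower bound for
    sigma(K_{p,1,1}, n), conjectured to be sharp for n >= 2p + 4."""
    assert n >= p + 2, "bound stated for n >= p + 2"
    return 2 * (((p + 1) * (n - 1) + 2) // 2)

print(sigma_lower_bound(3, 10))  # 38
```

the paper proves the conjectured equality for p = 3, so `sigma_lower_bound(3, n)` gives the exact threshold for n >= 10.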
it is shown that the lagrangian density of the supersymmetric 3 - brane can be regarded as a component of an infinite - dimensional supermultiplet of n = 2, d = 4 supersymmetry spontaneously broken down to n = 1. the latter is described by n = 1 hermitian bosonic matrix superfield v _ { mn } = v ^ \ dagger _ { nm }, [ v _ { mn } ] = m + n, m, n = 0, 1,... in which the component v _ { 01 } is identified with a chiral goldstone n = 1 multiplet associated with central charge of the n = 2, d = 4 superalgebra, and v _ { 11 } obeys a specific nonlinear recursive equation providing the possibility to express v _ { 11 } ( as well as the other components v _ { mn } ) covariantly in terms of v _ { 01 }. we demonstrate that the solution of v _ { 11 } gives the right \ emph { pbgs } action for the super - 3 - brane.
arxiv:hep-th/0212311
emotion detection in text is an important task in nlp and is essential in many applications. most existing methods treat this task as a problem of single-label multi-class text classification. to predict multiple emotions for one instance, most existing works regard it as a general multi-label classification (mlc) problem, where they usually either apply a manually determined threshold on the last output layer of their neural network models or train multiple binary classifiers and make predictions in a one-vs-all fashion. however, compared to labels in general mlc datasets, the number of emotion categories is much smaller (fewer than 10). additionally, emotions tend to be more correlated with each other. for example, humans usually do not express "joy" and "anger" at the same time, but are very likely to express "joy" and "love" together. given this intuition, in this paper we propose a latent variable chain (lvc) transformation and a tailored model, seq2emo, that not only naturally predicts multiple emotion labels but also takes their correlations into consideration. we perform experiments on existing multi-label emotion datasets as well as on our newly collected datasets. the results show that our model compares favorably with existing state-of-the-art methods.
arxiv:1911.02147
a box model of the inter - hemispheric atlantic meridional overturning circulation is developed, including a variable pycnocline depth for the tropical and subtropical regions. the circulation is forced by winds over a periodic channel in the south and by freshwater forcing at the surface. the model is aimed at investigating the ocean feedbacks related to perturbations in freshwater forcing from the atmosphere, and to changes in freshwater transport in the ocean. these feedbacks are closely connected with the stability properties of the meridional overturning circulation, in particular in response to freshwater perturbations.
arxiv:1211.1289
planetesimal formation is one of the most important unsolved problems in planet formation theory. in particular, rocky planetesimal formation is difficult because silicate dust grains are easily broken when they collide. recently, it has been proposed that they can grow as porous aggregates when their monomer radius is smaller than $ \ sim $ 10 nm, which can also avoid the radial drift toward the central star. however, the stability of a layer composed of such porous silicate dust aggregates has not been investigated. therefore, we investigate the gravitational instability of this dust layer. to evaluate the disk stability, we calculate toomre ' s stability parameter $ q $, for which we need to evaluate the equilibrium random velocity of dust aggregates. we calculate the equilibrium random velocity considering gravitational scattering and collisions between dust aggregates, drag by mean flow of gas, stirring by gas turbulence, and gravitational scattering by gas density fluctuation due to turbulence. we derive the condition of the gravitational instability using the disk mass, dust - to - gas ratio, turbulent strength, orbital radius, and dust monomer radius. we find that, for the minimum mass solar nebula model at 1 au, the dust layer becomes gravitationally unstable when the turbulent strength $ \ alpha \ lesssim10 ^ { - 5 } $. if the dust - to - gas ratio is increased twice, the gravitational instability occurs for $ \ alpha \ lesssim10 ^ { - 4 } $. we also find that the dust layer is more unstable in disks with larger mass, higher dust - to - gas ratio, and weaker turbulent strength, at larger orbital radius, and with a larger monomer radius.
arxiv:1802.03121
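the stability criterion evaluated in the abstract above is toomre's $q$. a minimal sketch using the standard form for a thin particulate (dust) disk, $q = \sigma_v \kappa / (3.36\, g\, \sigma)$; all numerical inputs below are illustrative placeholders, not values from the paper:

```python
G_CGS = 6.674e-8  # gravitational constant, cm^3 g^-1 s^-2

def toomre_q(random_velocity, epicyclic_frequency, surface_density):
    """Toomre's Q for a particulate disk: Q = sigma_v * kappa / (3.36 * G * Sigma).
    Q < 1 signals gravitational instability of the layer."""
    return random_velocity * epicyclic_frequency / (3.36 * G_CGS * surface_density)

# illustrative cgs numbers only: a dust layer on a Keplerian orbit (kappa = Omega)
kappa = 2.0e-7       # s^-1, roughly the orbital frequency at 1 au
sigma_dust = 7.0     # g cm^-2, placeholder dust surface density
q = toomre_q(10.0, kappa, sigma_dust)  # placeholder random velocity of 10 cm/s
```

the paper's analysis amounts to computing the equilibrium random velocity from the listed stirring and damping processes and asking when the resulting q drops below unity.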
the tensor polarization of particles and nuclei becomes constant in the coordinate system rotating with the same angular velocity as the spin and rotates in the lab frame with the above angular velocity. the general equation defining the time dependence of the tensor polarization is derived. an explicit form of dynamics of this polarization is found in the case when the initial polarization is axially symmetric.
arxiv:1503.03005
we propose and evaluate alternative ensemble schemes for a new instance-based learning classifier, the randomised sphere cover (rsc) classifier. rsc fuses instances into spheres, then bases classification on distance to spheres rather than distance to instances. the randomised nature of rsc makes it ideal for use in ensembles. we propose two ensemble methods tailored to the rsc classifier: $\alpha\beta$rse, an ensemble based on instance resampling, and $\alpha$rsse, a subspace ensemble. we compare $\alpha\beta$rse and $\alpha$rsse to tree-based ensembles on a set of uci datasets and demonstrate that rsc ensembles perform significantly better than some of these ensembles, and not significantly worse than the others. we demonstrate via a case study on six gene expression data sets that $\alpha$rsse can outperform other subspace ensemble methods on high dimensional data when used in conjunction with an attribute filter. finally, we perform a set of bias/variance decomposition experiments to analyse the source of improvement in comparison to a base classifier.
arxiv:1409.4936
as a technical exercise with possible relevance to the holographic principle and string theory, the effective actions ( functional determinants ) for scalars and spinors on the squashed three - sphere identified under the action of a cyclic group, z _ m, are determined. especially in the extreme oblate squashing limit, which has a thermodynamic interpretation, the high temperature behaviour is found as a function of m. although the intermediate details for odd and even m are different, the final answers are the same. a thermodynamic interpretation for spinors is possible only for twisted periodicity conditions and m even.
arxiv:hep-th/0008059
reinforcement learning has shown great potential in developing high - level autonomous driving. however, for high - dimensional tasks, current rl methods suffer from low data efficiency and oscillation in the training process. this paper proposes an algorithm called learn to drive with virtual memory ( lvm ) to overcome these problems. lvm compresses the high - dimensional information into compact latent states and learns a latent dynamic model to summarize the agent ' s experience. various imagined latent trajectories are generated as virtual memory by the latent dynamic model. the policy is learned by propagating gradient through the learned latent model with the imagined latent trajectories and thus leads to high data efficiency. furthermore, a double critic structure is designed to reduce the oscillation during the training process. the effectiveness of lvm is demonstrated by an image - input autonomous driving task, in which lvm outperforms the existing method in terms of data efficiency, learning stability, and control performance.
arxiv:2102.08072
scaling laws in language modeling traditionally quantify training loss as a function of dataset size and model parameters, providing compute - optimal estimates but often neglecting the impact of data quality on model generalization. in this paper, we extend the conventional understanding of scaling law by offering a microscopic view of data quality within the original formulation - - effective training tokens - - which we posit to be a critical determinant of performance for parameter - constrained language models. specifically, we formulate the proposed term of effective training tokens to be a combination of two readily - computed indicators of text : ( i ) text diversity and ( ii ) syntheticity as measured by a teacher model. we pretrained over $ 200 $ models of 25m to 1. 5b parameters on a diverse set of sampled, synthetic data, and estimated the constants that relate text quality, model size, training tokens, and eight reasoning task accuracy scores. we demonstrated the estimated constants yield + 0. 83 pearson correlation with true accuracies, and analyzed it in scenarios involving widely - used data techniques such as data sampling and synthesis which aim to improve data quality.
arxiv:2410.03083
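The notion of effective training tokens can be sketched numerically. The abstract only says it combines text diversity and teacher-measured syntheticity; the multiplicative form and the exponents below are illustrative assumptions, not the paper's fitted constants.

```python
# hypothetical sketch: down-weight the raw token count by two quality
# indicators in [0, 1]. the functional form and exponents a, b are invented
# for illustration only; the paper estimates its own constants.

def effective_tokens(raw_tokens, diversity, syntheticity, a=0.5, b=0.3):
    """Effective token count under assumed quality weighting."""
    quality = (diversity ** a) * ((1.0 - syntheticity) ** b)
    return raw_tokens * quality

# a corpus of 1m raw tokens with sub-maximal diversity counts for less
print(effective_tokens(1_000_000, diversity=0.81, syntheticity=0.0))
```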
source code authorship attribution ( scaa ) is crucial for software classification because it provides insights into the origin and behavior of software. by accurately identifying the author or group behind a piece of code, experts can better understand the motivations and techniques of developers. in the cybersecurity era, this attribution helps trace the source of malicious software, identify patterns in the code that may indicate specific threat actors or groups, and ultimately enhance threat intelligence and mitigation strategies. this paper presents authattlyzer - v2, a new source code feature extractor for scaa, focusing on lexical, semantic, syntactic, and n - gram features. our research explores author identification in c + + by examining 24, 000 source code samples from 3, 000 authors. our methodology integrates random forest, gradient boosting, and xgboost models, enhanced with shap for interpretability. the study demonstrates how ensemble models can effectively discern individual coding styles, offering insights into the unique attributes of code authorship. this approach is pivotal in understanding and interpreting complex patterns in authorship attribution, especially for malware classification.
arxiv:2406.19896
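A stdlib-only sketch of the kind of n-gram lexical feature mentioned above; real SCAA pipelines such as the one described add semantic and syntactic features and feed them to ensemble models (random forest, gradient boosting, XGBoost), which this toy does not attempt.

```python
from collections import Counter

def char_ngrams(code, n=3):
    """Frequency profile of character n-grams in a source snippet;
    such profiles are one simple lexical fingerprint of coding style."""
    return Counter(code[i:i + n] for i in range(len(code) - n + 1))

# hypothetical snippet: a pre-increment habit shows up as the trigram "++i"
snippet = "for(int i=0;i<n;++i)"
profile = char_ngrams(snippet)
print(profile["++i"])
```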
we study an extensive connection between factor forcings of borel subsets of polish spaces modulo a sigma - ideal, and factor forcings of subsets of countable sets modulo an ideal.
arxiv:math/0407182
how can probabilities make sense in a deterministic many - worlds theory? we address two facets of this problem : why should rational agents assign subjective probabilities to branching events, and why should branching events happen with relative frequencies matching their objective probabilities. to address the first question, we generalise the deutsch - wallace theorem to a wide class of many - world theories, and show that the subjective probabilities are given by a norm that depends on the dynamics of the theory : the 2 - norm in the usual many - worlds interpretation of quantum mechanics, and the 1 - norm in a classical many - worlds theory known as kent ' s universe. to address the second question, we show that if one takes the objective probability of an event to be the proportion of worlds in which this event is realised, then in most worlds the relative frequencies will approximate well the objective probabilities. this suggests that the task of determining the objective probabilities in a many - worlds theory reduces to the task of determining how to assign a measure to the worlds.
arxiv:1805.01753
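The second claim above (most worlds exhibit relative frequencies near the objective probability) can be checked in a toy setting. Assume each trial splits into 3 worlds with the event realised in 2 of them, so the objective probability is 2/3; the trial count and tolerance below are arbitrary choices for the demonstration.

```python
from math import comb

def fraction_of_typical_worlds(n, p_num=2, p_den=3, tol=0.1):
    """Fraction of the p_den**n worlds whose relative event frequency
    over n trials lies within tol of p_num/p_den."""
    total = p_den ** n
    p = p_num / p_den
    good = sum(comb(n, m) * p_num ** m * (p_den - p_num) ** (n - m)
               for m in range(n + 1) if abs(m / n - p) <= tol)
    return good / total

# with counting measure on worlds, typicality emerges for modest n
print(fraction_of_typical_worlds(100))  # close to 1
```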
in theories with gauge mediated supersymmetry breaking, the scalar tau, ( $ { \ tilde \ tau _ 1 } $ ) is the lightest superpartner for a large range of the parameter space. at the large electron positron collider ( lep 2 ) this scenario can give rise to events with four $ \ tau $ leptons and large missing energy. two of the $ \ tau $ ' s ( coming from the decays of $ { \ tilde \ tau _ 1 } $ ' s ) will have large energy and transverse momentum, and can have similar sign electrical charges. such events are very different from the usual photonic events that have been widely studied, and could be a very distinct signal for the discovery of supersymmetry.
arxiv:hep-ph/9701341
starting from plebanski formulation of gravity as a constrained bf theory we propose a new spin foam model for 4d riemannian quantum gravity that generalises the well - known barrett - crane model and resolves its inherent ultra - locality problem. the bf formulation of 4d gravity possesses two sectors : gravitational and topological ones. the model presented here is shown to give a quantization of the gravitational sector, and is dual to the recently proposed spin foam model of engle et al. which, we show, corresponds to the topological sector. our methods allow us to introduce the immirzi parameter into the framework of spin foam quantisation. we generalize some of our considerations to the lorentzian setting and obtain a new spin foam model in that context as well.

arxiv:0708.1595
this article presents revamp $ ^ 2 $ t, real - time edge video analytics for multi - camera privacy - aware pedestrian tracking, as an integrated end - to - end iot system for privacy - built - in decentralized situational awareness. revamp $ ^ 2 $ t presents novel algorithmic and system constructs to push deep learning and video analytics next to iot devices ( i. e. video cameras ). on the algorithm side, revamp $ ^ 2 $ t proposes a unified integrated computer vision pipeline for detection, re - identification, and tracking across multiple cameras without the need for storing the streaming data. at the same time, it avoids facial recognition, and tracks and re - identifies pedestrians based on their key features at runtime. on the iot system side, revamp $ ^ 2 $ t provides infrastructure to maximize hardware utilization on the edge, orchestrates global communications, and provides system - wide re - identification, without the use of personally identifiable information, for a distributed iot network. for the results and evaluation, this article also proposes a new metric, accuracy $ \ cdot $ efficiency ( \ ae ), for holistic evaluation of iot systems for real - time video analytics based on accuracy, performance, and power efficiency. revamp $ ^ 2 $ t outperforms current state - of - the - art by as much as thirteen - fold \ ae ~ improvement.
arxiv:1911.09217
in this paper, we analyse the performance of physical layer security over fluctuating beckmann ( fb ) fading channel which is an extended model of both the $ \ kappa - \ mu $ shadowed and the classical beckmann distributions. specifically, the average secrecy capacity ( asc ), secure outage probability ( sop ), the lower bound of sop ( sop $ ^ l $ ), and the probability of strictly positive secrecy capacity ( spsc ) are derived in exact closed - form expressions using two different values of the fading parameters, namely, $ m $ and $ \ mu $ which represent the multipath and shadowing severity impacts, respectively. firstly, when the fading parameters are arbitrary values, the performance metrics are derived in exact closed - form in terms of the extended generalised bivariate fox ' s $ h $ - function ( egbfhf ) that has been widely implemented in the open literature. in the second case, to obtain simple mathematically tractable expressions in terms of analytic functions as well as to gain more insight on the behaviour of the physical layer security over fluctuating beckmann fading channel models, $ m $ and $ \ mu $ are assumed to be integer and even numbers, respectively. the numerical results of this analysis are verified via monte carlo simulations.
arxiv:1904.08230
a k - ellipse is a plane curve consisting of all points whose distances from k fixed foci sum to a constant. we determine the singularities and genus of its zariski closure in the complex projective plane. the paper resolves an open problem stated by nie, parrilo and sturmfels in 2008.
arxiv:1908.01414
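The defining condition of a k-ellipse can be written as a short membership test; the foci and the distance sum below are arbitrary example values, and the algebraic geometry of the paper (singularities, genus of the Zariski closure) is of course not touched by this numeric sketch.

```python
from math import dist

def on_k_ellipse(point, foci, d, tol=1e-9):
    """True if the distances from point to the k foci sum to d."""
    return abs(sum(dist(point, f) for f in foci) - d) <= tol

# with k = 2 this reduces to the classical ellipse: for foci (-3,0), (3,0)
# and distance sum 10, the point (5, 0) lies on the curve (8 + 2 = 10).
print(on_k_ellipse((5.0, 0.0), [(-3.0, 0.0), (3.0, 0.0)], 10.0))
```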
we prove a tauberian theorem for the laplace - - stieltjes transform and karamata - type theorems in the framework of regularly log - periodic functions. as an application we determine the exact tail behavior of fixed points of certain type smoothing transforms.
arxiv:1709.01996
the formation of spacetime singularities is a quite common phenomenon in general relativity and it is regulated by specific theorems. it is widely believed that spacetime singularities do not exist in nature, but that they represent a limitation of the classical theory. while we do not yet have any solid theory of quantum gravity, toy models of black hole solutions without singularities have been proposed. so far, there are only non - rotating regular black holes in the literature. these metrics can hardly be tested by astrophysical observations, as the black hole spin plays a fundamental role in any astrophysical process. in this letter, we apply the newman - janis algorithm to the hayward and to the bardeen black hole metrics. in both cases, we obtain a family of rotating solutions. every solution corresponds to a different matter configuration. each family has one solution with special properties, which can be written in kerr - like form in boyer - lindquist coordinates. these special solutions are of petrov type d, they are singularity free, but they violate the weak energy condition for a non - vanishing spin and their curvature invariants have different values at $ r = 0 $ depending on the way one approaches the origin. we propose a natural prescription to have rotating solutions with a minimal violation of the weak energy condition and without the questionable property of the curvature invariants at the origin.
arxiv:1302.6075
the emergence of semiconducting materials with inert or dangling bond - free surfaces has created opportunities to form van der waals heterostructures without the constraints of traditional epitaxial growth. for example, layered two - dimensional ( 2d ) semiconductors have been incorporated into heterostructure devices with gate - tunable electronic and optical functionalities. however, 2d materials present processing challenges that have prevented these heterostructures from being produced with sufficient scalability and / or homogeneity to enable their incorporation into large - area integrated circuits. here, we extend the concept of van der waals heterojunctions to semiconducting p - type single - walled carbon nanotube ( s - swcnt ) and n - type amorphous indium gallium zinc oxide ( a - igzo ) thin films that can be solution - processed or sputtered with high spatial uniformity at the wafer scale. the resulting large - area, low - voltage p - n heterojunctions exhibit anti - ambipolar transfer characteristics with high on / off ratios that are well - suited for electronic, optoelectronic, and telecommunication technologies.
arxiv:1412.4304
we prove that for every positive integer k, there exists an mso _ 1 - transduction that given a graph of linear cliquewidth at most k outputs, nondeterministically, some cliquewidth decomposition of the graph of width bounded by a function of k. a direct corollary of this result is the equivalence of the notions of cmso _ 1 - definability and recognizability on graphs of bounded linear cliquewidth.
arxiv:1803.05937
zwcl 2341. 1 + 0000, a merging galaxy cluster with disturbed x - ray morphology and widely separated ( $ \ sim $ 3 mpc ) double radio relics, was thought to be an extremely massive ( $ 10 - 30 \ times 10 ^ { 14 } m _ \ odot $ ) and complex system with little known about its merger history. we present jvla 2 - 4 ghz observations of the cluster, along with new spectroscopy from our keck / deimos survey, and apply gaussian mixture modeling to the three - dimensional distribution of 227 confirmed cluster galaxies. after adopting the bayesian information criterion to avoid overfitting, which we discover can bias total dynamical mass estimates high, we find that a three - substructure model with a total dynamical mass estimate of $ 9. 39 \ pm 0. 81 \ times 10 ^ { 14 } m _ \ odot $ is favored. we also present deep subaru imaging and perform the first weak lensing analysis on this system, obtaining a weak lensing mass estimate of $ 5. 57 \ pm 2. 47 \ times 10 ^ { 14 } m _ \ odot $. this is a more robust estimate because it does not depend on the dynamical state of the system, which is disturbed due to the merger. our results indicate that zwcl 2341. 1 + 0000 is a multiple merger system comprised of at least three substructures, with the main merger that produced the radio relics occurring near to the plane of the sky, and a younger merger in the north occurring closer to the line of sight. dynamical modeling of the main merger reproduces observed quantities ( relic positions and polarizations, subcluster separation and radial velocity difference ), with a merger axis angle of $ \ sim $ 10 $ ^ { + 34 } _ { - 6 } $ degrees and a collision speed at pericenter of $ \ sim $ 1900 $ ^ { + 300 } _ { - 200 } $ km / s.
arxiv:1707.00009
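The BIC-based model selection described above can be sketched in a few lines. The log-likelihood and parameter-count values are invented for illustration; only the selection logic (penalise extra mixture components, pick the minimum-BIC model) reflects the procedure in the abstract.

```python
from math import log

def bic(log_likelihood, n_params, n_points):
    """Bayesian information criterion: k*ln(n) - 2*ln(L); lower is better."""
    return n_params * log(n_points) - 2.0 * log_likelihood

# hypothetical fits to 227 galaxies: more components raise the likelihood
# but also the parameter count; the penalty guards against overfitting.
candidates = {2: bic(-850.0, 11, 227),
              3: bic(-830.0, 17, 227),
              4: bic(-828.0, 23, 227)}
best = min(candidates, key=candidates.get)
print(best)
```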
we prove an analogue of the yomdin - gromov lemma for $ p $ - adic definable sets and more broadly in a non - archimedean, definable context. this analogue keeps track of piecewise approximation by taylor polynomials, a nontrivial aspect in the totally disconnected case. we apply this result to bound the number of rational points of bounded height on the transcendental part of $ p $ - adic subanalytic sets, and to bound the dimension of the set of complex polynomials of bounded degree lying on an algebraic variety defined over $ \ mathbb { c } ( ( t ) ) $, in analogy to results by pila and wilkie, resp. by bombieri and pila. along the way we prove, for definable functions in a general context of non - archimedean geometry, that local lipschitz continuity implies piecewise global lipschitz continuity.
arxiv:1404.1952
in this paper, we present the asymptotically flat black hole solutions for arbitrary values of coefficients in third order lovelock gravity, and then derive gravitational mass, hawking temperature and entropy of the black holes. in addition, based on a hamilton - jacobi approach beyond the semiclassical approximation, we compute the corrected temperature and entropy of the third order lovelock black holes in seven dimensional spacetimes. by considering the coefficients $ \alpha _ 2 = \alpha $ and $ \alpha _ 3 = \alpha ^ 2 / 3 $, we obtain a special black hole solution. later, we perform the local and global stability analysis of the black holes with different horizon structures $ k = \pm 1 $ for coefficient $ \alpha < 0 $ and $ \alpha > 0 $, respectively.
arxiv:1011.4149
amorphous fe - gluconate was studied by means of the x - ray diffraction and m\"ossbauer spectroscopy. spectra measured in the temperature range between 78 and 295 k were analysed in terms of three doublets using a thin absorber approximation method. two of the doublets were associated with the major fe ( ii ) phase ( 72 % ) and one with the minor fe ( iii ) phase ( 28 % ). based on the obtained results the following quantities characteristic of lattice dynamical properties were determined : debye temperature from the temperature dependence of the center shift and that of the spectral area ( recoil - free factor ), force constant, change of the kinetic and potential energies of vibrations. the lattice vibrations of fe ions present in both ferrous and ferric phases are not perfectly harmonic, yet on average they are. similarities and differences to the crystalline fe - gluconate are also reported.
arxiv:1909.03008
negativity alone is sufficient to localise point sources beyond the essential sensor resolution.
arxiv:1804.01490
the generalized tight - binding model is developed to investigate the feature - rich magneto - optical properties of aab - stacked trilayer graphene. three intragroup and six intergroup inter - landau - level ( inter - ll ) optical excitations largely enrich the magneto - absorption peaks. in general, the former are much higher than the latter, depending on the phases and amplitudes of ll wavefunctions. the absorption spectra exhibit the single - or twin - peak structures which are determined by the quantum modes, ll energy spectra and fermion distribution. the splitting lls, with different localization centers ( 2 / 6 and 4 / 6 positions in a unit cell ), can generate very distinct absorption spectra. there exist extra single peaks because of ll anticrossings. aab, aaa, aba, and abc stackings quite differ from one another in terms of the inter - ll category, frequency, intensity, and structure of absorption peaks. the main characteristics of ll wavefunctions and energy spectra and the fermi - dirac function are responsible for the configuration - enriched magneto - optical spectra.
arxiv:1509.02253
we present here a technique for developing a high - throughput algorithm to fit a combination of template pulse shapes while simultaneously subtracting parameterized background noise. by convolving the pseudoinverse of the least - squares fit design matrix along a regularly sampled waveform trace, the time evolution of the fit parameters for each basis function can be determined in real - time. we approximate these sliding linear fit response functions using piecewise polynomials, and develop an fpga - friendly algorithm to be implemented in high sample - rate data acquisition systems. this is a robust universal filter that compares well to common filters optimized for energy calibration / resolution, as well as filters optimized for timing performance, even when significant noise components are present.
arxiv:2012.05937
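The sliding linear-fit idea above admits a tiny stdlib illustration. For a straight-line model y = a + b*t on a fixed window, the normal equations have a closed form, so each fit coefficient at every sample position is a fixed linear combination (a response function) of the window, which is exactly what a convolution implements. This sketch only covers a one-parameter slope fit, not the paper's template-pulse basis.

```python
def sliding_slope(trace, window):
    """Least-squares slope of each length-`window` segment of trace,
    computed by sliding a fixed response function along the samples."""
    n = window
    t_mean = (n - 1) / 2.0
    denom = sum((t - t_mean) ** 2 for t in range(n))
    weights = [(t - t_mean) / denom for t in range(n)]  # fixed response function
    return [sum(w * y for w, y in zip(weights, trace[i:i + n]))
            for i in range(len(trace) - n + 1)]

# on an exact ramp the recovered slope is constant at every position
print(sliding_slope([0.0, 2.0, 4.0, 6.0, 8.0], window=3))
```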
introduction. reservoir computing is a growing paradigm for simplified training of recurrent neural networks, with a high potential for hardware implementations. numerous experiments in optics and electronics yield comparable performance to digital state - of - the - art algorithms. many of the most recent works in the field focus on large - scale photonic systems, with tens of thousands of physical nodes and arbitrary interconnections. while this trend significantly expands the potential applications of photonic reservoir computing, it also complicates the optimisation of the high number of hyper - parameters of the system. methods. in this work, we propose the use of bayesian optimisation for efficient exploration of the hyper - parameter space in a minimum number of iterations. results. we test this approach on a previously reported large - scale experimental system, compare it to the commonly used grid search, and report notable improvements in performance and the number of experimental iterations required to optimise the hyper - parameters. conclusion. bayesian optimisation thus has the potential to become the standard method for tuning the hyper - parameters in photonic reservoir computing.
arxiv:2004.02535
rhythms in electrical activity in the membrane of cells in the suprachiasmatic nucleus ( scn ) are crucial for the function of the circadian timing system, which is characterized by the expression of the so - called clock genes. intracellular ca $ ^ { 2 + } $ ions seem to connect, at least in part, the electrical activity of scn neurons with the expression of clock genes. in this paper, we introduce a simple mathematical model describing the linking of membrane activity to the transcription of one gene by means of a feedback mechanism based on the dynamics of intracellular calcium ions.
arxiv:1503.00908
an idealized experiment estimating the spacetime topology is considered in both classical and quantum frameworks. the latter is described in terms of histories approach to quantum theory. a procedure creating combinatorial models of topology is suggested. the correspondence between these models and discretized spacetime models is established.
arxiv:gr-qc/9703011
we investigate the morphological and magnetic characteristics of solar active region ( ar ) noaa 12192. ar 12192 was the largest region of solar cycle 24 ; it underwent noticeable growth and produced 6 x - class flares, 22 m - class flares, and 53 c - class flares in the course of its disc passage. however, the most peculiar fact of this ar is that it was associated with only one cme in spite of producing several x - class flares. in this work, we carry out a comparative study between the eruptive and non - eruptive flares produced by ar 12192. we find that the magnitude of abrupt and permanent changes in the horizontal magnetic field and lorentz force are significantly smaller in the case of the confined flares compared to the eruptive one. we present the areal evolution of ar 12192 during its disc passage. we find the flare - related morphological changes to be weaker during the confined flares, whereas the eruptive flare exhibits a rapid and permanent disappearance of penumbral area away from the magnetic neutral line after the flare. furthermore, from the extrapolated nonlinear force - free magnetic field, we examine the overlying coronal magnetic environment over the eruptive and non - eruptive zones of the ar. we find that the critical decay index for the onset of torus instability was achieved at a lower height over the eruptive flaring region, than for the non - eruptive core area. these results suggest that the decay rate of the gradient of overlying magnetic field strength may play a decisive role to determine the cme productivity of the ar. in addition, the magnitude of changes in the flare - related magnetic characteristics are found to be well correlated with the nature of solar eruptions.
arxiv:1801.00473
we analyze the single microlensing event ogle - 2015 - blg - 1482 simultaneously observed from two ground - based surveys and from \ textit { spitzer }. the \ textit { spitzer } data exhibit finite - source effects due to the passage of the lens close to or directly over the surface of the source star as seen from \ textit { spitzer }. such finite - source effects generally yield measurements of the angular einstein radius, which when combined with the microlens parallax derived from a comparison between the ground - based and the \ textit { spitzer } light curves, yields the lens mass and lens - source relative parallax. from this analysis, we find that the lens of ogle - 2015 - blg - 1482 is a very low - mass star with the mass $ 0. 10 \ pm 0. 02 \ m _ \ odot $ or a brown dwarf with the mass $ 55 \ pm 9 \ m _ { j } $, which are respectively located at $ d _ { \ rm ls } = 0. 80 \ pm 0. 19 \ \ textrm { kpc } $ and $ d _ { \ rm ls } = 0. 54 \ pm 0. 08 \ \ textrm { kpc } $, and thus it is the first isolated low - mass microlens that has been decisively located in the galactic bulge. the fundamental reason for the degeneracy is that the finite - source effect is seen only in a single data point from \ textit { spitzer } and this single data point gives rise to two solutions for $ \ rho $. because the $ \ rho $ degeneracy can be resolved only by relatively high cadence observations around the peak, while the \ textit { spitzer } cadence is typically $ \ sim 1 \, { \ rm day } ^ { - 1 } $, we expect that events for which the finite - source effect is seen only in the \ textit { spitzer } data may frequently exhibit this $ \ rho $ degeneracy. for ogle - 2015 - blg - 1482, the relative proper motion of the lens and source for the low - mass star is $ \ mu _ { \ rm rel } = 9. 0 \ pm 1. 9 \ \ textrm { mas yr $ ^ { - 1 } $ } $, while for the brown dwarf it is $ 5. 5 \ pm 0
arxiv:1703.05887
we introduce a class of interesting stochastic processes based on brownian - time processes. these are obtained by taking markov processes and replacing the time parameter with the modulus of brownian motion. they generalize the iterated brownian motion ( ibm ) of burdzy and the markov snake of le gall, and they introduce new interesting examples. after defining brownian - time processes, we relate them to fourth order parabolic pdes. we then study their exit problem as they exit nice domains in $ \mathbb { r } ^ d $, and connect it to elliptic pdes. we show that these processes have the peculiar property that they solve fourth order parabolic pdes, but their exit distribution - at least in the standard brownian - time process case - solves the usual second order dirichlet problem. we recover fourth order pdes in the elliptic setting by encoding the iterative nature of the brownian - time process, through its exit time, in a standard brownian motion. we also show that it is possible to assign a formal generator to these non - markovian processes by giving such a generator in the half - derivative sense.
arxiv:1005.3801
the escalating complexity of software systems and accelerating development cycles pose a significant challenge in managing code errors and implementing business logic. traditional techniques, while cornerstone for software quality assurance, exhibit limitations in handling intricate business logic and extensive codebases. to address these challenges, we introduce the intelligent code analysis agent ( icaa ), a novel concept combining ai models, engineering process designs, and traditional non - ai components. the icaa employs the capabilities of large language models ( llms ) such as gpt - 3 or gpt - 4 to automatically detect and diagnose code errors and business logic inconsistencies. in our exploration of this concept, we observed a substantial improvement in bug detection accuracy, reducing the false - positive rate to 66 \ % from the baseline ' s 85 \ %, and a promising recall rate of 60. 8 \ %. however, the token consumption cost associated with llms, particularly the average cost for analyzing each line of code, remains a significant consideration for widespread adoption. despite this challenge, our findings suggest that the icaa holds considerable potential to revolutionize software quality assurance, significantly enhancing the efficiency and accuracy of bug detection in the software development process. we hope this pioneering work will inspire further research and innovation in this field, focusing on refining the icaa concept and exploring ways to mitigate the associated costs.
arxiv:2310.08837
we consider a family of random trees satisfying a markov branching property. roughly, this property says that the subtrees above some given height are independent with a law that depends only on their total size, the latter being either the number of leaves or vertices. such families are parameterized by sequences of distributions on partitions of the integers that determine how the size of a tree is distributed in its different subtrees. under some natural assumption on these distributions, stipulating that " macroscopic " splitting events are rare, we show that markov branching trees admit the so - called self - similar fragmentation trees as scaling limits in the gromov - hausdorff - prokhorov topology. the main application of these results is that the scaling limit of random uniform unordered trees is the brownian continuum random tree. this extends a result by marckert - miermont and fully proves a conjecture by aldous. we also recover, and occasionally extend, results on scaling limits of consistent markov branching models and known convergence results of galton - watson trees toward the brownian and stable continuum random trees.
arxiv:1003.3632
we consider the non - metric data placement problem and develop distributed algorithms for computing or approximating its optimal integral solution. we first show that the non - metric data placement problem is inapproximable up to a logarithmic factor. we then provide a game - theoretic decomposition of the objective function and show that natural glauber dynamics in which players update their resources with probability proportional to the utility they receive from caching those resources will converge to an optimal global solution for a sufficiently large noise parameter. in particular, we establish the polynomial mixing time of the glauber dynamics for a certain range of noise parameters. finally, we provide another auction - based distributed algorithm, which allows us to approximate the optimal global solution with a performance guarantee that depends on the ratio of the revenue vs. social welfare obtained from the underlying auction. our results provide the first distributed computation algorithms for the non - metric data placement problem.
arxiv:2210.07461
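The noisy-utility update in the Glauber dynamics described above can be sketched as a logit choice rule: a player re-caches a resource with probability proportional to exp(utility/T), where the noise parameter T governs convergence. The utilities below are arbitrary, and this toy omits the game's actual placement costs and interactions.

```python
from math import exp
import random

def glauber_probabilities(utilities, temperature):
    """Logit/Glauber choice distribution over cacheable resources."""
    weights = [exp(u / temperature) for u in utilities]
    z = sum(weights)
    return [w / z for w in weights]

def glauber_step(utilities, temperature, rng=random):
    """Sample one noisy best-response update."""
    probs = glauber_probabilities(utilities, temperature)
    return rng.choices(range(len(utilities)), weights=probs, k=1)[0]

# equal utilities get equal mass; higher utility gets strictly more
probs = glauber_probabilities([1.0, 2.0, 2.0], temperature=1.0)
print(round(sum(probs), 6))
```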
transition - metal perovskite oxides exhibit a wide range of extraordinary but imperfectly understood phenomena. charge, spin, orbital, and lattice degrees of freedom all undergo order - disorder transitions in regimes not far from where the best - known of these phenomena, namely high - temperature superconductivity of the copper oxides, and the ' colossal ' magnetoresistance of the manganese oxides, occur. mostly diffraction techniques, sensitive either to the spin or the ionic core, have been used to measure the order. unfortunately, because they are only weakly sensitive to valence electrons and yield superposition of signals from distinct mesoscopic phases, they cannot directly image mesoscopic phase coexistence and charge ordering, two key features of the manganites. here we describe the first experiment to image charge ordering and phase separation in real space with atomic - scale resolution in a transition metal oxide. our scanning tunneling microscopy ( stm ) data show that charge order is correlated with structural order, as well as with whether the material is locally metallic or insulating, thus giving an atomic - scale basis for descriptions of the manganites as mixtures of electronically and structurally distinct phases.
arxiv:cond-mat/0204146
we explore the parameter space of a u ( 1 ) extension of the standard model - - also called the super - weak model - - from the point of view of explaining the observed dark matter energy density in the universe. the new particle spectrum contains a complex scalar singlet and three right - handed neutrinos, among which the lightest one is the dark matter candidate. we explore both freeze - in and freeze - out mechanisms of dark matter production. in both cases, we find regions in the plane of the super - weak coupling vs. the mass of the new gauge boson that are not excluded by current experimental constraints. these regions are distinct and the one for freeze - out will be explored in searches for neutral gauge boson in the near future.
arxiv:2104.11248
this study introduces birdshot, an integrated bayesian materials discovery framework designed to efficiently explore complex compositional spaces while optimizing multiple material properties. we applied this framework to the cocrfenival fcc high entropy alloy ( hea ) system, targeting three key performance objectives : ultimate tensile strength / yield strength ratio, hardness, and strain rate sensitivity. the experimental campaign employed an integrated cyber - physical approach that combined vacuum arc melting ( vam ) for alloy synthesis with advanced mechanical testing, including tensile and high - strain - rate nanoindentation testing. by incorporating batch bayesian optimization schemes that allowed the parallel exploration of the alloy space, we completed five iterative design - make - test - learn loops, identifying a non - trivial three - objective pareto set in a high - dimensional alloy space. notably, this was achieved by exploring only 0. 15 % of the feasible design space, representing a significant acceleration in discovery rate relative to traditional methods. this work demonstrates the capability of birdshot to navigate complex, multi - objective optimization challenges and highlights its potential for broader application in accelerating materials discovery.
arxiv:2405.08900
lattice qcd at finite baryon chemical potential has the infamous sign problem which hinders monte carlo simulations. this can be remedied by a dual representation that makes the sign problem mild. in the strong coupling limit, the dual formulation with staggered quarks is well established. we have used this formulation to study the quark mass dependence of the baryon mass and the nuclear transition. this allows us to quantify the nuclear interaction. we have also compared our monte carlo results with mean field predictions.
arxiv:2212.03118
an omnidirectional antenna transmits or receives radio waves in all directions, while a directional antenna transmits radio waves in a beam in a particular direction, or receives waves from only one direction. radio waves travel at the speed of light in vacuum and at slightly lower velocity in air. the other types of electromagnetic waves besides radio waves, infrared, visible light, ultraviolet, x - rays and gamma rays, can also carry information and be used for communication. the wide use of radio waves for telecommunication is mainly due to their desirable propagation properties stemming from their longer wavelength. radio waves have the ability to pass through the atmosphere in any weather, foliage, and at longer wavelengths through most building materials. by diffraction, longer wavelengths can bend around obstructions, and unlike other electromagnetic waves they tend to be scattered rather than absorbed by objects larger than their wavelength. = = radio communication = = in radio communication systems, information is carried across space using radio waves. at the sending end, the information to be sent is converted by some type of transducer to a time - varying electrical signal called the modulation signal. the modulation signal may be an audio signal representing sound from a microphone, a video signal representing moving images from a video camera, or a digital signal consisting of a sequence of bits representing binary data from a computer. the modulation signal is applied to a radio transmitter. in the transmitter, an electronic oscillator generates an alternating current oscillating at a radio frequency, called the carrier wave because it serves to generate the radio waves that carry the information through the air. the modulation signal is used to modulate the carrier, varying some aspect of the carrier wave, impressing the information in the modulation signal onto the carrier.
Different radio systems use different modulation methods:
- Amplitude modulation (AM): in an AM transmitter, the amplitude (strength) of the radio carrier wave is varied by the modulation signal.
- Frequency modulation (FM): in an FM transmitter, the frequency of the radio carrier wave is varied by the modulation signal.
- Frequency-shift keying (FSK): used in wireless digital devices to transmit digital signals; the frequency of the carrier wave is shifted between discrete frequencies.
- Orthogonal frequency-division multiplexing (OFDM): a family of digital modulation methods widely used in high-bandwidth systems such as Wi-Fi networks, cellphones, digital television broadcasting, and digital audio broadcasting (DAB) to transmit digital data using a minimum of radio spectrum bandwidth. It has higher spectral efficiency and greater resistance to multipath fading than single-carrier methods.
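The amplitude-modulation scheme described above can be sketched numerically. This is a minimal illustration, not code from any particular radio library; the function name, the modulation index `m`, and the sample values are all illustrative assumptions:

```python
import math

def am_modulate(message, fc, fs, m=0.5):
    """Amplitude-modulate a message signal (values in [-1, 1]) onto a
    carrier of frequency fc (Hz), sampled at fs samples/second.

    The carrier amplitude is varied by the message:
        s[n] = (1 + m * message[n]) * cos(2*pi*fc*n/fs)
    where m is the modulation index.
    """
    return [(1.0 + m * x) * math.cos(2.0 * math.pi * fc * n / fs)
            for n, x in enumerate(message)]

# A 1 kHz audio tone modulated onto a 100 kHz carrier, sampled at 1 MHz.
fs, fc, f_audio = 1_000_000, 100_000, 1_000
audio = [math.sin(2.0 * math.pi * f_audio * n / fs) for n in range(1000)]
signal = am_modulate(audio, fc, fs)

# The envelope of the modulated wave stays within (1 + m) of zero.
assert all(abs(s) <= 1.5 + 1e-9 for s in signal)
```

Demodulation at the receiver would recover the envelope of `signal`, e.g. by rectification and low-pass filtering.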
https://en.wikipedia.org/wiki/Radio
We study the potential between a static center monopole and antimonopole in 4d SU(2) Yang-Mills theory. Using a new numerical method, we show that the 't Hooft loop is a dual order parameter with respect to the Wilson loop for the deconfinement phase transition. We observe a 3d Ising-like critical behaviour for the dual string tension, related to the spatial 't Hooft loop, as a function of the temperature.
arxiv:hep-lat/0010072
Given a square matrix $A$ with entries in a commutative ring $S$, the ideal of $S[x]$ consisting of polynomials $f$ with $f(A) = 0$ is called the null ideal of $A$. Very little is known about null ideals of matrices over general commutative rings. We compute a generating set of the null ideal of a matrix in the case where $S = D/dD$ is the residue class ring of a principal ideal domain $D$ modulo $d \in D$. We discuss two applications. First, we compute a decomposition of the $S$-module $S[A]$ into cyclic $S$-modules and explain the strong relationship between this decomposition and the computed generating set of the null ideal of $A$. Finally, we give a rather explicit description of the ring $\mathrm{Int}(A)$ of all integer-valued polynomials on $A$.
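Over $S = \mathbb{Z}/d\mathbb{Z}$ the null ideal can be strictly larger than the ideal generated by the characteristic polynomial, which is why a generating set is nontrivial to compute. A small hand-rolled check of this phenomenon (pure Python, no computer-algebra package assumed; the example matrix and helper names are illustrative):

```python
def mat_mult(A, B, d):
    """Multiply two n x n integer matrices modulo d."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % d
             for j in range(n)] for i in range(n)]

def poly_eval(coeffs, A, d):
    """Evaluate f(A) mod d, where coeffs = [c0, c1, ...] encodes
    f(x) = c0 + c1*x + c2*x^2 + ..."""
    n = len(A)
    power = [[1 if i == j else 0 for j in range(n)] for i in range(n)]  # A^0 = I
    result = [[0] * n for _ in range(n)]
    for c in coeffs:
        result = [[(result[i][j] + c * power[i][j]) % d
                   for j in range(n)] for i in range(n)]
        power = mat_mult(power, A, d)
    return result

# Work over Z/4Z with a nilpotent matrix whose entries share a factor with d.
d = 4
A = [[0, 2], [0, 0]]
Z = [[0, 0], [0, 0]]
assert poly_eval([0, 0, 1], A, d) == Z  # x^2 (the char. poly.) annihilates A
assert poly_eval([0, 2], A, d) == Z     # so does 2x, which does not lie in (x^2)
```

Both $x^2$ and $2x$ lie in the null ideal of $A$ over $\mathbb{Z}/4\mathbb{Z}$, but $2x$ is not a multiple of $x^2$, so the null ideal strictly contains the ideal generated by the characteristic polynomial.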
arxiv:1506.02172
We describe the spaces of minimal rank last syzygies for the Mukai varieties of sectional genus 6, 7 and 8. Based on this we show: 1. The first geometric syzygies of a general canonical curve of genus 6 form a non-degenerate configuration of 5 lines in P^4. 2. The first geometric syzygies of a general canonical curve of genus 7 form a non-degenerate, linearly normal, ruled surface of degree 84 on a spinor variety S in P^15. 3. The second geometric syzygies of a general canonical curve of genus 8 form a non-degenerate configuration of 14 conics on a 2-uple embedded P^5 in P^20. This proves a natural generalization of Green's conjecture [1984], namely that the geometric syzygies should span the space of all syzygies, in these cases. We have generalized results 1 and 3 to general curves of even genus in math.AG/0108078. Result 2 is the main new result of this paper.
arxiv:math/0202133
We construct a numerical light curve model for interaction-powered supernovae that arise from an interaction between the ejecta and the circumstellar matter (CSM). In order to resolve the shocked region of an interaction-powered supernova, we solve the fluid equations and the radiative transfer equation assuming steady states in the rest frames of the reverse and forward shocks at each time step. Then we numerically solve the radiative transfer equation and the energy equation in the CSM, with the thus-obtained radiative flux from the forward shock as a radiation source. We also compare the results of our models with observational data of two supernovae, 2005kj and 2005ip, classified as Type IIn, and discuss the validity of our assumptions. We conclude that our model can predict physical parameters associated with the supernova ejecta and the CSM from the observed features of the light curve, as long as the CSM is sufficiently dense. Furthermore, we find that the absorption of radiation in the CSM is an important factor in calculating the luminosity.
arxiv:1912.08486
We have developed HINTS, the Hierarchical Nanoparticle Transport Simulator, and adapted it to study commensuration effects in two classes of nanoparticle (NP) solids: (1) a bilayer NP solid (BNS) with an energy offset, and (2) a BNS as part of a field-effect transistor (FET). HINTS integrates the ab initio characterization of single NPs and the phonon-assisted tunneling transition model of the NP-NP transitions into a kinetic Monte Carlo based simulation of the charge transport in NP solids. First, we studied a BNS with an inter-layer energy offset $\Delta$, possibly caused by a fixed electric field. Our results include the following. (1) In the independent energy-offset model, we observed the emergence of commensuration effects when scanning the electron filling factor $ff$ across integer values. These commensuration effects were profound, as they reduced the mobility by several orders of magnitude. We analyzed these commensuration effects in a five-dimensional parameter space, as a function of the on-site charging energy $E_c$, the energy offset $\Delta$, the disorder $d$, the electron filling factor $ff$, and the temperature $k_B T$. We demonstrated the complexity of our model by showing that at integer filling factors $ff$ commensuration effects are present in some regions of the parameter space, while they vanish in other regions, thus defining distinct dynamical phases of the model. We determined the phase boundaries between these dynamical phases. (2) Using these results as a foundation, we shifted our focus to the experimentally much-studied NP-FETs. NP-FETs are also characterized by an inter-layer energy offset $\Delta$, which, in contrast to our first model, is set by the gate voltage $V_g$ and is thereby related to the electron filling $ff$. We demonstrated the emergence of commensuration effects and distinct dynamical phases in these NP-FETs as well.
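The kinetic Monte Carlo core of a transport simulation like the one described above can be sketched in a few lines. This is a generic Gillespie-style KMC step, not the actual HINTS rate model; the toy two-channel rates and the simple Boltzmann suppression factor for the uphill hop are illustrative assumptions standing in for the phonon-assisted NP-NP transition rates:

```python
import math
import random

def kmc_step(rates, rng):
    """One kinetic Monte Carlo (Gillespie) step: select event i with
    probability rates[i] / sum(rates), and draw an exponentially
    distributed waiting time with mean 1 / sum(rates)."""
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    for i, rate in enumerate(rates):
        acc += rate
        if r < acc:
            break
    dt = -math.log(rng.random()) / total
    return i, dt

# Toy example: a downhill hop versus an uphill hop across an energy
# offset delta (in units of k_B T), with the uphill rate suppressed
# by a Boltzmann factor.
delta = 2.0
rates = [1.0, math.exp(-delta)]
rng = random.Random(0)
counts = [0, 0]
for _ in range(10_000):
    i, _dt = kmc_step(rates, rng)
    counts[i] += 1
assert counts[0] > counts[1]  # uphill hops are exponentially rarer
```

In a full simulator the rate list would be rebuilt after every hop from the current charge configuration, and the accumulated `dt` values would supply the physical time axis for mobility estimates.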
arxiv:1908.09960