Assume that $D \subset \mathbb{R}^3$ is a bounded domain with $C^1$-smooth boundary. Our result is: {\bf Theorem 1.} {\em If $D$ has P-property, then $D$ is a ball.} Four equivalent formulations of the Pompeiu problem are discussed. A domain $D$ has P-property if there exists an $f \neq 0$, $f \in L^1_{loc}(\mathbb{R}^3)$, such that $\int_{D} f(gx + y)\,dx = 0$ for all $y \in \mathbb{R}^3$ and all $g \in SO(2)$, where $SO(2)$ is the rotation group. The result obtained concerning the related symmetry problem is: {\bf Theorem 2.} {\em If $(\nabla^2 + k^2)u = 0$ in $D$, $u|_S = 1$, $u_N|_S = 0$, and $k > 0$ is a constant, then $D$ is a ball.}
arxiv:1606.05976
We demonstrate background-free imaging and sideband cooling of a single 133Cs atom via the narrow-line 6S1/2 to 5D5/2 electric quadrupole transition in a 1064 nm optical tweezer. The 5D5/2 state decays through the 6P3/2 state to the ground state, emitting an 852 nm wavelength photon that allows for background-free imaging. By encoding both spin and orbital angular momentum onto the 685 nm excitation light, we achieve background-free fluorescence histograms with 99.58(3)% fidelity by positioning the atom at the dark center of a vortex beam. Tuning the tweezer polarization ellipticity realizes a magic trap for the stretched F = 4, mF = 4 to F' = 6, mF' = 6 cycling transition. We cool to 5 µK in a 1.1 mK trap and outline a strategy for ground-state cooling. We compare cooling performance across different sideband regimes, while also exploring how the orbital angular momentum of structured light controls the selection rules for quadrupole transitions. These results expand the toolbox for high-fidelity quantum control and cooling in alkali-atom tweezer arrays.
arxiv:2505.10540
How can we teach a robot to predict what will happen next for an activity it has never seen before? We address this problem of zero-shot anticipation by presenting a hierarchical model that generalizes instructional knowledge from large-scale text corpora and transfers the knowledge to the visual domain. Given a portion of an instructional video, our model predicts coherent and plausible actions multiple steps into the future, all in rich natural language. To demonstrate the anticipation capabilities of our model, we introduce the Tasty Videos dataset, a collection of 2511 recipes for zero-shot learning, recognition and anticipation.
arxiv:1812.02501
Why does dark matter (DM) live longer than the age of the universe? Here we study a novel sub-eV scalar DM candidate whose stability is due to the Pauli exclusion of its fermionic decay products. We analyze the stability of the DM condensate against decays, scatterings (i.e., evaporation), and parametric resonance, delineating the viable parameter regions in which DM is cosmologically stable. In a minimal scenario in which the scalar DM decays to a pair of new exotic fermions, we find that scattering can populate an interacting thermal dark sector component to energies far above the DM mass. This self-interacting dark radiation may potentially alleviate the Hubble tension. Furthermore, our scenario can be probed through precise measurements of the halo mass function or the masses of dwarf spheroidal galaxies, since scattering prevents the DM from becoming too dense. On the other hand, if the lightest neutrino stabilizes the DM, the cosmic neutrino background (C$\nu$B) can be significantly altered from the $\Lambda$CDM prediction and thus be probed in the future by C$\nu$B detection experiments.
arxiv:2406.17028
A proof of quantumness is an efficiently verifiable interactive test that an efficient quantum computer can pass, but all efficient classical computers cannot (under some cryptographic assumption). Such protocols play a crucial role in the certification of quantum devices. Existing single-round protocols (like asking the quantum computer to factor a large number) require large quantum circuits, whereas multi-round ones use smaller circuits but require experimentally challenging mid-circuit measurements. As such, current proofs of quantumness are out of reach for near-term devices. In this work, we construct efficient single-round proofs of quantumness based on existing knowledge assumptions. While knowledge assumptions have not been previously considered in this context, we show that they provide a natural basis for separating classical and quantum computation. Specifically, we show that multi-round protocols based on decisional Diffie-Hellman (DDH) or learning with errors (LWE) can be "compiled" into single-round protocols using a knowledge-of-exponent assumption or a knowledge-of-lattice-point assumption, respectively. We also prove an adaptive hardcore-bit statement for a family of claw-free functions based on DDH, which might be of independent interest. Previous approaches to constructing single-round protocols relied on the random oracle model and thus incurred the overhead associated with instantiating the oracle with a cryptographic hash function. In contrast, our protocols have the same resource requirements as their multi-round counterparts without necessitating mid-circuit measurements, making them, arguably, the most efficient single-round proofs of quantumness to date. Our work also helps in understanding the interplay between black-box/white-box reductions and cryptographic assumptions in the design of proofs of quantumness.
arxiv:2405.15736
Applications in cloud platforms motivate the study of efficient load balancing under job-server constraints and server heterogeneity. In this paper, we study load balancing on a bipartite graph where left nodes correspond to job types and right nodes correspond to servers, with each edge indicating that a job type can be served by a server. Edges thus represent locality constraints, i.e., each job can only be served at servers which contain certain data and/or machine learning (ML) models. Servers in this system can have heterogeneous service rates. In this setting, we investigate the performance of two policies named join-the-fastest-of-the-shortest-queue (JFSQ) and join-the-fastest-of-the-idle-queue (JFIQ), which are simple variants of join-the-shortest-queue and join-the-idle-queue, where ties are broken in favor of the fastest servers. Under a "well-connected" graph condition, we show that JFSQ and JFIQ are asymptotically optimal in the mean response time when the number of servers goes to infinity. In addition to asymptotic optimality, we also obtain upper bounds on the mean response time for finite-size systems. We further show that the well-connectedness condition can be satisfied by a random bipartite graph construction with relatively sparse connectivity.
arxiv:2008.08830
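The JFSQ tie-breaking rule described in the abstract above can be sketched in a few lines. This is an illustrative toy dispatcher, not code from the paper; the `compat`, `queue_len`, and `rate` structures are hypothetical stand-ins for the bipartite compatibility graph, queue lengths, and heterogeneous service rates:

```python
def jfsq(job_type, compat, queue_len, rate):
    """Join-the-Fastest-of-the-Shortest-Queue: among servers compatible
    with job_type, pick a shortest queue, breaking ties in favor of
    the fastest service rate."""
    servers = compat[job_type]
    # sort key: shortest queue first, then highest service rate
    return min(servers, key=lambda s: (queue_len[s], -rate[s]))

compat = {"A": [0, 1, 2]}           # job type "A" can go to servers 0, 1, 2
queue_len = {0: 2, 1: 1, 2: 1}      # servers 1 and 2 tie for shortest queue
rate = {0: 1.0, 1: 0.5, 2: 2.0}     # server 2 is the fastest of the two
print(jfsq("A", compat, queue_len, rate))  # → 2
```

JFIQ is the same rule restricted to idle servers (queue length zero), falling back to some default when no compatible server is idle.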
We derive a quantum kinetic equation for the Wigner function under discrete impurities from the quantum Liouville equation. To attain this goal, the electrostatic Coulomb potential is separated into long- and short-range parts, and the self-consistent coupling with Poisson's equation is explicitly taken into account. It is shown that the collision integral associated with impurity scattering as well as the usual drift term is derived on an equal footing, and that the conventional treatment of impurity scattering under the Wigner function scheme is inconsistent in the sense that the collision integral is introduced in an ad hoc way and, thus, the short-range part of the impurity potential is double-counted. The Boltzmann transport equation (BTE) is derived without imposing an assumption of random impurity configurations over the substrate. The derived BTE is able to describe the discrete nature of impurities such as potential fluctuations and is thus appropriate for analyzing electron transport in semiconductor nanostructures.
arxiv:2105.11336
We study adaptive data-dependent dimensionality reduction in the context of supervised learning in general metric spaces. Our main statistical contribution is a generalization bound for Lipschitz functions in metric spaces that are doubling, or nearly doubling. On the algorithmic front, we describe an analogue of PCA for metric spaces: namely, an efficient procedure that approximates the data's intrinsic dimension, which is often much lower than the ambient dimension. Our approach thus leverages the dual benefits of low dimensionality: (1) more efficient algorithms, e.g., for proximity search, and (2) more optimistic generalization bounds.
arxiv:1302.2752
The pioneering concept of connected vehicles has transformed the way of thinking for researchers and entrepreneurs by collecting relevant data from nearby objects. However, this data is useful for a specific vehicle only. Moreover, vehicles receive a large amount of data (e.g., traffic, safety, and multimedia infotainment) on the road. Vehicles thus require adequate storage for this data, but it is infeasible to have a large memory in each vehicle. Hence, the vehicular cloud computing (VCC) framework came into the picture to provide a storage facility by connecting a road-side unit (RSU) with the vehicular cloud (VC). In this setting, data should be saved in an encrypted form to preserve security, but searching for information over encrypted data is a challenge. Further, many vehicular communication schemes are inefficient for data transmission due to their poor performance and are vulnerable to fundamental security attacks. Accordingly, on-device performance is critical, but data damage and secure on-time connectivity are also significant challenges in a public environment. Therefore, we propose reliable data transmission protocols for a cutting-edge architecture that can search data in the storage, resist various security attacks, and provide better performance. The proposed data transmission protocol is thus useful in diverse smart city applications (business, safety, and entertainment) for the benefit of society.
arxiv:1912.12884
We explore the effect of AGN activity on the star formation history of galaxies by analysing the stellar population properties of ten pairs of nearby twin galaxies, selected as being visually similar except for the presence of an AGN. The selection of such twin samples represents a method to study AGN feedback, as recently proposed by del Moral-Castro et al. We use integral field unit (IFU) data from CALIFA, stacked within three fixed apertures. AGN galaxies in a twin pair suggest more evolved stellar populations than their non-AGN counterpart 90% of the time, regardless of aperture size. A comparison with a large sample from SDSS confirms that most twins are representative of the general population, but in each twin the differences between twin members are significant. A set of targeted line strengths reveals that the AGN member of a twin pair is older and more metal-rich than the non-AGN galaxy, suggesting AGN galaxies in our sample may either have an earlier formation time or follow a different star formation and chemical enrichment history. These results are discussed within two simple, contrasting hypotheses for the role played by AGN in galaxy evolution, which can be tested in the future in greater detail with the use of larger data sets.
arxiv:2206.07805
In this paper we study Monte Carlo estimators based on the likelihood ratio approach for steady-state sensitivity. We first extend the result of Glynn and Olvera-Cravioto [doi:10.1287/stsy.2018.002] to the setting of continuous-time Markov chains with a countable state space, which includes models such as stochastic reaction kinetics and kinetic Monte Carlo lattice systems. We show that the variance of the centered likelihood ratio estimators does not grow in time. This result suggests that the centered likelihood ratio estimators should be favored for sensitivity analysis when the mixing time of the underlying continuous-time Markov chain is large, which is typically the case when systems exhibit multi-scale behavior. We demonstrate a practical implication of this analysis on two numerical benchmark examples of biochemical reaction networks.
arxiv:1804.00585
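A generic illustration of the likelihood-ratio ingredient in the abstract above: for a continuous-time Markov chain simulated with Gillespie's algorithm, the score (the derivative of the log path likelihood in a rate parameter) has mean zero, and averaging an observable times the score gives a sensitivity estimate. The birth-death model and all parameter values here are assumptions for the sketch, not the paper's examples:

```python
import random

def birth_death_score(c, d, n0, T, rng):
    """Gillespie simulation of a birth-death chain (constant birth rate c,
    death rate d*n) on [0, T].  Returns (N_T, score), where
    score = (# births)/c - T is the likelihood-ratio derivative
    d/dc log L with respect to the birth rate; it has mean zero."""
    t, n, births = 0.0, n0, 0
    while True:
        a_birth, a_death = c, d * n
        a_total = a_birth + a_death
        dt = rng.expovariate(a_total)  # exponential waiting time
        if t + dt > T:
            break
        t += dt
        if rng.random() < a_birth / a_total:
            n, births = n + 1, births + 1
        else:
            n -= 1
    return n, births / c - T

rng = random.Random(0)
samples = [birth_death_score(1.0, 0.5, 5, 2.0, rng) for _ in range(20000)]
mean_score = sum(s for _, s in samples) / len(samples)  # close to 0
```

Averaging `N_T * score` over the samples estimates the sensitivity d/dc E[N_T]; centering the score before multiplying, as analyzed in the paper, is what keeps the estimator variance from growing with T.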
We study numerically and analytically the quench dynamics of isolated many-body quantum systems. Using full random matrices from the Gaussian orthogonal ensemble, we obtain analytical expressions for the evolution of the survival probability, density imbalance, and out-of-time-ordered correlator. They are compared with numerical results for a one-dimensional disordered model with two-body interactions and shown to bound the decay rate of this realistic system. Power-law decays are seen at intermediate times, and dips below the infinite-time averages (correlation holes) occur at long times for all three quantities when the system exhibits level repulsion. The fact that these features are shared by both the random matrix and the realistic disordered model indicates that they are generic to nonintegrable interacting quantum systems out of equilibrium. Assisted by the random matrix analytical results, we propose expressions that describe extremely well the dynamics of the realistic chaotic system at different time scales.
arxiv:1704.06272
We consider a one-dimensional variational problem arising in connection with a model for cholesteric liquid crystals. The principal feature of our study is the assumption that the twist deformation of the nematic director incurs a much higher energy penalty than other modes of deformation. The appropriate ratio of the elastic constants then gives a small parameter $\varepsilon$ entering an Allen-Cahn-type energy functional augmented by a twist term. We consider the behavior of the energy as $\varepsilon$ tends to zero. We demonstrate existence of local energy minimizers classified by their overall twist, find the $\Gamma$-limit of the relaxed energies, and show that it consists of the twist and jump terms. Further, we extend our results to include the situation when the cholesteric pitch vanishes along with $\varepsilon$.
arxiv:2008.04492
Efficient omission of symmetric solution candidates is essential for combinatorial problem-solving. Most of the existing approaches are instance-specific and focus on the automatic computation of symmetry breaking constraints (SBCs) for each given problem instance. However, the application of such approaches to large-scale instances or advanced problem encodings might be problematic, since the computed SBCs are propositional and, therefore, can neither be meaningfully interpreted nor transferred to other instances. As a result, a time-consuming recomputation of SBCs must be done before every invocation of a solver. To overcome these limitations, we introduce a new model-oriented approach for answer set programming that lifts the SBCs of small problem instances into a set of interpretable first-order constraints using the inductive logic programming paradigm. Experiments demonstrate the ability of our framework to learn general constraints from instance-specific SBCs for a collection of combinatorial problems. The obtained results indicate that our approach significantly outperforms a state-of-the-art instance-specific method as well as the direct application of a solver.
arxiv:2112.11806
We have measured the carrier recombination dynamics in InGaN/GaN multiple quantum wells over an unprecedented range in intensity. We find that at times shorter than 30\,ns they follow an exponential form, and a power law at times longer than 1\,$\mu$s. To explain these biphasic dynamics, we propose a simple three-level model where a charge-separated state interplays with the radiative state through charge transfer following a tunneling mechanism. We show how the distribution of distances in charge-separated states controls the dynamics at long times. Our results imply that charge recombination happens on nearly isolated clusters of localization centers.
arxiv:1004.2463
In recent works we have constructed axisymmetric solutions to the Euler-Poisson equations which give mathematical models of slowly uniformly rotating gaseous stars. We try to extend this result to the study of solutions of the Einstein-Euler equations in the framework of the general theory of relativity. Although many interesting studies have been done on axisymmetric metrics in the general theory of relativity, they are restricted to the vacuum region. A mathematically rigorous existence theorem for axisymmetric interior solutions of the stationary metric corresponding to the energy-momentum tensor of a perfect fluid with non-zero pressure seems not to have been established until now, except for the one found in the pioneering work by U. Heilig in 1993. In this article, along an approach different from that of Heilig's work, axisymmetric stationary solutions of the Einstein-Euler equations are constructed near those of the Euler-Poisson equations when the speed of light is sufficiently large in the considered system of units, or, equivalently, when the gravitational field is sufficiently weak.
arxiv:1705.07392
Sheared wet foam, which stores elastic energy in bubble deformations, relaxes stress through bubble rearrangements. The intermittency of bubble rearrangements in foam leads to effectively stochastic drops in stress that are followed by periods of elastic increase. We investigate global characteristics of highly disordered foams over three decades of strain rate and almost two decades of system size. We characterize the behavior using a range of measures: average stress, distribution of stress drops, rate of stress drops, and a normalized fluctuation intensity. There is essentially no dependence on system size. As a function of strain rate, there is a change in behavior around shear rates of $0.07\,{\rm s^{-1}}$.
arxiv:cond-mat/0212548
We report on an ab initio strategy based on density functional theory to identify the muon sites. Two issues must be carefully addressed: muon delocalization about candidate interstitial sites, and local structural relaxation of the atomic positions due to the $\mu^+$-sample interaction. Here, we report on the validation of our strategy on two wide band gap materials, LiF and YF$_3$, where localization issues are important because of the interplay between muon localization and lattice relaxation.
arxiv:1302.2031
In this article, we provide a pedagogical review of the Tolman-Oppenheimer-Volkoff (TOV) equation and its solutions, which describe static, spherically symmetric gaseous stars in general relativity. Our discussion starts with a systematic derivation of the TOV equation from the Einstein field equations and the relativistic Euler equations. Next, we give a proof for the existence and uniqueness of solutions of the TOV equation describing a star of finite radius, assuming suitable conditions on the equation of state characterizing the gas. We also prove that the compactness of the gas contained inside a sphere centered at the origin satisfies the well-known Buchdahl bound, independent of the radius of the sphere. Further, we derive the equation of state for an ideal, classical monoatomic relativistic gas from statistical mechanics considerations and show that it satisfies our assumptions for the existence of a unique solution describing a finite radius star. Although none of the results discussed in this article are new, they are usually scattered in different articles and books in the literature; hence it is our hope that this article will provide a self-contained and useful introduction to the topic of relativistic stellar models.
arxiv:2010.02859
Van der Waals heterostructures composed of transition metal dichalcogenide (TMD) monolayers are characterized by their truly rich excitonic properties, which are determined by their structural, geometric and electronic properties: in contrast to pure monolayers, electrons and holes can be hosted in different materials, resulting in highly tunable dipolar many-particle complexes. However, for genuine spatially indirect excitons, the dipolar nature is usually accompanied by a notable quenching of the exciton oscillator strength. Via electric and magnetic field dependent measurements, we demonstrate that a slightly biased pristine bilayer MoS$_2$ hosts strongly dipolar excitons which preserve a strong oscillator strength. We scrutinize their giant dipole moment, and shed further light on their orbital and valley physics via bias-dependent magnetic field measurements.
arxiv:2004.12753
In this paper, we propose an HB-like protocol for privacy-preserving authentication of RFID tags, whereby a tag can remain anonymous and untraceable to an adversary during the authentication process. Previous proposals of such protocols were based on PRF computations. Our protocol can instead be used on low-cost tags that may be incapable of computing standard PRFs. Moreover, since the underlying computations in HB protocols are very efficient, our protocol also reduces reader load compared to PRF-based protocols. We suggest a tree-based approach that replaces the PRF-based authentication from prior work with a procedure such as HB+ or HB#. We optimize the tree-traversal stage through usage of a "light version" of the underlying protocol and shared random challenges across all levels of the tree. This provides a significant reduction of the communication resources, resulting in a privacy-preserving protocol almost as efficient as the underlying HB+ or HB#.
arxiv:0907.1227
Modern deep learning methods constitute incredibly powerful tools to tackle a myriad of challenging problems. However, since deep learning methods operate as black boxes, the uncertainty associated with their predictions is often challenging to quantify. Bayesian statistics offer a formalism to understand and quantify the uncertainty associated with deep neural network predictions. This tutorial provides an overview of the relevant literature and a complete toolset to design, implement, train, use and evaluate Bayesian neural networks, i.e., stochastic artificial neural networks trained using Bayesian methods.
arxiv:2007.06823
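The core idea behind the stochastic networks mentioned above is that a prediction is an average over weight samples drawn from a (approximate) posterior, and the spread of those samples quantifies the uncertainty. A minimal sketch, in which a single Gaussian-posterior weight stands in for a full network and all numbers are illustrative, not taken from the tutorial:

```python
import random
import statistics

def stochastic_forward(x, w_mean, w_std, rng):
    """One forward pass of a 'network' (a single weight, for brevity)
    with the weight drawn from an approximate Gaussian posterior."""
    w = rng.gauss(w_mean, w_std)
    return w * x

def predict(x, w_mean, w_std, n_samples=1000, seed=0):
    """Monte Carlo prediction: the mean over posterior samples is the
    point estimate, the standard deviation its uncertainty."""
    rng = random.Random(seed)
    ys = [stochastic_forward(x, w_mean, w_std, rng) for _ in range(n_samples)]
    return statistics.mean(ys), statistics.stdev(ys)

mean, std = predict(2.0, w_mean=1.5, w_std=0.1)  # ≈ 3.0 ± 0.2
```

A real Bayesian neural network replaces the single weight with posterior samples over all network parameters (e.g., via variational inference or MCMC), but the prediction loop has exactly this shape.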
This work presents a model-based development methodology for verified software systems, as well as tool support for it: an applied AutoFocus tool chain and its basic principles, emphasizing the verification of the system under development as well as the check mechanisms we used to raise the level of confidence in the correctness of the implementation of the automatic generators.
arxiv:1207.2236
We investigate the Raman spectra in the geometry where both incident and scattered photon polarizations are parallel to the $\hat{z}$-direction, for a plane-chain bilayer coupled via a single-particle tunneling $t_\perp$. The Raman vertex is derived in the tight-binding limit and, in the absence of Coulomb screening, the Raman intensity can be separated into intraband ($\propto t_\perp^4$) and interband ($\propto t_\perp^2$) transitions. In the small-$t_\perp$ limit, the interband part dominates and a pseudogap will appear as it does in the conductivity. Coulomb interactions bring in a two-particle coupling and result in the breakdown of the intra- and interband separation. Nevertheless, when $t_\perp$ is small, the Coulomb screening ($\propto t_\perp^4$) has little effect on the intensity, to which the unscreened interband transitions contribute most. In general, the total Raman spectra are strongly dependent on the magnitude of $t_\perp$.
arxiv:cond-mat/9612149
Carbon emissions have long been attributed to the increase in climate change. With the effects of climate change escalating in the past few years, there has been an increased effort to find green alternatives for power generation, which has been a major contributor to carbon emissions. One prominent approach is biomechanical energy, i.e., harvesting energy from natural human movement. This study evaluates the feasibility of electricity generation using a gear- and generator-based biomechanical energy harvester at the elbow joint. The joint was chosen using kinetic arm analysis through MediaPipe, in which the elbow joint showed much higher angular velocity during walking, thus showing more potential as a place to construct the harvester. Leg joints were excluded so as not to obstruct daily movement. The gear and generator type was chosen to maximize energy production at the elbow joint. The device was constructed using a gearbox and a generator. The results show that it generated as much as 0.16 W at the optimal resistance. This demonstrates the feasibility of electricity generation with an elbow-joint gear- and generator-type biomechanical energy harvester.
arxiv:2410.09036
A bright quasar residing in a dense and largely neutral intergalactic medium (IGM) at high redshifts (z > 6) will be surrounded by a large cosmological Strömgren sphere. The quasar's spectrum will then show a sharp increase in resonant Lyman line absorption at wavelengths approaching and shorter than that corresponding to the Strömgren sphere's boundary along the line of sight. We show here that simultaneously considering the measured absorption in two or more hydrogen Lyman lines can provide the dynamical range required to detect this feature. We model broad and robust features of the Lyman alpha and Lyman beta regions of the spectrum of the z = 6.28 quasar SDSS J1030+0524, using a hydrodynamical simulation. From the steep wavelength-dependence of the inferred absorption opacity, we detect the boundary of the Strömgren sphere at a proper distance of 6.0 +/- 0.2 Mpc away from the source redshift. From the spectrum alone, we also find that beyond this distance, cosmic hydrogen turns nearly neutral, with a neutral fraction of x_HI > 0.2, and that the ionizing luminosity of this quasar is in the range (5.2 +/- 2.5) x 10^{56} photons/sec. The method presented here, when applied to future quasars, can probe the complex topology of overlapping ionized regions, and can be used to study the details of the reionization process.
arxiv:astro-ph/0406188
[Abridged] We present an integrated photometric spectral energy distribution (SED) of the Magellanic-type galaxy NGC 4449 from the far-ultraviolet (UV) to the submillimetre, including new observations acquired by the Herschel Space Observatory. We include integrated UV photometry from the Swift Ultraviolet and Optical Telescope using a measurement technique which is appropriate for extended sources with coincidence loss. In this paper, we examine the available multiwavelength data to infer a range of ages, metallicities and star formation rates for the underlying stellar populations, as well as the composition and the total mass of dust in NGC 4449. We present an iterative scheme which allows us to build an in-depth and multicomponent representation of NGC 4449 `bottom-up', taking advantage of the broad capabilities of the photoionization and radiative transfer code MOCASSIN (Monte Carlo simulations of ionized nebulae). We fit the observed SED, the global ionization structure and the emission line intensities, and infer a recent SFR of 0.4 Msolar/yr and a total stellar mass of approximately 1e9 Msolar emitting with a bolometric luminosity of 5.7e9 Lsolar. Our fits yield a total dust mass of 2.9e6 Msolar, including 2 per cent attributed to polycyclic aromatic hydrocarbons. We deduce a dust-to-gas mass ratio of 1/190 within the modelled region. While we do not consider possible additional contributions from even colder dust, we note that including the extended HI envelope and the molecular gas is likely to bring the ratio down to as low as ~1/800.
arxiv:1302.5430
We derive the Schwinger-Keldysh effective field theories for diffusion, including the lowest non-hydrodynamic degree of freedom, from holographic Gubser-Rocha systems. At low temperature the dynamical non-hydrodynamic mode could be either an IR mode or a slow mode, which is related to IR quantum critical excitations or encodes the information of all energy scales. This additional dynamical vector mode could be viewed as an ultraviolet sector of the diffusive hydrodynamic theory. We construct two different effective actions for each case and discuss their physical properties. In particular, we show that the Kubo-Martin-Schwinger symmetry is preserved.
arxiv:2411.16306
In a cell-free massive MIMO architecture, a very large number of distributed access points simultaneously and jointly serves a much smaller number of mobile stations; a variant of the cell-free technique is the user-centric approach, wherein each access point just serves a reduced set of mobile stations. This paper introduces and analyzes the cell-free and user-centric architectures at millimeter wave frequencies, considering a training-based channel estimation phase and the downlink and uplink data transmission phases. First of all, a multiuser clustered millimeter wave channel model is introduced in order to account for the correlation among the channels of nearby users; second, an uplink multiuser channel estimation scheme is described along with low-complexity hybrid analog/digital beamforming architectures. Third, the non-convex problem of power allocation for downlink global energy efficiency maximization is addressed. Interestingly, in the proposed schemes no channel estimation is needed at the mobile stations, and the beamforming schemes used at the mobile stations are channel-independent and have a very simple structure. Numerical results show the benefits granted by the power control procedure, show that the considered architectures are effective, and permit assessing the loss incurred by the use of the hybrid beamformers and by the channel estimation errors.
arxiv:1903.11365
We consider a metric which describes Bañados geometries and show that the considered metric is a solution of the generalized minimal massive gravity (GMMG) model. We consider the Killing vector field which preserves the form of the considered metric. Using the off-shell quasi-local approach, we obtain the asymptotic conserved charges of the given solution. Similar to Einstein gravity in the presence of a negative cosmological constant, for the GMMG model we also show that the algebra among the asymptotic conserved charges is isomorphic to two copies of the Virasoro algebra. Eventually, we find a relation between the algebra of the near horizon and the asymptotic conserved charges. This relation shows that the main part of the horizon fluffs proposal of Refs. \cite{140,14} appears for generic black holes in the class of Bañados geometries in the context of the GMMG model.
arxiv:1611.04259
An elementary proof of the attainability of the random coding exponent with linear codes for additive channels is presented. The result and proof are from Hamada (Proc. ITW, Chengdu, China, 2006), and the present material explains the proof in detail for those unfamiliar with elementary calculations on probabilities related to linear codes.
arxiv:1001.1806
This paper addresses the problems of conditional variance estimation and confidence interval construction in nonparametric regression using dense networks with the rectified linear unit (ReLU) activation function. We present a residual-based framework for conditional variance estimation, deriving nonasymptotic bounds for variance estimation under both heteroscedastic and homoscedastic settings. We relax the sub-Gaussian noise assumption, allowing the proposed bounds to accommodate sub-exponential noise and beyond. Building on this, for a ReLU neural network estimator, we derive non-asymptotic bounds for both its conditional mean and variance estimation, representing the first result for variance estimation using ReLU networks. Furthermore, we develop a ReLU network based robust bootstrap procedure (Efron, 1992) for constructing confidence intervals for the true mean that comes with a theoretical guarantee on the coverage, providing a significant advancement in uncertainty quantification and the construction of reliable confidence intervals in deep learning settings.
arxiv:2412.20355
to draw real - world evidence about the comparative effectiveness of multiple time - varying treatments on patient survival, we develop a joint marginal structural survival model and a novel weighting strategy to account for time - varying confounding and censoring. our methods formulate complex longitudinal treatments with multiple start / stop switches as the recurrent events with discontinuous intervals of treatment eligibility. we derive the weights in continuous time to handle a complex longitudinal dataset without the need to discretize or artificially align the measurement times. we further use machine learning models designed for censored survival data with time - varying covariates and the kernel function estimator of the baseline intensity to efficiently estimate the continuous - time weights. our simulations demonstrate that the proposed methods provide better bias reduction and nominal coverage probability when analyzing observational longitudinal survival data with irregularly spaced time intervals, compared to conventional methods that require aligned measurement time points. we apply the proposed methods to a large - scale covid - 19 dataset to estimate the causal effects of several covid - 19 treatments on the composite of in - hospital mortality and icu admission.
arxiv:2109.13368
we demonstrate quantum interference between indistinguishable photons emitted by two nitrogen-vacancy (nv) centers in distinct diamond samples separated by two meters. macroscopic solid immersion lenses are used to enhance photon collection efficiency. quantum interference is verified by measuring a value of the second-order cross-correlation function $g^{(2)}(0) = 0.35 \pm 0.04 < 0.5$. in addition, optical transition frequencies of the two separated nv centers are tuned into resonance with each other by applying external electric fields. extension of the present approach to generate entanglement of remote solid-state qubits is discussed.
arxiv:1112.3975
logic - qubit entanglement has attracted much attention in both quantum communication and quantum computation. here, we present an efficient protocol to distill the logic - qubit entanglement with the help of cross - kerr nonlinearity. this protocol not only can purify the logic bit - flip error and logic phase - flip error, but also can correct the physical bit - flip error completely. we use cross - kerr nonlinearity to construct quantum nondemolition detectors. our distillation protocol for logic - qubit entanglement may be useful for the practical applications in quantum information, especially in long - distance quantum communication.
arxiv:1605.04633
reducing the impact of errors and decoherence in near - term quantum computers, such as noisy intermediate - scale quantum ( nisq ) devices, is critical for their practical implementation. these factors significantly limit the applicability of quantum algorithms, necessitating a comprehensive understanding of their physical origins to establish effective error mitigation strategies. in this study, we present a non - markovian model of quantum state evolution and a quantum error mitigation cost function tailored for nisq devices interacting with an environment represented by a set of simple harmonic oscillators as a noise source. employing the projection operator formalism and both advanced and retarded propagators in time, we derive the reduced - density operator for the output quantum states in a time - convolutionless form by solving the quantum liouville equation. we examine the output quantum state fluctuations for both identity and controlled - not ( cnot ) gate operations in two - qubit operations using a range of input states. subsequently, these results are compared with experimental data from ion - trap and superconducting quantum computing systems to estimate the crucial parameters of the cost functions for quantum error mitigation. our findings reveal that the cost function for quantum error mitigation increases as the coupling strength between the quantum system and its environment intensifies. this study underscores the significance of non - markovian models in understanding quantum state evolution and highlights the practical implications of the quantum error mitigation cost function when assessing experimental results from nisq devices.
arxiv:2302.05053
it has recently been shown that a hagedorn phase of string gas cosmology can provide a causal mechanism for generating a nearly scale - invariant spectrum of scalar metric fluctuations, without the need for an intervening period of de sitter expansion. in this paper we compute the spectrum of tensor metric fluctuations ( gravitational waves ) in this scenario, and show that it is also nearly scale - invariant. however, whereas the spectrum of scalar modes has a small red - tilt, the spectrum of tensor modes has a small blue tilt, unlike what occurs in slow - roll inflation. this provides a possible observational way to distinguish between our cosmological scenario and conventional slow - roll inflation.
arxiv:hep-th/0604126
i discuss two - particle intensity interferometry as a method to extract from measured 1 - and 2 - particle momentum spectra information on the space - time geometry and dynamics of the particle emitting source. particular attention is given to the rapid expansion and short lifetime of the sources created in relativistic heavy - ion collisions. model - independent expressions for the hbt size parameters in terms of the space - time variances of the source are derived, and a new parametrization of the correlation function is suggested which allows to separate the transverse, longitudinal and temporal extension of the source and to measure its transverse and longitudinal expansion velocity. the effects of resonance decays are also discussed.
arxiv:nucl-th/9609029
in this paper we give new existence results for complete non-orientable minimal surfaces in $\mathbb{r}^3$ with prescribed topology and asymptotic behavior.
arxiv:1312.0513
we propose a randomized block - coordinate variant of the classic frank - wolfe algorithm for convex optimization with block - separable constraints. despite its lower iteration cost, we show that it achieves a similar convergence rate in duality gap as the full frank - wolfe algorithm. we also show that, when applied to the dual structural support vector machine ( svm ) objective, this yields an online algorithm that has the same low iteration complexity as primal stochastic subgradient methods. however, unlike stochastic subgradient methods, the block - coordinate frank - wolfe algorithm allows us to compute the optimal step - size and yields a computable duality gap guarantee. our experiments indicate that this simple algorithm outperforms competing structural svm solvers.
arxiv:1207.4747
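the block-coordinate frank-wolfe scheme described in the abstract above can be illustrated on a toy problem. the sketch below is an illustrative assumption, not the authors' structural-svm solver: it minimizes a separable quadratic over a product of probability simplices, picking one random block per iteration, calling the simplex linear oracle, and taking the step size that is optimal in closed form for a quadratic.

```python
import random

def block_coordinate_frank_wolfe(target, n_blocks=2, dim=3, iters=5000, seed=0):
    """minimize 0.5 * ||x - target||^2 over a product of probability simplices,
    updating one randomly chosen block per iteration (sketch of bcfw)."""
    rng = random.Random(seed)
    # start at a simplex vertex in every block
    x = [[1.0] + [0.0] * (dim - 1) for _ in range(n_blocks)]
    for _ in range(iters):
        b = rng.randrange(n_blocks)                     # pick one block at random
        grad = [x[b][i] - target[b][i] for i in range(dim)]
        # linear oracle over the simplex: the vertex minimizing the gradient
        j = min(range(dim), key=lambda i: grad[i])
        s = [1.0 if i == j else 0.0 for i in range(dim)]
        d = [s[i] - x[b][i] for i in range(dim)]
        dd = sum(di * di for di in d)
        if dd == 0.0:
            continue
        # exact line search for the quadratic objective, clipped to [0, 1]
        gamma = max(0.0, min(1.0, -sum(grad[i] * d[i] for i in range(dim)) / dd))
        x[b] = [x[b][i] + gamma * d[i] for i in range(dim)]
    return x
```

the quantity $-\langle \text{grad}, d \rangle$ at the oracle point is the (block) duality gap the abstract refers to, which is why frank-wolfe methods come with a computable stopping certificate.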
this is part ii of a three-part work. here, we present a second set of five inter-related simplified long short-term memory (lstm) recurrent neural network variants obtained by further reducing adaptive parameters. two of these models were introduced in part i of this work. we evaluate and verify our model variants on the benchmark mnist dataset and assert that these models are comparable to the base lstm model while using progressively fewer parameters. moreover, we observe that when using the relu activation, the test accuracy of the standard lstm drops after a number of epochs as the learning parameter becomes larger; however, all of the new model variants sustain their performance.
arxiv:1707.04623
when conduction electrons are forced to follow the local spin texture, the resulting berry phase can induce an anomalous hall effect (ahe). in gadolinium, as in double-exchange magnets, the exchange interaction is mediated by the conduction electrons, and the ahe may therefore resemble that of chromium dioxide and other metallic double-exchange ferromagnets. the hall resistivity, magnetoresistance, and magnetization of single-crystal gadolinium were measured in fields up to 30 t. measurements between 2 k and 400 k are consistent with previously reported data. a scaling analysis of the hall resistivity as a function of the magnetization suggests the presence of a berry-phase contribution to the anomalous hall effect.
arxiv:cond-mat/0404485
the common thread behind the recent nobel prize in physics to john hopfield and those conferred to giorgio parisi in 2021 and philip anderson in 1977 is disorder. quoting philip anderson: "more is different". this principle has been extensively demonstrated in magnetic systems and spin glasses, and, in this work, we test its validity on hopfield neural networks to show how an assembly of these models displays emergent capabilities that are not present at the single-network level. such an assembly is designed as a layered associative hebbian network that, beyond accomplishing standard pattern recognition, also spontaneously performs pattern disentanglement. namely, when inputted with a composite signal (e.g., a musical chord) it can return the single constituting elements (e.g., the notes making up the chord). here, restricting to notes coded as rademacher vectors and chords that are their mixtures (i.e., spurious states), we use tools borrowed from the statistical mechanics of disordered systems to investigate this task, obtaining the conditions on the model control parameters such that pattern disentanglement is successfully executed.
arxiv:2501.16789
we present deep optical photometry of the afterglow of gamma-ray burst (grb) 041006 and its associated hypernova obtained over 65 days after detection (55 r-band epochs on 10 different nights). our early data ($t < 4$ days) joined with published gcn data indicates a steepening decay, approaching $f_\nu \sim t^{-0.6}$ at early times ($\ll 1$ day) and $f_\nu \sim t^{-1.3}$ at late times. the break at $t_b = 0.16 \pm 0.04$ days is the earliest reported jet break among all grb afterglows. during our first night, we obtained 39 exposures spanning 2.15 hours from 0.62 to 0.71 days after the burst that reveal a smooth afterglow, with an rms deviation of 0.024 mag from the local power-law fit, consistent with photometric errors. after $t \sim 4$ days, the decay slows considerably, and the light curve remains approximately flat at $r \sim 24$ mag for a month before decaying by another magnitude to reach $r \sim 25$ mag two months after the burst. this "bump" is well fitted by a k-corrected light curve of sn 1998bw, but only if stretched by a factor of 1.38 in time. in comparison with the other grb-related sn bumps, grb 041006 stakes out new parameter space for grb/sne, with a very bright and significantly stretched late-time sn light curve. within a small sample of fairly well observed grb/sn bumps, we see a hint of a possible correlation between their peak luminosity and their "stretch factor", broadly similar to the well-studied phillips relation for type ia supernovae.
arxiv:astro-ph/0502319
we construct equilibrium configurations of uniformly rotating neutron stars for selected relativistic mean-field nuclear matter equations of state (eos). we compute in particular the gravitational mass ($m$), equatorial ($r_{\rm eq}$) and polar ($r_{\rm pol}$) radii, eccentricity, angular momentum ($j$), moment of inertia ($i$) and quadrupole moment ($m_2$) of neutron stars stable against mass shedding and secular axisymmetric instability. by constructing the constant-frequency sequence $f = 716$ hz of the fastest observed pulsar, psr j1748-2446ad, and constraining it to be within the stability region, we obtain a lower mass bound for the pulsar, $m_{\rm min} = [1.2$-$1.4] m_\odot$, for the eos employed. moreover we give a fitting formula relating the baryonic mass ($m_b$) and gravitational mass of non-rotating neutron stars, $m_b/m_\odot = m/m_\odot + (13/200)(m/m_\odot)^2$ [or $m/m_\odot = m_b/m_\odot - (1/20)(m_b/m_\odot)^2$], which is independent of the eos. we also obtain a fitting formula, although not eos-independent, relating the gravitational mass and the angular momentum of neutron stars along the secular axisymmetric instability line for each eos. we compute the maximum value of the dimensionless angular momentum, $a/m \equiv c j/(g m^2)$ (or "kerr parameter"), $(a/m)_{\rm max} \approx 0.7$, found to be also independent of the eos. we compare and contrast the quadrupole moment of rotating neutron stars with the one predicted by the kerr exterior solution for the same values of mass and angular momentum. finally we show that, although the mass quadrupole moment of realistic neutron stars never reaches the kerr value, the latter is closely approached from above at the maximum mass value, as physically expected from the no-hair theorem. in particular the stiffer the e
arxiv:1506.05926
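the eos-independent baryonic-gravitational mass fit quoted in the abstract above is simple enough to check numerically. the sketch below (masses in units of $m_\odot$, as in the abstract) implements the forward map and its quoted approximate inverse, and measures the round-trip error for a typical neutron-star mass.

```python
def baryonic_from_gravitational(m):
    """m_b/m_sun = m/m_sun + (13/200) (m/m_sun)^2  (fit quoted in the abstract)."""
    return m + (13.0 / 200.0) * m * m

def gravitational_from_baryonic(mb):
    """m/m_sun = m_b/m_sun - (1/20) (m_b/m_sun)^2  (quoted approximate inverse)."""
    return mb - (1.0 / 20.0) * mb * mb

# round-trip error of the quoted inverse for a typical 1.4 m_sun neutron star
m = 1.4
mb = baryonic_from_gravitational(m)      # 1.5274
err = abs(gravitational_from_baryonic(mb) - m)
```

the two formulas are quoted as mutual approximations rather than exact inverses, so a small residual (about 0.01 $m_\odot$ at $m = 1.4\, m_\odot$) is expected.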
the purpose of this paper is to analyze the capabilities, functionalities and appropriateness of altmetric.com as a data source for the bibliometric analysis of books in comparison to plumx. we perform an exploratory analysis of the metrics the altmetric explorer for institutions platform offers for books. we use two distinct datasets of books, the book collection included in altmetric.com and clarivate's master book list, to analyze altmetric.com's capabilities to download and merge data with external databases. finally, we compare our findings with those obtained in a previous study performed on plumx. altmetric.com combines and orderly tracks a set of data sources linked by doi identifiers to retrieve metadata from books, with google books as its main provider. it also retrieves information from commercial publishers and from some open access initiatives, including those led by university libraries such as harvard library. we find issues with linkages between records and mentions, as well as isbn discrepancies. furthermore, we find that automatic bots greatly affect wikipedia mentions of books. our comparison with plumx suggests that neither of these tools provides a complete picture of the social attention generated by books; they are complementary rather than comparable tools.
arxiv:1809.10128
it is shown that the foldy-wouthuysen transformation for relativistic particles in strong external fields provides the possibility of obtaining a meaningful classical limit of relativistic quantum mechanics. the full agreement between quantum and classical theories is proved. the coincidence of the semiclassical equations of motion of particles and their spins with the corresponding classical equations is established. niels bohr's correspondence principle is valid not only in the limit of large spin quantum numbers but also for particles with any spin as well as for spinless particles.
arxiv:0910.5155
we revisit the $k$-hessian eigenvalue problem on a smooth, bounded, $(k-1)$-convex domain in $\mathbb{r}^n$. first, we obtain a spectral characterization of the $k$-hessian eigenvalue as the infimum of the first eigenvalues of linear second-order elliptic operators whose coefficients belong to the dual of the corresponding gårding cone. second, we introduce a non-degenerate inverse iterative scheme to solve the eigenvalue problem for the $k$-hessian operator. we show that the scheme converges, with a rate, to the $k$-hessian eigenvalue for all $k$. when $2 \leq k \leq n$, we also prove a local $l^1$ convergence of the hessian of solutions of the scheme. hyperbolic polynomials play an important role in our analysis.
arxiv:2012.07670
popular methods for exploring the space of rooted phylogenetic trees use rearrangement moves such as rnni ( rooted nearest neighbour interchange ) and rspr ( rooted subtree prune and regraft ). recently, these moves were generalized to rooted phylogenetic networks, which are a more suitable representation of reticulate evolutionary histories, and it was shown that any two rooted phylogenetic networks of the same complexity are connected by a sequence of either rspr or rnni moves. here, we show that this is possible using only tail moves, which are a restricted version of rspr moves on networks that are more closely related to rspr moves on trees. the connectedness still holds even when we restrict to distance - 1 tail moves ( a localized version of tail - moves ). moreover, we give bounds on the number of ( distance - 1 ) tail moves necessary to turn one network into another, which in turn yield new bounds for rspr, rnni and spr ( i. e. the equivalent of rspr on unrooted networks ). the upper bounds are constructive, meaning that we can actually find a sequence with at most this length for any pair of networks. finally, we show that finding a shortest sequence of tail or rspr moves is np - hard.
arxiv:1708.07656
to our knowledge, there are two main references [9], [12] regarding the periodic solutions of multi-time euler-lagrange systems, even though the multi-time equations appeared in 1935, being introduced by de donder. that is why the central objective of this paper is to solve an open problem raised in [12]: what can we say about periodic solutions of multi-time hamilton systems when the hamiltonian is convex? section 1 recalls well-known facts regarding the equivalence between euler-lagrange equations and hamilton equations. section 2 analyzes the action that produces the multi-time hamilton equations, and introduces the legendre transform of a hamiltonian together with a new dual action. section 3 proves the existence of periodic solutions of the multi-time hamilton equations via periodic extremals of the dual action, when the hamiltonian is convex.
arxiv:math/0510554
in this note we describe the commutant of the multiplication operator by a monomial in the toeplitz algebra of a complete strongly pseudoconvex reinhardt domain.
arxiv:1501.02303
a key puzzle in search, ads, and recommendation is that the ranking model can only utilize a small portion of the vastly available user interaction data. as a result, increasing data volume, model size, or computation flops will quickly suffer from diminishing returns. we examined this problem and found that one of the root causes may lie in the so-called "item-centric" formulation, which has an unbounded vocabulary and thus uncontrolled model complexity. to mitigate quality saturation, we introduce an alternative formulation named "user-centric ranking", which is based on a transposed view of the dyadic user-item interaction data. we show that this formulation has a promising scaling property, enabling us to train better-converged models on substantially larger data sets.
arxiv:2305.15333
matchgates and clifford circuits are two types of quantum circuits which can be efficiently simulated classically, though the underlying reasons are quite different. matchgates are essentially the single particle basis transformations in the majorana fermion representation which can be easily handled classically, while the clifford circuits can be efficiently simulated using the tableau method according to the gottesman - knill theorem. in this work, we propose a new wave - function ansatz in which matrix product states are augmented with the combination of matchgates and clifford circuits ( dubbed mca - mps ) to take advantage of the representing power of all of them. moreover, the optimization of mca - mps can be efficiently implemented within the density matrix renormalization group method. our benchmark results on one - dimensional hydrogen chain show that mca - mps can improve the accuracy of the ground - state calculation by several orders of magnitude over mps with the same bond dimension. this new method provides us a useful approach to study quantum many - body systems. the mca - mps ansatz also expands our understanding of classically simulatable quantum many - body states.
arxiv:2505.08635
by operating an antineutrino detector of simple design during several fuel cycles, we have observed long term changes in antineutrino flux that result from the isotopic evolution of a commercial pressurized water reactor ( pwr ). measurements made with simple antineutrino detectors of this kind offer an alternative means for verifying fissile inventories at reactors, as part of international atomic energy agency ( iaea ) and other reactor safeguards regimes.
arxiv:0808.0698
we address the problem of reaching consensus in the presence of byzantine faults. in particular, we are interested in investigating the impact of message relay on the network connectivity required for a correct iterative approximate byzantine consensus algorithm to exist. the network is modeled by a simple directed graph. we assume a node can send messages to another node that is up to $l$ hops away via forwarding by the intermediate nodes on the routes, where $l \in \mathbb{n}$ is a natural number. we characterize the necessary and sufficient topological conditions on the network structure. the tight conditions we found are consistent with the tight conditions identified for $l = 1$, where only local communication is allowed, and are strictly weaker for $l > 1$. let $l^*$ denote the length of a longest path in the given network. for $l \ge l^*$ and undirected graphs, our conditions hold if and only if $n \ge 3f + 1$ and the node-connectivity of the given graph is at least $2f + 1$, where $n$ is the total number of nodes and $f$ is the maximal number of byzantine nodes; for $l \ge l^*$ and directed graphs, our conditions are equivalent to the tight condition found for exact byzantine consensus. our sufficiency is shown by constructing a correct algorithm, wherein the trim function is constructed based on investigating a newly introduced minimal messages cover property. the proposed trim function also works over multi-graphs.
arxiv:1411.5282
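the tight condition stated in the abstract above for $l \ge l^*$ on undirected graphs ($n \ge 3f + 1$ and node-connectivity at least $2f + 1$) can be turned around to bound the tolerable number of faults. the helper below is a hypothetical illustration of that arithmetic, assuming the node-connectivity of the graph is already known.

```python
def max_byzantine_faults(n, kappa):
    """largest f satisfying the abstract's tight condition for l >= l* on
    undirected graphs: n >= 3f + 1 and node-connectivity kappa >= 2f + 1."""
    return min((n - 1) // 3, (kappa - 1) // 2)
```

for example, a complete graph on 7 nodes (connectivity 6) tolerates 2 byzantine nodes, while any 3-node network tolerates none, matching the classical $n \ge 3f + 1$ bound.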
this work addresses certain ambiguities in the dirac approach to constrained systems. specifically, we investigate the space of so-called "rigging maps" associated with refined algebraic quantization, a particular realization of the dirac scheme. our main result is to provide a condition under which the rigging map is unique, in which case we also show that it is given by group averaging techniques. our results comprise all cases where the gauge group is a finite-dimensional lie group.
arxiv:gr-qc/9902045
the synchrotron self-compton (ssc) models and external compton (ec) models of agn jets with continuous longitudinal and transverse bulk velocity structures are constructed. the observed spectra show complex and interesting patterns for different velocity structures and viewing angles. these models are used to calculate the synchrotron and inverse compton spectra of two typical bl lac objects (blo) (mrk 421 and 0716+714) and one flat spectrum radio quasar (fsrq) (3c 279), and to discuss the implications of jet bulk velocity structures for the unification of blo and fr i radio galaxies (fri). by calculating the synchrotron and ssc spectra of bl lac object jets with continuous bulk velocity structures, we find that the spectra differ strongly from those of jets with uniform velocity structure as the viewing angle increases. the unification of blo and fri is less constrained by viewing angles and would be imprinted by velocity structures intrinsic to the jets themselves. by considering jets with bulk velocity structures constrained by the apparent speed, we discuss the velocity structures imprinted on the observed spectra for different viewing angles. we find that the spectra are greatly affected by longitudinal velocity structures, because the volume elements are compressed or expanded. finally, we present the ec spectra of fsrqs and fr ii radio galaxies (frii) and find that they are weakly affected by velocity structures compared to the synchrotron and ssc spectra.
arxiv:0907.0094
in this paper, we present convergence guarantees for a modified trust - region method designed for minimizing objective functions whose value and gradient and hessian estimates are computed with noise. these estimates are produced by generic stochastic oracles, which are not assumed to be unbiased or consistent. we introduce these oracles and show that they are more general and have more relaxed assumptions than the stochastic oracles used in prior literature on stochastic trust - region methods. our method utilizes a relaxed step acceptance criterion and a cautious trust - region radius updating strategy which allows us to derive exponentially decaying tail bounds on the iteration complexity for convergence to points that satisfy approximate first - and second - order optimality conditions. finally, we present two sets of numerical results. we first explore the tightness of our theoretical results on an example with adversarial zeroth - and first - order oracles. we then investigate the performance of the modified trust - region algorithm on standard noisy derivative - free optimization problems.
arxiv:2205.03667
of these records, and by observing the times and heights of the maximum rise of a particular flood at the stations on the various tributaries, the time of arrival and height of the top of the flood at any station on the main river can be predicted with remarkable accuracy two or more days beforehand. by communicating these particulars about a high flood to places on the lower river, weir - keepers are enabled to fully open the movable weirs beforehand to permit the passage of the flood, and riparian inhabitants receive timely warning of the impending inundation. where portions of a riverside town are situated below the maximum flood - level, or when it is important to protect land adjoining a river from inundations, the overflow of the river must be diverted into a flood - dam or confined within continuous embankments on both sides. by placing these embankments somewhat back from the margin of the river - bed, a wide flood - channel is provided for the discharge of the river as soon as it overflows its banks, while leaving the natural channel unaltered for the ordinary flow. low embankments may be sufficient where only exceptional summer floods have to be excluded from meadows. occasionally the embankments are raised high enough to retain the floods during most years, while provision is made for the escape of the rare, exceptionally high floods at special places in the embankments, where the scour of the issuing current is guarded against, and the inundation of the neighboring land is least injurious. in this manner, the increased cost of embankments raised above the highest flood - level of rare occurrence is avoided, as is the danger of breaches in the banks from an unusually high flood - rise and rapid flow, with their disastrous effects. 
= = embankments = = a most serious objection to the formation of continuous, high embankments along rivers bringing down considerable quantities of detritus, especially near a place where their fall has been abruptly reduced by descending from mountain slopes onto alluvial plains, is the danger of their bed being raised by deposit, producing a rise in the flood - level, and necessitating a raising of the embankments if inundations are to be prevented. longitudinal sections of the po river, taken in 1874 and 1901, show that its bed was materially raised during this period from the confluence of the ticino to below caranella, despite the clearance of sediment effected by the rush through breaches. therefore, the completion of the embankments, together with their raising, would only eventually aggravate
https://en.wikipedia.org/wiki/River_engineering
multi-modal large language models (mllms) can understand image-language prompts and demonstrate impressive reasoning ability. in this paper, we extend mllms' output by empowering mllms with the segmentation ability. the extended mllms can both output language responses to the image-language prompts and segment the regions that the complex question or query in the language prompts focuses on. to this end, the existing work, lisa, enlarges the original word embeddings with an additional segment token and fine-tunes dialogue generation and query-focused segmentation together, where the feature of the segment token is used to prompt the segment-anything model. although they achieve superior segmentation performance, we observe that the dialogue ability decreases by a large margin compared to the original mllms. to maintain the original mllms' dialogue ability, we propose a novel mllms framework, coined llavaseg, which leverages a chain-of-thought prompting strategy to instruct the mllms to segment the target region queried by the user. the mllms are first prompted to reason about a simple description of the target region from the complicated user query, then to extract the visual attributes of the target region according to the mllms' understanding of the image. these visual attributes, such as color and relative locations, are utilized to prompt the downstream segmentation model. experiments show that the proposed method keeps the original dialogue ability and equips the mllms with strong reasoning segmentation ability. the code is available at https://github.com/yuqiyang213/llavaseg.
arxiv:2403.14141
dimension 3, typically $\mathbb{r}^3$. a surface that is contained in a projective space is called a projective surface (see § projective surface). a surface that is not supposed to be included in another space is called an abstract surface. = = examples = = the graph of a continuous function of two variables, defined over a connected open subset of $\mathbb{r}^2$, is a topological surface. if the function is differentiable, the graph is a differentiable surface. a plane is both an algebraic surface and a differentiable surface. it is also a ruled surface and a surface of revolution. a circular cylinder (that is, the locus of a line crossing a circle and parallel to a given direction) is an algebraic surface and a differentiable surface. a circular cone (locus of a line crossing a circle, and passing through a fixed point, the apex, which is outside the plane of the circle) is an algebraic surface which is not a differentiable surface. if one removes the apex, the remainder of the cone is the union of two differentiable surfaces. the surface of a polyhedron is a topological surface, which is neither a differentiable surface nor an algebraic surface. a hyperbolic paraboloid (the graph of the function z = xy) is a differentiable surface and an algebraic surface. it is also a ruled surface, and, for this reason, is often used in architecture. a two-sheet hyperboloid is an algebraic surface and the union of two non-intersecting differentiable surfaces. = = parametric surface = = a parametric surface is the image of an open subset of the euclidean plane (typically $\mathbb{r}^2$) by a continuous function, in a topological space, generally a euclidean space of dimension at least three. usually the function is supposed to be continuously differentiable, and this will always be the case in this article.
specifically, a parametric surface in $\mathbb{r}^3$ is given by three functions of two variables $u$ and $v$, called parameters: $x = f_1(u, v)$, $y = f_2(u, v)$, $z = f_3(u, v)$.
https://en.wikipedia.org/wiki/Surface_(mathematics)
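the definition above translates directly into code. the sketch below uses the standard parametrization of the unit sphere as an illustrative choice of $f_1$, $f_2$, $f_3$ (the example, not the article, picks this surface) and checks that every parameter pair maps to a point on the sphere.

```python
import math

def sphere_point(u, v):
    """parametric surface (x, y, z) = (f1(u,v), f2(u,v), f3(u,v)):
    here the standard parametrization of the unit sphere, chosen as an example."""
    x = math.cos(u) * math.sin(v)
    y = math.sin(u) * math.sin(v)
    z = math.cos(v)
    return x, y, z
```

since $\cos^2 u \sin^2 v + \sin^2 u \sin^2 v + \cos^2 v = 1$, the image of any $(u, v)$ lies on the unit sphere, so this map is a parametric surface in the sense of the definition.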
we consider cogapsvp$_{\sqrt{n}}$, a gap version of the shortest vector in a lattice problem. this problem is known to be in am $\cap$ conp but is not known to be in np or in ma. we prove that it lies inside qma, the quantum analogue of np. this is the first non-trivial upper bound on the quantum complexity of a lattice problem. the proof relies on two novel ideas. first, we give a new characterization of qma, called qma+. working with the qma+ formulation allows us to circumvent a problem which arises commonly in the context of qma: the prover might use entanglement between different copies of the same state in order to cheat. the second idea involves using estimations of autocorrelation functions for verification. we make the important observation that autocorrelation functions are positive definite functions, and using properties of such functions we severely restrict the prover's possibility to cheat. we hope that these ideas will lead to further developments in the field.
arxiv:quant-ph/0307220
the possibility to use the width of the decay $\rho \to e^{+}e^{-}$ to fix the input parameter $g_\rho = 5.0$ of the $su(2) \times su(2)$ chiral - symmetric nambu - jona - lasinio model is discussed. it is shown that a consistent simultaneous description of the processes $\rho \to e^{+}e^{-}$, $\rho \to \pi^{+}\pi^{-}$, $\tau^{-} \to \pi^{-}\pi^{0}\nu_{\tau}$, and $e^{+}e^{-} \to \pi^{+}\pi^{-}$ can be constructed. taking into account the interaction of pions in the final state appears to be important. the obtained theoretical results for the considered processes are in satisfactory agreement with experimental data.
arxiv:2011.02469
if the cosmic dark matter consists of weakly - interacting massive particles, these particles should be produced in reactions at the next generation of high - energy accelerators. measurements at these accelerators can then be used to determine the microscopic properties of the dark matter. from this, we can predict the cosmic density, the annihilation cross sections, and the cross sections relevant to direct detection. in this paper, we present studies in supersymmetry models with neutralino dark matter that give quantitative estimates of the accuracy that can be expected. we show that these are well matched to the requirements of anticipated astrophysical observations of dark matter. the capabilities of the proposed international linear collider ( ilc ) are expected to play a particularly important role in this study.
arxiv:hep-ph/0602187
advances in measurements of jets and collective phenomena in ultrarelativistic heavy ion collisions have led to further understanding of the properties of the medium created in such collisions. measurements of the correlations between the axes of reconstructed jets and the reaction plane or second - order participant plane of the bulk medium ( defined as jet $v_2$ ), as well as the higher - order participant planes ( jet $v_n$ ), provide information on medium - induced parton energy loss. additionally, knowledge of jet $v_n$ as well as the ability to reconstruct the event plane in the presence of a jet are necessary in analyses of jet - triggered particle correlations, which are used to study medium - induced jet shape modification. however, the presence of a jet can bias the event plane calculation, leading to an overestimation of jet $v_2$. this paper proposes a method for calculating jet $v_2$ ( and by extension, the higher jet $v_n$ harmonics ) and the event plane in an unbiased way, using knowledge of the azimuthal angle of the jet axis from full jet reconstruction.
arxiv:1205.1172
we describe a new axion search method based on measuring the variance in the interference of the axion signal using injected photons with a power detector. the need for a linear amplifier is eliminated by putting a strong signal into the microwave cavity, to acquire not only the power excess but also measure the variance of the output power. the interference of the external photons with the axion to photon converted signal greatly enhances the variance at the particular axion frequency, providing evidence of its existence. this method has an advantage in that it can always obtain sensitivity near the quantum noise limit even for a power detector with high dark count rate. we describe the basic concept of this method both analytically and numerically, and we show experimental results using a simple demonstration circuit.
arxiv:2209.07022
the recent discovery of superconductivity in the quasi - one - dimensional compound k$_2$cr$_3$as$_3$, which consists of double - walled tubes of [ ( cr$_3$as$_3$ )$^{2-}$ ]$^\infty$ that run along the c axis, has attracted immediate attention as a potential system for studying superconductors with reduced dimensionality. here we report clear experimental evidence for the unconventional nature of the superconducting order parameter in k$_2$cr$_3$as$_3$, by precisely measuring the temperature dependence of the change in the penetration depth $\delta\lambda(t)$ using a tunnel diode oscillator. linear behavior of $\delta\lambda(t)$ is observed for $t \ll t_c$, instead of the exponential behavior of conventional superconductors, indicating that there are line nodes in the superconducting gap. this is strong evidence for unconventional behavior and may provide key information for identifying the pairing state of this novel superconductor.
arxiv:1501.01880
this work introduces a generalized characteristic mapping method designed to handle non - linear advection with source terms. the semi - lagrangian approach advances the flow map, incorporating the source term via the duhamel integral. we derive a recursive formula for the time decomposition of the map and the source term integral, enhancing computational efficiency. benchmark computations are presented for a test case with an exact solution and for two - dimensional ideal incompressible magnetohydrodynamics ( mhd ). results demonstrate third - order accuracy in both space and time. the submap decomposition method achieves exceptionally high resolution, as illustrated by zooming into fine - scale current sheets. an error estimate is performed and suggests third order convergence in space and time.
arxiv:2411.13772
i discuss the behaviour of algorithms for dynamical fermions as the sea - quark mass decreases. i focus on the hybrid - monte - carlo ( hmc ) algorithm applied to two degenerate flavours of wilson fermions. first, i briefly review the performance obtained in large scale hmc simulations. then i discuss a modified pseudo - fermion action for the hmc simulation that has been introduced three years ago. i summarize recent results obtained with this pseudo - fermion action by the qcdsf and the alpha collaborations. i comment on alternatives to the hmc, like the multiboson algorithm and variants of it.
arxiv:hep-lat/0310029
the mathematical modeling of the contraction of a muscle is a crucial problem in biomechanics. several different models of muscle activation exist in the literature. a possible approach to contractility is the so - called active strain : it is based on a multiplicative decomposition of the deformation gradient into an active contribution, accounting for the muscle activation, and an elastic one, due to the passive deformation of the body. we show that the active strain approach does not allow one to recover the experimental stress - stretch curve corresponding to a uniaxial deformation of a skeletal muscle, whatever the functional form of the strain energy. to overcome this difficulty, we introduce an alternative model, which we call the mixture active strain approach, where the muscle is composed of two different solid phases and only one of them actively contributes to the active behavior of the muscle.
arxiv:1902.06947
the combination of flavor symmetries with grand unification is considered : gut $\times$ flavor. to accommodate three generations the flavor group so ( 3 ) is used. all fermions transform as 3 - vectors under this group. the yukawa couplings are obtained from vacuum expectation values of flavon fields. for the flavon fields ( singlets with respect to the gut group ) and the higgs fields ( singlets with respect to the generation group ) a simple form for the effective potentials is postulated. it automatically leads to spontaneous symmetry breaking for these scalar fields. discrete s4 transformations relate the different locations of the minima of the potentials. these potentials can be used to describe the hierarchy of the well known up quark mass spectrum. also the huge hierarchy of the masses of the higgs fields in grand unified models can be parametrized in this way. it leads to a prediction of the mass of the lightest higgs boson in terms of its vacuum expectation value $v_0$ : $m_{higgs} = \frac{v_0}{\sqrt{2}} = 123$ gev.
arxiv:1012.6028
mendelian randomization ( mr ) has become a popular approach to study causal effects by using genetic variants as instrumental variables. we propose a new mr method, genius - mawii, which simultaneously addresses the two salient phenomena that adversely affect mr analyses : many weak instruments and widespread horizontal pleiotropy. similar to mr genius ( tchetgen tchetgen et al., 2021 ), we achieve identification of the treatment effect by leveraging heteroscedasticity of the exposure. we then derive the class of influence functions of the treatment effect, based on which, we construct a continuous updating estimator and establish its consistency and asymptotic normality under a many weak invalid instruments asymptotic regime by developing novel semiparametric theory. we also provide a measure of weak identification, an overidentification test, and a graphical diagnostic tool. we demonstrate in simulations that genius - mawii has clear advantages in the presence of directional or correlated horizontal pleiotropy compared to other methods. we apply our method to study the effect of body mass index on systolic blood pressure using uk biobank.
arxiv:2107.06238
event horizon telescope ( eht ) images of the horizon - scale emission around the galactic center supermassive black hole sagittarius a * ( sgr a * ) favor accretion flow models with a jet component. however, this jet has not been conclusively detected. using the " best - bet " models of sgr a * from the eht collaboration, we assess whether this non - detection is expected for current facilities and explore the prospects of detecting a jet with vlbi at four frequencies : 86, 115, 230, and 345 ghz. we produce synthetic image reconstructions for current and next - generation vlbi arrays at these frequencies that include the effects of interstellar scattering, optical depth, and time variability. we find that no existing vlbi arrays are expected to detect the jet in these best - bet models, consistent with observations to - date. we show that next - generation vlbi arrays at 86 and 115 ghz - - in particular, the eht after upgrades through the ngeht program and the ngvla - - successfully capture the jet in our tests due to improvements in instrument sensitivity and ( u, v ) coverage at spatial scales critical to jet detection. these results highlight the potential of enhanced vlbi capabilities in the coming decade to reveal the crucial properties of sgr a * and its interaction with the galactic center environment.
arxiv:2405.06029
we study the hochschild and cyclic homologies of noncommutative monogenic extensions. as an application we compute the hochschild and cyclic homologies of the rank 1 hopf algebras introduced by l. krop and d. radford in [ finite dimensional hopf algebras of rank 1 in characteristic 0, journal of algebra 302, no. 1, 214 - 230 ( 2006 ) ].
arxiv:0705.1152
we stress that the lack of direct evidence for supersymmetry forces the soft mass parameters to lie very close to the critical line separating the broken and unbroken phases of the electroweak gauge symmetry. we argue that the level of criticality, or fine - tuning, that is needed to escape the present collider bounds can be quantitatively accounted for by assuming that the overall scale of the soft terms is an environmental quantity. under fairly general assumptions, vacuum - selection considerations force a little hierarchy in the ratio between $m_z^2$ and the supersymmetric particle squared masses, with a most probable value equal to a one - loop factor.
arxiv:hep-ph/0606105
successive releases of planck data have demonstrated the strength of the sunyaev - zeldovich ( sz ) effect in detecting hot baryons out to the galaxy cluster peripheries. to infer the hot gas pressure structure from nearby galaxy clusters to more distant objects, we developed a parametric method that models the spectral energy distribution and spatial anisotropies of both the galactic thermal dust and the cosmic microwave background, which are mixed up with the cluster sz and dust signals. taking advantage of the best angular resolution of the high frequency instrument channels ( 5 arcmin ) and using x - ray priors in the innermost cluster regions that are not resolved with planck, this modelling allowed us to analyze a sample of 61 nearby members of the planck catalog of sz sources ( $0 < z < 0.5$, $\tilde{z} = 0.15$ ) using the full mission data, as well as to examine a distant sample of 23 clusters ( $0.5 < z < 1$, $\tilde{z} = 0.56$ ) that have been recently followed up with xmm - newton and chandra observations. we find that ( i ) the average shape of the mass - scaled pressure profiles agrees with results obtained by the planck collaboration in the nearby cluster sample, and that ( ii ) no sign of evolution is discernible between averaged pressure profiles of the low - and high - redshift cluster samples. in line with theoretical predictions for these halo masses and redshift ranges, the dispersion of individual profiles relative to a self - similar shape stays well below 10 % inside $r_{500}$ but increases in the cluster outskirts.
arxiv:1707.02248
the sites of chromospheric excitation during solar flares are marked by extended extreme ultraviolet ribbons and hard x - ray footpoints. the standard interpretation is that these are the result of heating and bremsstrahlung emission from non - thermal electrons precipitating from the corona. we examine this picture using multi - wavelength observations of the early phase of an m - class flare, sol2010-08-07t18:24. we aim to determine the properties of the heated plasma in the flare ribbons, and to understand the partition of the power input into radiative and conductive losses. using goes, sdo / eve, sdo / aia and rhessi we measure the temperature, emission measure and differential emission measure of the flare ribbons, and deduce approximate density values. the non - thermal emission measure, and the collisional thick target energy input to the ribbons are obtained from rhessi using standard methods. we deduce the existence of a substantial amount of plasma at 10 mk in the flare ribbons, during the pre - impulsive and early - impulsive phase of the flare. the average column emission measure of this hot component is a few times 10^28 cm^-5, and we can calculate that its predicted conductive losses dominate its measured radiative losses. if the power input to the hot ribbon plasma is due to collisional energy deposition by an electron beam from the corona then a low - energy cutoff of around 5 kev is necessary to balance the conductive losses, implying a very large electron energy content. independent of the standard collisional thick - target electron beam interpretation, the observed non - thermal x - rays can be provided if one electron in 10^3 - 10^4 in the 10 mk ( 1 kev ) ribbon plasma has an energy above 10 kev. we speculate that this could arise if a non - thermal tail is generated in the ribbon plasma which is being heated by other means, for example by waves or turbulence.
arxiv:1401.6538
establishing the axion as the dark matter ( dm ) particle after a haloscope discovery typically requires follow - up experiments to break the degeneracy between the axion ' s coupling to photons and its local dm abundance. given that a discovery would justify more significant investments, we explore the prospects of ambitious light - shining - through - a - wall ( lsw ) setups to probe the qcd axion band. leveraging the excellent mass determination in haloscopes, we show how to design lsw experiments with lengths on the order of 100 km and suitably aligned magnetic fields with apertures of around 1 m to reach well - motivated axion models across up to four orders of magnitude in mass. beyond presenting a concrete plan for post - discovery experimental efforts, we briefly discuss complementary experiments and future directions beyond lsw experiments.
arxiv:2407.04772
akin to many subareas of computer vision, the recent advances in deep learning have also significantly influenced the literature on optical flow. previously, the literature had been dominated by classical energy - based models, which formulate optical flow estimation as an energy minimization problem. however, as the practical benefits of convolutional neural networks ( cnns ) over conventional methods have become apparent in numerous areas of computer vision and beyond, they have also seen increased adoption in the context of motion estimation to the point where the current state of the art in terms of accuracy is set by cnn approaches. we first review this transition as well as the developments from early work to the current state of cnns for optical flow estimation. alongside, we discuss some of their technical details and compare them to recapitulate which technical contribution led to the most significant accuracy improvements. then we provide an overview of the various optical flow approaches introduced in the deep learning age, including those based on alternative learning paradigms ( e. g., unsupervised and semi - supervised methods ) as well as the extension to the multi - frame case, which is able to yield further accuracy improvements.
arxiv:2004.02853
infrared spectra of rg1, 2 - c6h6 complexes ( rg = he, ne, ar ) are observed in the region of the nu12 fundamental of c6h6 using a pulsed supersonic jet expansion and a tunable optical parametric oscillator laser source. the mixed trimer he - ne - c6h6 is also detected. four bands are analyzed for each complex, namely nu12 itself ( ~ 3048 cm - 1 ) and three linked combination bands ( ~ 3079, 3100, and 3102 cm - 1 ). the results are consistent with previous ultraviolet and microwave results, with ne2 - c6h6 and he - ne - c6h6 being analyzed spectroscopically here for the first time.
arxiv:1809.07930
xva is a material component of a trade valuation and hence it must impact the decision to exercise options within a given netting set. this is true for both unsecured trades and secured / cleared trades where kva and mva play a material role even if cva and fva do not. however, this effect has frequently been ignored in xva models and indeed in exercise decisions made by option owners. this paper describes how xva impacts the exercise decision and how this can be readily evaluated using regression techniques ( longstaff and schwartz 2001 ). the paper then assesses the materiality of the impact of xva at the exercise boundary on swaption examples.
arxiv:1610.00256
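The regression technique of Longstaff and Schwartz cited above can be sketched in a few lines. This is a generic least-squares Monte Carlo valuation of a Bermudan put (illustrative only: the parameter values and the quadratic polynomial basis are assumptions, and no XVA terms are included).

```python
import numpy as np

rng = np.random.default_rng(0)

# Bermudan put under GBM: S0=100, K=100, r=0.05, sigma=0.2, T=1, 4 exercise dates
s0, k, r, sigma, t, steps, paths = 100.0, 100.0, 0.05, 0.2, 1.0, 4, 50_000
dt = t / steps
z = rng.standard_normal((paths, steps))
s = s0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                          + sigma * np.sqrt(dt) * z, axis=1))

payoff = np.maximum(k - s[:, -1], 0.0)          # value at maturity
for i in range(steps - 2, -1, -1):              # backward induction
    itm = k - s[:, i] > 0                       # regress only on in-the-money paths
    x, y = s[itm, i], payoff[itm] * np.exp(-r * dt)
    coef = np.polyfit(x, y, 2)                  # quadratic basis (Longstaff-Schwartz)
    continuation = np.polyval(coef, x)
    exercise = k - s[itm, i]
    stop = exercise > continuation              # exercise where immediate value wins
    payoff[itm] = np.where(stop, exercise, payoff[itm] * np.exp(-r * dt))
    payoff[~itm] *= np.exp(-r * dt)
price = float(np.mean(payoff) * np.exp(-r * dt))
print(round(price, 2))
```

As the paper argues, the same backward-induction regression can be extended so that the continuation value includes XVA costs, which shifts the exercise boundary.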
to realize practical quantum computers, a large number of quantum bits ( qubits ) will be required. semiconductor spin qubits offer advantages such as high scalability and compatibility with existing semiconductor technologies. however, as the number of qubits increases, manual qubit tuning becomes infeasible, motivating automated tuning approaches. in this study, we use u - net, a neural network method for object detection, to identify charge transition lines in experimental charge stability diagrams. the extracted charge transition lines are analyzed using the hough transform to determine their positions and angles. based on this analysis, we obtain the transformation matrix to virtual gates. furthermore, we identify the single - electron regime by clustering the hough transform outputs. we also show the single - electron regime within the virtual gate space. these sequential processes are performed automatically. this approach will advance automated control technologies for large - scale quantum devices.
arxiv:2501.05878
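As a rough illustration of the second stage described above, extracting line positions and angles with the Hough transform, here is a minimal pure-numpy accumulator (a hypothetical stand-in: the paper's pipeline runs this on U-Net output, not on synthetic points).

```python
import numpy as np

def hough_peak(points, n_theta=180, n_rho=200, rho_max=2.0):
    """Minimal Hough transform: each point votes for all (theta, rho) pairs
    with rho = x*cos(theta) + y*sin(theta); return the peak cell."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        ok = (idx >= 0) & (idx < n_rho)
        acc[np.arange(n_theta)[ok], idx[ok]] += 1
    t, r = np.unravel_index(np.argmax(acc), acc.shape)
    return np.degrees(thetas[t]), -rho_max + 2 * rho_max * r / (n_rho - 1)

# points on the vertical line x = 0.5, a stand-in for a charge transition line
pts = [(0.5, y / 10) for y in range(-10, 11)]
angle, rho = hough_peak(pts)
print(round(angle), round(rho, 2))   # normal angle ~0 deg, distance ~0.5
```

The recovered (angle, rho) pairs are exactly the quantities the abstract uses to build the virtual-gate transformation matrix.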
the aim of this paper is to construct chains of length $(2^\kappa)^+$ in the sense of the rudin - frolík order in $\beta\kappa$ for $\kappa$ regular.
arxiv:2304.00097
we introduce a novel framework, termed $\lambda$dd, that revisits binary decision diagrams from a purely functional point of view. the framework allows one to classify the already existing variants, including the most recent ones like chain - dd and esrbdd, as implementations of a special class of ordered models. we enumerate, in a principled way, all the models of this class and isolate its most expressive model. this new model, termed $\lambda$dd - o - nucx, is suitable for both dense and sparse boolean functions, and is moreover invariant under negation. the canonicity of $\lambda$dd - o - nucx is formally verified using the coq proof assistant. we furthermore give bounds on the size of the different diagrams : the potential gain achieved by more expressive models can be at most linear in the number of variables n.
arxiv:2003.09340
we formulate a reduced - order strategy for efficiently forecasting complex high - dimensional dynamical systems entirely based on data streams. the first step of our method involves reconstructing the dynamics in a reduced - order subspace of choice using gaussian process regression ( gpr ). gpr simultaneously allows for reconstruction of the vector field and, more importantly, estimation of local uncertainty. the latter is due to i ) local interpolation error and ii ) truncation of the high - dimensional phase space. this uncertainty component can be analytically quantified in terms of the gpr hyperparameters. in the second step we formulate stochastic models that explicitly take into account the reconstructed dynamics and their uncertainty. for regions of the attractor which are not sufficiently sampled for our gpr framework to be effective, an adaptive blended scheme is formulated to enforce correct statistical steady state properties, matching those of the real data. we examine the effectiveness of the proposed method on complex systems including the lorenz 96 and kuramoto - sivashinsky equations, as well as a prototype climate model. we also study the performance of the proposed approach as the intrinsic dimensionality of the system attractor increases in highly turbulent regimes.
arxiv:1611.01583
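The first step of the method, GPR reconstruction with an analytic local-uncertainty estimate, can be sketched as follows (a generic one-dimensional RBF-kernel example, not the authors' code; the length scale and noise level are made-up values).

```python
import numpy as np

def gpr_predict(x_train, y_train, x_test, length=1.0, noise=1e-6):
    """GP regression with an RBF kernel: posterior mean and variance,
    the variance serving as the local interpolation-uncertainty estimate."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)
    K = k(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = k(x_test, x_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    # var(x*) = k(x*, x*) - k(x*, X) K^{-1} k(X, x*), with k(x*, x*) = 1
    var = 1.0 - np.einsum('ij,ij->i', Ks, np.linalg.solve(K, Ks.T).T)
    return mean, var

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.sin(x)
mean, var = gpr_predict(x, y, np.array([1.5, 10.0]))
# near the data the mean tracks sin(x) and the variance is small;
# far from the data the posterior reverts to the prior (variance -> 1)
print(abs(mean[0] - np.sin(1.5)) < 0.1, var[0] < 0.1, var[1] > 0.9)
```

The growth of the variance away from sampled regions is what triggers the adaptive blended scheme described in the abstract.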
this paper is a companion paper to [ g4 ], where sharp estimates are proven for fourier transforms of compactly supported functions built out of two - dimensional real - analytic functions. the theorems of [ g4 ] are stated in a rather general form. in this paper, we expand on the results of [ g4 ] and show that there is a class of " well - behaved " functions that contains a number of relevant examples for which such estimates can be explicitly described in terms of the newton polygon of the function. we will further see that for a subclass of these functions, one can prove noticeably more precise estimates, again in an explicitly describable way.
arxiv:1605.08089
we present an exact solution for the distribution of the sample averaged monomer to monomer distance of ring polymers. for non - interacting and weakly - interacting models these distributions correspond to the distribution of the area under the reflected bessel bridge and the bessel excursion, respectively, and are shown to be identical in dimensions $d \geq 2$. a symmetry of the problem reveals that dimensions $d$ and $4 - d$ are equivalent, thus the celebrated airy distribution describing the areal distribution of the one dimensional brownian excursion also describes a polymer in three dimensions. for a self - avoiding polymer in dimension $d$ we find numerically that the fluctuations of the scaled averaged distance are nearly identical in dimensions 2 and 3, and are well described to a first approximation by the non - interacting excursion model in dimension 5.
arxiv:1501.06151
multilinear embedding estimates for the fractional laplacian are obtained in terms of functionals defined over a hyperbolic surface. convolution estimates used in the proof enlarge the classical framework of the convolution algebra for riesz potentials to include the critical endpoint index, and provide new realizations for fractional integral inequalities that incorporate restriction to smooth submanifolds. results developed here are modeled on the space - time estimate used by klainerman and machedon in their proof of uniqueness for the gross - pitaevskii hierarchy.
arxiv:1204.5684
in this paper we introduce the class $\mathcal{s}_{g}^{\ast}$ of analytic functions, which is related to starlike functions and the generating function of gregory coefficients. by using bounds on some coefficient functionals for the family of functions with positive real part, we obtain for functions in the class $\mathcal{s}_{g}^{\ast}$ several sharp bounds on the first six coefficients and also further sharp bounds on the corresponding hankel determinants.
arxiv:2306.02431
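For readers unfamiliar with Gregory coefficients: they are the coefficients g_k of the generating function z / log(1 + z). A short exact computation (illustrative, not from the paper) recovers the classical values 1, 1/2, -1/12, 1/24, -19/720.

```python
from fractions import Fraction

def gregory_coefficients(n):
    """First n Gregory coefficients g_k from the generating function
    z / log(1 + z) = sum_k g_k z^k, via a power-series reciprocal."""
    # log(1 + z) / z = sum_{k>=0} (-1)^k z^k / (k + 1)
    a = [Fraction((-1)**k, k + 1) for k in range(n)]
    g = [Fraction(1)]
    # reciprocal-series recurrence: g_k = -sum_{j=1}^{k} a_j g_{k-j}
    for k in range(1, n):
        g.append(-sum(a[j] * g[k - j] for j in range(1, k + 1)))
    return g

print(gregory_coefficients(5))
```

The recurrence is just the standard reciprocal of the power series log(1 + z) / z, carried out in exact rational arithmetic.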
supernova remnants ( snrs ) contribute to regulate the star formation efficiency and evolution of galaxies. as they expand into the interstellar medium ( ism ), they transfer vast amounts of energy and momentum that displace, compress and heat the surrounding material. despite the extensive work in galaxy evolution models, it remains to be observationally validated to what extent the molecular ism is affected by the interaction with snrs. we use the first results of the eso - aro public spectroscopic survey shrec to investigate the shock interaction between the snr ic443 and the nearby molecular clump g. we use high sensitivity sio ( 2 - 1 ) and h$^{13}$co$^+$ ( 1 - 0 ) maps obtained by shrec together with sio ( 1 - 0 ) observations obtained with the 40m telescope at the yebes observatory. we find that the bulk of the sio emission is arising from the ongoing shock interaction between ic443 and clump g. the shocked gas shows a well ordered kinematic structure, with velocities blue - shifted with respect to the central velocity of the snr, similar to what is observed toward other snr - cloud interaction sites. the shock compression enhances the molecular gas density, n(h$_2$), up to $>10^5$ cm$^{-3}$, a factor of >10 higher than the ambient gas density and similar to values required to ignite star formation. finally, we estimate that up to 50% of the momentum injected by ic443 is transferred to the interacting molecular material. therefore the molecular ism may represent an important momentum carrier in sites of snr - cloud interactions.
arxiv:2201.03008
we propose a new " unbiased through textual description ( utd ) " video benchmark based on unbiased subsets of existing video classification and retrieval datasets to enable a more robust assessment of video understanding capabilities. namely, we tackle the problem that current video benchmarks may suffer from different representation biases, e. g., object bias or single - frame bias, where mere recognition of objects or utilization of only a single frame is sufficient for correct prediction. we leverage vlms and llms to analyze and debias benchmarks from such representation biases. specifically, we generate frame - wise textual descriptions of videos, filter them for specific information ( e. g. only objects ) and leverage them to examine representation biases across three dimensions : 1 ) concept bias - determining if a specific concept ( e. g., objects ) alone suffice for prediction ; 2 ) temporal bias - assessing if temporal information contributes to prediction ; and 3 ) common sense vs. dataset bias - evaluating whether zero - shot reasoning or dataset correlations contribute to prediction. we conduct a systematic analysis of 12 popular video classification and retrieval datasets and create new object - debiased test splits for these datasets. moreover, we benchmark 30 state - of - the - art video models on original and debiased splits and analyze biases in the models. to facilitate the future development of more robust video understanding benchmarks and models, we release : " utd - descriptions ", a dataset with our rich structured descriptions for each dataset, and " utd - splits ", a dataset of object - debiased test splits.
arxiv:2503.18637
a new measuring technique dedicated to bubble velocity and size measurements in complex bubbly flows such as those occurring in bubble columns is proposed. this sensor combines the phase detection capability of a conical optical fiber with velocity measurements from the doppler signal induced by an interface approaching the extremity of a single - mode fiber. the analysis of the probe functioning and of its response in controlled situations has shown that the doppler probe provides the translation velocity of bubbles projected along the probe axis. a reliable signal processing routine has been developed that exploits the doppler signal arising at the gas - to - liquid transition : the resulting uncertainty on velocity is at most 14%. such a doppler probe provides statistics on velocity and on size of gas inclusions, as well as local variables including void fraction, gas volumetric flux, number density and its flux. that sensor has been successfully exploited in an air - tap water bubble column 0.4 m in diameter for global gas hold - up from 2.5 to 30%. in the heterogeneous regime, the transverse profiles of the mean bubble velocity scaled by the value on the axis happen to be self - similar in the quasi fully developed region of the column. a fit is proposed for these profiles. in addition, on the axis, the standard deviation of bubble velocity scaled by the mean velocity increases with vsg in the homogeneous regime, and it remains stable, close to 0.55, in the heterogeneous regime.
arxiv:2109.08100
atmospheric wind speeds and their fluctuations at different locations ( onshore and offshore ) are examined. one of the most striking features is the marked intermittency of the probability density functions ( pdf ) of velocity differences, no matter what location is considered. the shape of these pdfs is found to be robust over a wide range of scales, which seems to contradict the mathematical concept of stability, where a gaussian distribution should be the limiting one. motivated by the nonstationarity of atmospheric winds, it is shown that the intermittent distributions can be understood as a superposition of different subsets of isotropic turbulence. thus we suggest a simple stochastic model to reproduce the measured statistics of wind speed fluctuations.
arxiv:nlin/0408005
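The superposition idea in the abstract above, conditionally Gaussian velocity increments whose variance itself fluctuates, produces heavy-tailed (intermittent) PDFs. A minimal numerical illustration with hypothetical parameters, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(7)

# superpose Gaussian subsets with a fluctuating turbulence level: each
# sample's standard deviation is itself random (log-normal), mimicking
# nonstationary atmospheric conditions
n = 200_000
sigma = np.exp(0.5 * rng.standard_normal(n))   # fluctuating variance
increments = sigma * rng.standard_normal(n)    # conditionally Gaussian

def excess_kurtosis(x):
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2)**2 - 3.0

# the mixture is heavy-tailed (large positive excess kurtosis), unlike
# any single Gaussian subset (excess kurtosis near zero)
print(excess_kurtosis(increments) > 1.0,
      excess_kurtosis(rng.standard_normal(n)) < 0.1)
```

For this log-normal variance choice the excess kurtosis is 3e - 3, roughly 5.2, so the intermittency survives even though every conditional subset is exactly Gaussian.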
a comprehensive study of the effect of wall heating or cooling on the linear, transient and secondary growth of instability in channel flow is conducted. the effects of viscosity stratification, heat diffusivity and buoyancy are estimated separately, with some unexpected results. from linear stability results, it has been accepted that heat diffusivity does not affect stability. however, we show that realistic prandtl numbers cause a transient growth of disturbances that is an order of magnitude higher than at zero prandtl number. buoyancy, even at fairly low levels, gives rise to high levels of subcritical energy growth. unusually for transient growth, both of these are spanwise - independent and not in the form of streamwise vortices. at moderate grashof numbers, exponential growth dominates, with distinct rayleigh - bénard and poiseuille modes for grashof numbers up to $\sim 25000$, which merge thereafter. wall heating has a converse effect on the secondary instability compared to the primary, destabilising significantly when viscosity decreases towards the wall. it is hoped that the work will motivate experimental and numerical efforts to understand the role of wall heating in the control of channel and pipe flows.
arxiv:physics/0603245
learning the causes of time - series data is a fundamental task in many applications, spanning from finance to earth sciences or bio - medical applications. common approaches for this task are based on vector auto - regression, and they do not take into account unknown confounding between potential causes. however, in settings with many potential causes and noisy data, these approaches may be substantially biased. furthermore, potential causes may be correlated in practical applications. moreover, existing algorithms often do not work with cyclic data. to address these challenges, we propose a new doubly robust method for structure identification from temporal data ( sitd ). we provide theoretical guarantees, showing that our method asymptotically recovers the true underlying causal structure. our analysis extends to cases where the potential causes have cycles and they may be confounded. we further perform extensive experiments to showcase the superior performance of our method.
arxiv:2311.06012
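the vector auto - regression baseline that the abstract contrasts with can be sketched in a few lines. the snippet below is a hypothetical least - squares var(1) structure identification on synthetic data with made - up coefficients; it is not the authors' doubly robust sitd method and has none of its confounding or cycle guarantees.

```python
import numpy as np

# hypothetical baseline: identify causal structure in a var(1) model by
# ordinary least squares, then threshold the coefficients. this is the
# kind of approach the abstract argues can be biased under confounding;
# it is NOT the sitd method. coefficients below are made up.
rng = np.random.default_rng(0)
T, d = 2000, 3
A = np.array([[0.5, 0.0, 0.0],
              [0.4, 0.3, 0.0],
              [0.0, 0.4, 0.3]])   # true lag-1 coefficients (row = effect)
X = np.zeros((T, d))
for t in range(1, T):
    X[t] = A @ X[t - 1] + 0.1 * rng.standard_normal(d)

# least-squares regression of X[t] on X[t-1]; lstsq maps past -> future,
# so the estimate of A is the transpose of the solution
past, future = X[:-1], X[1:]
A_hat = np.linalg.lstsq(past, future, rcond=None)[0].T

# read off an adjacency structure by thresholding small coefficients
adj = (np.abs(A_hat) > 0.15).astype(int)
print(adj)
```

with this seed and sample size the thresholded estimate recovers the lower - triangular pattern of the true coefficients; under hidden confounding or short, noisy series the same estimator can report spurious edges, which is the failure mode the paper targets.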
in this paper, we study chiral symmetry restoration in the hadronic spectrum in the framework of generalised nambu - jona - lasinio quark models with instantaneous confining quark kernels. we investigate a heavy - light quarkonium and derive its bound - state equation in the form of a schroedinger - like equation and, after the exact inverse foldy - wouthuysen transformation, in the form of a dirac - like equation. we discuss the lorentz nature of confinement for such a system and demonstrate explicitly the effective chiral symmetry restoration for highly excited states in the mesonic spectrum. we give an estimate for the scale of this restoration.
arxiv:hep-ph/0507330
in this paper, we propose a hybrid collocation method based on finite differences and haar wavelets to solve nonlocal hyperbolic partial differential equations. developing an efficient and accurate numerical method for such problems is difficult due to the presence of a nonlocal boundary condition. the speciality of the proposed method is that it handles the integral boundary condition efficiently using the given data. owing to attractive properties such as closed - form expressions, compact support and orthonormality, haar wavelets are used for spatial discretization, while a second - order finite difference is used for temporal discretization. stability and error estimates are investigated in order to ensure the convergence of the method. finally, numerical results are compared with a few existing results, and the results obtained by the proposed method are shown to be better.
arxiv:2211.07249
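the orthonormality that the abstract cites as a key property of haar wavelets is easy to verify numerically. the sketch below builds the first few members of a standard haar family on [0, 1) and checks their gram matrix; it illustrates the basis property only, not the paper's hybrid collocation scheme, and the resolution n = 4096 is an arbitrary choice.

```python
import numpy as np

# numerical check that the haar wavelet family is orthonormal on [0, 1).
# this illustrates the basis property only, not the collocation method.
def haar(x):
    """mother haar wavelet: +1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere."""
    return np.where((x >= 0) & (x < 0.5), 1.0,
                    np.where((x >= 0.5) & (x < 1.0), -1.0, 0.0))

N = 4096                        # quadrature resolution (dyadic)
x = (np.arange(N) + 0.5) / N    # midpoint grid on [0, 1)

# family h_{j,k}(x) = 2^{j/2} h(2^j x - k), plus the constant scaling function
basis = [np.ones(N)]
for j in range(3):
    for k in range(2 ** j):
        basis.append(2 ** (j / 2) * haar(2 ** j * x - k))

# gram matrix of l2 inner products via midpoint quadrature
G = np.array([[np.mean(u * v) for v in basis] for u in basis])
print(np.allclose(G, np.eye(len(basis))))  # → True
```

midpoint quadrature is exact here because every basis function is constant on each dyadic cell of width 1/N, so the gram matrix comes out as the identity to machine precision.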
ab initio calculations show an antiferromagnetic - ferromagnetic phase transition around 9 - 10 gpa and a magnetic anomaly at 12 gpa in bifeo3. the magnetic phase transition also involves a structural and insulator - metal transition. the g - type afm configuration under pressure leads to an increase of the y component and a decrease of the z component of the magnetization, which is caused by the splitting of the dz2 orbital from the doubly degenerate eg states. our results agree with recent experimental results.
arxiv:1212.4591
there is vast empirical evidence that given a set of assumptions on the real - world dynamics of an asset, the european options on this asset are not efficiently priced in options markets, giving rise to arbitrage opportunities. we study these opportunities in a generic stochastic volatility model and exhibit the strategies which maximize the arbitrage profit. in the case when the misspecified dynamics is a classical black - scholes one, we give a new interpretation of the classical butterfly and risk reversal contracts in terms of their ( near ) optimality for arbitrage strategies. our results are illustrated by a numerical example including transaction costs.
arxiv:1002.5041
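the butterfly contract mentioned in the abstract is a simple combination of three calls, and its model price under black - scholes follows from the standard closed form. the snippet below is a generic illustration with made - up market parameters; it prices the contract and does not implement the paper's optimal arbitrage strategies.

```python
import math

# standard black-scholes call price (no dividends) and a butterfly
# assembled from three calls: long one call at K - w, short two at K,
# long one at K + w. parameter values below are illustrative only.
def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def butterfly(S, K, w, T, r, sigma):
    return (bs_call(S, K - w, T, r, sigma)
            - 2.0 * bs_call(S, K, T, r, sigma)
            + bs_call(S, K + w, T, r, sigma))

price = butterfly(S=100.0, K=100.0, w=5.0, T=0.5, r=0.01, sigma=0.2)
print(price)
```

the butterfly price is positive because the black - scholes call price is strictly convex in the strike; a negative market quote for this spread is one of the textbook static arbitrage violations.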
not always mean it is required, especially when dealing with genetic or functional redundancy. tracking experiments seek to gain information about the localisation and interaction of the desired protein. one way to do this is to replace the wild - type gene with a ' fusion ' gene, which is a juxtaposition of the wild - type gene with a reporting element such as green fluorescent protein ( gfp ) that will allow easy visualisation of the products of the genetic modification. while this is a useful technique, the manipulation can destroy the function of the gene, creating secondary effects and possibly calling into question the results of the experiment. more sophisticated techniques are now in development that can track protein products without mitigating their function, such as the addition of small sequences that will serve as binding motifs to monoclonal antibodies. expression studies aim to discover where and when specific proteins are produced. in these experiments, the dna sequence before the dna that codes for a protein, known as a gene ' s promoter, is reintroduced into an organism with the protein coding region replaced by a reporter gene such as gfp or an enzyme that catalyses the production of a dye. thus the time and place where a particular protein is produced can be observed. expression studies can be taken a step further by altering the promoter to find which pieces are crucial for the proper expression of the gene and are actually bound by transcription factor proteins ; this process is known as promoter bashing. === industrial === organisms can have their cells transformed with a gene coding for a useful protein, such as an enzyme, so that they will overexpress the desired protein. mass quantities of the protein can then be manufactured by growing the transformed organism in bioreactor equipment using industrial fermentation, and then purifying the protein.
some genes do not work well in bacteria, so yeast, insect cells or mammalian cells can also be used. these techniques are used to produce medicines such as insulin, human growth hormone, and vaccines, supplements such as tryptophan, aid in the production of food ( chymosin in cheese making ) and fuels. other applications with genetically engineered bacteria could involve making them perform tasks outside their natural cycle, such as making biofuels, cleaning up oil spills, carbon and other toxic waste and detecting arsenic in drinking water. certain genetically modified microbes can also be used in biomining and bioremediation, due to their ability to extract heavy metals from their environment and incorporate them into compounds that are more easily recover
https://en.wikipedia.org/wiki/Genetic_engineering