| text | source |
|---|---|
We report a significant Dzyaloshinskii-Moriya interaction (DMI) and perpendicular magnetic anisotropy (PMA) at interfaces comprising hexagonal boron nitride (h-BN) and Co. By comparing the behavior of these phenomena at graphene/Co and h-BN/Co interfaces, we find that the DMI in the latter increases as a function of Co thickness and, beyond three monolayers, stabilizes at values one order of magnitude larger than those at graphene/Co, where the DMI shows the opposite, decreasing behavior. At the same time, the PMA for both systems shows similar trends, with larger values for graphene/Co and no significant variation over the full range of Co thicknesses. Furthermore, using micromagnetic simulations we demonstrate that such significant DMI and PMA values, remaining stable over a large range of Co thicknesses, give rise to the formation of skyrmions at small applied external fields in the range of 200-250 mT and at temperatures up to 100 K. These findings open up further possibilities for integrating two-dimensional (2D) materials in spin-orbitronics devices. | arxiv:2105.09015 |
Interface adaptation allows code written for one interface to be used with a software component that exposes another interface. When multiple adapters are chained together to make certain adaptations possible, we need a way to analyze how well the adaptation is done when more than one chain can be used. We introduce an approach to precisely analyzing the loss in an interface adapter chain using a simple form of abstract interpretation. | arxiv:1011.6461 |
Pre-trained large language models (PLLMs) like OpenAI ChatGPT and Google Gemini face challenges such as inaccurate factual recall, hallucinations, biases, and future data leakage for temporal knowledge graph (TKG) forecasting. To address these issues, we introduce SLA-TKGF (Small-scale Language Assistant for TKG Forecasting), which utilizes retrieval-augmented generation (RAG)-aided, custom-trained small-scale language models, trained from scratch in a tabula rasa fashion, for effective TKG forecasting. Our framework constructs knowledge-infused prompts with relevant historical data from TKGs, web search results, and PLLM-generated textual descriptions to understand historical entity relationships prior to the target time. It leverages these external knowledge-infused prompts for deeper understanding and reasoning over context-specific semantic and temporal information, zero-shot prompting small-scale language models for more accurate predictions of future events within TKGs. It reduces hallucinations and mitigates distributional-shift challenges by comprehending changing trends over time. As a result, it enables more accurate and contextually grounded forecasts of future events while minimizing computational demands. Rigorous empirical studies demonstrate our framework's robustness, scalability, and state-of-the-art (SOTA) performance on benchmark datasets, with interpretable and trustworthy TKG forecasting. | arxiv:2408.13273 |
research in a university. Some positions, such as faculty (academic staff), require a Doctor of Philosophy. In Germany a Toningenieur is an audio engineer who designs, builds and repairs audio systems. === Sub-disciplines === The listed subdisciplines are based on PACS (Physics and Astronomy Classification Scheme) coding used by the Acoustical Society of America, with some revision. ==== Audio signal processing ==== Audio engineers develop audio signal processing algorithms to allow the electronic manipulation of audio signals. These can be processed at the heart of much audio production, such as reverberation, Auto-Tune or perceptual coding (e.g. MP3 or Opus). Alternatively, the algorithms might perform echo cancellation, or identify and categorize audio content through music information retrieval or acoustic fingerprinting. ==== Architectural acoustics ==== Architectural acoustics is the science and engineering of achieving good sound within a room. For audio engineers, architectural acoustics can be about achieving good speech intelligibility in a stadium or enhancing the quality of music in a theatre. Architectural acoustic design is usually done by acoustic consultants. ==== Electroacoustics ==== Electroacoustics is concerned with the design of headphones, microphones, loudspeakers, sound reproduction systems and recording technologies. Examples of electroacoustic design include portable electronic devices (e.g. mobile phones, portable media players, and tablet computers), sound systems in architectural acoustics, surround sound and wave field synthesis in movie theaters, and vehicle audio. ==== Musical acoustics ==== Musical acoustics is concerned with researching and describing the science of music. In audio engineering, this includes the design of electronic instruments such as synthesizers; the human voice (the physics and neurophysiology of singing); physical modeling of musical instruments; room acoustics of concert venues; music information retrieval; music therapy; and the perception and cognition of music. ==== Psychoacoustics ==== Psychoacoustics is the scientific study of how humans respond to what they hear. At the heart of audio engineering are listeners, who are the final arbiters of whether an audio design is successful, such as whether a binaural recording sounds immersive. ==== Speech ==== The production, computer processing and perception of speech is an important part of audio engineering. Ensuring speech is transmitted intelligibly | https://en.wikipedia.org/wiki/Audio_engineer |
In the present paper we discuss the so-called stripe phase of two-dimensional systems, which was observed in computer simulations of core-softened systems and in some experiments with colloidal films. We show that the stripe phase is in fact an oblique crystal and find its unit vectors, i.e., we give a full description of the structure of this crystalline phase. | arxiv:1812.04322 |
Connected and automated vehicles (CAVs) are poised to transform the transport system. However, significant uncertainties remain about their impact, particularly regarding concerns that this advanced technology might exacerbate injustices, such as safety disparities for vulnerable road users (VRUs). Therefore, understanding the potential conflicts of this technology with societal values such as justice and safety is crucial for responsible implementation. To date, no research has focused on what safety and justice in transport mean in the context of CAV deployment and how the potential benefits of CAVs can be harnessed without exacerbating the existing vulnerabilities and injustices VRUs face. This paper addresses this gap by exploring car drivers' and pedestrians' perceptions of safety and justice issues that CAVs might exacerbate, using an existing theoretical framework. Employing a qualitative approach, the study delves into the nuanced aspects of these concepts. Interviews were conducted with 30 participants aged between 18 and 79 in Queensland, Australia. These interviews were recorded, transcribed, organised, and analysed using reflexive thematic analysis. Three main themes emerged from the participants' discussions: CAVs as a safety problem for VRUs, CAVs as a justice problem for VRUs, and CAVs as a problem of alignment with societal values. Participants emphasised the safety challenges CAVs pose for VRUs, highlighting the need for thorough evaluation and regulatory oversight. Concerns were also raised about CAVs potentially marginalising vulnerable groups within society. Participants advocated for inclusive discussions and a justice-oriented approach to designing a comprehensive transport system that addresses these concerns. | arxiv:2405.20530 |
Real-world multi-modal problems are rarely solved by a single machine learning model, and often require multi-step computational plans that involve stitching together several models. Tool-augmented LLMs hold tremendous promise for automating the generation of such computational plans. However, the lack of standardized benchmarks for evaluating LLMs as planners for multi-step multi-modal tasks has prevented a systematic study of planner design decisions. Should LLMs generate a full plan in a single shot or step by step? Should they invoke tools directly with Python code or through structured data formats like JSON? Does feedback improve planning? To answer these questions and more, we introduce m&m's: a benchmark containing 4k+ multi-step multi-modal tasks involving 33 tools that include multi-modal models, (free) public APIs, and image processing modules. For each of these task queries, we provide automatically generated plans using this realistic toolset. We further provide a high-quality subset of 1,565 task plans that are human-verified and correctly executable. With m&m's, we evaluate 10 popular LLMs with 2 planning strategies (multi-step vs. step-by-step planning), 2 plan formats (JSON vs. code), and 3 types of feedback (parsing/verification/execution). Finally, we summarize takeaways from our extensive experiments. Our dataset and code are available on HuggingFace (https://huggingface.co/datasets/zixianma/mnms) and GitHub (https://github.com/raivnlab/mnms). | arxiv:2403.11085 |
This book includes my lectures, together with their problem sets and solutions, on 1) classical mechanics (one semester), 2) thermodynamics and statistical mechanics (one semester), and 3) quantum mechanics (one semester), which I have been giving to graduate students of theoretical physics at Annaba University since 2010. | arxiv:1603.03764 |
Chains of metallic nanoparticles sustain strongly confined surface plasmons with relatively low dielectric losses. To exploit these properties in applications such as waveguides, the fabrication of long chains of low disorder and a thorough understanding of the plasmon-mode properties, such as dispersion relations, are indispensable. Here, we use a wrinkled template for directed self-assembly of chains of gold nanoparticles. With this up-scalable method, chain lengths from two particles (140 nm) to 20 particles (1500 nm) and beyond can be fabricated. Electron energy-loss spectroscopy, supported by boundary element simulations, finite-difference time-domain calculations, and a simplified dipole coupling model, reveals in detail the evolution of a band of plasmonic waveguide modes from degenerate single-particle modes. In striking contrast to plasmonic rod-like structures, the plasmon band is confined in excitation energy, which allows light manipulation below the diffraction limit. The non-degenerate surface plasmon modes show suppressed radiative losses for efficient energy propagation over a distance of 1500 nm. | arxiv:1910.07242 |
Graph signal processing (GSP) provides a powerful framework for analysing complex, interconnected systems by modelling data as signals on graphs. Recent advances in GSP have enabled the learning of graph structures from observed signals, but these methods often struggle with time-varying systems and real-time applications. Adaptive filtering techniques, while effective for online learning, have seen limited application in graph topology estimation from a GSP perspective. To this end, we introduce AdaCGP, an online algorithm for adaptive estimation of the graph shift operator (GSO) from multivariate time series. The GSO is estimated from an adaptive time-vertex autoregressive model through recursive update formulae designed to address sparsity, shift-invariance and bias. Through simulations, we show that AdaCGP performs consistently well across various graph topologies, and achieves improvements in excess of 82% for GSO estimation compared to baseline adaptive vector autoregressive models. In addition, our online variable-splitting approach for enforcing sparsity enables near-perfect precision in identifying causal connections while maintaining low false positive rates upon optimisation of the forecast error. Finally, AdaCGP's ability to track changes in graph structure is demonstrated on recordings of ventricular fibrillation dynamics in response to an anti-arrhythmic drug. AdaCGP is shown to identify, in an intuitive way, the stability of critical conduction patterns that may be maintaining the arrhythmia, demonstrating its potential to support diagnosis and treatment strategies. | arxiv:2411.01567 |
Multilingual audio-text retrieval (ML-ATR) is a challenging task that aims to retrieve audio clips or multilingual texts from databases. However, existing ML-ATR schemes suffer from inconsistencies in instance similarity matching across languages. We theoretically analyze the inconsistency in terms of both multilingual modal alignment direction error and weight error, and propose a theoretical upper bound on the weight error for quantifying the inconsistency. Based on the analysis of this upper bound, we find that the inconsistency problem stems from the data distribution error caused by random sampling of languages. We propose a consistent ML-ATR scheme using 1-to-k contrastive learning and audio-English co-anchor contrastive learning, aiming to mitigate the negative impact of data distribution error on recall and consistency in ML-ATR. Experimental results on the translated AudioCaps and Clotho datasets show that our scheme achieves state-of-the-art performance on recall and consistency metrics for eight mainstream languages, including English. Our code will be available at https://github.com/atri-acl/atri-acl. | arxiv:2502.14627 |
An alternative method is described for determining the hyperbolic structure on a link complement, and some of its elementary consequences are examined. The method is particularly suited to alternating links. | arxiv:1108.0510 |
The simulation of fluid flow problems, specifically incompressible flows governed by the Navier-Stokes equations (NSE), holds fundamental significance in a range of scientific and engineering applications. Traditional numerical methods employed for solving these equations on three-dimensional (3D) meshes are commonly known for their moderate conservation properties, high computational intensity and substantial resource demands. Relying on its ability to capture the intrinsic geometric and topological properties of simplicial meshes, discrete exterior calculus (DEC) provides a discrete analog of differential forms and enables the discretization of partial differential equations (PDEs) on meshes. We present a hybrid discretization approach for the 3D incompressible Navier-Stokes equations based on DEC and the Fourier transform (FT). An existing conservative primitive-variable DEC discretization of the incompressible Navier-Stokes equations over surface simplicial meshes, developed by Jagad et al. [1], is employed in the planar dimensions, while a Fourier expansion is applied in the third dimension. The test cases of three-dimensional lid-driven cavity flow and the viscous three-dimensional Taylor-Green vortex (TGV) flow show that the simulation results obtained with this hybrid approach are comparable to the literature. | arxiv:2409.04731 |
Motivated by theoretically and experimentally observed structural phases with octagonal symmetry, we introduce a family of octagonal tilings composed of three prototiles. We define our tilings with respect to two non-negative integers, $m$ and $n$, so that the inflation factor of a given tiling is $\delta_{(m,n)} = m + n(1+\sqrt{2})$. As such, we show that our family consists of an infinite series of tilings which can be delineated into separate `cases' determined by the relationship between $m$ and $n$. Similarly, we present the primitive substitution rules, or decomposition, of our prototiles, along with the statistical properties of each case, demonstrating their dependence on these integers. | arxiv:2502.04133 |
We present here our study of the adiabatic quantum dynamics of a random Ising chain across its quantum critical point. The model investigated is an Ising chain in a transverse field with disorder present both in the exchange couplings and in the transverse field. The transverse-field term is proportional to a function $\Gamma(t)$ which, as in the Kibble-Zurek mechanism, is linearly reduced to zero in time with a rate $\tau^{-1}$, $\Gamma(t) = -t/\tau$, starting at $t=-\infty$ from the quantum disordered phase ($\Gamma=\infty$) and ending at $t=0$ in the classical ferromagnetic phase ($\Gamma=0$). We first analyze the distribution of the gaps, occurring at the critical point $\Gamma_c = 1$, which are relevant for breaking the adiabaticity of the dynamics. We then present extensive numerical simulations for the residual energy $E_{\rm res}$ and the density of defects $\rho_k$ at the end of the annealing, as a function of the annealing inverse rate $\tau$, for different lengths of the chain. Both the average $E_{\rm res}(\tau)$ and $\rho_k(\tau)$ are found to behave logarithmically for large $\tau$, but with different exponents: $[E_{\rm res}(\tau)/L]_{\rm av} \sim 1/\ln^{\zeta}(\tau)$ with $\zeta \approx 3.4$, and $[\rho_k(\tau)]_{\rm av} \sim 1/\ln^{2}(\tau)$. We propose a mechanism for the $1/\ln^{2}\tau$ behavior of $[\rho_k]_{\rm av}$ based on Landau-Zener tunneling theory and on a Fisher-type real-space renormalization-group analysis of the relevant gaps. The model proposed is therefore a paradigmatic example of how an adiabatic quantum computation can become very slow when disorder is at play, even in the absence of any source of frustration. | arxiv:0706.1832 |
Incorporating biological neuronal properties into artificial neural networks (ANNs) to enhance computational capabilities poses a formidable challenge in the field of machine learning. Inspired by recent findings indicating that dendrites adhere to quadratic integration rules for synaptic inputs, we propose a novel ANN model, the dendritic integration-based quadratic neural network (DIQNN). This model shows superior performance over traditional ANNs in a variety of classification tasks. To reduce the computational cost of DIQNN, we introduce the low-rank DIQNN, and we find that it can retain the performance of the original DIQNN. We further propose a margin to characterize the generalization error and theoretically prove that this margin increases monotonically during training. We show the consistency between generalization and our margin using numerical experiments. Finally, by integrating this margin into the loss function, the improvement of test accuracy is indeed accelerated. Our work contributes a novel, brain-inspired ANN model that surpasses traditional ANNs and provides a theoretical framework to analyze the generalization error in classification tasks. | arxiv:2307.13609 |
In this paper, we first derive two approximations of the achievable uplink rate with perfect/imperfect channel state information (CSI) in cell-free massive multi-input multi-output (MIMO) systems. These approximations are not only simple in form, but also converge to the classical bounds achieved in conventional massive MIMO systems where the base-station (BS) antennas are co-located. It is worth noting that the two approximations obtained with perfect CSI can be regarded as special cases of the two approximations obtained with imperfect CSI when the pilot sequence power becomes infinite. Moreover, the theoretical analysis shows that all obtained approximations with perfect/imperfect CSI have an asymptotic lower bound $\frac{\alpha}{2}\log_2 L$ thanks to the extra \emph{distance diversity} offered by massively distributed antennas, where $L$ is the number of BS antennas and the path-loss factor $\alpha > 2$, except in the free-space environment. Clearly, these results indicate that the cell-free massive MIMO system has a greater potential for spectral efficiency than the conventional massive MIMO system with its asymptotically tight bound $\log_2 L$. | arxiv:1805.10621 |
In 2000, Brightwell and Winkler characterised dismantlable graphs as the graphs $H$ for which the Hom-graph ${\rm Hom}(G,H)$, defined on the set of homomorphisms from $G$ to $H$, is connected for all graphs $G$. This shows that the reconfiguration version ${\rm Recon_{Hom}}(H)$ of the $H$-colouring problem, in which one must decide for a given $G$ whether ${\rm Hom}(G,H)$ is connected, is trivial if and only if $H$ is dismantlable. We prove a similar starting point for the reconfiguration version of the $H$-extension problem. Where ${\rm Hom}(G,H;p)$ is the subgraph of the Hom-graph ${\rm Hom}(G,H)$ induced by the $H$-colourings extending the $H$-precolouring $p$ of $G$, the reconfiguration version ${\rm Recon_{ext}}(H)$ of the $H$-extension problem asks, for a given $H$-precolouring $p$ of a graph $G$, whether ${\rm Hom}(G,H;p)$ is connected. We show that the graphs $H$ for which ${\rm Hom}(G,H;p)$ is connected for every choice of $(G,p)$ are exactly the ${\rm NU}$ graphs. This gives a new characterisation of ${\rm NU}$ graphs, a nice class of graphs that is important in the algebraic approach to the ${\rm CSP}$ dichotomy. We further give bounds on the diameter of ${\rm Hom}(G,H;p)$ for ${\rm NU}$ graphs $H$, and show that a shortest path between two vertices of ${\rm Hom}(G,H;p)$ can be found in parameterised polynomial time. We apply our results to the problem of shortest path reconfiguration, significantly extending recent results. | arxiv:2208.04071 |
The question of the relative importance of coherent structures and waves has long attracted a great deal of interest in astrophysical plasma turbulence research, with a more recent focus on kinetic-scale dynamics. Here we utilize high-resolution observational and simulation data to investigate the nature of waves and structures emerging in a weakly collisional, turbulent kinetic plasma. Observational results are based on in situ solar wind measurements from the Cluster and MMS spacecraft, and the simulation results are obtained from an externally driven, three-dimensional fully kinetic simulation. Using a set of novel diagnostic measures, we show that both the large-amplitude structures and the lower-amplitude background fluctuations preserve linear features of kinetic Alfven waves to order unity. This quantitative evidence suggests that the kinetic turbulence cannot be described as a mixture of mutually exclusive waves and structures but may instead be pictured as an ensemble of localized, anisotropic wave packets or "eddies" of varying amplitudes, which preserve certain linear wave properties during their nonlinear evolution. | arxiv:1806.05741 |
In this paper we prove a new formula for the coefficients of the cellular homology of real flag manifolds in terms of the heights of certain roots. In particular, for flag manifolds of type A, we get a very simple formula for these coefficients and an explicit expression for the first and second homology groups with integer coefficients. | arxiv:2009.05114 |
MRI scans provide valuable medical information; however, they also contain sensitive and personally identifiable information that needs to be protected. Whereas MRI metadata is easily sanitized, MRI image data is a privacy risk because it contains enough information to render highly realistic 3D visualizations of a patient's head, enabling malicious actors to possibly identify the subject by cross-referencing a database. Data anonymization and de-identification is concerned with ensuring the privacy and confidentiality of individuals' personal information. Traditional MRI de-identification methods remove privacy-sensitive parts (e.g. eyes, nose, etc.) from a given scan. This comes at the expense of introducing a domain shift that can throw off downstream analyses. In this work, we propose CP-MAE, a model that de-identifies the face by remodeling it (e.g. changing the face) rather than by removing parts, using masked autoencoders. CP-MAE outperforms all previous approaches in terms of downstream task performance as well as de-identification. With our method we are able to synthesize high-fidelity scans at resolutions up to $256^3$, compared to $128^3$ with previous approaches, which constitutes an eight-fold increase in the number of voxels. | arxiv:2310.15778 |
The symmetric energy-momentum tensor of the interaction of a collective of electric charges with an electromagnetic field is obtained. A system of covariant energy and momentum conservation equations, i.e. a system of equations for the collective motion of charged particles, is derived from this tensor. From this system, expressions for collective electrodynamic forces are derived. From the energy-momentum tensor, the tensor of collective electrodynamic forces is derived. | arxiv:2002.02756 |
We investigate the global structure of the advection-dominated accretion flow around a Schwarzschild black hole, where the accretion disc is threaded by toroidal magnetic fields. We consider the synchrotron radiative process as an effective cooling mechanism active in the flow. With this, we obtain global transonic accretion solutions by exploring a variety of boundary conditions and dissipation parameters, namely the accretion rate (${\dot m}$) and viscosity ($\alpha_B$). We find that, depending on the initial parameters, steady-state accretion flows can possess centrifugally supported shock waves. These global shock solutions exist even when the level of dissipation is relatively high. We study the properties of the shock waves and observe that the dynamics of the post-shock corona (hereafter PSC) is regulated by the flow parameters. Interestingly, we find that the shock solution disappears completely when the dissipation parameters exceed their critical values. We calculate the critical values of the viscosity parameter ($\alpha^{\rm cri}_B$) adopting the canonical values of the adiabatic index, $\gamma = 4/3$ (ultra-relativistic) and $1.5$ (semi-non-relativistic), and find that in the gas-pressure-dominated domain $\alpha^{\rm cri}_B \sim 0.4$ for $\gamma = 4/3$ and $\alpha^{\rm cri}_B \sim 0.27$ for $\gamma = 1.5$, respectively. We further show that global shock solutions are relatively more luminous compared to shock-free solutions. We have also calculated the synchrotron spectra for shocked solutions. When the shock is considered to be dissipative in nature, it has an important implication, as the energy available at the PSC can be utilized to power the outflowing matter escaping from the PSC. Towards this, we calculate the maximum shock luminosity and discuss the observational implications of our present formalism. | arxiv:1710.01112 |
The formation of massive black holes and their coevolution with host galaxies are pivotal areas of modern astrophysics. Spherical accretion onto a central point mass serves as a foundational framework in cosmological simulations, semianalytical models, and observational studies. This work extends the classical spherical accretion model by incorporating the gravitational potential of host galaxies, including contributions from stellar components and dark matter halos. Numerical solutions spanning parsec-scale to event-horizon-scale regimes reveal that the flow structure is highly sensitive to the mass and size of the dark matter halo. Adding low angular momentum to the accreting gas, we find that such flows resemble spherical Bondi accretion, with mass accretion rates converging towards the Bondi rate. Due to the presence of dark matter, the mass accretion rate is increased by a factor of more than ~100% in comparison to analogous hydrodynamic solutions. These findings underscore the critical role of stellar and dark matter gravitational potentials in shaping the dynamics and accretion rates of quasi-spherical flows, providing new insights into astrophysical accretion processes. | arxiv:2502.12101 |
We study duality transformations for two-dimensional sigma models with abelian chiral isometries and prove that generic such transformations are equivalent to integrated marginal perturbations by bilinears in the chiral currents, thus confirming a recent conjecture by Hassan and Sen formulated in the context of Wess-Zumino-Witten models. Specific duality transformations instead give rise to coset models plus free bosons. | arxiv:hep-th/9301005 |
We review the prospects for central exclusive diffractive (CED) production of Higgs bosons in the SM with a fourth generation of fermions at the LHC, using forward proton detectors installed at 220 m and 420 m distance around ATLAS and/or CMS. We discuss the determination of Higgs spin-parity and coupling structures at the LHC and show that the forward proton mode would provide critical information on the CP properties of the Higgs bosons. | arxiv:1009.2680 |
We introduce a new self-interacting random walk on the integers in a dynamic random environment and show that it converges to a pure diffusion in the scaling limit. We also find a lower bound on the diffusion coefficient in some special cases. With minor changes, the same argument can be used to prove the scaling limit of the corresponding walk in Z^d. | arxiv:math/0611734 |
We propose a new framework to design controllers for high-dimensional nonlinear systems. The control is designed through the iterative linear quadratic regulator (iLQR), an algorithm that computes the control by iteratively applying the linear quadratic regulator to the local linearization of the system at each time step. Since iLQR is computationally expensive, we propose to first construct reduced-order models (ROMs) of the high-dimensional nonlinear system. We derive nonlinear ROMs via projection, where the basis is computed via balanced truncation (BT) and LQG balanced truncation (LQG-BT). Numerical experiments are performed on a semi-discretized nonlinear Burgers equation. We find that the iLQR algorithm produces good control on ROMs constructed by either BT or LQG-BT, with BT-ROM based controllers slightly outperforming LQG-BT for very low-dimensional systems. | arxiv:2012.02305 |
We show that the violation of the Klyachko-Can-Binicioglu-Shumovsky [Phys. Rev. Lett. {\bf 101}, 020403 (2008)] pentagram-like inequality can exceed $\sqrt{5}$, provided that exclusive events do not have to be comeasurable and that one uses bosonic systems which exhibit bunching effects. We also show that in this case one can find three pairwise exclusive events whose sum of probabilities is 3/2. | arxiv:1211.6907 |
[Abridged] The primordial scalar power spectrum is well constrained on large scales, primarily by observations of the anisotropies in the cosmic microwave background (CMB). Over the last few years, it has been recognized that a sharp rise in power on small scales will lead to enhanced formation of primordial black holes (PBHs) and also generate secondary gravitational waves (GWs) of higher and possibly detectable amplitudes. It is well understood that scalar power spectra with COBE-normalized amplitude on the CMB scales and enhanced amplitudes on smaller scales can be generated due to deviations from slow roll in single, canonical scalar field models of inflation. In fact, an epoch of so-called ultra slow roll inflation can lead to the desired amplification. We find that scenarios that lead to ultra slow roll can be broadly classified into two types: one wherein there is a brief departure from inflation (a scenario referred to as punctuated inflation) and another wherein such a departure does not arise. We consider a set of single-field inflationary models involving the canonical scalar field that lead to ultra slow roll and punctuated inflation, and examine the formation of PBHs as well as the generation of secondary GWs in these models. Apart from considering specific models, we reconstruct potentials from certain functional choices of the first slow roll parameter leading to ultra slow roll and punctuated inflation, and investigate their observational signatures. In addition to the secondary tensor power spectrum, we calculate the secondary tensor bispectrum in the equilateral limit in these scenarios. Moreover, we calculate the inflationary scalar bispectrum that arises in all the cases and discuss the imprints of the scalar non-Gaussianities on the extent of PBHs formed and the amplitude of the secondary GWs. | arxiv:2008.12202 |
we analyze the embedding dimension of a normal weighted homogeneous surface singularity, and more generally, the poincar \ ' e series of the minimal set of generators of the graded algebra of regular functions, provided that the link of the germs is a rational homology sphere. in the case of several sub - families we provide explicit formulas in terms of the seifert invariants ( generalizing results of wagreich and vandyke ), and we also provide key examples showing that, in general, these invariants are not topological. we extend the discussion to the case of splice - - quotient singularities with star - - shaped graph as well. | arxiv:0910.4035 |
using the $ 1 / n _ c $ expansion of qcd we analyze the spectrum of positive parity resonances with strangeness $ s = 0, - 1, - 2 $ and - 3 in the 2 - 3 gev mass region, supposed to belong to the $ [ \ textbf { 56 }, 4 ^ + ] $ multiplet. the mass operator is similar to that of $ [ \ textbf { 56 }, 2 ^ + ] $, previously studied in the literature. the analysis of the latter is revisited. in the $ [ \ textbf { 56 }, 4 ^ + ] $ multiplet we find that the spin - spin term brings the dominant contribution and that the spin - orbit term is entirely negligible in the hyperfine interaction, in agreement with constituent quark model results. more data are strongly desirable, especially in the strange sector in order to fully exploit the power of this approach. | arxiv:hep-ph/0409261 |
we propose and design recommendation systems that incentivize efficient exploration. agents arrive sequentially, choose actions and receive rewards, drawn from fixed but unknown action - specific distributions. the recommendation system presents each agent with actions and rewards from a subsequence of past agents, chosen ex ante. thus, the agents engage in sequential social learning, moderated by these subsequences. we asymptotically attain optimal regret rate for exploration, using a flexible frequentist behavioral model and mitigating rationality and commitment assumptions inherent in prior work. we suggest three components of effective recommendation systems : independent focus groups, group aggregators, and interlaced information structures. | arxiv:1811.06026 |
in this paper, we give a method to evaluate minimum numbers of dehn colors for knots by using symmetric local biquandle cocycle invariants. we give answers to some questions arising as a consequence of our previous paper [ 6 ]. in particular, we show that there exist knots which are distinguished by minimum numbers of dehn colors. | arxiv:2501.09942 |
the use of low - resolution analog - to - digital converters ( adcs ) is considered to be an effective technique to reduce the power consumption and hardware complexity of wireless transceivers. however, in systems with low - resolution adcs, obtaining channel state information ( csi ) is difficult due to significant distortions in the received signals. the primary motivation of this paper is to show that learning techniques can mitigate the impact of csi unavailability. we study the blind detection problem in multiple - input - multiple - output ( mimo ) systems with low - resolution adcs using learning approaches. two methods, which employ a sequence of pilot symbol vectors as the initial training data, are proposed. the first method exploits the use of a cyclic redundancy check ( crc ) to obtain more training data, which helps improve the detection accuracy. the second method is based on the perspective that the to - be - decoded data can itself assist the learning process, so no further training information is required except the pilot sequence. for the case of 1 - bit adcs, we provide a performance analysis of the vector error rate for the proposed methods. based on the analytical results, a criterion for designing transmitted signals is also presented. simulation results show that the proposed methods outperform existing techniques and are also more robust. | arxiv:1906.04090 |
the gauge - singlet right - handed neutrinos would be essential to explain tiny masses of active neutrinos. we consider the effective field theory of the standard model extended with these fields under the assumption that neutrinos are dirac particles. in this framework, we provide a comprehensive study for the phenomenological consequences of various dimension six interactions employing various high and low energy observables. these include the neutrino mass itself, constraints from electroweak precision test and collider searches for lepton or jet plus missing energy, as well as decays of proton, meson, tau and top. we also study their astrophysical and cosmological implications for stellar cooling and relativistic degrees of freedom. | arxiv:2411.17414 |
we consider a class of random block matrix models in $ d $ dimensions, $ d \ ge 1 $, motivated by the study of the vibrational density of states ( dos ) of soft spheres near the isostatic point. the contact networks of average degree $ z = z _ 0 + \ zeta $ are represented by random $ z _ 0 $ - regular graphs ( only the circle graph in $ d = 1 $ with $ z _ 0 = 2 $ ) to which erd \ " os - renyi graphs having a small average degree $ \ zeta $ are superimposed. in the case $ d = 1 $, for $ \ zeta $ small the shifted kesten - mckay dos with parameter $ z $ is a mean - field solution for the dos. numerical simulations in the $ z _ 0 = 2 $ model, which is the $ k = 1 $ newman - watts small - world model, and in the $ z _ 0 = 3 $ model lead us to conjecture that for $ \ zeta \ to 0 $ the cumulative function of the dos converges uniformly to that of the shifted kesten - mckay dos, in an interval $ [ 0, \ omega _ 0 ] $, with $ \ omega _ 0 < \ sqrt { z _ 0 - 1 } + 1 $. for $ 2 \ le d \ le 4 $, we introduce a cutoff parameter $ k _ d \ le 0. 5 $ modeling sphere repulsion. the case $ k _ d = 0 $ is the random elastic network case, with the dos close to the marchenko - pastur dos with parameter $ t = \ frac { z } { d } $. for $ k _ d $ large the dos is close for small $ \ omega $ to the shifted kesten - mckay dos with parameter $ t = \ frac { z } { d } $ ; in the isostatic case the dos has around $ \ omega = 0 $ the expected plateau. the boson peak frequency in $ d = 3 $ with $ k _ 3 $ large is close to the one found in molecular dynamics simulations for $ z = 7 $ and $ 8 $. | arxiv:2001.02622 |
we present a method to calculate the $ x $ - - space expressions of massless or massive operator matrix elements in qcd and qed containing local composite operator insertions, depending on the discrete mellin index $ n $, directly, without computing the mellin - - space expressions in explicit form analytically. here $ n $ belongs either to the even or odd positive integers. the method is based on the resummation of the operators into effective propagators and relies on an analytic continuation between two continuous variables. we apply it to iterated integrals as well as to the more general case of iterated non - - iterative integrals, generalizing the former ones. the $ x $ - - space expressions are needed to derive the small - - $ x $ behaviour of the respective quantities, which usually cannot be accessed in $ n $ - - space. we illustrate the method for different ( iterated ) alphabets, including non - - iterative $ _ 2f _ 1 $ and elliptic structures, as examples. these structures occur in different massless and massive three - - loop calculations. likewise the method applies even to the analytic closed form solutions of more general cases of differential equations which do not factorize into first - - order factors. | arxiv:2303.05943 |
the energetic exclusive two - body nonleptonic decays of b mesons are investigated in the framework of the relativistic quark model within the factorization approximation. the heavy quark expansion is used for the calculation of form factors. the obtained results are in agreement with available experimental data. | arxiv:hep-ph/9701218 |
the object of this interview is the history of the large hadron collider in the lep tunnel at cern, from first ideas to the discovery of the brout - englert - higgs boson, seen from the point of view of a member of cern scientific committees, of the cern council, and a former director general of cern in the years of machine construction. | arxiv:1705.04951 |
the mesoscopic nature of the atomic nucleus gives rise to a wide array of macroscopic and microscopic phenomena. the size of the nucleus is a window into this duality : while the charge radii globally scale as $ a ^ { 1 / 3 } $, their evolution across isotopic chains reveals unanticipated structural phenomena [ 1 - 3 ]. the most ubiquitous of these is perhaps the odd - even staggering ( oes ) [ 4 ] : isotopes with an odd number of neutrons are usually smaller in size than the trend of their even - neutron neighbours suggests. this oes effect varies with the number of protons and neutrons and poses a significant challenge for nuclear theory [ 5 - 7 ]. here, we examine this problem with new measurements of the charge radii of short - lived copper isotopes up to the very exotic $ ^ { 78 } $ cu $ ( z = 29, n = 49 ) $, produced at only 20 ions / s, using the highly - sensitive collinear resonance ionisation spectroscopy ( cris ) method at isolde - cern. due to the presence of a single proton outside of the closed z = 28 shell, these measurements provide crucial insights into the single - particle proton structure and how this affects the charge radii. we observe an unexpected reduction in the oes for isotopes approaching the $ n = 50 $ shell gap. to describe the data, we applied models based on nuclear density functional theory [ 2, 8 ] ( dft ) and ab - initio valence - space in - medium similarity renormalization group ( vs - imsrg ) theory [ 9, 10 ]. through these comparisons, we demonstrate a relation between the global behavior of charge radii and the saturation density of nuclear matter, and show that the local charge radii variations, which reflect the many - body polarization effects due to the odd neutron, naturally emerge from the vs - imsrg calculations. | arxiv:1911.08765 |
we calculate the transverse effective charges of zincblende compound semiconductors using harrison ' s tight - binding model to describe the electronic structure. our results, which are essentially exact within the model, are found to be in much better agreement with experiment than previous perturbation - theory estimates. efforts to improve the results by using more sophisticated variants of the tight - binding model were actually less successful. the results underline the importance of including quantities that are sensitive to the electronic wavefunctions, such as the effective charges, in the fitting of tight - binding models. | arxiv:mtrl-th/9601002 |
we discuss the peak at 400 cm - 1, which is seen in c - axis conductivity spectra of underdoped high temperature superconductors. the model of van der marel and munzar, where the peak is the result of a transverse plasmon arising from a low frequency conductivity mode between the closely spaced planes, fits our data well. within the model we find that the temperature dependence of the peak amplitude is controlled by in - plane scattering processes. the temperature range where the mode can be seen coincides with ts, the spin gap temperature, which is lower than t *, the pseudogap temperature. as a function of temperature, the amplitude of the mode tracks the amplitude of the 41 mev neutron resonance and the spin lattice relaxation time, suggesting to us that the mode is controlled by magnetic processes and not by superconducting fluctuations which have temperature scale much closer to tc, the superconducting transition temperature. | arxiv:cond-mat/0209371 |
we investigate response generation for multi - turn dialogue in generative - based chatbots. existing generative models based on rnns ( recurrent neural networks ) usually employ the last hidden state to summarize the sequences, which makes models unable to capture the subtle variability observed in different dialogues and cannot distinguish the differences between dialogues that are similar in composition. in this paper, we propose a pseudo - variational gated recurrent unit ( pvgru ) component without posterior knowledge through introducing a recurrent summarizing variable into the gru, which can aggregate the accumulated distribution variations of subsequences. pvgru can perceive the subtle semantic variability through summarizing variables that are optimized by the devised distribution consistency and reconstruction objectives. in addition, we build a pseudo - variational hierarchical dialogue ( pvhd ) model based on pvgru. experimental results demonstrate that pvgru can broadly improve the diversity and relevance of responses on two benchmark datasets. | arxiv:2212.09086 |
we consider the elasticity problem in a heterogeneous domain with contact on multiple periodic open cracks. the contact is described by the signorini and coulomb - friction conditions. the problem is non - linear : the dissipative functional depends on the unknown solution, and the existence of a solution for a fixed period of the structure is usually proven by a fixed - point argument in sobolev spaces with slightly higher regularity, $ h ^ { 1 + \ alpha } $. we rescaled the norm, trace, jump and korn inequalities in fractional sobolev spaces with positive and negative exponents, using the unfolding technique introduced by griso, cioranescu and damlamian. we then proved the existence and uniqueness of the solution for fixed friction and period, established the continuous dependence of the solution of the problem with coulomb ' s friction on the given friction, and estimated the solution using a fixed - point theorem. however, we were not able to pass to the strong limit in the frictional dissipative term. for this reason, we regularized the problem by adding a fourth - order term, which increased the regularity of the solution and allowed passing to the limit. this can be interpreted as micro - polar elasticity. | arxiv:1811.06615 |
universality in the ion flux to the jet outer - wall is observed in outerwall limiter mounted langmuir probe ( olp ) time - series across a large range of plasma current and line - averaged density during ohmically heated horizontal target l - mode plasmas. the mean, m, and the standard deviation, sigma, of the ion - saturation current measured by the olp show systematic variation with plasma current and density. both increase as either plasma current decreases and / or density increases. upon renormalization, achieved by subtraction of m and rescaling by sigma, the probability distribution functions ( pdfs ) of each signal collapse approximately onto a single curve. the shape of the curve deviates from a distribution in the tail of the pdf and is better described by a log - normal distribution. the collapse occurs over 4 decades of the ordinate which, given the wide parameter space over which the data spans, is a strong indication of universality. | arxiv:1607.07588 |
due to spectrum scarcity, the coexistence of radar and wireless communication has gained substantial research interest recently. among many scenarios, the heterogeneously distributed joint radar - communication system is promising due to its flexibility and compatibility with existing architectures. in this paper, we focus on a heterogeneous radar and communication network ( hrcn ), which consists of various generic radars for multiple target tracking ( mtt ) and wireless communications for multiple users. we aim to improve the mtt performance and maintain good throughput levels for communication users by a well - designed resource allocation. the problem is formulated as a bayesian cram \ ' er - rao bound ( crb ) based minimization subject to resource budgets and throughput constraints. the formulated nonconvex problem is solved based on an alternating descent - ascent approach. numerical results demonstrate the efficacy of the proposed allocation scheme for this heterogeneous network. | arxiv:2107.13838 |
recognizing the same faces with and without masks is important for ensuring consistent identification in security, access control, and public safety. this capability is crucial in scenarios like law enforcement, healthcare, and surveillance, where accurate recognition must be maintained despite facial occlusion. this research focuses on the challenge of recognizing the same faces with and without masks by employing cosine similarity as the primary technique. with the increased use of masks, traditional facial recognition systems face significant accuracy issues, making it crucial to develop methods that can reliably identify individuals in masked conditions. for that reason, this study proposes a masked - unmasked face matching model ( mufm ). this model employs transfer learning using the visual geometry group ( vgg16 ) model to extract significant facial features, which are subsequently classified utilizing the k - nearest neighbors ( k - nn ) algorithm. the cosine similarity metric is employed to compare masked and unmasked faces of the same individuals. this approach represents a novel contribution, as the task of recognizing the same individual with and without a mask using cosine similarity has not been previously addressed. by integrating these advanced methodologies, the research demonstrates effective identification of individuals despite the presence of masks, addressing a significant limitation in traditional systems. data collection is another essential part of this work : an image dataset was collected and prepared from three different sources, some of which contain real - world images, giving the study comprehensive coverage. the dataset combines three existing collections of masked and unmasked images of the same faces. | arxiv:2501.04444 |
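The cosine-similarity comparison at the heart of the matching step can be sketched as follows; the feature vectors below are hypothetical stand-ins for VGG16 embeddings, and the 0.9 decision threshold is an assumption for illustration, not a value from the paper:

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two feature vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Placeholder embeddings standing in for VGG16 features of a masked and
# an unmasked image of the same person (hypothetical values).
masked = np.array([0.9, 0.1, 0.4])
unmasked = np.array([0.8, 0.2, 0.5])

score = cosine_similarity(masked, unmasked)
# Declare a match when similarity exceeds a chosen threshold (assumption).
is_same_person = score > 0.9
```

Because cosine similarity depends only on the angle between embeddings, it is insensitive to differences in overall feature magnitude between the masked and unmasked images.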
we predict dilepton invariant - mass spectra for central 5. 5 atev pb - pb collisions at lhc. hadronic emission in the low - mass region is calculated using in - medium spectral functions of light vector mesons within hadronic many - body theory. in the intermediate - mass region thermal radiation from the quark - gluon plasma, evaluated perturbatively with hard - thermal loop corrections, takes over. an important source over the entire mass range are decays of correlated open - charm hadrons, rendering the nuclear modification of charm and bottom spectra a critical ingredient. | arxiv:0706.4443 |
marked chain - order polytopes are convex polytopes constructed from a marked poset, which give a discrete family relating a marked order polytope with a marked chain polytope. in this paper, we consider the gelfand - tsetlin poset of type a, and realize the associated marked chain - order polytopes as newton - okounkov bodies of the flag variety. our realization connects previous realizations of gelfand - tsetlin polytopes and feigin - fourier - littelmann - vinberg polytopes as newton - okounkov bodies in a uniform way. as an application, we prove that the flag variety degenerates into the irreducible normal projective toric variety corresponding to a marked chain - order polytope. we also construct a specific basis of an irreducible highest weight representation which is naturally parametrized by the set of lattice points in a marked chain - order polytope. | arxiv:2104.09929 |
we find the exact bellman function associated to the level - sets of sparse operators acting on characteristic functions. | arxiv:2403.06751 |
ngc1052 - df2 and ngc1052 - df4 are ultra - diffuse galaxies ( udgs ) that were found to have extremely low velocity dispersions, indicating that they have little or no dark matter. both galaxies host anomalously luminous globular cluster ( gc ) systems, with a peak magnitude of their gc luminosity function ( gclf ) that is $ \ sim1. 5 $ magnitudes brighter than the near - universal value of $ m _ v \ approx - 7. 5 $. here we present an analysis of the joint gclf of the two galaxies, making use of new hst photometry and keck spectroscopy, and a recently improved distance measurement. we apply a homogeneous photometric selection method to the combined gc sample of df2 and df4. the new analysis shows that the peak of the combined gc luminosity function remains at $ m _ v \ approx - 9 $ mag. in addition, we find a subpopulation of less luminous gcs at $ m _ v \ approx - 7. 5 $ mag, where the near - universal gclf peak is located. the number of gcs in the magnitude range of $ - 5 $ to $ - 8 $ is $ 7. 1 _ { - 4. 34 } ^ { + 7. 33 } $ in df2 and $ 8. 6 _ { - 4. 83 } ^ { + 7. 74 } $ in df4, similar to that expected from other galaxies of the same luminosity. the total gc number between $ m _ v $ of $ - 5 $ to $ - 11 $ is $ 18. 5 _ { - 4. 42 } ^ { + 8. 99 } $ for df2 and $ 18. 6 _ { - 4. 92 } ^ { + 9. 37 } $ for df4, calculated from the background - subtracted gclf. the updated total number of gcs in both galaxies is $ 37 ^ { + 11. 08 } _ { - 6. 54 } $. the number of gcs does not scale with the halo mass in either df2 or df4, suggesting that $ n _ { gc } $ is not directly determined by the merging of halos. | arxiv:2010.07324 |
the current state of affairs in big - bang nucleosynthesis is reviewed and controversies are discussed. the author concludes that the glass is half full. | arxiv:astro-ph/9610158 |
they can make computations easier. the lu decomposition factors a matrix as a product of a lower triangular matrix ( l ) and an upper triangular matrix ( u ). once this decomposition is calculated, linear systems can be solved more efficiently by a simple technique called forward and back substitution. likewise, inverses of triangular matrices are algorithmically easier to calculate. gaussian elimination is a similar algorithm ; it transforms any matrix to row echelon form. both methods proceed by multiplying the matrix by suitable elementary matrices, which correspond to permuting rows or columns and adding multiples of one row to another row. singular value decomposition expresses any matrix a as a product udv∗, where u and v are unitary matrices and d is a diagonal matrix. the eigendecomposition or diagonalization expresses a as a product vdv−1, where d is a diagonal matrix and v is a suitable invertible matrix. if a can be written in this form, it is called diagonalizable. more generally, and applicable to all matrices, the jordan decomposition transforms a matrix into jordan normal form, that is to say matrices whose only nonzero entries are the eigenvalues λ1 to λn of a, placed on the main diagonal and possibly entries equal to one directly above the main diagonal. given the eigendecomposition, the nth power of a ( that is, n - fold iterated matrix multiplication ) can be calculated via $ \mathbf{a}^{n} = ( \mathbf{v} \mathbf{d} \mathbf{v}^{-1} )^{n} = \mathbf{v} \mathbf{d} \mathbf{v}^{-1} \, \mathbf{v} \mathbf{d} \mathbf{v}^{-1} \cdots \mathbf{v} \mathbf{d} \mathbf{v}^{-1} = \mathbf{v} \mathbf{d}^{n} \mathbf{v}^{-1} $, and the power of a diagonal matrix can be calculated by taking the corresponding powers of the diagonal entries, which is much easier than doing the exponentiation for a instead.
this can be used to compute the matrix exponential ea, a need frequently arising in solving linear differential equations, matrix logarithms, and square roots of matrices. | https://en.wikipedia.org/wiki/Matrix_(mathematics) |
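The eigendecomposition route to matrix powers described above can be sketched with NumPy; the small matrix below is an illustrative example, chosen to be diagonalizable:

```python
import numpy as np

# Small diagonalizable matrix (illustrative values).
a = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# Eigendecomposition: a = v @ diag(eigvals) @ inv(v).
eigvals, v = np.linalg.eig(a)

n = 5
# a^n = v d^n v^{-1}: only the diagonal entries are raised to the n-th power.
a_power = v @ np.diag(eigvals ** n) @ np.linalg.inv(v)

# Agrees with direct repeated multiplication.
assert np.allclose(a_power, np.linalg.matrix_power(a, n))
```

For a single power this saves little, but once the eigendecomposition is computed, every further power (or the matrix exponential) only requires operating on the diagonal entries.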
in this paper, we introduce and study a class of pseudo - differential operators on the lattice $ \ mathbb { z } ^ n $. more precisely, we consider a weighted symbol class $ m _ { \ rho, \ lambda } ^ m ( \ mathbb { z } ^ n \ times \ mathbb { t } ^ n ), m \ in \ mathbb { r } $ associated to a suitable weight function $ \ lambda $ on $ \ mathbb { z } ^ n $. we study elements of the symbolic calculus for pseudo - differential operators associated with $ m _ { \ rho, \ lambda } ^ m ( \ mathbb { z } ^ n \ times \ mathbb { t } ^ n ) $ by deriving formulae for the composition, adjoint, and transpose. we define the notion of $ m $ - ellipticity for symbols belonging to $ m _ { \ rho, \ lambda } ^ m ( \ mathbb { z } ^ n \ times \ mathbb { t } ^ n ) $ and construct the parametrix of $ m $ - elliptic pseudo - differential operators. further, we investigate the minimal and maximal extensions for $ m $ - elliptic pseudo - differential operators and show that they coincide on $ \ ell ^ 2 ( \ mathbb { z } ^ n ) $ subject to the $ m $ - ellipticity of symbols. we also determine the domains of the minimal and maximal operators. finally, we discuss fredholmness and compute the index of $ m $ - elliptic pseudo - differential operators on $ \ mathbb { z } ^ n $. | arxiv:2111.10224 |
the skew mean curvature flow ( smcf ), which origins from the study of fluid dynamics, describes the evolution of a codimension two submanifold along its binormal direction. we study the basic properties of the smcf and prove the existence of a short - time solution to the initial value problem of the smcf of compact surfaces in euclidean space $ \ mathbb { r } ^ 4 $. a sobolev - type embedding theorem for the second fundamental forms of two dimensional surfaces is also proved, which might be of independent interest. | arxiv:1502.04525 |
in the k - nearest neighbor algorithm ( k - nn ), the determination of classes for test instances is usually performed via a majority vote system, which may ignore the similarities among data. in this research, the researcher proposes an approach to fine - tune the selection of neighbors to be passed to the majority vote system through the construction of a random n - dimensional hyperstructure around the test instance by introducing a new threshold parameter. the accuracy of the proposed k - nn algorithm is 85. 71 %, while the accuracy of the conventional k - nn algorithm is 80. 95 % when performed on the haberman ' s cancer survival dataset, and 94. 44 % for the proposed k - nn algorithm, compared to the conventional algorithm ' s 88. 89 % accuracy score on the seeds dataset. the proposed k - nn algorithm is also on par with the conventional support vector machine algorithm ' s accuracy on the banknote authentication and iris datasets, even surpassing the accuracy of the support vector machine on the seeds dataset. | arxiv:1906.04559 |
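The idea of restricting which neighbors reach the majority vote can be sketched as below; the distance-radius parameter `r` and the 1-NN fallback are illustrative assumptions, not the paper's exact hyperstructure construction:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=5, r=None):
    """k-NN where, if a radius r is given, only the k nearest neighbours
    lying inside the radius-r ball around x enter the majority vote."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = list(np.argsort(d)[:k])
    if r is not None:
        idx = [i for i in idx if d[i] <= r]
        if not idx:  # no neighbour close enough: fall back to the single nearest
            idx = [int(np.argmin(d))]
    votes = Counter(y_train[i] for i in idx)
    return votes.most_common(1)[0][0]

X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [5.0, 5.0], [5.0, 6.0]])
y = np.array([0, 0, 0, 1, 1])
pred = knn_predict(X, y, np.array([0.2, 0.2]), k=5, r=2.0)
```

With `r=2.0` only the three nearby class-0 points vote, so distant points cannot dilute the decision even when `k` exceeds the local cluster size.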
guided wave optics, including most prominently fiber optics and integrated photonics, very often considers only one or very few spatial modes of the waveguides. despite being known and utilized for decades, multi - mode guided wave optics is currently rapidly increasing in parallel with technological improvements and better simulation tools. the physics of multi - mode interactions are usually driven by some initial energy distribution in a number of spatial modes. in this work we introduce how, with free - space input beams having space - time couplings, the different modes can be excited with different complex frequency or time profiles. we cover fundamentals, the coupling with a few simple space - time aberrations, different waveguides, and a number of technical nuances. this concept of space - time initial conditions in multi - mode waveguides will provide yet another tool to study the rich nonlinear interactions in such systems. | arxiv:2308.14445 |
we consider a class of hamiltonians describing a fermion field coupled to a boson field. the interaction kernels are assumed bounded in the fermionic momentum variable and decaying like $ | q | ^ { - p } $ for large boson momenta $ q $. a realistic physical case would be $ p = \ frac12 $. we impose a spatial cutoff and uv - renormalise the resulting hamiltonian by subtracting its ground state energy. our renormalisation procedure works for the physically realistic value $ p = \ frac12 $ in spatial dimensions $ 1 $ and $ 2 $ and for $ p > \ frac34 $ in spatial dimension $ d = 3 $. | arxiv:2103.13770 |
we study the symmetries of non - relativistic systems with an emphasis on applications to the fractional quantum hall effect. a source for the energy current of a galilean system is introduced and the non - relativistic diffeomorphism invariance studied in previous work is enhanced to a full spacetime symmetry, allowing us to derive a number of ward identities. these symmetries are smooth in the massless limit of the lowest landau level. we develop a formalism for newton - cartan geometry with torsion to write these ward identities in a covariant form. previous results on the connection between hall viscosity and hall conductivity are reproduced. | arxiv:1407.1252 |
the replication mechanism resolves some challenges with big data such as data durability, data access, and fault tolerance. yet, replication itself gives birth to another challenge known as the consistency in distributed systems. scalability and availability are the challenging criteria on which the replication is based upon in distributed systems which themselves require the consistency. consistency in distributed computing systems has been employed in three different applicable fields, such as system architecture, distributed database, and distributed systems. consistency models based on their applicability could be sorted from strong to weak. our goal is to propose a novel viewpoint to different consistency models utilized in the distributed systems. this research proposes two different categories of consistency models. initially, consistency models are categorized into three groups of data - centric, client - centric and hybrid models. each of which is then grouped into three subcategories of traditional, extended, and novel consistency models. consequently, the concepts and procedures are expressed in mathematical terms, which are introduced in order to present our models ' behavior without implementation. moreover, we have surveyed different aspects of challenges with respect to the consistency i. e., availability, scalability, security, fault tolerance, latency, violation, and staleness, out of which the two latter i. e. violation and staleness, play the most pivotal roles in terms of consistency and trade - off balancing. finally, the contribution extent of each of the consistency models and the growing need for them in distributed systems are investigated. | arxiv:1902.03305 |
in this paper, we show that every $ 8 $ - dimensional closed riemannian manifold with $ c ^ \ infty $ - generic metrics admits a smooth minimal hypersurface. this generalizes previous results by n. smale and chodosh - liokumovich - spolaor. in contrast to their local perturbation techniques, our construction is based on a global perturbation argument and a novel geometric invariant which counts singular points with suitable weights. | arxiv:2012.05401 |
in this paper, we present power emulation, a novel design paradigm that utilizes hardware acceleration for the purpose of fast power estimation. power emulation is based on the observation that the functions necessary for power estimation ( power model evaluation, aggregation, etc. ) can be implemented as hardware circuits. therefore, we can enhance any given design with " power estimation hardware ", map it to a prototyping platform, and exercise it with any given test stimuli to obtain power consumption estimates. our empirical studies with industrial designs reveal that power emulation can achieve significant speedups ( 10x to 500x ) over state - of - the - art commercial register - transfer level ( rtl ) power estimation tools. | arxiv:0710.4742 |
in this paper we present polifl, a decentralized, edge - based framework that supports heterogeneous privacy policies for federated learning. we evaluate our system on three use cases that train models with sensitive user data collected by mobile phones - predictive text, image classification, and notification engagement prediction - on a raspberry pi edge device. we find that polifl is able to perform accurate model training and inference within reasonable resource and time budgets while also enforcing heterogeneous privacy policies. | arxiv:2003.06612 |
biological and artificial neural systems are composed of many local processors, and their capabilities depend upon the transfer function that relates each local processor ' s outputs to its inputs. this paper uses a recent advance in the foundations of information theory to study the properties of local processors that use contextual input to amplify or attenuate transmission of information about their driving inputs. this advance enables the information transmitted by processors with two distinct inputs to be decomposed into those components unique to each input, that shared between the two inputs, and that which depends on both though it is in neither, i. e. synergy. the decompositions that we report here show that contextual modulation has information processing properties that contrast with those of all four simple arithmetic operators, that it can take various forms, and that the form used in our previous studies of artificial neural nets composed of local processors with both driving and contextual inputs is particularly well - suited to provide the distinctive capabilities of contextual modulation under a wide range of conditions. we argue that the decompositions reported here could be compared with those obtained from empirical neurobiological and psychophysical data under conditions thought to reflect contextual modulation. that would then shed new light on the underlying processes involved. finally, we suggest that such decompositions could aid the design of context - sensitive machine learning algorithms. | arxiv:1803.05897 |
in a recent article, krapivsky and redner ( j. stat. mech. 093208 ( 2018 ) ) established that the distribution of the first hitting times for a diffusing particle subject to hitting an absorber is independent of the direction of the external flow field. in the present paper, we build upon this observation and investigate when the conditioning on the diffusion leads to a process that is totally independent of the flow field. for this purpose, we adopt the langevin approach, or more formally the theory of conditioned stochastic differential equations. this technique allows us to derive a large variety of stochastic processes : in particular, we introduce a new kind of brownian bridge ending at two different final points and calculate its fundamental probabilities. this method is also very well suited for generating statistically independent paths. numerical simulations illustrate our findings. | arxiv:1909.12522 |
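The conditioned-SDE viewpoint in the entry above lends itself to direct simulation: conditioning Brownian motion to end at a fixed point adds the drift $(b - x)/(T - t)$ to the Langevin equation. The sketch below is a generic Euler-Maruyama discretization of the classical single-endpoint bridge, not code from the paper (which constructs a more general bridge ending at two different final points); the step count and seed are arbitrary illustrative choices.

```python
import random

def brownian_bridge(b, T=1.0, n=1000, seed=0):
    """Euler-Maruyama path of a Brownian bridge pinned at X_T = b,
    simulated via the conditioned-SDE drift (b - x) / (T - t)."""
    rng = random.Random(seed)
    dt = T / n
    x, path = 0.0, [0.0]
    for i in range(n - 1):  # stop one step early: the drift diverges at t = T
        t = i * dt
        x += (b - x) / (T - t) * dt + rng.gauss(0.0, dt ** 0.5)
        path.append(x)
    path.append(b)  # the bridge is pinned at its terminal point by construction
    return path
```

Because the drift scales like $1/(T - t)$, the last step is handled by pinning the endpoint directly, a common discretization choice; generating many paths with different seeds gives the statistically independent samples the abstract alludes to.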
the following work presents a generalized ( extended ) finite element formulation for the advection - diffusion equation. using enrichment functions that represent the exponential nature of the exact solution, smooth numerical solutions are obtained for problems with steep gradients and high peclet numbers ( up to pe = 25 ) in one and two - dimensions. as opposed to traditional stabilized methods that require the construction of stability parameters and stabilization terms, the present work avoids numerical instabilities by improving the classical galerkin solution with an enrichment function. to contextualize this method among other stabilized methods, we show by decomposition of the solution ( in a multiscale manner ) an equivalence to both galerkin / least - squares type methods and those that use bubble functions. this work also presents a strategy for constructing the enrichment function for problems with complex geometries by employing a global - local approach. | arxiv:0806.3963 |
for over thirty years, researchers have developed and analyzed methods for latent tree induction as an approach for unsupervised syntactic parsing. nonetheless, modern systems still do not perform well enough compared to their supervised counterparts to have any practical use as structural annotation of text. in this work, we present a technique that uses distant supervision in the form of span constraints ( i. e. phrase bracketing ) to improve performance in unsupervised constituency parsing. using a relatively small number of span constraints we can substantially improve the output from diora, an already competitive unsupervised parsing system. compared with full parse tree annotation, span constraints can be acquired with minimal effort, such as with a lexicon derived from wikipedia, to find exact text matches. our experiments show span constraints based on entities improves constituency parsing on english wsj penn treebank by more than 5 f1. furthermore, our method extends to any domain where span constraints are easily attainable, and as a case study we demonstrate its effectiveness by parsing biomedical text from the craft dataset. | arxiv:2109.05112 |
we present efficient circuits that can be used for the phase space tomography of quantum states. the circuits evaluate individual values or selected averages of the wigner, kirkwood and husimi distributions. these quantum gate arrays can be programmed by initializing appropriate computational states. the husimi circuit relies on a subroutine that is also interesting in its own right : the efficient preparation of a coherent state, which is the ground state of the harper hamiltonian. | arxiv:quant-ph/0310126 |
a function $ f $ defined on $ [ 0, 1 ] ^ d $ is called strongly chargeable if there is a continuous vector - field $ v $ such that $ f ( x _ 1, \ dots, x _ d ) $ equals the flux of $ v $ through the rectangle $ [ 0, x _ 1 ] \ times \ cdots \ times [ 0, x _ d ] $ for all $ ( x _ 1, \ dots, x _ d ) \ in [ 0, 1 ] ^ d $. in other words, $ f $ is the primitive of the divergence of a continuous vector - field. we prove that the sample paths of the brownian sheet with $ d \ geq 2 $ parameters are almost surely not strongly chargeable. on the other hand, those of the fractional brownian sheet of hurst parameter $ ( h _ 1, \ dots, h _ d ) $ are shown to be almost surely strongly chargeable whenever \ [ \ frac { h _ 1 + \ cdots + h _ d } { d } > \ frac { d - 1 } { d }. \ ] | arxiv:2401.15427 |
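The sufficient condition quoted in the entry above, $(h_1 + \cdots + h_d)/d > (d-1)/d$, reduces to $\sum_i h_i > d - 1$. A minimal sketch of that check (the function name is my own, not the paper's):

```python
def strongly_chargeable(h):
    """Sufficient condition from the result above: the fractional Brownian
    sheet with Hurst parameters h = (h_1, ..., h_d) is a.s. strongly
    chargeable when (h_1 + ... + h_d) / d > (d - 1) / d, i.e. sum(h) > d - 1."""
    d = len(h)
    return sum(h) > d - 1

# The standard Brownian sheet corresponds to h_i = 1/2; for d >= 2 the
# condition fails, consistent with the negative result for the Brownian sheet.
```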
image inpainting is the task of filling - in missing regions of a damaged or incomplete image. in this work we tackle this problem not only by using the available visual data but also by incorporating image semantics through the use of generative models. our contribution is twofold : first, we learn a data latent space by training an improved version of the wasserstein generative adversarial network, for which we incorporate a new generator and discriminator architecture. second, the learned semantic information is combined with a new optimization loss for inpainting whose minimization infers the missing content conditioned by the available data. it takes into account powerful contextual and perceptual content inherent in the image itself. the benefits include the ability to recover large regions by accumulating semantic information even if it is not fully present in the damaged image. experiments show that the presented method obtains qualitative and quantitative top - tier results in different experimental situations and also achieves accurate photo - realism comparable to state - of - the - art works. | arxiv:1812.01071 |
we study the space of linear difference equations with periodic coefficients and ( anti ) periodic solutions. we show that this space is isomorphic to the space of tame frieze patterns and closely related to the moduli space of configurations of points in the projective space. we define the notion of combinatorial gale transform which is a duality between periodic difference equations of different orders. we describe periodic rational maps generalizing the classical gauss map. | arxiv:1309.3880 |
by halo stars. values of $ \ beta $ from different tracers agree at $ \ sim $ 120 kpc where $ \ beta \ sim $ 0. 2. we also find that model predictions agree broadly with observations in the radial distribution and luminosity function of satellites around the mw and m31. | arxiv:2001.09589 |
the investigation of the magnetic phase transitions in the parent compounds of fe - based superconductors is regarded essential for an understanding of the pairing mechanism in the related superconducting compounds. even though the chemical and electronic properties of these materials are often strongly inhomogeneous on a nanometer length scale, studies of the magnetic phase transitions using spatially resolved experimental techniques are still scarce. here, we present a real space spin - resolved scanning tunneling microscopy investigation of the surface of fe $ _ { 1 + y } $ te single crystals with different excess fe content, $ y $, which are continuously driven through the magnetic phase transition. for fe $ _ { 1. 08 } $ te, the transition into the low - temperature monoclinic commensurate antiferromagnetic phase is accompanied by the sudden emergence of ordering into four rotational domains with different orientations of the monoclinic lattice and of the antiferromagnetic order, showing how structural and magnetic order are intertwined. in the low - temperature phase of fe $ _ { 1. 12 } $ te one type of the domain boundaries disappears, and the transition into the paramagnetic phase gets rather broad, which is assigned to the formation of a mixture of orthorhombic and monoclinic phases. | arxiv:1711.11387 |
matrix model approach to multicolor induced qcd based on the quenched momentum prescription is presented. it is shown that this model exhibits the reduction of spatial degrees of freedom : the partition function is determined by the solution of a one dimensional quantum mechanical problem while the d - dimensional scalar field correlators coincide with the same type of correlators in the two - dimensional induced qcd. | arxiv:hep-th/9610206 |
we investigate combining imaging and shape features extracted from mri for the clinically relevant tasks of brain age prediction and alzheimer ' s disease classification. our proposed model fuses resnet - extracted image embeddings with shape embeddings from a bespoke graph neural network. the shape embeddings are derived from surface meshes of 15 brain structures, capturing detailed geometric information. combined with the appearance features from t1 - weighted images, we observe improvements in the prediction performance on both tasks, with substantial gains for classification. we evaluate the model using public datasets, including camcan, ixi, and oasis3, demonstrating the effectiveness of fusing imaging and shape features for brain analysis. | arxiv:2501.07994 |
ferroelectric hfo2 films are usually polycrystalline and contain a mixture of polar and nonpolar phases. this challenges the understanding and control of polar phase stabilization and ferroelectric properties. several factors such as dopants, oxygen vacancies, or stress, among others, have been investigated and shown to have a crucial role on optimizing the ferroelectric response. stress generated during deposition or annealing of thin films is a main factor determining the formed crystal phases and influences the lattice strain of the polar orthorhombic phase. it is difficult to discriminate between stress and strain effects on polycrystalline ferroelectric hfo2 films, and the direct impact of orthorhombic lattice strain on ferroelectric polarization has yet to be determined experimentally. here, we analyze the crystalline phases and lattice strain of several series of doped hfo2 epitaxial films. we conclude that stress has a critical influence on metastable orthorhombic phase stabilization and ferroelectric polarization. on the contrary, the lattice deformation effects are much smaller than those caused by variations in the orthorhombic phase content. the experimental results are confirmed by density functional theory calculations on hfo2 and hf0. 5zr0. 5o2 ferroelectric phases. | arxiv:2312.08208 |
stochastic integration w. r. t. fractional brownian motion ( fbm ) has raised strong interest in recent years, motivated in particular by applications in finance and internet traffic modelling. since fbm is not a semi - martingale, stochastic integration requires specific developments. multifractional brownian motion ( mbm ) is a gaussian process that generalizes fbm by letting the local h \ " older exponent vary in time. this is useful in various areas, including financial modelling and biomedicine. in this work we start from the fact, established in \ cite [ thm 2. 1. ( i ) ] { fbm _ to _ mbm _ herbinlebovitsvehel }, that an mbm may be approximated, in law, by a sequence of " tangent " fbms. we used this result to show how one can define a stochastic integral w. r. t. mbm from the stochastic integral w. r. t. fbm, defined in \ cite { ben1 }, in the white noise theory sense. | arxiv:1305.0342 |
recently, charge radii and ground - state electromagnetic moments of li and be isotopes were measured precisely. we have performed large - scale ab initio no - core shell model calculations for these isotopes using high - precision nucleon - nucleon potentials. the isotopic trends of our computed charge radii and quadrupole and magnetic - dipole moments are in good agreement with experimental results with the exception of the 11li charge radius. the magnetic moments are in particular well described, whereas the absolute magnitudes of the quadrupole moments are about 10 % too small. the small magnitude of the 6li quadrupole moment is reproduced, and with the cd - bonn nn potential, also its correct sign. | arxiv:0901.0453 |
semi - crystalline polymers have been shown to have greatly increased thermal conductivity compared to amorphous bulk polymers due to effective heat conduction along the covalent bonds of the backbone. however, the mechanisms governing the intrinsic thermal conductivity of polymers remain largely unexplored as thermal transport has been studied in relatively few polymers. here, we use molecular dynamics simulations to study heat transport in polynorbornene, a polymer that can be synthesized in semi - crystalline form using solution processing. we find that even perfectly crystalline polynorbornene has an exceptionally low thermal conductivity near the amorphous limit due to extremely strong anharmonic scattering. our calculations show that this scattering is sufficiently strong to prevent the formation of propagating phonons, with heat being instead carried by non - propagating, delocalized vibrational modes known as diffusons. our results demonstrate a mechanism for achieving intrinsically low thermal conductivity even in crystalline polymers that may be useful for organic thermoelectrics. | arxiv:1509.04365 |
spatio - temporal contexts are crucial in understanding human actions in videos. recent state - of - the - art convolutional neural network ( convnet ) based action recognition systems frequently involve 3d spatio - temporal convnet filters, chunking videos into fixed length clips and long short term memory ( lstm ) networks. such architectures are designed to take advantage of both short term and long term temporal contexts, but also require the accumulation of a predefined number of video frames ( e. g., to construct video clips for 3d convnet filters, to generate enough inputs for lstms ). for applications that require low - latency online predictions of fast - changing action scenes, a new action recognition system is proposed in this paper. termed " weighted multi - region convolutional neural network " ( wmr convnet ), the proposed system is lstm - free, and is based on a 2d convnet that does not require the accumulation of video frames for 3d convnet filtering. unlike early 2d convnets that are based purely on rgb frames and optical flow frames, the wmr convnet is designed to simultaneously capture multiple spatial and short term temporal cues ( e. g., human poses, occurrences of objects in the background ) with both the primary region ( foreground ) and secondary regions ( mostly background ). on both the ucf101 and hmdb51 datasets, the proposed wmr convnet achieves the state - of - the - art performance among competing low - latency algorithms. furthermore, wmr convnet even outperforms the 3d convnet based c3d algorithm that requires video frame accumulation. in an ablation study with the optical flow convnet stream removed, the ablated wmr convnet nevertheless outperforms competing algorithms. | arxiv:1805.02877 |
we study topological factors of rank - one subshifts and prove that those factors that are themselves subshifts are either finite or isomorphic to the original rank - one subshifts. thus, we completely characterize the subshift factors of rank - one subshifts. | arxiv:1910.09050 |
the questionable responses caused by knowledge hallucination may lead to unstable decision - making in llms. however, it has never been investigated whether llms ' hallucination can be harnessed to generate negative reasoning that facilitates the detection of fake news. this study proposes a novel supervised self - reinforced reasoning rectification approach, sr $ ^ 3 $, that yields both common reasonable reasoning and wrong understandings ( negative reasoning ) for news via llm reflection for semantic consistency learning. upon that, we construct a negative reasoning - based news learning model called \ emph { nrfe }, which leverages positive or negative news - reasoning pairs for learning the semantic consistency between them. to avoid the impact of label - implicated reasoning, we deploy a student model, \ emph { nrfe - d }, that only takes news content as input to inspect the performance of our method by distilling the knowledge from \ emph { nrfe }. experimental results on three popular fake news datasets demonstrate the superiority of our method compared with three kinds of baselines, including prompting on llms, fine - tuning on pre - trained slms, and other representative fake news detection methods. | arxiv:2503.09153 |
recent in - situ observations and numerical models indicated various types of magnetohydrodynamic ( mhd ) waves contributing to the solar wind acceleration. among them is an mhd wave decomposition at distances closer than 50 $ r _ { \ odot } $ using data taken by the first perihelion pass of parker solar probe ( psp ). however, the underlying physical processes responsible for the formation of the solar wind have not yet been observationally confirmed at distances closer than 10 $ r _ { \ odot } $. we aim to infer the mode population of density fluctuations observed by radio occultation, which have all previously been attributed to slow magnetoacoustic waves. we compare the radio occultation observations conducted in 2016 using the jaxa ' s venus orbiter akatsuki with the mhd simulation. the time - frequency analysis was applied to the density fluctuations observed by the radio occultation and those reproduced in the mhd model. the time - spatial spectrum of the density fluctuation in the model exhibits two components that are considered to be fast and slow magnetoacoustic waves. the fast magnetoacoustic waves in the model tend to have periods shorter than the slow magnetoacoustic waves, and the superposition of these modes has a broadened spectrum extending in the range of approximately 20 $ - $ 1000 s, which resembles that of the observed waves. based on this comparison, it is probable that the density oscillations observed by radio occultation include fast and slow magnetoacoustic waves, and that fast magnetoacoustic waves are predominant at short periods and slow magnetoacoustic waves are prevalent at long periods. this is qualitatively similar to the results of the mode decomposition obtained from the psp ' s first perihelion in more distant regions. | arxiv:2502.11700 |
we present an experimental measurement of the cooperative lamb shift and the lorentz shift using an atomic nanolayer with tunable thickness and atomic density. the cooperative lamb shift arises due to the exchange of virtual photons between identical atoms. the interference between the forward and backward propagating virtual fields is confirmed by the thickness dependence of the shift which has a spatial frequency equal to $ 2k $, i. e. twice that of the optical field. the demonstration of cooperative interactions in an easily scalable system opens the door to a new domain for non - linear optics. | arxiv:1201.5251 |
some results are presented indicating the distinct advantages that accrue from choosing a real representation for the generators of su ( n ) rather than the usual and more popular gell - mann type matrices. a few examples in the context of quantum chromodynamics are used to serve as illustrations. | arxiv:hep-ph/9311220 |
the two - dimensional causal dynamical triangulations ( $ 2 $ d cdt ) is a lattice model of quantum geometry. in $ 2 $ d cdt, one can deal with the quantum effects analytically and explore the physics through the continuum limit. the continuum theory is known to be two - dimensional projectable horava - lifshitz quantum gravity ( $ 2 $ d projectable hl qg ). in this chapter, we wish to review the very relation between $ 2 $ d cdt and $ 2 $ d projectable hl qg in detail. | arxiv:2212.03446 |
when fine - tuning deep neural networks ( dnns ) to new data, dnns are prone to overwriting network parameters required for task - specific functionality on previously learned tasks, resulting in a loss of performance on those tasks. we propose using parameter - based uncertainty to determine which parameters are relevant to a network ' s learned function and regularize training to prevent change in these important parameters. we approach this regularization in two ways : ( 1 ), we constrain critical parameters from significant changes by associating more critical parameters with lower learning rates, thereby limiting alterations in those parameters ; ( 2 ), important parameters are restricted from change by imposing a higher regularization weighting, causing parameters to revert to their states prior to the learning of subsequent tasks. we leverage a bayesian moment propagation framework which learns network parameters concurrently with their associated uncertainties while allowing each parameter to contribute uncertainty to the network ' s predictive distribution, avoiding the pitfalls of existing sampling - based methods. the proposed approach is evaluated for common sequential benchmark datasets and compared to existing published approaches from the continual learning community. ultimately, we show improved continual learning performance for average test accuracy and backward transfer metrics compared to sampling - based methods and other non - uncertainty - based approaches. | arxiv:2501.10861 |
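The second regularization strategy above (pulling important parameters back toward their pre-task values) can be illustrated with a toy quadratic penalty in which parameters with higher importance (lower uncertainty) are penalized more for drifting. This is a generic EWC-style sketch, not the paper's Bayesian moment-propagation framework; the function name and weighting are illustrative assumptions.

```python
def regularized_loss(task_loss, params, old_params, importance, lam=1.0):
    """Add an importance-weighted quadratic penalty to the new task's loss:
    each parameter p is pulled toward its value q before the new task,
    weighted by how critical it was to previously learned tasks."""
    penalty = sum(w * (p - q) ** 2
                  for p, q, w in zip(params, old_params, importance))
    return task_loss + lam * penalty
```

In this toy form, a parameter with zero importance is free to move (no penalty), while a highly important parameter contributes a steep penalty for any change, mirroring the paper's goal of preventing overwriting of task-critical parameters.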
we prove that every lie derivation on a solid $ \ star $ - subalgebra of locally measurable operators is equal to the sum of an associative derivation and a center - valued trace. | arxiv:1608.03996 |
we study the entanglement entropy of connected bipartitions in free fermion gases of n particles in arbitrary dimension d. we show that the von neumann and renyi entanglement entropies grow asymptotically as n ^ ( 1 - 1 / d ) ln n, with a prefactor that is analytically computed using the widom conjecture both for periodic and open boundary conditions. the logarithmic correction to the power - law behavior is related to the area - law violation in lattice free fermions. these asymptotic large - n behaviors are checked against exact numerical calculations for n - particle systems. | arxiv:1110.6276 |
let $ x $ be a ball banach function space on $ { \ mathbb r } ^ n $. let $ \ omega $ be a lipschitz function on the unit sphere of $ { \ mathbb r } ^ n $, which is homogeneous of degree zero and has mean value zero, and let $ t _ \ omega $ be the convolutional singular integral operator with kernel $ \ omega ( \ cdot ) / | \ cdot | ^ n $. in this article, under the assumption that the hardy - - littlewood maximal operator $ \ mathcal { m } $ is bounded on both $ x $ and its associated space, the authors prove that the commutator $ [ b, t _ \ omega ] $ is compact on $ x $ if and only if $ b \ in { \ rm cmo } ( { \ mathbb r } ^ n ) $. to achieve this, the authors mainly employ three key tools : some elaborate estimates, given in this article, on the norm in $ x $ of the commutators and the characteristic functions of some measurable subset, which are implied by the assumed boundedness of $ { \ mathcal m } $ on $ x $ and its associated space as well as the geometry of $ \ mathbb r ^ n $ ; the complete john - - nirenberg inequality in $ x $ obtained by y. sawano et al. ; the generalized fr \ ' { e } chet - - kolmogorov theorem on $ x $ also established in this article. all these results have a wide range of applications. particularly, even when $ x : = l ^ { p ( \ cdot ) } ( { \ mathbb r } ^ n ) $ ( the variable lebesgue space ), $ x : = l ^ { \ vec { p } } ( { \ mathbb r } ^ n ) $ ( the mixed - norm lebesgue space ), $ x : = l ^ \ phi ( { \ mathbb r } ^ n ) $ ( the orlicz space ), and $ x : = ( e _ \ phi ^ q ) _ t ( { \ mathbb r } ^ n ) $ ( the orlicz - slice space or the generalized amalgam space ), all these results are new. | arxiv:2101.07407 |
this expository paper is a tribute to ekkehart kr \ " oner ' s results on the intrinsic non - riemannian geometrical nature of a single crystal filled with point and / or line defects. a new perspective on this old theory is proposed, intended to contribute to the debate around the still open kr \ " oner ' s question : " what are the dynamical variables of our theory? " | arxiv:1010.3655 |
we identify a class of condensate states in the group field theory ( gft ) approach to quantum gravity that can be interpreted as macroscopic homogeneous spatial geometries. we then extract the dynamics of such condensate states directly from the fundamental quantum gft dynamics, following the procedure used in ordinary quantum fluids. the effective dynamics is a non - linear and non - local extension of quantum cosmology. we also show that any gft model with a kinetic term of laplacian type gives rise, in a semi - classical ( wkb ) approximation and in the isotropic case, to a modified friedmann equation. this is the first concrete, general procedure for extracting an effective cosmological dynamics directly from a fundamental theory of quantum geometry. | arxiv:1303.3576 |
in this work, single particle dispersion was analyzed for a bacterial turbulence by retrieving the virtual lagrangian trajectory via numerical integration of the lagrangian equation. high - order displacement functions were calculated for cases with and without the mean velocity effect. two - regime power - law behavior for short and long time evolutions was identified experimentally, with the two regimes separated by the lagrangian integral time. for the case with the mean velocity effect, the experimental hurst numbers were determined to be $ 0. 94 $ and $ 0. 97 $ for short and long time evolutions, respectively. for the case without the mean velocity effect, the values were $ 0. 88 $ and $ 0. 58 $. moreover, very weak intermittency correction was detected. all measured hurst numbers were above $ 1 / 2 $, the value for normal diffusion, which verifies the super - diffusion behavior of the living fluid. this behavior increases the efficiency with which bacteria obtain food. | arxiv:1809.07457 |
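A Hurst number of the kind reported above is read off from the power-law scaling of the mean-squared displacement, $\langle x(t)^2 \rangle \sim t^{2H}$: fitting the log-log slope of displacement versus time gives $2H$. The sketch below shows that fit on exact synthetic data with $H = 0.9$ (comparable to the measured 0.88-0.97); it is a generic least-squares fit, not the paper's analysis pipeline.

```python
import math

def hurst_from_displacement(times, disp2):
    """Least-squares slope of log <x^2> versus log t; for
    <x(t)^2> ~ t^(2H) the fitted slope equals 2H."""
    xs = [math.log(t) for t in times]
    ys = [math.log(d) for d in disp2]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope / 2.0

# Exact synthetic super-diffusive data with H = 0.9
ts = [2 ** k for k in range(1, 8)]
d2 = [t ** 1.8 for t in ts]
H = hurst_from_displacement(ts, d2)
```

Any fitted $H > 1/2$ indicates super-diffusion, the criterion used in the abstract.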
we describe a viewpoint on the dempster / shafer ' theory of evidence ', and provide an interpretation which regards the combination formulas as statistics of the opinions of " experts ". this is done by introducing spaces with binary operations that are simpler to interpret or simpler to implement than the standard combination formula, and showing that these spaces can be mapped homomorphically onto the dempster / shafer theory of evidence space. the experts in the space of " opinions of experts " combine information in a bayesian fashion. we present alternative spaces for the combination of evidence suggested by this viewpoint. | arxiv:1304.3093 |
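The "standard combination formula" referred to above is Dempster's rule of combination, which fuses two basic mass assignments by multiplying masses of intersecting focal elements and renormalizing away the conflicting mass. A minimal generic implementation (the weather example and its numbers are invented for illustration, not from the paper):

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions over the same
    frame of discernment.  Focal elements are frozensets; masses sum to 1."""
    combined = {}
    conflict = 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:  # compatible evidence: mass goes to the intersection
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:      # contradictory evidence: mass is counted as conflict
                conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: combination undefined")
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}

# Two "experts" weighing in on the frame {rain, sun}
rain, sun = frozenset({"rain"}), frozenset({"sun"})
theta = rain | sun                    # full ignorance
m1 = {rain: 0.6, theta: 0.4}
m2 = {rain: 0.5, sun: 0.3, theta: 0.2}
m = dempster_combine(m1, m2)
```

The normalization by $1 - K$ (where $K$ is the conflicting mass) is exactly the step the paper reinterprets as a statistic over the opinions of Bayesian experts.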
automatic assessment and evaluation of team performance during collaborative tasks is key to the learning analytics and computer - supported cooperative work research. there is a growing interest in the use of gaze - oriented cues for evaluating the collaboration and cooperativeness of teams. however, collecting gaze data using eye - trackers is not always feasible due to time and cost constraints. in this paper, we introduce an automated team assessment tool based on gaze points and joint visual attention ( jva ) information extracted by computer vision solutions. we then evaluate team collaborations in an undergraduate anatomy learning activity ( n = 60, 30 teams ) as a test user - study. the results indicate that higher jva was positively associated with student learning outcomes ( r ( 30 ) = 0. 50, p < 0. 005 ). moreover, teams who participated in two experimental groups, and used interactive 3 - d anatomy models, had higher jva ( f ( 1, 28 ) = 6. 65, p < 0. 05 ) and better knowledge retention ( f ( 1, 28 ) = 7. 56, p < 0. 05 ) than those in the control group. also, no significant difference was observed based on jva for different gender compositions of teams. the findings from this work offer implications in learning sciences and collaborative computing by providing a novel mutual attention - based measure to objectively evaluate team collaboration dynamics. | arxiv:2010.12012 |
with research opportunities, and the university of waterloo has launched integrated postgraduate engineering programs within the institute for quantum computing. similar programs are being pursued at delft university, technical university of munich, mit, centralesupelec and other technical universities. in the realm of undergraduate studies, opportunities for specialization are sparse. nevertheless, some institutions have begun to offer programs. the universite de sherbrooke offers a bachelor of science in quantum information, university of waterloo offers a quantum specialization in its electrical engineering program, and the university of new south wales offers a bachelor of quantum engineering. a report on the development of this bachelor degree has been published in ieee transactions on quantum engineering. students are trained in signal and information processing, optoelectronics and photonics, integrated circuits ( bipolar, cmos ) and electronic hardware architectures ( vlsi, fpga, asic ). in addition, they are exposed to emerging applications such as quantum sensing, quantum communication and cryptography and quantum information processing. they learn the principles of quantum simulation and quantum computing, and become familiar with different quantum processing platforms, such as trapped ions, and superconducting circuits. hands - on laboratory projects help students to develop the technical skills needed for the practical realization of quantum devices, consolidating their education in quantum science and technologies. = = see also = = quantum supremacy noisy intermediate - scale quantum era timeline of quantum computing and communication = = references = = | https://en.wikipedia.org/wiki/Quantum_engineering |
we experimentally demonstrate a testing strategy for boson samplers that is based on efficiently computable expressions for the output photon counting distributions binned over multiple optical modes. we apply this method to validate boson sampling experiments with three photons on a reconfigurable photonic chip, which implements a four - mode interferometer, analyzing 50 haar - random unitary transformations while tuning photon distinguishability via controlled delays. we show that for high values of indistinguishability, the experiment accurately reproduces the ideal boson sampling binned - mode distributions, which exhibit variations that depend both on the specific interferometer implemented as well as the choice of bin, confirming the usefulness of the method to diagnose imperfections such as partial distinguishability or imperfect chip control. finally, we analyze the behavior of haar - averaged binned - mode distributions with partial distinguishability and demonstrate analytically that its variance is proportional to the average of the square of the photons ' indistinguishability parameter. these findings highlight the central role of binning in boson sampling validation, offering a scalable and efficient framework for assessing multiphoton interference and experimental performance. | arxiv:2502.05093 |
both the naturalness of the electroweak symmetry breaking and the resolution of the strong cp problem may require a small higgsino mass $ \ mu $ generated by a realization of the dfsz axion model. assuming the axino is the lightest supersymmetric particle, we study its implications on $ \ mu $ and the axion scale. copiously produced light higgsinos at collider ( effectively only neutral nlsp pairs ) eventually decay to axinos leaving prompt multi - leptons or displaced vertices which are being looked for at the lhc. we use latest lhc7 + 8 results to derive current limits on $ \ mu $ and the axion scale. various higgsino - axino phenomenology is illustrated by comparing with a standard case without lightest axinos as well as with a more general case with additional light gauginos in the spectrum. | arxiv:1407.1218 |
we investigate the construction of early stopping rules in the nonparametric regression problem where iterative learning algorithms are used and the optimal iteration number is unknown. more precisely, we study the discrepancy principle, as well as modifications based on smoothed residuals, for kernelized spectral filter learning algorithms including gradient descent. our main theoretical bounds are oracle inequalities established for the empirical estimation error ( fixed design ), and for the prediction error ( random design ). from these finite - sample bounds it follows that the classical discrepancy principle is statistically adaptive for slow rates occurring in the hard learning scenario, while the smoothed discrepancy principles are adaptive over ranges of faster rates ( resp. higher smoothness parameters ). our approach relies on deviation inequalities for the stopping rules in the fixed design setting, combined with change - of - norm arguments to deal with the random design setting. | arxiv:2004.08436 |