text
stringlengths
1
3.65k
source
stringlengths
15
79
the algorithm behind the fast fourier transform has a simple yet beautiful geometric interpretation that is often lost in translation in a classroom. this article provides a visual perspective which aims to capture the essence of it.
arxiv:1805.08633
break even point, net present value, marginal sales, marginal cost, return on investment of the industrial plant after the analysis of the heat and mass transfer of the plant. process data analytics : applying data analytics and machine learning methods for process manufacturing problems. = = history of process engineering = = various chemical techniques have been used in industrial processes since time immemorial. however, it wasn ' t until the advent of thermodynamics and the law of conservation of mass in the 1780s that process engineering was properly developed and implemented as its own discipline. the set of knowledge that is now known as process engineering was then forged out of trial and error throughout the industrial revolution. the term process, as it relates to industry and production, dates back to the 18th century. during this time period, demands for various products began to drastically increase, and process engineers were required to optimize the process in which these products were created. by 1980, the concept of process engineering emerged from the fact that chemical engineering techniques and practices were being used in a variety of industries. by this time, process engineering had been defined as " the set of knowledge necessary to design, analyze, develop, construct, and operate, in an optimal way, the processes in which the material changes ". by the end of the 20th century, process engineering had expanded from chemical engineering - based technologies to other applications, including metallurgical engineering, agricultural engineering, and product engineering. = = see also = = = = references = = = = external links = = advanced process engineering at cranfield university ( cranfield, uk ) sargent centre for process systems engineering ( imperial ) process systems engineering at cornell university ( ithaca, new york ) department of process engineering at stellenbosch university process research and intelligent systems modeling ( prism ) group at byu process systems engineering at cmu process systems engineering laboratory at rwth aachen the process systems engineering laboratory ( mit ) process engineering consulting at canada
https://en.wikipedia.org/wiki/Process_engineering
we present a calculation of the hindered m $ 1 $ $ \ upsilon ( 2s ) \ to \ eta _ b ( 1s ) \ gamma $ decay rate using lattice non - relativistic qcd. the calculation includes spin - dependent relativistic corrections to the nrqcd action through $ \ mathcal { o } ( v ^ 6 ) $ in the quark ' s relative velocity, relativistic corrections to the leading order current which mediates the transition through the quark ' s magnetic moment, radiative corrections to the leading spin - magnetic coupling and for the first time a full error budget. we also use gluon field ensembles at multiple lattice spacing values, all of which include $ u $, $ d $, $ s $ and $ c $ quark vacuum polarisation. our result for the branching fraction is $ \ mathcal { b } ( \ upsilon ( 2s ) \ to \ eta _ b ( 1s ) \ gamma ) = 5. 4 ( 1. 8 ) \ times 10 ^ { - 4 } $, which agrees with the current experimental value.
arxiv:1508.01694
in the travelling salesman problem, every vertex of an edge - weighted graph has to be visited by an agent who traverses the edges of the graph. in this problem, it is usually assumed that the costs of each edge are given in advance, making it computationally hard but possible to calculate an optimal tour for the agent. also in the graph exploration problem, every vertex of a given graph must be visited, but here the graph is not known in the beginning - at every point, an algorithm only knows about the already visited vertices and their neighbors. both however are not necessarily realistic settings : usually the structure of the graph ( for example underlying road network ) is known in advance, but the details are not. one usually has a prediction of how long it takes to traverse through a particular road, but due to road conditions or imprecise maps the agent might realize that a road will take slightly longer than expected when arriving on it. to deal with those deviations, it is natural to assume that the agent is able to adapt to the situation : when realizing that taking a particular road is more expensive than expected, recalculating the tour and taking another road instead is possible. we analyze the competitive ratio of this problem based on the perturbation factor $ \ alpha $ of the edge weights. for general graphs we show that for realistic factors smaller than $ 2 $ there is no strategy that achieves a competitive ratio better than $ \ alpha $, which can be matched by a simple algorithm. in addition, we prove an algorithm which has a competitive ratio of $ \ frac { 1 + \ alpha } { 2 } $ for restricted graph classes like complete graphs with uniform announced edge weights. here, we present a matching lower bound as well, proving that the strategy for those graph classes is best possible. we conclude with a remark about special graph classes like cycles.
arxiv:2501.18496
in this paper, we study massless braneworld black holes as gravitational lenses. we find the weak and the strong deflection limits for the deflection angle, from which we calculate the positions and magnifications of the images. we compare the results obtained here with those corresponding to schwarzschild and reissner - nordstrom spacetimes, and also with those found in previous works for some other braneworld black holes.
arxiv:1207.5502
stimulated by recent t2k indications on a surprisingly large neutrino mixing theta _ 13 angle we suggest that this last unknown angle is not independent but determined by the known large solar and atmospheric neutrino oscillation angles via simple, symmetric, positive - definite equation. encouragingly, it appears in agreement with recent new long base - line appearance nu - mu to nu - e neutrino oscillation t2k data. at zero approximation this equation determines benchmark bimaximal neutrino mixing matrix as its unique solution with one texture zero. extension to quark mixing angles leads to definite equation with unit mixing matrix as sole solution. all six realistic neutrino and quark mixing angles are explicitly expressed as small deviations from the zero approximation benchmark ones by one small empirical universal parameter. thus in the considered semi - empirical flavor phenomenology the system of two related neutrino and quark equations is the source of the known main empirical rule of ' large neutrino mixing angles versus small quark ones ' as well as other particle mixing regularities.
arxiv:1107.1145
emission of muonium ( $ \ mu ^ + e ^ - $ ) atoms from a laser - processed aerogel surface into vacuum was studied for the first time. laser ablation was used to create hole - like regions with diameter of about 270 $ ~ \ mu $ m in a triangular pattern with hole separation in the range of 300 - - 500 $ ~ \ mu $ m. the emission probability for the laser - processed aerogel sample is at least eight times higher than for a uniform one.
arxiv:1407.8248
the lagrange point $ l _ 1 $ for the sun - earth system is considered due to its special importance for the scientific community for the design of space missions. the location of the lagrangian points with the trajectories and stability regions of $ l _ 1 $ are computed numerically for the initial conditions very close to the point. the influence of belt, effect of radiation pressure due to sun and oblateness effect of second primary ( finite body earth ) is presented for various values of parameters. the collinear point $ l _ 1 $ is asymptotically stable within a specific interval of time $ t $ correspond to the values of parameters and initial conditions.
arxiv:1003.3980
the relationship between spatially heterogeneous dynamics ( shd ) and jamming is studied in a glass - forming binary lennard - jones system via molecular dynamics simulations. it has been suggested that the probability distribution of interparticle forces $ p ( f ) $ develops a peak at the glass transition temperature $ t _ g $, and that the large force inhomogeneities, responsible for structural arrest in granular materials, are related to dynamical heterogeneities in supercooled liquids that form glasses. it has been further suggested that ` ` force chains ' ' present in granular materials may exist in supercooled liquids, and may provide an order parameter for the glass transition. our goal is to investigate the extent to which the forces experienced by particles in a glass - forming liquid are related to shd, and compare these forces to those observed in granular materials and other glass - forming systems. we find no peak in $ p ( f ) $ at any temperature in our system, even below $ t _ g $. we also find that particles that have been localized for a long time are less likely to experience high relative force and that mobile particles experience higher relative forces at shorter time scales, indicating a correlation between pairwise forces and particle mobility. we also discuss a possible relationship between force chains found here and the development of string - like motion found in other glass - forming liquids.
arxiv:cond-mat/0406451
we introduce a new paradigm for dark matter ( dm ) interactions in which the interaction strength is asymptotically safe. in models of this type, the coupling strength is small at low energies but increases at higher energies, and asymptotically approaches a finite constant value. the resulting phenomenology of this " asymptotically safe dm " is quite distinct. one interesting effect of this is to partially offset the low - energy constraints from direct detection experiments without affecting thermal freeze - out processes which occur at higher energies. high - energy collider and indirect annihilation searches are the primary ways to constrain or discover asymptotically safe dark matter.
arxiv:1412.8034
a viable quantum theory of gravity is one of the biggest challenges facing physicists. we discuss the confluence of two highly expected features which might be instrumental in the quest of a finite and renormalizable quantum gravity - - spontaneous dimensional reduction and self - completeness. the former suggests the spacetime background at the planck scale may be effectively two - dimensional, while the latter implies a condition of maximal compression of matter by the formation of an event horizon for planckian scattering. we generalize such a result to an arbitrary number of dimensions, and show that gravity in higher than four dimensions remains self - complete, but in lower dimensions it is not. in such a way we established an " exclusive disjunction " or " exclusive or " ( xor ) between the occurrence of self - completeness and dimensional reduction, with the goal of actually reducing the unknowns for the scenario of the physics at the planck scale. potential phenomenological implications of this result are considered by studying the case of a two - dimensional dilaton gravity model resulting from dimensional reduction of einstein gravity.
arxiv:1206.4696
the coefficients of the regular continued fraction for random numbers are distributed by the gauss - kuzmin distribution according to khinchin ' s law. their geometric mean converges to khinchin ' s constant and their rational approximation speed is khinchin ' s speed. it is an open question whether these theorems also apply to algebraic numbers of degree $ > 2 $. since they apply to almost all numbers it is, however, commonly inferred that it is most likely that non quadratic algebraic numbers also do so. we argue that this inference is not well grounded. there is strong numerical evidence that khinchin ' s speed is too fast. for khinchin ' s law and khinchin ' s constant the numerical evidence is unclear. we apply the kullback leibler divergence ( kld ) to show that the gauss - kuzmin distribution does not fit well for algebraic numbers of degree $ > 2 $. our suggestion to truncate the gauss - kuzmin distribution for finite parts fits slightly better but its kld is still much larger than the kld of a random number. so, if it converges the convergence is non uniform and each algebraic number has its own bound. we conclude that there is no evidence to apply the theorems that hold for random numbers to algebraic numbers.
arxiv:2208.14359
to clarify a crucial role of a spin - orbit coupling in the emergence of novel spin - orbital states in $ 5d $ - electron compounds such as sr $ _ { 2 } $ iro $ _ { 4 } $, we investigate ground state properties of a $ t _ { \ rm 2g } $ - orbital hubbard model on a square lattice by lanczos diagonalization. in the absence of the spin - orbit coupling, the ground state is spin singlet. when the spin - orbit coupling is strong enough, the ground state turns into a weak ferromagnetic state. the weak ferromagnetic state is a singlet state in terms of an effective total angular momentum. regarding the orbital state, we find the so - called complex orbital state, in which real $ xy $, $ yz $, and $ zx $ orbital states are mixed with complex coefficients.
arxiv:1310.8016
spatially extended localized spins can interact via indirect exchange interaction through friedel oscillations in the fermi sea. in arrays of localized spins such interaction can lead to a magnetically ordered phase. without external magnetic field such a phase is well understood via a " two - impurity " kondo model. here we employ non - equilibrium transport spectroscopy to investigate the role of the orbital phase of conduction electrons on the magnetic state of a spin lattice. we show experimentally, that even tiniest perpendicular magnetic field can influence the magnitude of the inter - spin magnetic exchange.
arxiv:0710.1221
we study the segal - bargmann transform on a motion group rn n k ; where k is a compact subgroup of so ( n ) : a characterization of the poisson integrals associated to the laplacian on rn n k is given. we also establish a paley - wiener type theorem using the complexified representations.
arxiv:1001.2119
using an { \ it ab initio } approach, we report a phonon soft mode in the tetragonal structure described by the space group $ i4 _ { 1 } 22 $ of the $ 1 $ k $ 5d $ superconductor cd $ _ 2 $ re $ _ 2 $ o $ _ 7 $. it induces an orthorhombic distortion to a crystal structure described by the space group $ f222 $ which hosts the superconducting state. this new phase has a lower total energy than the other known crystal structures of cd $ _ 2 $ re $ _ 2 $ o $ _ 7 $. comprehensive temperature dependent raman scattering experiments on isotope enriched samples, $ ^ { 116 } $ cd $ _ 2 $ re $ _ 2 { ^ { 18 } } $ o $ _ 7 $, not only confirm the already known structural phase transitions but also allow us to identify a new characteristic temperature regime around $ \ sim 80 $ k, below which the raman spectra undergo remarkable changes with the development of several sharp modes and mode splitting. together with the results of the \ textit { ab initio } phonon calculations we take these observations as strong evidence for another phase transition to a novel low - temperature crystal structure of cd $ _ 2 $ re $ _ 2 $ o $ _ 7 $.
arxiv:1911.11057
we present new astrometry for the young ( 12 - - 21 myr ) exoplanet beta pictoris b taken with the gemini / nici and magellan / magao instruments between 2009 and 2012. the high dynamic range of our observations allows us to measure the relative position of beta pic b with respect to its primary star with greater accuracy than previous observations. based on a markov chain monte carlo analysis, we find the planet has an orbital semi - major axis of 9. 1 ( + 5. 3, - 0. 5 ) au and orbital eccentricity < 0. 15 at 68 % confidence ( with 95 % confidence intervals of 8. 2 - - 48 au and 0. 00 - - 0. 82 for semi - major axis and eccentricity, respectively, due to a long narrow degenerate tail between the two ). we find that the planet has reached its maximum projected elongation, enabling higher precision determination of the orbital parameters than previously possible, and that the planet ' s projected separation is currently decreasing. with unsaturated data of the entire beta pic system ( primary star, planet, and disk ) obtained thanks to nici ' s semi - transparent focal plane mask, we are able to tightly constrain the relative orientation of the circumstellar components. we find the orbital plane of the planet lies between the inner and outer disks : the position angle ( pa ) of nodes for the planet ' s orbit ( 211. 8 + / - 0. 3 degrees ) is 7. 4 sigma greater than the pa of the spine of the outer disk and 3. 2 sigma less than the warped inner disk pa, indicating the disk is not collisionally relaxed. finally, for the first time we are able to dynamically constrain the mass of the primary star beta pic to 1. 76 ( + 0. 18, - 0. 17 ) solar masses.
arxiv:1403.7195
we provide an optimization - based framework to perform counterfactual analysis in a dynamic model with hidden states. our framework is grounded in the ` ` abduction, action, and prediction ' ' approach to answer counterfactual queries and handles two key challenges where ( 1 ) the states are hidden and ( 2 ) the model is dynamic. recognizing the lack of knowledge on the underlying causal mechanism and the possibility of infinitely many such mechanisms, we optimize over this space and compute upper and lower bounds on the counterfactual quantity of interest. our work brings together ideas from causality, state - space models, simulation, and optimization, and we apply it on a breast cancer case study. to the best of our knowledge, we are the first to compute lower and upper bounds on a counterfactual query in a dynamic latent - state model.
arxiv:2205.13832
ultra - high energy neutrinos hold promise as cosmic messengers to advance the understanding of extreme astrophysical objects and environments as well as possible probes for discovering new physics. this proceeding describes the motivation for measuring high energy neutrinos. a short summary of the mechanisms for producing high energy neutrinos is provided along with an overview of current and proposed modes of detection. the science reach of the field is also briefly surveyed. as an example of the potential of neutrinos as cosmic messengers, the recent results from an icecube collaboration real - time high energy neutrino alert and subsequent search of archival data are described.
arxiv:1901.02528
we construct a vector field e from the real and imaginary parts of an entire function xi ( z ) which arises in the quantum statistical mechanics of relativistic gases when the spatial dimension d is analytically continued into the complex z plane. this function is built from the gamma and riemann zeta functions and is known to satisfy the functional identity xi ( z ) = xi ( 1 - z ). e satisfies the conditions for a static electric field. the structure of e in the critical strip is determined by its behavior near the riemann zeros on the critical line re ( z ) = 1 / 2, where each zero can be assigned a + or - vorticity of a related pseudo - magnetic field. using these properties, we show that a hypothetical riemann zero that is off the critical line leads to a frustration of this " electric " field. we formulate our argument more precisely in terms of the potential phi satisfying e = - gradient phi, and construct phi explicitly. one outcome of our analysis is a formula for the n - th zero on the critical line for large n expressed as the solution of a simple transcendental equation. riemann ' s counting formula for the number of zeros on the entire critical strip can be derived from this formula. our result is much stronger than riemann ' s counting formula, since it provides an estimate of the n - th zero along the critical line. this provides a simple way to estimate very high zeros to very good accuracy, and we estimate the 10 ^ { 10 ^ 6 } - th one.
arxiv:1305.2613
gone through numerous changes, largely due to advances in technology and the incorporation of technology into business. currently, there are many it - dependent companies that rely on information technology in order to operate their business e. g. telecommunication or banking company. for the other types of business, it plays the big part of company including the applying of workflow instead of using the paper request form, using the application control instead of manual control which is more reliable or implementing the erp application to facilitate the organization by using only one application. according to these, the importance of it audit is constantly increased. one of the most important roles of the it audit is to audit over the critical system in order to support the financial audit or to support the specific regulations announced e. g. sox. = = emerging issues = = there are also new audits being imposed by various standard boards which are required to be performed, depending upon the audited organization, which will affect it and ensure that it departments are performing certain functions and controls appropriately to be considered compliant. examples of such audits are ssae 16, isae 3402, and iso27001 : 2013. = = = web presence audits = = = the extension of the corporate it presence beyond the corporate firewall ( e. g. the adoption of social media by the enterprise along with the proliferation of cloud - based tools like social media management systems ) has elevated the importance of incorporating web presence audits into the it / is audit. the purposes of these audits include ensuring the company is taking the necessary steps to : rein in use of unauthorized tools ( e. g. " shadow it " ) minimize damage to reputation maintain regulatory compliance prevent information leakage mitigate third - party risk minimize governance risk the use of departmental or user developed tools has been a controversial topic in the past. however, with the widespread availability of data analytics tools, dashboards, and statistical packages users no longer need to stand in line waiting for it resources to fulfill seemingly endless requests for reports. the task of it is to work with business groups to make authorized access and reporting as straightforward as possible. to use a simple example, users should not have to do their own data matching so that pure relational tables are linked in a meaningful way. it needs to make non - normalized, data warehouse type files available to users so that their analysis work is simplified. for example, some organizations will refresh a warehouse periodically and create easy to use " flat ' tables which can be easily uploaded by a package such as tableau and used to
https://en.wikipedia.org/wiki/Information_technology_audit
markov chain monte carlo ( mcmc ) sampling from a posterior distribution corresponding to a massive data set can be computationally prohibitive since producing one sample requires a number of operations that is linear in the data size. in this paper, we introduce a new communication - free parallel method, the likelihood inflating sampling algorithm ( lisa ), that significantly reduces computational costs by randomly splitting the dataset into smaller subsets and running mcmc methods independently in parallel on each subset using different processors. each processor will be used to run an mcmc chain that samples sub - posterior distributions which are defined using an " inflated " likelihood function. we develop a strategy for combining the draws from different sub - posteriors to study the full posterior of the bayesian additive regression trees ( bart ) model. the performance of the method is tested using both simulated and real data.
arxiv:1605.02113
optimized operation of the transmission network is one solution to supply extra demand by more efficient use of transmission facilities, and line switching is one main tool to achieve this goal. in this paper, we add extra constraints to opf formulation to limit the maximum number of switching operations in every hour based on network conditions, and add switching cost in the objective function to represent extra maintenance cost as a result of frequent switching. we also propose an algorithm to remove less important lines for switching in different loading conditions, so opf with transmission switching will be solved faster for real - time operation. it is applied to a case study with several operation hours.
arxiv:1507.03825
fermi has detected over 200 pulsars above 100 mev. in a previous work, using 3 years of lat data ( 1fhl catalog ) we reported that 28 of these pulsars show emission above 10 gev ; only three of these, however, were millisecond pulsars ( msps ). the recently - released third catalog of hard fermi - lat sources ( 3fhl ) contains over 1500 sources showing emission above 10 gev, 17 of which are associated with gamma - ray msps. using three times as much data as in our previous study ( 1fhl ), we report on a systematic analysis of these pulsars to determine the highest energy ( pulsed ) emission frommsps and discuss the best possible candidates for follow - up observations with ground - based tev instruments ( h. e. s. s., magic, veritas, and the upcoming cta ).
arxiv:1712.06808
the implicitly shifted qr iteration is used as a restart procedure for the arnoldi method for the calculation of a few dominant eigenvalues of a large matrix. we show that the underlying idea of implicit polynomial filtering can be utilized in much the same manner via the implicitly shifted lr iteration to create a restart procedure for the non - symmetric lanczos algorithm for eigenvalue computations, which preserves the tri - diagonal structure of the reduced matrix.
arxiv:2407.06561
we consider the case, in qcd, of a single jet propagating within a strongly interacting fluid, of finite extent. interactions lead to the appearance of a source of energy - momentum within the fluid. the remnant jet that escapes the container is analyzed along with portions of the medium excited by the jet. we study the effect of a static versus a semi - realistic expanding medium, with jets traveling inward versus outward. we consider the medium response via recoils in partonic scatterings based on a weakly - coupled description and its combination with hydrodynamical medium response based on a strongly - coupled description, followed by incorporation into a jet. the effects of these limits on the reconstructed energy, momentum and mass of the jet, as a function of the angle away from the original parton direction are studied. it is demonstrated that different flow velocity configurations in the medium produce considerable differences in jet observables. this work highlights the importance of accurate dynamical modeling of the soft medium as a foundation on which to calculate jet modification, and casts skepticism on results obtained without such modeling.
arxiv:2001.08321
the erisk laboratory aims to address issues related to early risk detection on the web. in this year ' s edition, three tasks were proposed, where task 2 was about early detection of signs of anorexia. early risk detection is a problem where precision and speed are two crucial objectives. our research group solved task 2 by defining a cpi + dmc approach, addressing both objectives independently, and a time - aware approach, where precision and speed are considered a combined single - objective. we implemented the last approach by explicitly integrating time during the learning process, considering the erde { \ theta } metric as the training objective. it also allowed us to incorporate temporal metrics to validate and select the optimal models. we achieved outstanding results for the erde50 metric and ranking - based metrics, demonstrating consistency in solving erd problems.
arxiv:2410.17963
a computational system is called autonomous if it is able to make its own decisions, or take its own actions, without human supervision or control. the capability and spread of such systems have reached the point where they are beginning to touch much of everyday life. however, regulators grapple with how to deal with autonomous systems, for example how could we certify an unmanned aerial system for autonomous use in civilian airspace? we here analyse what is needed in order to provide verified reliable behaviour of an autonomous system, analyse what can be done as the state - of - the - art in automated verification, and propose a roadmap towards developing regulatory guidelines, including articulating challenges to researchers, to engineers, and to regulators. case studies in seven distinct domains illustrate the article.
arxiv:2001.09124
in our research we test data and models for the recognition of housing quality in the city of amsterdam from ground - level and aerial imagery. for ground - level images we compare google streetview ( gsv ) to flickr images. our results show that gsv predicts the most accurate building quality scores, approximately 30 % better than using only aerial images. however, we find that through careful filtering and by using the right pre - trained model, flickr image features combined with aerial image features are able to halve the performance gap to gsv features from 30 % to 15 %. our results indicate that there are viable alternatives to gsv for liveability factor prediction, which is encouraging as gsv images are more difficult to acquire and not always available.
arxiv:2403.08915
we prove that, for any $ n \ geq 2 $, the classes of $ \ rm { fp } _ { n } $ - injective modules and of $ \ rm { fp } _ n $ - flat modules are both covering and preenveloping over any ring $ r $. this includes the case of $ \ rm { fp } _ { \ infty } $ - injective and $ \ rm { fp } _ { \ infty } $ - flat modules ( i. e. absolutely clean and, respectively, level modules ). then we consider a generalization of the class of ( strongly ) gorenstein flat modules - the ( strongly ) gorenstein ac - flat modules ( cycles of exact complexes of flat modules that remain exact when tensored with any absolutely clean module ). we prove that some of the properties of gorenstein flat modules extend to the class of gorenstein ac - flat modules ; for example we show that this class is precovering over any ring $ r $. we also show that ( as in the case of gorenstein flat modules ) every gorenstein ac - flat module is a direct summand of a strongly gorenstein ac - flat module. when $ r $ is such that the class of gorenstein ac - flat modules is closed under extensions, the converse is also true. we also prove that if the class of gorenstein ac - flat modules is closed under extensions, then this class of modules is covering.
arxiv:1709.10160
we prove, for a class of first order differential operators that contains the stein - weiss, dirac and penrose twistor operators, a family of kato inequalities that interpolates between the classical and the refined kato. for the hodge - de - rham operator we get a more detailed result. as a corollary, we get various kato inequalities from the literature.
arxiv:2410.15520
a two - dimensional grid with dots is called a \ emph { configuration with distinct differences } if any two lines which connect two dots are distinct either in their length or in their slope. these configurations are known to have many applications such as radar, sonar, physical alignment, and time - position synchronization. rather than restricting dots to lie in a square or rectangle, as previously studied, we restrict the maximum distance between dots of the configuration ; the motivation for this is a new application of such configurations to key distribution in wireless sensor networks. we consider configurations in the hexagonal grid as well as in the traditional square grid, with distances measured both in the euclidean metric, and in the manhattan or hexagonal metrics. we note that these configurations are confined inside maximal anticodes in the corresponding grid. we classify maximal anticodes for each diameter in each grid. we present upper bounds on the number of dots in a pattern with distinct differences contained in these maximal anticodes. our bounds settle ( in the negative ) a question of golomb and taylor on the existence of honeycomb arrays of arbitrarily large size. we present constructions and lower bounds on the number of dots in configurations with distinct differences contained in various two - dimensional shapes ( such as anticodes ) by considering periodic configurations with distinct differences in the square grid.
arxiv:0811.3832
we define the singular support of an $ \ ell $ - adic sheaf on a smooth variety over any field. to do this, we combine beilinson ' s construction of the singular support for torsion \ ' etale sheaves with hansen and scholze ' s theory of universal local acyclicity for $ \ ell $ - adic sheaves.
arxiv:2309.02587
the distribution of the spectral numbers of an isolated hypersurface singularity is studied in terms of the bernoulli moments. these are certain rational linear combinations of the higher moments of the spectral numbers. they are related to the generalized bernoulli polynomials. we conjecture that their signs are alternating and prove this in many cases. one motivation for the bernoulli moments comes from the comparison with compact complex manifolds.
arxiv:math/0405501
hermann schwarz, while studying complex analysis, introduced the geometric interpretation for the poisson kernel in 1890. we shall see here that the geometric interpretation can be useful to develop a new approach to some old classical problems as well as to obtain several new results, mostly related to hyperbolic geometry. for example, we obtain one radius theorem saying that any two radial eigenfunctions of a hyperbolic laplacian assuming the value 1 at the origin can not assume any other common value within some interval [ 0, p ], where the length of this interval depends only on the location of the eigenvalues on the complex plane and does not depend on the distance between them.
arxiv:0912.0223
( pre ) closure spaces are a generalization of topological spaces covering also the notion of neighbourhood in discrete structures, widely used to model and reason about spatial aspects of distributed systems. in this paper we introduce an abstract theoretical framework for the systematic investigation of the logical aspects of closure spaces. to this end, we introduce the notion of closure ( hyper ) doctrines, i. e. doctrines endowed with inflationary operators ( and subject to suitable conditions ). the generality and effectiveness of this concept is witnessed by many examples arising naturally from topological spaces, fuzzy sets, algebraic structures, coalgebras, and covering at once also known cases such as kripke frames and probabilistic frames ( i. e., markov chains ). then, we show how spatial logical constructs concerning surroundedness and reachability can be interpreted by endowing hyperdoctrines with a general notion of paths. by leveraging general categorical constructions, we provide axiomatisations and sound and complete semantics for various fragments of logics for closure operators. therefore, closure hyperdoctrines are useful both for refining and improving the theory of existing spatial logics, but especially for the definition of new spatial logics for new applications.
arxiv:2007.04213
the methodology of the riemann - hilbert ( rh ) factorisation approach for lax - pair isospectral deformations is used to derive, in the solitonless sector, the leading - order asymptotics as $ t \ to \ pm \ infty $ $ ( x / t \ sim \ mathcal { o } ( 1 ) ) $ of solutions to the cauchy problem for the defocusing non - linear schr \ " { o } dinger equation ( d $ { } _ { f } $ nlse ), $ \ mi \ partial _ { t } u + \ partial _ { x } ^ { 2 } u - 2 ( | u | ^ { 2 } - 1 ) u = 0 $, with ( finite - density ) initial data $ u ( x, 0 ) = _ { x \ to \ pm \ infty } \ exp ( \ tfrac { \ mi ( 1 \ mp 1 ) \ theta } { 2 } ) ( 1 + o ( 1 ) ) $, $ \ theta \ in [ 0, 2 \ pi ) $. a limiting case of these asymptotics related to the rh problem for the painlev \ ' { e } ii equation, or one of its special reductions, is also identified.
arxiv:nlin/0110024
predictive uncertainty estimation remains a challenging problem precluding the use of deep neural networks as subsystems within safety - critical applications. aleatoric uncertainty is a component of predictive uncertainty that cannot be reduced through model improvements. uncertainty propagation seeks to estimate aleatoric uncertainty by propagating input uncertainties to network predictions. existing uncertainty propagation techniques use one - way information flows, propagating uncertainties layer - by - layer or across the entire neural network while relying either on sampling or analytical techniques for propagation. motivated by the complex information flows within deep neural networks ( e. g. skip connections ), we developed and evaluated a novel approach by posing uncertainty propagation as a non - linear optimization problem using factor graphs. we observed statistically significant improvements in performance over prior work when using factor graphs across most of our experiments that included three datasets and two neural network architectures. our implementation balances the benefits of sampling and analytical propagation techniques, which we believe, is a key factor in achieving performance improvements.
arxiv:2312.05946
the functional relation of the riemann z \ ^ eta function provides us with neither the nature nor the expression of z \ ^ eta at positive odd numbers. from the function $ f ( z ) = \ frac { z ^ { - 2n } } { e ^ z - 1 } $, we find a functional relation involving $ \ zeta ( 4n - 1 ) $, $ \ zeta ( 2p ) $ and $ \ zeta ( 4n - 1 - 2p ) $. it is given by : \ begin { equation } \ zeta ( 4n - 1 ) = \ frac { 1 } { 2n - 1 } \ sum _ { p = 1 } ^ { 2n - 2 } \ zeta ( 2p ) \ zeta ( 4n - 1 - 2p ). \ end { equation } $ n = 2, 3, 4, 5, 6,... $ from this formula we introduce a new approach to study the nature of $ \ zeta $ on these integers.
arxiv:2403.17997
the discovery, representation and reconstruction of ( technical ) integration networks from network mining ( nm ) raw data is a difficult problem for enterprises. this is due to large and complex it landscapes within and across enterprise boundaries, heterogeneous technology stacks, and fragmented data. to remain competitive, visibility into the enterprise and partner it networks on different, interrelated abstraction levels is desirable. we present an approach to represent and reconstruct the integration networks from nm raw data using logic programming based on first - order logic. the raw data expressed as integration network model is represented as facts, on which rules are applied to reconstruct the network. we have built a system that is used to apply this approach to real - world enterprise landscapes and we report on our experience with this system.
arxiv:1301.1332
we study the dynamical properties of irregular model sets and show that the translation action on their hull always admits an infinite independence set. the dynamics can therefore not be tame and the topological sequence entropy is strictly positive. extending the proof to a more general setting, we further obtain that tame implies regular for almost automorphic group actions on compact spaces. in the converse direction, we show that even in the restrictive case of euclidean cut and project schemes irregular model sets may be uniquely ergodic and have zero topological entropy. this provides negative answers to questions by schlottmann and moody in the euclidean setting.
arxiv:1811.06283
the direct manipulation of an organism ' s genes. unlike traditional breeding, an indirect method of genetic manipulation, genetic engineering utilizes modern tools such as molecular cloning and transformation to directly alter the structure and characteristics of target genes. genetic engineering techniques have found success in numerous applications. some examples include the improvement of crop technology ( not a medical application, but see biological systems engineering ), the manufacture of synthetic human insulin through the use of modified bacteria, the manufacture of erythropoietin in hamster ovary cells, and the production of new types of experimental mice such as the oncomouse ( cancer mouse ) for research. = = = neural engineering = = = neural engineering ( also known as neuroengineering ) is a discipline that uses engineering techniques to understand, repair, replace, or enhance neural systems. neural engineers are uniquely qualified to solve design problems at the interface of living neural tissue and non - living constructs. neural engineering can assist with numerous things, including the future development of prosthetics. for example, cognitive neural prosthetics ( cnp ) are being heavily researched and would allow for a chip implant to assist people who have prosthetics by providing signals to operate assistive devices. = = = pharmaceutical engineering = = = pharmaceutical engineering is an interdisciplinary science that includes drug engineering, novel drug delivery and targeting, pharmaceutical technology, unit operations of chemical engineering, and pharmaceutical analysis. it may be deemed as a part of pharmacy due to its focus on the use of technology on chemical agents in providing better medicinal treatment. = = hospital and medical devices = = this is an extremely broad category — essentially covering all health care products that do not achieve their intended results through predominantly chemical ( e. g., pharmaceuticals ) or biological ( e. g., vaccines ) means, and do not involve metabolism. a medical device is intended for use in : the diagnosis of disease or other conditions in the cure, mitigation, treatment, or prevention of disease. some examples include pacemakers, infusion pumps, the heart - lung machine, dialysis machines, artificial organs, implants, artificial limbs, corrective lenses, cochlear implants, ocular prosthetics, facial prosthetics, somato prosthetics, and dental implants. stereolithography is a practical example of medical modeling being used to create physical objects. beyond modeling organs and the human body, emerging engineering techniques are also currently used in the research and development of new devices for innovative the
https://en.wikipedia.org/wiki/Biomedical_engineering
quantum key distribution ( qkd ) is a revolutionary cryptography response to the rapidly growing cyberattacks threat posed by quantum computing. yet, the roadblock limiting the vast expanse of secure quantum communication is the exponential decay of the transmitted quantum signal with the distance. today ' s quantum cryptography is trying to solve this problem by focusing on quantum repeaters. however, efficient and secure quantum repetition at sufficient distances is still far beyond modern technology. here, we shift the paradigm and build the long - distance security of the qkd upon the quantum foundations of the second law of thermodynamics and end - to - end physical oversight over the transmitted optical quantum states. our approach enables us to realize quantum states ' repetition by optical amplifiers keeping states ' wave properties and phase coherence. the unprecedented secure distance range attainable through our approach opens the door for the development of scalable quantum - resistant communication networks of the future.
arxiv:2301.10610
coded - caching is a promising technique to reduce the peak rate requirement of backhaul links during high traffic periods. in this letter, we study the effect of adaptive transmission on the performance of coded - caching based networks. particularly, concentrating on the reduction of backhaul peak load during the high traffic periods, we develop adaptive rate and power allocation schemes maximizing the network successful transmission probability, which is defined as the probability of the event with all cache nodes decoding their intended signals correctly. moreover, we study the effect of different message decoding and buffering schemes on the system performance. as we show, the performance of coded - caching networks is considerably affected by rate / power allocation as well as the message decoding / buffering schemes.
arxiv:2103.07234
the dispersion relations for leptons in the symmetric phase of the electroweak model in the presence of a constant hypermagnetic field are investigated. the one - loop fermion self - energies are calculated in the lowest landau level approximation and used to show that the hypermagnetic field forbids the generation of the ' ' effective mass ' ' found as a pole of the fermions ' propagators at high temperature and zero fields. in the considered approximation leptons behave as massless particles propagating only along the direction of the external field. the reported results can be of interest for the cosmological implications of primordial hypermagnetic fields.
arxiv:hep-ph/0204126
every semigroup which is a finite disjoint union of copies of the free mono - genic semigroup ( natural numbers under addition ) has soluble word prob - lem and soluble membership problem. efficient algorithms are given for both problems.
arxiv:1503.06818
we consider a generalization of the full symmetric toda hierarchy where the matrix $ \ tilde { l } $ of the lax pair is given by $ \ tilde { l } = ls $, with a full symmetric matrix $ l $ and a nondegenerate diagonal matrix $ s $. the key feature of the hierarchy is that the inverse scattering data includes a class of noncompact groups of matrices, such as $ o ( p, q ) $. we give an explicit formula for the solution to the initial value problem of this hierarchy. the formula is obtained by generalizing the orthogonalization procedure of szeg \ " { o }, or the qr factorization method of symes. the behaviors of the solutions are also studied. generically, there are two types of solutions, having either sorting property or blowing up to infinity in finite time. the $ \ tau $ - function structure for the tridiagonal hierarchy is also studied.
arxiv:solv-int/9505004
this paper reviews the use of relatively new manufacturing method called as additive manufacturing, most often mentioned as 3d printing in fabrication of high performance superalloys. the overview of the article describes the structure, property, processing, performance relationship of the fabrication process and the superalloys. the manufacturing methods such as electron beam melting, laser beam melting and direct energy deposition used to fabricate commercially available alloys are explained. the microstructure / grain structure resulting from directional building, complex thermal cycles is discussed. an overview of the properties of the superalloys and their performance as well as applications is presented.
arxiv:1805.11664
recently, both charge density wave ( cdw ) and superconductivity have been observed in kagome compounds $ a $ v $ _ 3 $ sb $ _ 5 $. however, the nature of cdw that results in many novel charge modulations is still under hot debate. by means of the first - principles calculations, we discover two kinds of cdw states, the trimerized and hexamerized 2 $ \ times $ 2 phase and dimerized 4 $ \ times $ 1 phase existing in $ a $ v $ _ 3 $ sb $ _ 5 $. our phonon excitation spectrum and electronic lindhard function calculations reveal that the most intensive structural instability in $ a $ v $ _ 3 $ sb $ _ 5 $ originates from a combined in - plane vibration mode of v atoms through the electron - phonon coupling, rather than the fermi surface nesting effect. crucially, a metastable 4 $ \ times $ 1 phase with v - v dimer pattern and twofold symmetric bowtie shaped charge modulation is revealed in csv $ _ 3 $ sb $ _ 5 $, implying that both dimerization and trimerization exist in the v kagome layers. these results provide essential understanding of cdw instability and new thoughts for the novel charge modulation patterns.
arxiv:2111.07314
we propose a data - driven learned sky model, which we use for outdoor lighting estimation from a single image. as no large - scale dataset of images and their corresponding ground truth illumination is readily available, we use complementary datasets to train our approach, combining the vast diversity of illumination conditions of sun360 with the radiometrically calibrated and physically accurate laval hdr sky database. our key contribution is to provide a holistic view of both lighting modeling and estimation, solving both problems end - to - end. from a test image, our method can directly estimate an hdr environment map of the lighting without relying on analytical lighting models. we demonstrate the versatility and expressivity of our learned sky model and show that it can be used to recover plausible illumination, leading to visually pleasant virtual object insertions. to further evaluate our method, we capture a dataset of hdr 360 { \ deg } panoramas and show through extensive validation that we significantly outperform previous state - of - the - art.
arxiv:1905.03897
signal amplitude envelope allows to obtain information of the signal features for different applications. it is widely used to pre - process sound and other signals of physiological origin in human or animal studies. in order to obtain signal envelope, a fast and simple algorithm is proposed based on peak detection. the procedure presented here is quite straightforward and can be used in different applications of time series analysis. it can be applied in signals with different origin and frequency content. this algorithm presented is implemented based on python libraries. an open source code is also provided. aspects on the parameter selection are discussed to adapt the same method for different applications. also traditional methods are revisited and compared with the one proposed here.
arxiv:1703.06812
a homogeneous color magnetic field is known to be unstable for the fluctuations perpendicular to the field in the color space ( the nielsen - olesen instability ). we argue that these unstable modes, exponentially growing, generate an azimuthal magnetic field with the original field being in the z - direction, which causes the nielsen - olesen instability for another type of fluctuations. the growth rate of the latter unstable mode increases with the momentum p _ z and can become larger than the former ' s growth rate which decreases with increasing p _ z. these features may explain the interplay between the primary and secondary instabilities observed in the real - time simulation of a non - expanding glasma, i. e., stochastically generated anisotropic yang - mills fields without expansion.
arxiv:0903.2930
we implement a complete randomized benchmarking protocol on a system of two superconducting qubits. the protocol consists of randomizing over gates in the clifford group, which experimentally are generated via an improved two - qubit cross - resonance gate implementation and single - qubit unitaries. from this we extract an optimal average error per clifford of 0. 0936. we also perform an interleaved experiment, alternating our optimal two - qubit gate with random two - qubit clifford gates, to obtain a two - qubit gate error of 0. 0653. we compare these values with a two - qubit gate error of ~ 0. 12 obtained from quantum process tomography, which is likely limited by state preparation and measurement errors.
arxiv:1210.7011
we construct off - shell vertex operators for the bosonic spinning particle. using the language of homotopy algebras, we show that the full nonlinear structure of yang - mills theory, including its gauge transformations, is encoded in the commutator algebra of the worldline vertex operators. to do so, we deform the worldline brst operator by coupling it to a background gauge field and show that the coupling is consistent on a suitable truncation of the hilbert space. on this subspace, the square of the brst operator is proportional to the yang - mills field equations, which we interpret as an operator maurer - cartan equation for the background. this allows us to define further vertex operators in different ghost numbers, which correspond to the entire $ l _ \ infty $ algebra of yang - mills theory. besides providing a precise map of a fully nonlinear field theory into a worldline model, we expect these results will be valuable to investigate the kinematic algebra of yang - mills, which is central to the double copy program.
arxiv:2406.19045
in this article, we compare the results of non - equilibrium ( nemd ) and equilibrium ( emd ) molecular dynamics methods to compute the thermal conductance at the interface between solids. we propose to probe the thermal conductance using equilibrium simulations measuring the decay of the thermally induced energy fluctuations of each solid. we also show that nemd and emd give generally speaking inconsistent results for the thermal conductance : green kubo simulations probe the landauer conductance between two solids which assumes phonons on both sides of the interface to be at equilibrium. on the other hand, we show that nemd give access to the out - of - equilibrium interfacial conductance consistent with the interfacial flux describing phonon transport in each solid. the difference may be large and reaches typically a factor 5 for interfaces between usual semi - conductors. we analyze finite size effects for the two determinations of the interfacial thermal conductance, and show that the equilibrium simulations suffer from severe size effects as compared to nemd. we also compare the predictions of the two above mentioned methods - emd and nemd - regarding the interfacial conductance of a series of mass mismatched lennard - jones solids. we show that the kapitza conductance obtained with emd can be well described using the classical diffuse mismatch model ( dmm ). on the other hand, nemd simulations results are consistent with a out - of - equilibrium generalisation of the acoustic mismatch model ( amm ). these considerations are important in rationalizing previous results obtained using molecular dynamics, and help in pinpointing the physical scattering mechanisms taking place at atomically perfect interfaces between solids, which is a prerequesite to understand interfacial heat transfer across real interfaces.
arxiv:1209.3485
we show, conditional on a uniform version of the prime k - tuples conjecture, that there are x ( log x ) ^ { - 1 + o ( 1 ) } numbers not exceeding x common to the ranges of euler ' s function phi ( n ) and the sum - of - divisors function sigma ( m ).
arxiv:1010.5427
the status of numerical hydrodynamical models for planetary nebulae is reviewed. since all of the numerical work is based on the interacting winds model, we start with a description of this model and give an overview of the early analytical and numerical models. subsequently we address the numerical models which include radiation effects, first of all the ones which neglect any effects of stellar evolution. these ` constant environment ' models are shown to closely match typical observed nebulae, both in images and kinematic data. this shows that the basic generalized interacting winds model gives a good description of the situation in aspherical pne. next we discuss models that do include the effects of stellar and fast wind evolution. this introduces several new effects, the most important of which are the formation of a surrounding attached envelope, and the modification of the expansion of the nebula, which helps in creating aspherical pne very early on in their evolution. the ionization of the slow wind also leads to a gradual smoothing out of its aspherical character, working against aspherical pne forming in later stages. finally we discuss some applications of the model to nebular problems.
arxiv:astro-ph/9410057
the relationship between brains and computers is often taken to be merely metaphorical. however, genuine computational systems can be implemented in virtually any media ; thus, one can take seriously the view that brains literally compute. but without empirical criteria for what makes a physical system genuinely a computational one, computation remains a matter of perspective, especially for natural systems ( e. g., brains ) that were not explicitly designed and engineered to be computers. considerations from real examples of physical computers - both analog and digital, contemporary and historical - make clear what those empirical criteria must be. finally, applying those criteria to the brain shows how we can view the brain as a computer ( probably an analog one at that ), which, in turn, illuminates how that claim is both informative and falsifiable.
arxiv:2208.12032
in this paper, we prove a theorem on tight paths in convex geometric hypergraphs, which is asymptotically sharp in infinitely many cases. our geometric theorem is a common generalization of early results of hopf and pannwitz [ 12 ], sutherland [ 19 ], kupitz and perles [ 16 ] for convex geometric graphs, as well as the classical erd \ h { o } s - gallai theorem [ 6 ] for graphs. as a consequence, we obtain the first substantial improvement on the tur \ ' an problem for tight paths in uniform hypergraphs.
arxiv:2002.09457
the following convective brinkman - forchheimer ( cbf ) equations ( or damped navier - stokes equations ) with potential \ begin { equation * } \ frac { \ partial \ boldsymbol { y } } { \ partial t } - \ mu \ delta \ boldsymbol { y } + ( \ boldsymbol { y } \ cdot \ nabla ) \ boldsymbol { y } + \ alpha \ boldsymbol { y } + \ beta | \ boldsymbol { y } | ^ { r - 1 } \ boldsymbol { y } + \ nabla p + \ psi ( \ boldsymbol { y } ) \ ni \ boldsymbol { g }, \ \ nabla \ cdot \ boldsymbol { y } = 0, \ end { equation * } in a $ d $ - dimensional torus is considered in this work, where $ d \ in \ { 2, 3 \ } $, $ \ mu, \ alpha, \ beta > 0 $ and $ r \ in [ 1, \ infty ) $. for $ d = 2 $ with $ r \ in [ 1, \ infty ) $ and $ d = 3 $ with $ r \ in [ 3, \ infty ) $ ( $ 2 \ beta \ mu \ geq 1 $ for $ d = r = 3 $ ), we establish the existence of \ textsf { \ emph { a unique global strong solution } } for the above multi - valued problem with the help of the \ textsf { \ emph { abstract theory of $ m $ - accretive operators } }. % for nonlinear differential equations of accretive type in banach spaces. moreover, we demonstrate that the same results hold \ textsf { \ emph { local in time } } for the case $ d = 3 $ with $ r \ in [ 1, 3 ) $ and $ d = r = 3 $ with $ 2 \ beta \ mu < 1 $. we explored the $ m $ - accretivity of the nonlinear as well as multi - valued operators, yosida approximations and their properties, and several higher order energy estimates in the proofs. for $ r \ in [ 1, 3 ] $, we { quantize ( modify ) } the navier - stokes nonlinearity $ ( \ boldsymbol { y } \ cdot
arxiv:2301.01527
this letter reports on the photometric detection of transits of the neptune - mass planet orbiting the nearby m - dwarf star gj 436. it is by far the closest, smallest and least massive transiting planet detected so far. its mass is slightly larger than neptune ' s at m = 22. 6 + - 1. 9 m _ earth. the shape and depth of the transit lightcurves show that it is crossing the host star disc near its limb ( impact parameter 0. 84 + - 0. 03 ) and that the planet size is comparable to that of uranus and neptune, r = 25200 + - 2200 km = 3. 95 + - 0. 35 r _ earth. its main constituant is therefore very likely to be water ice. if the current planet structure models are correct, an outer layer of h / he constituting up to ten percent in mass is probably needed on top of the ice to account for the observed radius.
arxiv:0705.2219
affordance theory proposes that the use of an object is intrinsically determined by its physical shape. however, when translated to digital objects, affordance theory loses explanatory power, as the same physical affordances, for example, screens, can have many socially constructed meanings and can be used in many ways. furthermore, the affordance theory core idea that physical affordances have intrinsic, pre - cognitive meaning cannot be sustained for the highly symbolic nature of digital affordances, which gain meaning through social learning and use. a possible way to solve this issue is to think about on - screen affordances as symbols and affordance research as a semiotic and linguistic enterprise.
arxiv:2003.02307
in this note, we demonstrate that an incorrect statement has been propagated in multiple papers, stemming from the substitution of ` ` lim ' ' with ` ` limsup ' ' for a sequence in lemma 1. 3 of the paper [ j. schu : weak and strong convergence to fixed points of asymptotically nonexpansive mappings, bull. \ austral. \ math. \ soc. \ 43 ( 1991 ), 153 - - 159 ]. this occurred over a span of more than 20 years, with the earliest paper we identified using this incorrect statement dating back to 2002.
arxiv:2406.16378
the dynamic impedance of a sphere oscillating in an elastic medium is considered. oestreicher ' s formula for the impedance of a sphere bonded to the surrounding medium can be expressed simply in terms of three lumped impedances associated with the displaced mass and the longitudinal and transverse waves. if the surface of the sphere slips while the normal velocity remains continuous, the impedance formula is modified by adjusting the definition of the transverse impedance to include the interfacial impedance.
arxiv:cond-mat/0601186
soxs ( son of x - shooter ) will be the new medium resolution ( r $ \ sim $ 4500 for a 1 arcsec slit ), high - efficiency, wide band spectrograph for the eso - ntt telescope on la silla. it will be able to cover simultaneously optical and nir bands ( 350 - 2000nm ) using two different arms and a pre - slit common path feeding system. soxs will provide an unique facility to follow up any kind of transient event with the best possible response time in addition to high efficiency and availability. furthermore, a calibration unit and an acquisition camera system with all the necessary relay optics will be connected to the common path sub - system. the acquisition camera, working in optical regime, will be primarily focused on target acquisition and secondary guiding, but will also provide an imaging mode for scientific photometry. in this work we give an overview of the acquisition camera system for soxs with all the different functionalities. the optical and mechanical design of the system are also presented together with the preliminary performances in terms of optical quality, throughput, magnitude limits and photometric properties.
arxiv:1809.01526
text normalization ( tn ) and inverse text normalization ( itn ) are essential preprocessing and postprocessing steps for text - to - speech synthesis and automatic speech recognition, respectively. many methods have been proposed for either tn or itn, ranging from weighted finite - state transducers to neural networks. despite their impressive performance, these methods aim to tackle only one of the two tasks but not both. as a result, in a complete spoken dialog system, two separate models for tn and itn need to be built. this heterogeneity increases the technical complexity of the system, which in turn increases the cost of maintenance in a production setting. motivated by this observation, we propose a unified framework for building a single neural duplex system that can simultaneously handle tn and itn. combined with a simple but effective data augmentation method, our systems achieve state - of - the - art results on the google tn dataset for english and russian. they can also reach over 95 % sentence - level accuracy on an internal english tn dataset without any additional fine - tuning. in addition, we also create a cleaned dataset from the spoken wikipedia corpora for german and report the performance of our systems on the dataset. overall, experimental results demonstrate the proposed duplex text normalization framework is highly effective and applicable to a range of domains and languages
arxiv:2108.09889
in the performance - based engineering ( pbe ) framework, uncertainties in system parameters, or modelling uncertainties, have been shown to have significant effects on capacity fragilities and annual collapse rates of buildings. yet, since modelling uncertainties are non - ergodic variables, their consideration in failure rate calculations offends the poisson assumption of independent crossings. this problem has been addressed in the literature, and errors found negligible for small annual collapse failure rates. however, the errors could be significant for serviceability limit states, and when failure rates are integrated in time, to provide lifetime failure probabilities. herein, we present a novel formulation to fully avoid the error in integration of non - ergodic variables. the proposed product - of - lognormals formulation is fully compatible with popular fragility modelling approaches in pbe context. moreover, we address collapse limit states of realistic reinforced concrete buildings, and find errors of the order of 5 to 8 % for 50 - year lifetimes, up to 14 % for 100 years. computation of accurate lifetime failure probabilities in a pbe context is clearly important, as it allows comparison with lifetime target reliability values for other structural analysis formulations.
arxiv:2210.07361
an approximate method is suggested to obtain analytical expressions for the eigenvalues and eigenfunctions of the some quantum optical models. the method is based on the lie - type transformation of the hamiltonians. in a particular case it is demonstrated that $ e \ times \ epsilon $ jahn - teller hamiltonian can easily be solved within the framework of the suggested approximation. the method presented here is conceptually simple and can easily be extended to the other quantum optical models. we also show that for a purely imaginary coupling the $ e \ times \ epsilon $ hamiltonian becomes non - hermitian but $ p \ sigma _ { 0 } $ - symmetric. possible generalization of this approach is outlined.
arxiv:quant-ph/0510219
using a limited, but representative sample of sources in the ism of our galaxy with published spectra from the infrared space observatory, we analyze flux ratios between the major mid - ir emission features ( efs ) centered around 6. 2, 7. 7, 8. 6 and 11. 3 microns, respectively. in a flux ratio - to - flux ratio plot of ef ( 6. 2 ) / ef ( 7. 7 ) as a function of ef ( 11. 3 ) / ef ( 7. 7 ), the sample sources form roughly a $ \ lambda $ - shaped locus which appear to trace, on an overall basis, the hardness of a local heating radiation field. but some driving parameters other than the radiation field may also be required for a full interpretation of this trend. on the other hand, the flux ratio of ef ( 8. 6 ) / ef ( 7. 7 ) shows little variation over the sample sources, except for two hii regions which have much higher values for this ratio due to an ` ` ef ( 8. 6 \ um ) anomaly, ' ' a phenomenon clearly associated with environments of an intense far - uv radiation field. if further confirmed on a larger database, these trends should provide crucial information on how the ef carriers collectively respond to a changing environment.
arxiv:astro-ph/9803080
the possibility of realizing the superradiant regime of electromagnetic emission by the assembly of quantum dots is considered. the overall dynamical process is analyzed in detail. it is shown that there can occur several qualitatively different stages of evolution. the process starts with dipolar waves triggering the spontaneous radiation of individual dots. this corresponds to the fluctuation stage, when the dots are not yet noticeably correlated with each other. the second is the quantum stage, when the dot interactions through the common radiation field become more important, but the coherence is not yet developed. the third is the coherent stage, when the dots radiate coherently, emitting a superradiant pulse. after the superradiant pulse, the system of dots relaxes to an incoherent state in the relaxation stage. if there is no external permanent pumping, or the effective dot interactions are weak, the system tends to a stationary state during the last stationary stage, when coherence dies out to a low, practically negligible, level. in the case of permanent pumping, there exists the sixth stage of pulsing superradiance, when the system of dots emits separate coherent pulses.
arxiv:1002.2322
the present article highlights an approach to generate contrasting patterns from drying droplets in a liquid bridge configuration, different from well - known coffee rings. reduction of the confinement distance ( the gap between the solid surfaces ) leads to systematized nano - particle agglomeration yielding to spokes - like patterns similar to those found on scallop shells instead of circumferential edge deposition. alteration of the confinement length modulates the curvature that entails variations in the evaporation flux across the liquid - vapor interface. consequently, flow inside different liquid bridges ( lbs ) varies significantly for different confinement lengths. small confinement lengths result in the stick - slip motion of squeezed liquid bridges. on the contrary, the stretched lbs exhibit pinned contact lines. we decipher a proposition that a drying liquid thin film present during dewetting near the three - phase contact line is responsible for the aligned deposition of particles. the confinement distance determines the height of this thin film, and its theoretical estimations are validated against the experimental observations using reflection interferometry, further exhibiting good agreement ( in order of magnitude ). modulating the particle size does not significantly influence the precipitate patterns ; however, particle concentration can substantially affect the deposition patterns. the differences in deposition patterns are attributed to the complex interplay of the gradient of evaporation flux induced motion of contact line in combination with the drying of thin liquid film during dewetting.
arxiv:2201.02382
a key prediction of the trap model for the new conducting state in 2d is that the resistivity turns upwards below some characteristic temperature, $ t _ { \ rm min } $. altshuler, maslov, and pudalov have argued that the reason why no upturn has been observed for the low density conducting samples is that the temperature was not low enough in the experiments. we show here that $ t _ { \ rm min } $ within the altshuler, maslov, and pudalov trap model actually increases with decreasing density, contrary to their claim. consequently, the trap model is not consistent with the experimental trends.
arxiv:cond-mat/9910122
the paper presents an evolutionary economic model for the price evolution of stocks. treating a stock market as a self - organized system governed by a fast purchase process and slow variations of demand and supply the model suggests that the short term price distribution has the form a logistic ( laplace ) distribution. the long term return can be described by laplace - gaussian mixture distributions. the long term mean price evolution is governed by a walrus equation, which can be transformed into a replicator equation. this allows quantifying the evolutionary price competition between stocks. the theory suggests that stock prices scaled by the price over all stocks can be used to investigate long - term trends in a fisher - pry plot. the price competition that follows from the model is illustrated by examining the empirical long - term price trends of two stocks.
arxiv:1607.01248
the nature of edge state transport in quantum hall systems has been studied intensely ever since halperin [ 1 ] noted its importance for the quantization of the hall conductance. since then, there have been many developments in the study of edge states in the quantum hall effect, including the prediction of multiple counter - propagating modes in the fractional quantum hall regime, the prediction of edge mode renormalization due to disorder, and studies of how the sample confining potential affects the edge state structure ( edge reconstruction ), among others. in this paper, we study edge transport for the $ \ nu _ { \ text { bulk } } = 2 / 3 $ edge in the disordered, fully incoherent transport regime. to do so, we use a hydrodynamic approximation for the calculation of voltage and temperature profiles along the edge of the sample. within this formalism, we study two different bare mode structures with tunneling : the original edge structure predicted by wen [ 2 ] and macdonald [ 3 ], and the more complicated edge structure proposed by meir [ 4 ], whose renormalization and transport characteristics were discussed by wang, meir and gefen ( wmg ) [ 5 ]. we find that in the fully incoherent regime, the topological characteristics of transport ( quantized electrical and heat conductance ) are intact, with finite size corrections which are determined by the extent of equilibration. in particular, our calculation of conductance for the wmg model in a double qpc geometry reproduce conductance results of a recent experiment by r. sabo, et al. [ 17 ], which are inconsistent with the model of macdonald. our results can be explained in the charge / neutral mode picture, with incoherent analogues of the renormalization fixed points of ref. [ 5 ]. additionally, we find diffusive $ ( \ sim1 / l ) $ conductivity corrections to the heat conductance in the fully incoherent regime for both models of the edge.
arxiv:1804.06611
geochemfoam is an open - source openfoam - based numerical modelling toolbox that includes a range of custom packages to solve complex flow processes including multiphase transport with interface transfer, single - phase flow in multiscale porous media, and reactive transport with mineral dissolution. in this paper, we present geochemfoam ' s novel numerical model for simulation of conjugate heat transfer in micro - ct images of porous media. geochemfoam uses the micro - continuum approach to describe the fluid - solid interface using the volume fraction of fluid and solid in each computational cell. the velocity field is solved using brinkman ' s equation with permeability calculated using the kozeny - carman equation which results in a near - zero permeability in the solid phase. conjugate heat transfer is then solved with heat convection where the velocity is non - zero, and the thermal conductivity is calculated as the harmonic average of phase conductivity weighted by the phase volume fraction. our model is validated by comparison with the standard two - medium approach for a simple 2d geometry. we then simulate conjugate heat transfer and calculate heat transfer coefficients for different flow regimes and injected fluid analogous to injection into a geothermal reservoir in a micro - ct image of bentheimer sandstone and perform a sensitivity analysis in a porous heat exchanger with a random sphere packing.
arxiv:2110.03311
in this paper, we study the nonlinear dissipative boussinesq equation in the whole space $ \ mathbb { r } ^ n $ with $ l ^ 1 $ integrable data. as our preparations, the optimal estimates as well as the optimal leading terms for the linearized model are derived by performing the wkb analysis and the fourier analysis. then, under some conditions on the power $ p $ of nonlinearity, we demonstrate global ( in time ) existence of small data sobolev solutions with different regularities to the nonlinear model by applying some fractional order interpolations, where the optimal growth ( $ n = 2 $ ) and decay ( $ n \ geqslant 3 $ ) estimates of solutions for large time are given. simultaneously, we get a new large time asymptotic profile of global ( in time ) solutions. these results imply some influence of dispersion and dissipation on qualitative properties of solution.
arxiv:2311.03802
we discuss higher loop corrections to gauge coupling renormalization in the context of gauge coupling unification via kaluza - klein thresholds. we show that in the case n = 1 supersymmetric compactifications the one - loop threshold contributions are dominant, while the higher loop correction are subleading. this is due to the fact that at heavy kaluza - klein levels the spectrum as well as the interactions are n = 2 supersymmetric. in particular, we give two different arguments leading to this result - one is field theoretic, while the second one utilizes the power of string perturbation techniques. to illustrate our discussions we perform explicit two - loop computations of various corrections to gauge couplings within this framework. we also remark on phenomenological applications of our discussions in the context of tev - scale brane world.
arxiv:hep-th/9905137
the mechanism of the transition of a dynamical system from quantum to classical mechanics is of continuing interest. practically it is of importance for the interpretation of multi - particle coincidence measurements performed at macroscopic distances from a microscopic reaction zone. here we prove the generalized iimaging theorem which shows that the spatial wave function of any multi - particle quantum system, propagating over distances and times large on an atomic scale but still microscopic, and subject to deterministic external fields and particle interactions, becomes proportional to the initial momentum wave function where the position and momentum coordinates define a classical trajectory. currently, the quantum to classical transition is considered to occur via decoherence caused by stochastic interaction with an environment. the imaging theorem arises from unitary schroedinger propagation and so is valid without any environmental interaction. it implies that a simultaneous measurement of both position and momentum will define a unique classical trajectory, whereas a less complete measurement of say position alone can lead to quantum interference effects.
arxiv:1601.02588
we study limit models in the abstract elementary class of modules with embeddings as algebraic objects. we characterize parametrized noetherian rings using the degree of injectivity of certain limit models. we show that the number of limit models and how close a ring is from being noetherian are inversely proportional. $ \ textbf { theorem. } $ let $ n \ geq 0 $ the following are equivalent. 1. $ r $ is left $ ( < \ aleph _ { n } ) $ - noetherian but not left $ ( < \ aleph _ { n - 1 } ) $ - noetherian. 2. the abstract elementary class of modules with embeddings has exactly $ n + 1 $ non - isomorphic $ \ lambda $ - limit models for every $ \ lambda \ geq ( \ operatorname { card } ( r ) + \ aleph _ 0 ) ^ + $ such that the class is stable in $ \ lambda $. we further show that there are rings such that the abstract elementary class of modules with embeddings has exactly $ \ kappa $ non - isomorphic $ \ lambda $ - limit models for every infinite cardinal $ \ kappa $.
arxiv:2405.20214
i review recent progress in perturbative qcd on two fronts : extending next - to - next - to - leading order qcd corrections to a broader range of collider processes, and applying twistor - space methods ( and related spinoffs ) to computations of multi - parton scattering amplitudes.
arxiv:hep-ph/0507064
a new method for approximating fractional derivatives of the gaussian function and dawson ' s integral are presented. unlike previous approaches, which are dominantly based on some discretization of riemann - liouville integral using polynomial or discrete fourier basis, we take an alternative approach which is based on expressing the riemann - liouville definition of the fractional integral for the semi - infinite axis in terms of a moment problem. as a result, fractional derivatives of the gaussian function and dawson ' s integral are expressed as a weighted sum of complex scaled gaussian and dawson ' s integral. error bounds for the approximation are provided. another distinct feature of the proposed method compared to the previous approaches, it can be extended to approximate partial derivative with respect to the order of the fractional derivative which may be used in pde constraint optimization problems.
arxiv:1709.02089
we study the freeze - in of gravitationally interacting dark matter in extra dimensions. focusing on a minimal dark matter candidate that only interacts with the sm via gravity in a five - dimensional model we find that a large range of dark matter and kaluza - klein graviton masses can lead to the observed relic density. the preferred values of the masses and the strength of the interaction make this scenario very hard to test in terrestrial experiments. however, significant parts of the parameter space lead to warm dark matter and can be tested by cosmological and astrophysical observations.
arxiv:2208.03153
we explicitly reorganise the partition function of an arbitrary cft in four spacetime dimensions into a heat kernel form for the dual string spectrum on ads ( 5 ). on very general grounds, the heat kernel answer can be expressed in terms of a convolution of the one - particle partition function of the four - dimensional cft. our methods are general and would apply for arbitrary dimensions, which we comment on.
arxiv:1212.1050
a detailed analysis is presented to demonstrate the capabilities of the lattice boltzmann method. thorough comparisons with other numerical solutions for the two - dimensional, driven cavity flow show that the lattice boltzmann method gives accurate results over a wide range of reynolds numbers. studies of errors and convergence rates are carried out. compressibility effects are quantified for different maximum velocities, and parameter ranges are found for stable simulations. the paper ' s objective is to stimulate further work using this relatively new approach for applied engineering problems in transport phenomena utilizing parallel computers.
arxiv:comp-gas/9401003
multi - view multi - human association and tracking ( mvmhat ), is a new but important problem for multi - person scene video surveillance, aiming to track a group of people over time in each view, as well as to identify the same person across different views at the same time, which is different from previous mot and multi - camera mot tasks only considering the over - time human tracking. this way, the videos for mvmhat require more complex annotations while containing more information for self learning. in this work, we tackle this problem with a self - supervised learning aware end - to - end network. specifically, we propose to take advantage of the spatial - temporal self - consistency rationale by considering three properties of reflexivity, symmetry and transitivity. besides the reflexivity property that naturally holds, we design the self - supervised learning losses based on the properties of symmetry and transitivity, for both appearance feature learning and assignment matrix optimization, to associate the multiple humans over time and across views. furthermore, to promote the research on mvmhat, we build two new large - scale benchmarks for the network training and testing of different algorithms. extensive experiments on the proposed benchmarks verify the effectiveness of our method. we have released the benchmark and code to the public.
arxiv:2401.17617
the composition of solar system surfaces can be inferred through reflectance and emission spectroscopy, by comparing these observations to laboratory measurements and radiative transfer models. while several populations of objects appear to be covered by sub - micrometre sized particles ( referred to as hyperfine ), there are limited studies on reflectance and emission of particulate surfaces composed of particles smaller than the visible and infrared wavelengths. we have undertaken an effort to determine the reflectance of hyperfine particulate surfaces in conjunction with high - porosity, in order to simulate the physical state of cometary surfaces and their related asteroids ( p - and d - types ). in this work, we present a technique developed to produce hyperfine particles of astrophysical relevant materials. hyperfine powders were prepared and measured in reflectance in the 0. 4 - 2. 6 micrometer range. these powders were then included in water ice particles, sublimated under vacuum, in order to produce a hyperporous sample of hyperfine material. when grinded below one micrometre, the four materials studied ( olivine, smectite, pyroxene and amorphous silica ), show strong decrease of their absorption features together with a blueing of the spectra. this small grain degeneracy implies that surfaces covered by hyperfine grains should show only shallow absorption features if any. these two effects, decrease of band depth and spectral blueing, appear magnified when the grains are incorporated in the hyperporous residue. we interpret the distinct behaviour between hyperporous and more compact surfaces by the distancing of individual grains and a decrease in the size of the elemental scatterers. this work implies that hyperfine grains are unabundant at the surfaces of s - or v - type asteroids, and that the blue nature of b - type may be related to a physical effect rather than a compositional effect.
arxiv:2010.16136
we study the topological order in rvb state derived from gutzwiller projection of bcs - like mean field state. we propose to construct the topological excitation on the projected rvb state through gutzwiller projection of mean field state with inserted $ z _ { 2 } $ flux tube. we prove that all projected rvb states derived from bipartite effective theories, no matter the gauge structure in the mean field ansatz, are positive definite in the sense of the marshall sign rule, which provides a universal origin for the absence of topological order in such rvb state.
arxiv:cond-mat/0405034
we have described a novel approach for training tabular data using the tabtransformer model with self - supervised learning. traditional machine learning models for tabular data, such as gbdt are being widely used though our paper examines the effectiveness of the tabtransformer which is a transformer based model optimised specifically for tabular data. the tabtransformer captures intricate relationships and dependencies among features in tabular data by leveraging the self - attention mechanism of transformers. we have used a self - supervised learning approach in this study, where the tabtransformer learns from unlabelled data by creating surrogate supervised tasks, eliminating the need for the labelled data. the aim is to find the most effective tabtransformer model representation of categorical and numerical features. to address the challenges faced during the construction of various input settings into the transformers. furthermore, a comparative analysis is also been conducted to examine performance of the tabtransformer model against baseline models such as mlp and supervised tabtransformer. the research has presented with a novel approach by creating various variants of tabtransformer model namely, binned - tt, vanilla - mlp - tt, mlp - based - tt which has helped to increase the effective capturing of the underlying relationship between various features of the tabular dataset by constructing optimal inputs. and further we have employed a self - supervised learning approach in the form of a masking - based unsupervised setting for tabular data. the findings shed light on the best way to represent categorical and numerical features, emphasizing the tabtransormer performance when compared to established machine learning models and other self - supervised learning methods.
arxiv:2401.15238
there is a classification by misiurewicz and ziemian of elements in homeo $ _ 0 ( \ mathbf { t } ^ 2 ) $ by their rotation set $ \ rho $, according to wether $ \ rho $ is a point, a segment or a set with nonempty interior. a recent classification of nonwandering elements in homeo $ _ 0 ( \ mathbf { t } ^ 2 ) $ by koropecki and tal has been given, according to the itrinsic underlying ambient where the dynamics takes place : planar, annular and strictly toral maps. we study the link between these two classifications, showing that, even abroad the nonwandering setting, annular maps are characterized by rotation sets which are \ textit { rational segments }. also, we obtain information on the \ textit { sublinear diffusion } of orbits in the - not very well understood - case that $ \ rho $ has nonempty interior.
arxiv:1311.0046
funded by the national institutes of health. in a clinical trial, the protocol is carefully designed to safeguard the health of the participants as well as answer specific research questions. a protocol describes what types of people may participate in the trial ; the schedule of tests, procedures, medications, and dosages ; and the length of the study. while in a clinical trial, participants following a protocol are seen regularly by research staff to monitor their health and to determine the safety and effectiveness of their treatment. since 1996, clinical trials conducted are widely expected to conform to and report the information called for in the consort statement, which provides a framework for designing and reporting protocols. though tailored to health and medicine, ideas in the consort statement are broadly applicable to other fields where experimental research is used. protocols will often address : safety : safety precautions are a valuable addition to a protocol, and can range from requiring goggles to provisions for containment of microbes, environmental hazards, toxic substances, and volatile solvents. procedural contingencies in the event of an accident may be included in a protocol or in a referenced sop. procedures : procedural information may include not only safety procedures but also procedures for avoiding contamination, calibration of equipment, equipment testing, documentation, and all other relevant issues. these procedural protocols can be used by skeptics to invalidate any claimed results if flaws are found. equipment used : equipment testing and documentation includes all necessary specifications, calibrations, operating ranges, etc. environmental factors such as temperature, humidity, barometric pressure, and other factors can often have effects on results. documenting these factors should be a part of any good procedure. reporting : a protocol may specify reporting requirements. reporting requirements would include all elements of the experiments design and protocols and any environmental factors or mechanical limitations that might affect the validity of the results. calculations and statistics : protocols for methods that produce numerical results generally include detailed formulas for calculation of results. a formula may also be included for preparation of reagents and other solutions required for the work. methods of statistical analysis may be included to guide interpretation of the data. bias : many protocols include provisions for avoiding bias in the interpretation of results. approximation error is common to all measurements. these errors can be absolute errors from limitations of the equipment or propagation errors from approximate numbers used in calculations. sample bias is the most common and sometimes the hardest bias to quantify. statisticians often go to great lengths to ensure that the sample used is representative. for instance political polls are best when restricted to likely voters and this is one
https://en.wikipedia.org/wiki/Protocol_(science)
this article points out some surprising similarities between a 1944 study by georgy udny yule and modern approaches to authorship attribution.
arxiv:2012.04796
is round and bounded on every side by the circumference of a solid sphere, has no beginning or end... " other advocates of a round earth included eusebius, hilary of poitiers, irenaeus, hippolytus of rome, firmicus maternus, ambrose, jerome, prudentius, favonius eulogius, and others. the only exceptions to this consensus up until the mid - fourth century were theophilus of antioch and lactantius, both of whom held anti - hellenistic views and associated the round - earth view with pagan cosmology. lactantius, a western christian writer and advisor to the first christian roman emperor, constantine, writing sometime between 304 and 313 ad, ridiculed the notion of antipodes and the philosophers who fancied that " the universe is round like a ball. they also thought that heaven revolves in accordance with the motion of the heavenly bodies.... for that reason, they constructed brass globes, as though after the figure of the universe. " the influential theologian and philosopher saint augustine, one of the four great church fathers of the western church, similarly objected to the " fable " of antipodes : but as to the fable that there are antipodes, that is to say, men on the opposite side of the earth, where the sun rises when it sets to us, men who walk with their feet opposite ours that is on no ground credible. and, indeed, it is not affirmed that this has been learned by historical knowledge, but by scientific conjecture, on the ground that the earth is suspended within the concavity of the sky, and that it has as much room on the one side of it as on the other : hence they say that the part that is beneath must also be inhabited. but they do not remark that, although it be supposed or scientifically demonstrated that the world is of a round and spherical form, yet it does not follow that the other side of the earth is bare of water ; nor even, though it be bare, does it immediately follow that it is peopled. for scripture, which proves the truth of its historical statements by the accomplishment of its prophecies, gives no false information ; and it is too absurd to say, that some men might have taken ship and traversed the whole wide ocean, and crossed from this side of the world to the other, and that thus even the inhabitants of that distant region are descended from that one first man. some historians do not
https://en.wikipedia.org/wiki/Flat_Earth
the proposed election system lies in ensuring that it is transparent and impartial. thus while the electoral system may vary from country to country, it has to take into account the peculiarities of every society while at the same time incorporating remedies to problems prevailing in the system. the electoral process expressed serious concerns regarding the independence of the election commission of pakistan, the restrictions on political parties and their candidates, the misuse of state resources, some unbalanced coverage in the state media, deficiencies in the compilation of the voting register and significant problems relating to the provision of id cards. the holding of a general election does not in itself guarantee the restoration of democracy. the unjustified interference with electoral arrangements, as detailed above, irrespective of the alleged motivation, resulted in serious flaws being inflicted on the electoral process. additionally, questions remain as to whether or not there will be a full transfer of power from a military to civilian administration. the independent study research has following modules : login / subscription module candidate subscription module vote casting module administration module intelligent decision data analysis module
arxiv:cs/0405105
in the present work we consider off - diagonal jacobi matrices with uncertainty in the position of sparse perturbations. we prove ( theorem 3. 2 ) that the sequence of pr \ " ufer angles ( \ theta _ { k } ^ { \ omega } ) _ { k \ geq 1 } is u. d mod \ pi for all \ phi \ in [ 0, \ pi ] with exception of the set of rational numbers and for almost every \ omega with respect to the product \ nu = \ prod _ { j \ geq 1 } \ nu _ { j } of uniform measures on { - j,..., j }. together with an improved criterion for pure point spectrum ( lemma 4. 1 ), this provides a simple and natural alternative proof of a result of zlatos ( j. funct. anal. \ textbf { 207 }, 216 - 252 ( 2004 ) ) : the existence of pure point ( p. p ) spectrum and singular continuous ( s. c. ) spectra on sets complementary to one another with respect to the essential spectrum [ - 2, 2 ], outside sets a _ { sc } and a _ { pp }, respectively, both of zero lebesgue measure ( theorem 2. 4 ). our method allows for an explicit characterization of a _ { pp }, which is seen to be also of dense p. p. type, and thus the spectrum is proved to be exclusively pure point on one subset of the essential spectrum.
arxiv:1006.2849
we present new abundances derived from cu i, cu ii, zn i, and zn ii lines in six warm ( 5766 < teff < 6427 k ), metal - poor ( - 2. 50 < [ fe / h ] < - 0. 95 ) dwarf and subgiant ( 3. 64 < log g < 4. 44 ) stars. these abundances are derived from archival high - resolution ultraviolet spectra from the space telescope imaging spectrograph on board the hubble space telescope and ground - based optical spectra from several observatories. ionized cu and zn are the majority species, and abundances derived from cu ii and zn ii lines should be largely insensitive to departures from local thermodynamic equilibrium ( lte ). we find good agreement between the [ zn / h ] ratios derived separately from zn i and zn ii lines, suggesting that departures from lte are, at most, minimal ( < 0. 1 dex ). we find that the [ cu / h ] ratios derived from cu ii lines are 0. 36 + / - 0. 06 dex larger than those derived from cu i lines in the most metal - poor stars ( [ fe / h ] < - 1. 8 ), suggesting that lte underestimates the cu abundance derived from cu i lines. the deviations decrease in more metal - rich stars. our results validate previous theoretical non - lte calculations for both cu and zn, supporting earlier conclusions that the enhancement of [ zn / fe ] in metal - poor stars is legitimate, and the deficiency of [ cu / fe ] in metal - poor stars may not be as large as previously thought.
arxiv:1803.09763
unsupervised learning has gained prominence in the big data era, offering a means to extract valuable insights from unlabeled datasets. deep clustering has emerged as an important unsupervised category, aiming to exploit the non - linear mapping capabilities of neural networks in order to enhance clustering performance. the majority of deep clustering literature focuses on minimizing the inner - cluster variability in some embedded space while keeping the learned representation consistent with the original high - dimensional dataset. in this work, we propose soft silhoutte, a probabilistic formulation of the silhouette coefficient. soft silhouette rewards compact and distinctly separated clustering solutions like the conventional silhouette coefficient. when optimized within a deep clustering framework, soft silhouette guides the learned representations towards forming compact and well - separated clusters. in addition, we introduce an autoencoder - based deep learning architecture that is suitable for optimizing the soft silhouette objective function. the proposed deep clustering method has been tested and compared with several well - studied deep clustering methods on various benchmark datasets, yielding very satisfactory clustering results.
arxiv:2402.00608
we address systematically an apparent non - physical behavior of the free energy moment generating function for several instances of the logarithmically correlated models : the fractional brownian motion with hurst index $ h = 0 $ ( fbm0 ) ( and its bridge version ), a 1d model appearing in decaying burgers turbulence with log - correlated initial conditions, and finally, the two - dimensional logrem introduced in [ cao et al., phys. rev. lett., 118, 090601 ] based on the 2d gaussian free field ( gff ) with background charges and directly related to the liouville field theory. all these models share anomalously large fluctuations of the associated free energy, with a variance proportional to the log of the system size. we argue that a seemingly non - physical vanishing of the moment generating function for some values of parameters is related to the termination point transition ( a. k. a pre - freezing ). we study the associated universal log corrections in the frozen phase, both for log - rems and for the standard rem, filling a gap in the literature. for the above mentioned integrable instances of logrems, we predict the non - trivial free energy cumulants describing non - gaussian fluctuations on the top of the gaussian with extensive variance. some of the predictions are tested numerically.
arxiv:1712.06023
various partial orders related to the structures of dual canonical monoids are investigated. it is shown that the nilpotent variety of a dual canonical monoid is equidimensional ; its dimension is found. it is shown in type a that certain intervals of the putcha poset of a dual canonical monoid are isomorphic to the renner monoids of matrices. the notion of a two - sided weak order on a normal reductive monoid is introduced. a criterion, in terms of type maps, for the covering relations in a two - sided weak order to have degree 2 is found. it is shown that, for the unique equivariant divisor of a dual canonical monoid ( the asymptotic semigroup ), the covering relations of the two - sided weak order are always of degree 1. these computations provide new insights for the two - sided weak orders on coxeter groups. in type a, some enumerative results for the covering relations are presented.
arxiv:1905.08316
the brst - antibrst invariant path integral formulation of classical mechanics of gozzi et al is generalized to pseudomechanics. it is shown that projections to physical propagators may be obtained by brst - antibrst invariant boundary conditions. the formulation is also viewed from recent group theoretical results within brst - antibrst invariant theories. a natural bracket expressed in terms of brst and antibrst charges in the extended formulation is shown to be equal to the poisson bracket. several remarks on the operator formulation are made.
arxiv:hep-th/0006177
the known extrasolar planets exhibit many interesting and surprising features - - extremely short - period orbits, high - eccentricity orbits, mean - motion and secular resonances, etc. - - and have dramatically expanded our appreciation of the diversity of possible planetary systems. in this review we summarize the orbital properties of extrasolar planets. one of the most remarkable features of extrasolar planets is their high eccentricities, far larger than seen in the solar system. we review theoretical explanations for large eccentricities and point out the successes and shortcomings of existing theories.
arxiv:astro-ph/0312045