In this paper, we introduce a symmetric-key Latin square image cipher (LSIC) for grayscale and color images. Our contributions to the image encryption community include: 1) we develop new Latin square image encryption primitives, including Latin square whitening, a Latin square S-box, and a Latin square P-box; 2) we provide a new way of integrating probabilistic encryption into image encryption by embedding random noise in the least significant image bit-plane; and 3) we construct LSIC from these Latin square image encryption primitives, all built on one keyed Latin square, in a new loom-like substitution-permutation network. Consequently, the proposed LSIC achieves many desired properties of a secure cipher, including a large key space, high key sensitivity, uniformly distributed ciphertext, excellent confusion and diffusion properties, semantic security, and robustness against channel noise. Theoretical analysis shows that LSIC resists many attack models, including brute-force, ciphertext-only, known-plaintext, and chosen-plaintext attacks. Experimental analysis, based on extensive simulations over the complete USC-SIPI Miscellaneous image dataset, demonstrates that LSIC outperforms or matches the state of the art set by many peer algorithms. Together, these analyses and results show that LSIC is well suited for digital image encryption. Finally, we open-source the LSIC MATLAB code at https://sites.google.com/site/tuftsyuewu/source-code.
arxiv:1204.2310
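The whitening primitive described in the abstract is easy to sketch. The following is a minimal illustration of the idea (an assumption-laden sketch, not the authors' exact construction): a Latin square is built from a key-seeded permutation, and whitening adds the tiled square to the image modulo 256, which makes it trivially invertible.

```python
import numpy as np

def keyed_latin_square(key: bytes, n: int = 256) -> np.ndarray:
    """Build an n x n Latin square from a key-seeded permutation:
    L[i, j] = perm[(i + j) mod n], so every row and every column
    contains each symbol 0..n-1 exactly once."""
    rng = np.random.default_rng(list(key))  # key bytes seed the PRNG
    perm = rng.permutation(n)
    idx = (np.arange(n)[:, None] + np.arange(n)[None, :]) % n
    return perm[idx]

def whiten(img: np.ndarray, L: np.ndarray) -> np.ndarray:
    """Latin square whitening: add the tiled square modulo 256."""
    h, w = img.shape
    n = L.shape[0]
    mask = np.tile(L, (h // n + 1, w // n + 1))[:h, :w]
    return (img.astype(np.int32) + mask) % 256

def unwhiten(img: np.ndarray, L: np.ndarray) -> np.ndarray:
    """Inverse of whiten: subtract the tiled square modulo 256."""
    h, w = img.shape
    n = L.shape[0]
    mask = np.tile(L, (h // n + 1, w // n + 1))[:h, :w]
    return (img.astype(np.int32) - mask) % 256
```

The Latin property follows because each row and column of `L` is a permutation of 0..255; the real cipher layers this whitening with keyed S-box and P-box stages.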
Executing quantum algorithms over distributed quantum systems requires quantum circuits to be divided into sub-circuits which communicate via entanglement-based teleportation. Naively mapping circuits to qubits over multiple quantum processing units (QPUs) results in large communication overhead, increasing both execution time and noise. This can be minimised by optimising the assignment of qubits to QPUs and the methods used for covering non-local operations. Formulations that are general enough to capture the spectrum of teleportation possibilities lead to complex problem instances which can be difficult to solve effectively. This highlights a need to exploit the wide range of heuristic techniques used in the graph partitioning literature. This paper formalises and extends existing constructions for graphical quantum circuit partitioning and designs a new objective function that captures further possibilities for non-local operations via nested state teleportation. We adapt the well-known Fiduccia-Mattheyses heuristic to the constraints and problem objective and explore multilevel techniques that coarsen hypergraphs and partition at multiple levels of granularity. We find that this reduces runtime and improves the solution quality of standard partitioning. We place these techniques within a larger framework, through which we can extract full distributed quantum circuits including teleportation instructions. We compare the entanglement requirements and runtimes with state-of-the-art methods, finding that we can achieve the lowest entanglement costs in most cases, while always being close to the best performing method. We achieve an average improvement of 33% over the next best performing method across a wide range of circuits. We also find that our techniques can scale to much larger circuit sizes than state-of-the-art methods, provided the number of partitions is not too large.
arxiv:2503.19082
Low-rank matrix estimation from incomplete measurements has recently received increased attention due to the emergence of several challenging applications, such as recommender systems; see in particular the famous Netflix challenge. While the behaviour of algorithms based on nuclear norm minimization is now well understood, an as yet unexplored avenue of research is the behaviour of Bayesian algorithms in this context. In this paper, we briefly review the priors used in the Bayesian literature for matrix completion. A standard approach is to assign an inverse gamma prior to the singular values of a certain singular value decomposition of the matrix of interest; this prior is conjugate. However, we show that two other types of priors (again on the singular values) may be conjugate for this model: a gamma prior and a discrete prior. Conjugacy is very convenient, as it makes it possible to implement either Gibbs sampling or variational Bayes. Interestingly enough, the maximum a posteriori under these different priors is related to nuclear norm minimization problems. We also compare all these priors on simulated datasets, and on the classical MovieLens and Netflix datasets.
arxiv:1406.1440
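The connection between MAP estimation and nuclear norm minimization mentioned above can be made concrete through singular value thresholding, the proximal operator of the nuclear norm. A generic sketch (not the paper's specific priors):

```python
import numpy as np

def svt(Y, tau):
    """Singular value thresholding: the proximal operator of the
    nuclear norm, i.e. argmin_X 0.5*||X - Y||_F^2 + tau*||X||_*.
    Each singular value of Y is shrunk toward zero by tau."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

Shrinking singular values this way is what a sufficiently peaked prior on the singular values accomplishes in the Bayesian picture; large enough `tau` zeroes out small singular values and lowers the rank.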
Coded aperture imaging (CAI) has been proposed as an alternative collimation technique in nuclear imaging. To maximize spatial resolution, small pinholes in the coded aperture mask are required. However, a high-resolution detector is then needed to correctly sample the point spread function (PSF) so that the Nyquist-Shannon sampling theorem remains satisfied. The disadvantage of smaller pixels, though, is the resulting higher Poisson noise. Thus, the aim of this paper was to investigate whether sufficiently accurate CAI reconstruction is achievable with a detector that undersamples the PSF. With the Monte Carlo simulation framework TOPAS, a test image with multiple spheres of different diameters was simulated based on the setup of an experimental gamma camera from previous work. Additionally, measured phantom data were acquired. The captured detector images were converted to low-resolution images of different pixel sizes according to the super-resolution factor $k$. Multiple analytical reconstruction methods and a machine learning approach were compared based on the contrast-to-noise ratio (CNR). We show that all reconstruction methods are able to reconstruct both the test image and the measured phantom data for $k \leq 7$. With a synthetic high-resolution PSF and upsampling of the simulated low-resolution detector image by bilinear interpolation, the CNR can be kept approximately constant. The results of this simulation study, together with additional validation on measured phantom data, indicate that an undersampling detector can be combined with small aperture holes. However, further experiments need to be conducted.
arxiv:2306.08483
Recently, prompt tuning \cite{lester2021power} has gradually become a new paradigm in NLP, which relies only on the representations of words, freezing the parameters of pre-trained language models (PLMs), to obtain remarkable performance on downstream tasks. It maintains consistency with the masked language model (MLM) \cite{devlin2018bert} task used in pre-training, and avoids some issues that may arise during fine-tuning. Naturally, we consider that the "[MASK]" tokens carry more useful information than other tokens, because the model combines them with context to predict the masked tokens. Current prompt tuning methods suffer from a serious problem of random composition of the answer tokens when predicting multiple words, so that they have to map tokens to labels with the help of a verbalizer. In response to this issue, we propose a new \textbf{Pro}mpt \textbf{Tu}ning method based on "[\textbf{M}ask]" (\textbf{Protum}) in this paper, which constructs a classification task from the information carried by the hidden states of the "[MASK]" tokens and then predicts the labels directly rather than the answer tokens. At the same time, we explore how different hidden layers under "[MASK]" impact our classification model across many different datasets. Finally, we find that \textbf{Protum} achieves much better performance than fine-tuning after continuous pre-training, with less time consumption. Our model facilitates the practical application of large models in NLP.
arxiv:2201.12109
Inference of the conditional dependence structure is challenging when many covariates are present. In numerous applications, only a low-dimensional projection of the covariates influences the conditional distribution. The smallest subspace that captures this effect is called the central subspace in the literature. We show that inference of the central subspace of a vector random variable $\mathbf{y}$ conditioned on a vector of covariates $\mathbf{x}$ can be separated into inference of the marginal central subspaces of the components of $\mathbf{y}$ conditioned on $\mathbf{x}$, and of the copula central subspace that we define in this paper. Further discussion addresses sufficient dimension reduction subspaces for conditional association measures. An adaptive nonparametric method is introduced for estimating the central dependence subspaces, achieving parametric convergence rates under mild conditions. Simulation studies illustrate the practical performance of the proposed approach.
arxiv:2505.01052
Solar system centaurs originate in trans-Neptunian space, from where planet orbit-crossing events inject their orbits inside the giant planets' domain. Here, we examine this injection process in the three-body problem by studying the orbital evolution of trans-Neptunian asteroids located at Neptune's collision singularity as a function of the Tisserand invariant, T. Two injection modes are found: one for T > 0.1, or equivalently prograde inclinations far from the planet, where unstable motion dominates injection; and another for T <= 0.1, or equivalently polar and retrograde inclinations far from the planet, where stable motion dominates injection. The injection modes are independent of the initial semi-major axis and the dynamical time at the collision singularity. The simulations uncovered a region in the polar corridor where the dynamical time exceeds the solar system's age, suggesting the possibility of long-lived primordial polar trans-Neptunian reservoirs that supply centaurs to the giant planets' domain.
arxiv:2311.09946
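For reference, the textbook Tisserand parameter with respect to a planet is a one-line computation; note that the paper's T appears to follow a rescaled convention (its thresholds are near 0.1, while the standard form below is near 3 for prograde co-orbital motion), so this is only the classical definition, not the paper's variable.

```python
import math

def tisserand(a, e, inc_deg, a_p=30.07):
    """Standard Tisserand parameter of an orbit (semi-major axis a,
    eccentricity e, inclination inc_deg in degrees) with respect to
    a planet of semi-major axis a_p (default: Neptune, in au):
    T = a_p/a + 2*sqrt((a/a_p)*(1 - e^2))*cos(i)."""
    return (a_p / a
            + 2.0 * math.sqrt((a / a_p) * (1.0 - e**2))
            * math.cos(math.radians(inc_deg)))
```

A circular, coplanar orbit at the planet's distance gives T = 3; polar orbits kill the cosine term, and retrograde orbits make it negative, which is the qualitative prograde/polar/retrograde split the abstract describes.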
Quantifying uncertainty in predictions, or more generally estimating the posterior conditional distribution, is a core challenge in machine learning and statistics. We introduce convex nonparanormal regression (CNR), a conditional nonparanormal approach for coping with this task. CNR involves a convex optimization of a posterior defined via a rich dictionary of pre-defined nonlinear transformations on Gaussians. It can fit an arbitrary conditional distribution, including multimodal and non-symmetric posteriors. For the special but powerful case of a piecewise linear dictionary, we provide a closed form of the posterior mean, which can be used for point-wise predictions. Finally, we demonstrate the advantages of CNR over classical competitors using synthetic and real-world data.
arxiv:2004.10255
The X8.2 event of 10 September 2017 provides unique observations to study the genesis, magnetic morphology, and impulsive dynamics of a very fast CME. Combining GOES-16/SUVI and SDO/AIA EUV imagery, we identify a hot ($T \approx 10-15$ MK) bright rim around a quickly expanding cavity, embedded inside a much larger CME shell ($T \approx 1-2$ MK). The CME shell develops from a dense set of large AR loops ($\gtrsim 0.5\,R_S$), and seamlessly evolves into the CME front observed in LASCO C2. The strong lateral overexpansion of the CME shell acts as a piston initiating the fast EUV wave. The hot cavity rim is demonstrated to be a manifestation of the dominantly poloidal flux and frozen-in plasma added to the rising flux rope by magnetic reconnection in the current sheet beneath. The same structure is later observed as the core of the white-light CME, challenging the traditional interpretation of the CME three-part morphology. The large amount of added magnetic flux suggested by these observations explains the extreme accelerations of the radial and lateral expansion of the CME shell and cavity, all reaching values of $5-10$ km s$^{-2}$. The acceleration peaks occur simultaneously with the first RHESSI $100-300$ keV hard X-ray burst of the associated flare, further underlining the importance of the reconnection process for the impulsive CME evolution. Finally, the much higher radial propagation speed of the flux rope relative to the CME shell causes a distinct deformation of the white-light CME front and shock.
arxiv:1810.09320
Autonomous driving requires efficient reasoning about the location and appearance of the different agents in the scene, which aids downstream tasks such as object detection, object tracking, and path planning. The past few years have witnessed a surge in approaches that combine the different task-based modules of the classic self-driving stack into an end-to-end (E2E) trainable learning system. These approaches replace perception, prediction, and sensor fusion modules with a single contiguous module with a shared latent space embedding, from which one extracts a human-interpretable representation of the scene. One of the most popular representations is the bird's-eye view (BEV), which expresses the location of different traffic participants in the ego-vehicle frame from a top-down view. However, a BEV does not capture the chromatic appearance information of the participants. To overcome this limitation, we propose a novel representation that captures the appearance and occupancy information of various traffic participants from an array of monocular cameras covering a 360-degree field of view (FOV). We use a learned image embedding of all camera images to generate a BEV of the scene at any instant that captures both appearance and occupancy, which can aid downstream tasks such as object tracking and executing language-based commands. We test the efficacy of our approach on a synthetic dataset generated from CARLA. The code, dataset, and results can be found at https://rebrand.ly/appocc-results.
arxiv:2211.04557
We investigate the importance of final state interactions in weak nonleptonic hyperon decays within a relativistic chiral unitary approach based on coupled channels. The effective potentials for meson-baryon scattering are derived from a chiral effective Lagrangian and iterated in a Bethe-Salpeter equation, which generates the low-lying baryon resonances dynamically. The inclusion of final state interactions decreases the discrepancy between theory and experiment for both s- and p-waves. Our study indicates that contributions from higher-order terms of the weak effective Lagrangian may play an important role in these decays.
arxiv:hep-ph/0306175
Ethical bias in machine learning models has become a matter of concern in the software engineering community. Most prior software engineering works concentrated on finding ethical bias in models rather than fixing it. After finding bias, the next step is mitigation. Prior researchers mainly tried supervised approaches to achieve fairness. However, in the real world, getting data with trustworthy ground truth is challenging, and the ground truth can itself contain human bias. Semi-supervised learning is a machine learning technique where labeled data is incrementally used to generate pseudo-labels for the rest of the data (and then all of that data is used for model training). In this work, we apply four popular semi-supervised techniques as pseudo-labelers to create fair classification models. Our framework, Fair-SSL, takes a very small amount (10%) of labeled data as input and generates pseudo-labels for the unlabeled data. We then synthetically generate new data points to balance the training data based on class and protected attribute, as proposed by Chakraborty et al. at FSE 2021. Finally, the classification model is trained on the balanced pseudo-labeled data and validated on test data. After experimenting on ten datasets and three learners, we find that Fair-SSL achieves performance similar to three state-of-the-art bias mitigation algorithms. That said, the clear advantage of Fair-SSL is that it requires only 10% of the labeled training data. To the best of our knowledge, this is the first SE work where semi-supervised techniques are used to fight ethical bias in SE ML models.
arxiv:2111.02038
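The pseudo-labeling loop at the core of such pipelines can be sketched in a few lines. This is a generic self-training sketch with a toy nearest-centroid learner (an assumption for illustration), not the Fair-SSL framework itself, which additionally rebalances the training data on class and protected attribute.

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Toy base learner: one centroid per class."""
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def nearest_centroid_predict(X, classes, centroids):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[d.argmin(axis=1)], d.min(axis=1)

def self_train(X_lab, y_lab, X_unlab, rounds=3):
    """Generic self-training: fit on the small labelled set,
    pseudo-label the unlabelled points, refit on the union,
    and repeat."""
    X, y = X_lab, y_lab
    for _ in range(rounds):
        classes, cent = nearest_centroid_fit(X, y)
        pseudo, _ = nearest_centroid_predict(X_unlab, classes, cent)
        X = np.vstack([X_lab, X_unlab])
        y = np.concatenate([y_lab, pseudo])
    return nearest_centroid_fit(X, y)
```

With only 10% of the labels, the pseudo-labels let the learner use the remaining 90% of the points, which is exactly the leverage the abstract claims.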
This paper studies density estimation and regression analysis with contaminated data observed on the unit hypersphere S^d. Our methodology and theory are based on harmonic analysis on general S^d. We establish novel nonparametric density and regression estimators, and study their asymptotic properties, including rates of convergence and asymptotic distributions. We also provide asymptotic confidence intervals based on the asymptotic distributions of the estimators and on the empirical likelihood technique. We present practical details on implementation as well as the results of numerical studies.
arxiv:2301.03000
It has been shown that, in the context of general relativity (GR) enriched with a new set of discrete symmetry-reversal conjugate metrics, negative energy states can be rehabilitated while avoiding the well-known instability issues. We review here some cosmological implications of the model and confront them with the supernovae and CMB data. The predicted constantly accelerating expansion phase of a flat universe is found to be in rather good agreement with the most recent cosmological data.
arxiv:gr-qc/0507065
We consider a contextual version of the multi-armed bandit problem with global knapsack constraints. In each round, the outcome of pulling an arm is a scalar reward and a resource consumption vector, both dependent on the context, and the global knapsack constraints require the total consumption of each resource to stay below some pre-fixed budget. The learning agent competes with an arbitrary set of context-dependent policies. This problem was introduced by Badanidiyuru et al. (2014), who gave a computationally inefficient algorithm with near-optimal regret bounds for it. We give a computationally efficient algorithm for this problem with slightly better regret bounds, by generalizing the approach of Agarwal et al. (2014) for the non-constrained version of the problem. The computational time of our algorithm scales logarithmically in the size of the policy space. This answers the main open question of Badanidiyuru et al. (2014). We also extend our results to a variant where there are no knapsack constraints but the objective is an arbitrary Lipschitz concave function of the sum of outcome vectors.
arxiv:1506.03374
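To make the problem setting concrete, here is a heavily simplified, non-contextual sketch of a bandit-with-knapsack loop: a standard UCB index chooses arms, and the episode ends when the single resource budget is exhausted. This illustrates only the reward/consumption/budget interplay, not the paper's policy-class algorithm.

```python
import math

def ucb_with_budget(arms, budget, horizon):
    """Pull arms by a UCB index on mean reward; each pull also
    consumes a resource, and we stop once the budget would be
    exceeded. Each arm is a callable returning (reward, cost)."""
    k = len(arms)
    n = [0] * k          # pull counts
    rew = [0.0] * k      # cumulative rewards per arm
    spent, total = 0.0, 0.0
    for t in range(1, horizon + 1):
        if t <= k:       # initialisation: play each arm once
            i = t - 1
        else:            # UCB index on the empirical mean reward
            i = max(range(k),
                    key=lambda j: rew[j] / n[j]
                    + math.sqrt(2 * math.log(t) / n[j]))
        r, c = arms[i]()
        if spent + c > budget:   # knapsack constraint binds: stop
            break
        n[i] += 1
        rew[i] += r
        spent += c
        total += r
    return total, spent
```

The full problem replaces this index with an optimization over context-dependent policies and handles a vector of resources, but the stopping rule driven by consumption is the same.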
Mechanical meta-materials are solids whose geometric structure results in exotic nonlinear behaviors that are not typically achievable with homogeneous materials. We show how to drastically expand the design space of a class of mechanical meta-materials known as cellular solids by generalizing beyond translational symmetry. This is made possible by transforming a reference geometry according to a divergence-free flow that is parameterized by a neural network and equivariant under the relevant symmetry group. We show how to construct flows equivariant to the space groups, despite the fact that these groups are not compact. Coupling this flow with a differentiable nonlinear mechanics simulator allows us to represent a much richer set of cellular solids than was previously possible. These materials can be optimized to exhibit desirable mechanical properties such as negative Poisson's ratios, or to match target stress-strain curves. We validate these new designs in simulation and by fabricating real-world prototypes. We find that designs with higher-order symmetries can exhibit a wider range of behaviors.
arxiv:2410.02385
In this paper, we study an optimal control problem for the nonlocal Cahn-Hilliard-Brinkman system, which models phase separation of binary fluids in porous media. We consider the system in a two-dimensional bounded domain with a regular potential. We extend recently proved existence results for weak solutions of such a system and prove the existence of a strong solution under certain assumptions on the forcing term and initial datum. Further, using our regularity results, we study a tracking-type optimal control problem. We prove the existence of an optimal control and establish the first-order optimality condition. Lastly, we characterize the optimal control in terms of the solution of the corresponding adjoint system. The existence of a solution for the adjoint system is also established.
arxiv:1911.02811
The fractional quantum Hall effect, a paradigmatic topologically ordered state, has been realised in two-dimensional strongly correlated quantum gases and in Chern bands of crystals. Here we construct a non-crystalline analogue by coupling quantum wires that are not periodically placed in real space. Remarkably, the model remains solvable using bosonisation techniques. Due to the non-uniform couplings between the wires, the ground state has a different degeneracy compared to the crystalline case. It displays a rich phenomenology of excitations, which can behave like anyons confined to move in one dimension (lineons), anyons confined to hop between two wires (s-lineons), or anyonic excitations that are free to travel across the system. Both the ground state degeneracy and the mutual statistics are directly determined by the real-space positions of the wires. By providing an analytically solvable model of a non-crystalline fractional quantum Hall effect, our work showcases that topological order can display a richer phenomenology beyond crystals. More broadly, the non-uniform wire construction we develop can serve as a tool to explore richer many-body phenomenology in non-crystalline systems.
arxiv:2504.18337
Balanced hypergraph partitioning is an NP-hard problem with many applications, e.g., optimizing communication in distributed data placement problems. The goal is to place all nodes across $k$ different blocks of bounded size, such that hyperedges span as few parts as possible. This problem is well studied in sequential and distributed settings, but not in shared memory. We close this gap by devising efficient and scalable shared-memory algorithms for all components employed in the best sequential solvers, without compromising solution quality. This work presents the scalable and high-quality hypergraph partitioning framework Mt-KaHyPar. Its most important components are parallel improvement algorithms based on the FM algorithm and maximum flows, as well as a parallel clustering algorithm for coarsening, which are used in a multilevel scheme with $\log(n)$ levels. As additional components, we parallelize the $n$-level partitioning scheme, devise a deterministic version of our algorithm, and present optimizations for plain graphs. We evaluate our solver on more than 800 graphs and hypergraphs, and compare it with 25 different algorithms from the literature. Our fastest configuration outperforms almost all existing hypergraph partitioners with regard to both solution quality and running time. Our highest-quality configuration achieves the same solution quality as the best sequential partitioner, KaHyPar, while being an order of magnitude faster with ten threads. Thus, two of our configurations occupy all fronts of the Pareto curve for hypergraph partitioning. Furthermore, our solvers exhibit good speedups, e.g., 29.6x in the geometric mean on 64 cores (deterministic), 22.3x ($\log(n)$-level), and 25.9x ($n$-level).
arxiv:2303.17679
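The objective such partitioners typically minimize is the connectivity, or (lambda - 1), metric: each hyperedge pays one unit for every block it spans beyond the first. A minimal sketch of evaluating that objective for a given partition:

```python
def connectivity_cost(hyperedges, part):
    """(lambda - 1) connectivity metric of a partition.
    `hyperedges` is a list of node lists; `part[v]` is the block
    of node v. A hyperedge touching lambda blocks costs lambda-1,
    so fully internal hyperedges cost nothing."""
    return sum(len({part[v] for v in e}) - 1 for e in hyperedges)
```

Improvement heuristics such as FM repeatedly move single nodes between blocks and accept moves by the change (gain) in exactly this quantity, subject to the block-size balance constraint.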
The class A of countable groups that admit a faithful, transitive, amenable (in the sense that there is an invariant mean) action on a set has been widely investigated in the past. In this paper, we no longer require the action to be transitive, but we ask that it preserve a locally finite metric (while still being faithful and amenable). The groups having such actions are those that embed into a totally disconnected amenable locally compact group. We then focus on the subclass A1 of groups for which the actions are moreover transitive. This class is strictly contained in A and includes non-amenable groups. An important particular case of actions preserving a locally finite metric is given by actions by automorphisms of locally finite connected graphs. We take this opportunity, in our partly expository paper, to review some nice results about amenable actions in this setting.
arxiv:1804.06177
The equations of motion describing all physical systems, except gravity, remain invariant if a constant is added to the Lagrangian. In the conventional approach, gravitational theories break this symmetry exhibited by all other physical systems. Restoring this symmetry to gravity, and demanding that gravitational field equations also remain invariant under the addition of a constant to the Lagrangian, leads to the interpretation of gravity as the thermodynamic limit of the kinetic theory of the atoms of space. This approach selects, in a very natural fashion, Einstein's general relativity in $d = 4$. Developing this paradigm at a deeper level, one can obtain the distribution function for the atoms of space and connect it with the thermodynamic description of spacetime. This extension relies on the curious fact that quantum spacetime endows each event with a finite area but zero volume. This approach allows us to determine the numerical value of the cosmological constant and suggests a new perspective on cosmology.
arxiv:1512.06546
Process mining, a data-driven approach for analyzing, visualizing, and improving business processes using event logs, has emerged as a powerful technique in the field of business process management. Process forecasting is a sub-field of process mining that studies how to predict future processes and process models. In this paper, we introduce and motivate the problem of event log prediction and present our approach to solving it, in particular using the sequence-to-sequence deep learning approach. We evaluate and analyze the prediction outcomes on a variety of synthetic logs and seven real-life logs, and show that our approach can generate perfect predictions on synthetic logs and that deep learning techniques have the potential to be applied to real-world event log prediction tasks. We further provide practical recommendations for event log prediction grounded in the outcomes of the conducted experiments.
arxiv:2312.09741
The evaluation of a photon-pair source employs characteristic metrics like the photon-pair generation rate, heralding efficiency, and second-order correlation function, all of which are determined by the photon number distribution of the source. The photon number distribution, however, can be altered by spectral or spatial filtering and optical losses, leading to changes in the above characteristics. In this paper, we theoretically describe the effects of different filterings, losses, and noise counts on the photon number distribution and the related characteristics. From the theoretical description, an analytic expression for the effective mode number of the joint spectral density is also derived. Compared with previous methods for estimating the photon number distribution and characteristics, an improved methodology is introduced, along with a suitable metric of accuracy for estimating the photon number distribution, focusing on photon-pair sources. We discuss the accuracy of the characteristics calculated from the estimated (or reconstructed) photon number distribution through repeated simulations and bootstrapped experimental data.
arxiv:2309.04217
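The effect of optical loss on a photon number distribution is the standard binomial-thinning transformation: each of the n photons independently survives with probability eta. A minimal sketch of that transformation (the generic textbook model, not the paper's full treatment of filtering and noise counts):

```python
import math

def apply_loss(p, eta):
    """Transform a photon-number distribution p[n] through a lossy
    channel of transmission eta (binomial thinning):
    q[m] = sum_n p[n] * C(n, m) * eta^m * (1-eta)^(n-m)."""
    N = len(p)
    q = [0.0] * N
    for n in range(N):
        for m in range(n + 1):
            q[m] += p[n] * math.comb(n, m) * eta**m * (1 - eta)**(n - m)
    return q
```

A useful sanity check is that thinning maps a Poisson distribution of mean mu to a Poisson distribution of mean eta*mu, and scales the mean photon number by eta in general, which is how loss distorts rate and heralding-efficiency estimates.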
Electron-phonon ($e$-ph) interactions arise in many strongly correlated quantum materials from the modulation of the nearest-neighbor hopping integrals, as in the celebrated Su-Schrieffer-Heeger (SSH) model. Nevertheless, relatively few non-perturbative studies of correlated SSH models have been conducted in dimensions greater than one, and those that have been done have primarily focused on bond models, where generalized displacements independently modulate each hopping integral. We conducted a sign-problem-free determinant quantum Monte Carlo study of the optical SSH-Hubbard model on a two-dimensional square lattice, where site-centered phonon modes simultaneously modulate pairs of nearest-neighbor hopping integrals. We report the model's low-temperature phase diagram in the challenging adiabatic regime ($\omega / E_\mathrm{F} \sim 1/8$). It exhibits insulating antiferromagnetic Mott and bond-order-wave (BOW) phases with a narrow region of coexistence between them. We also find that a critical $e$-ph coupling is required to stabilize the BOW phase in the small-$U$ limit. Lastly, in stark contrast to recent findings for the model's bond variant, we find no evidence for long-range antiferromagnetism in the pure ($U/t = 0$) optical SSH model.
arxiv:2502.14196
We study the dynamics of an overdamped Brownian particle in a thermal bath that contains a dilute solution of active particles. The particle moves in a harmonic potential and experiences Poisson shot-noise kicks with a specified amplitude distribution due to the moving active particles in the bath. From the Fokker-Planck equation for the particle dynamics, we derive the stationary solution for the displacement distribution along with the moments characterizing mean, variance, skewness, and kurtosis, as well as the finite-time first and second moments. We also compute an effective temperature through the fluctuation-dissipation theorem and show that the equipartition theorem holds for all zero-mean kick distributions, including those leading to non-Gaussian stationary statistics. For the case of Gaussian-distributed active kicks, we find a re-entrant behaviour from non-Gaussian to Gaussian stationary states and a heavy-tailed leptokurtic distribution across a wide range of parameters, as seen in recent experimental studies. Further analysis reveals statistical signatures of the irreversible dynamics of the particle displacement in terms of the time asymmetry of cross-correlation functions. A fruit of our work is a compact inference scheme that may allow experimentalists to extract the rate and moments of the underlying shot noise solely from the statistics of the particle position.
arxiv:2309.13424
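The model is easy to simulate directly. The sketch below uses nondimensional units (unit trap stiffness and friction, an assumption for illustration): Euler-Maruyama for the overdamped harmonic particle with thermal noise of temperature T plus Poisson kicks of rate lam and Gaussian amplitudes of standard deviation s. In these units the stationary variance is T + lam*s^2/2, since each kick relaxes as e^(-t) and contributes lam*E[a^2]*âˆ«e^(-2t)dt to the variance.

```python
import numpy as np

def simulate(T=1.0, lam=0.5, s=2.0, dt=0.01, steps=400_000, seed=0):
    """Euler-Maruyama for dx/dt = -x + sqrt(2T)*xi(t) plus Poisson
    shot-noise kicks (rate lam, Gaussian amplitudes of std s).
    Returns the trajectory x(t) sampled every dt."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(steps) * np.sqrt(2.0 * T * dt)
    kicks = rng.random(steps) < lam * dt       # kick occurs this step?
    amps = rng.standard_normal(steps) * s      # kick amplitudes
    x = 0.0
    xs = np.empty(steps)
    for i in range(steps):
        x += -x * dt + noise[i] + (amps[i] if kicks[i] else 0.0)
        xs[i] = x
    return xs
```

With the defaults the predicted stationary variance is 1 + 0.5*4/2 = 2, and a long trajectory reproduces it; the kicked distribution is visibly leptokurtic, in line with the heavy tails discussed above.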
Symmetry plays a key role in determining the physical properties of materials. By Neumann's principle, the properties of a material are invariant under the symmetry operations of the space group to which the material belongs. Continuous phase transitions are associated with a spontaneous reduction in symmetry. (For example, the onset of ferromagnetism spontaneously breaks time reversal symmetry.) Much less common are examples where proximity to a continuous phase transition leads to an increase in symmetry. Here, we find an emergent tetragonal symmetry close to an apparent charge density wave (CDW) bicritical point in a fundamentally orthorhombic material, ErTe$_3$, for which the CDW phase transitions are tuned via anisotropic strain. The underlying structure of the material remains orthorhombic for all applied strains, including at the bicritical point, due to a glide plane symmetry in the crystal structure. Nevertheless, the observation of a divergence in the anisotropy of the in-plane elastoresistivity reveals an emergent electronic tetragonality near the bicritical point.
arxiv:2306.14755
The negative ion drift (NID) gas SF$_6$ has favourable properties for track reconstruction in directional dark matter (DM) searches utilising low-pressure gaseous time projection chambers (TPCs). However, the electronegative nature of the gas means that it is more difficult to achieve significant gas gains with regular thick gaseous electron multipliers (THGEMs). Typically, the maximum attainable gas gain in SF$_6$ and other negative ion (NI) gas mixtures, previously achieved with an $^{55}$Fe X-ray source or electron beam, is on the order of $10^3$, whereas electron drift gases like CF$_4$ and similar mixtures are readily capable of reaching gas gains on the order of $10^4$ or greater. In this paper, a novel two-stage multi-mesh THGEM (MMTHGEM) structure is presented. The MMTHGEM was used to amplify charge liberated by an $^{55}$Fe X-ray source in 40 Torr of SF$_6$. By expanding on previously demonstrated results, the device was pushed to its sparking limit, and stable gas gains up to $\sim$50,000 were observed. The device was further optimised by varying the field strengths of the collection and transfer regions in isolation. Following this optimisation procedure, the device was able to produce a maximum stable gas gain of $\sim$90,000. These results demonstrate an order-of-magnitude improvement in gain with the NID gas over previously reported values, and ultimately benefit the sensitivity of a NI TPC to low-energy recoils in the context of a directional DM search.
arxiv:2311.10556
in this paper, we present distributed generalized clustering algorithms that can handle large scale data across multiple machines in spite of straggling or unreliable machines. we propose a novel data assignment scheme that enables us to obtain global information about the entire data even when some machines fail to respond with the results of the assigned local computations. the assignment scheme leads to distributed algorithms with good approximation guarantees for a variety of clustering and dimensionality reduction problems.
arxiv:2002.08892
the optimal control of a mechanical system is of crucial importance in many realms. typical examples are the determination of a time - minimal path in vehicle dynamics, a minimal energy trajectory in space mission design, or optimal motion sequences in robotics and biomechanics. in most cases, some sort of discretization of the original, infinite - dimensional optimization problem has to be performed in order to make the problem amenable to computations. the approach proposed in this paper is to directly discretize the variational description of the system ' s motion. the resulting optimization algorithm lets the discrete solution directly inherit characteristic structural properties from the continuous one like symmetries and integrals of the motion. we show that the dmoc approach is equivalent to a finite difference discretization of hamilton ' s equations by a symplectic partitioned runge - kutta scheme and employ this fact in order to give a proof of convergence. the numerical performance of dmoc and its relationship to other existing optimal control methods are investigated.
arxiv:0810.1386
we present a hubble space telescope image of the frii radio galaxy 3c 401, obtained at 1. 6 microns with the nicmos camera in which we identify the infrared counterpart of the brightest region of the radio jet. the jet has a complex radio structure and brightens where bending occurs, most likely as a result of relativistic beaming. we analyze archival data in the radio, optical and x - ray bands and we derive its spectral energy distribution. differently from all of the previously known optical extragalactic jets, the jet in 3c401 is not detected in the x - rays even in a long 48ksec x - ray chandra exposure and the infrared emission dominates the overall sed. we propose that the dominant radiation mechanism of this jet is synchrotron. the low x - ray emission is then caused by two different effects : i ) the lack of any strong external photon field and ii ) the shape of the electron distribution. this affects the location of the synchrotron peak in the sed, resulting in a sharp cut - off at energies lower than the x - rays. thus 3c401 shows a new type of jet which has intermediate spectral properties between those of fri, which are dominated by synchrotron emission up to x - ray energies, and frii / qso, which show a strong high - energy emission due to inverse - compton scattering of external photons. this might be a clue for the presence of a continuous ` ` sequence ' ' in the properties of large scale jets, in analogy with the ` ` blazar sequence ' ' already proposed for sub - pc scale jets.
arxiv:astro-ph/0505034
with the huge expansion of the internet and trillions of gigabytes of data generated every single day, the development of various tools has become essential in order to maintain system adaptability to rapid changes. one of these tools is known as image captioning. every entity on the internet must be properly identified and managed, and therefore, in the case of image data, automatic captioning for identification is required. similarly, content generation for missing labels, image classification and artificial languages all require the process of image captioning. this paper discusses an efficient and unique way to perform automatic image captioning on individual images and discusses strategies to improve its performance and functionality.
arxiv:2009.02565
inelastic lifetime of an electron quasiparticle in an electron liquid due to electron - electron interaction evaluated in previous work is calculated in an alternative way. both the contributions of the " direct " and " exchange " processes are included. the results turn out to be exactly the same as those obtained previously, and hence confirm the latter. derivation in the two - dimensional case is presented in great details due to its intricacies.
arxiv:cond-mat/0512454
we develop a simple coarse - grained bead - spring polymer model exhibiting competing crystallization and glass transitions. for quench rates slower than the critical nucleation rate $ | \ dot { t } | _ { crit } $, systems exhibit a first - order crystallization transition below a critical temperature $ t = t _ { cryst } $. such systems form close - packed crystallites of fcc and / or hcp order, separated by domain walls, twin defects, and an amorphous interphase. the size of amorphous regions grows continuously as the quench rate $ | \ dot { t } | $ increases, producing nearly amorphous structure for $ | \ dot { t } | > | \ dot { t } | _ { crit } $. our model exhibits many features observed in recent studies of crystallization of athermal polymer packings, but also critical differences arising from the softness of the pair interactions and the thermal nature of the phase transition. the model is considerably more computationally efficient than other recent crystallizable coarse - grained polymer models ; while it sacrifices some features of real semicrystalline polymers ( such as lamellar structure and chain disentanglement ), we anticipate that it will serve as a useful model for studying generic features related to semicrystalline order in polymer solids.
arxiv:1303.5494
we propose a new class of filtering and smoothing methods for inference in high - dimensional, nonlinear, non - gaussian, spatio - temporal state - space models. the main idea is to combine the ensemble kalman filter and smoother, developed in the geophysics literature, with state - space algorithms from the statistics literature. our algorithms address a variety of estimation scenarios, including on - line and off - line state and parameter estimation. we take a bayesian perspective, for which the goal is to generate samples from the joint posterior distribution of states and parameters. the key benefit of our approach is the use of ensemble kalman methods for dimension reduction, which allows inference for high - dimensional state vectors. we compare our methods to existing ones, including ensemble kalman filters, particle filters, and particle mcmc. using a real data example of cloud motion and data simulated under a number of nonlinear and non - gaussian scenarios, we show that our approaches outperform these existing methods.
arxiv:1704.06988
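the core analysis step that the ensemble kalman methods above rely on can be sketched for a scalar, directly observed state. this is a toy illustration under our own assumptions ( the function name, ensemble size and observation values are ours, not the paper's ) :

```python
import random
from statistics import mean, pvariance

def enkf_analysis(ensemble, y, obs_var, rng):
    """one stochastic enkf analysis step for a scalar state observed directly.

    each member is nudged toward an independently perturbed copy of the
    observation y; the gain is forecast variance over total variance."""
    var_f = pvariance(ensemble)            # forecast ( ensemble ) variance
    gain = var_f / (var_f + obs_var)       # scalar kalman gain
    sd = obs_var ** 0.5
    return [x + gain * (y + rng.gauss(0.0, sd) - x) for x in ensemble]

rng = random.Random(0)
prior = [rng.gauss(0.0, 1.0) for _ in range(50)]   # forecast ensemble far from truth
posterior = enkf_analysis(prior, 10.0, 1.0, rng)   # assimilate observation y = 10
```

with the forecast ensemble centred near 0 and an observation at 10, the analysis mean moves roughly halfway toward the observation, since the gain is about var_f / ( var_f + obs_var ).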
grover ' s search algorithm gives a quantum attack against block ciphers by searching for a key that matches a small number of plaintext - ciphertext pairs. this attack uses $ o ( \ sqrt { n } ) $ calls to the cipher to search a key space of size $ n $. previous work in the specific case of aes derived the full gate cost by analyzing quantum circuits for the cipher, but focused on minimizing the number of qubits. in contrast, we study the cost of quantum key search attacks under a depth restriction and introduce techniques that reduce the oracle depth, even if it requires more qubits. as cases in point, we design quantum circuits for the block ciphers aes and lowmc. our circuits give a lower overall attack cost in both the gate count and depth - times - width cost models. in nist ' s post - quantum cryptography standardization process, security categories are defined based on the concrete cost of quantum key search against aes. we present new, lower cost estimates for each category, so our work has immediate implications for the security assessment of post - quantum cryptography. as part of this work, we release q # implementations of the full grover oracle for aes - 128, - 192, - 256 and for the three lowmc instantiations used in picnic, including unit tests and code to reproduce our quantum resource estimates. to the best of our knowledge, these are the first two such full implementations and automatic resource estimations.
arxiv:1910.01700
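as a back - of - envelope companion to the $ o ( \ sqrt { n } ) $ query count quoted above ( this helper is ours, not part of the paper's circuits ), the near - optimal number of grover iterations for a k - bit key space is floor ( ( pi / 4 ) * sqrt ( 2 ^ k ) ) :

```python
from math import floor, pi, sqrt

def grover_iterations(key_bits):
    """near - optimal number of grover iterations, floor( ( pi / 4 ) * sqrt( n ) ),
    to search an unstructured key space of n = 2 ** key_bits keys."""
    return floor((pi / 4) * sqrt(2 ** key_bits))
```

for aes - 128 this lies between 2 ^ 63 and 2 ^ 64 sequential oracle calls, which is why the depth of each oracle call dominates the attack cost under a depth restriction.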
we present a method for the study of quantum fluctuations of dissipative structures forming in nonlinear optical cavities, which we illustrate in the case of a degenerate, type i optical parametric oscillator. the method consists in ( i ) taking into account explicitly, through a collective variable description, the drift of the dissipative structure caused by the quantum noise, and ( ii ) expanding the remaining ( internal ) fluctuations in the biorthonormal basis associated with the linear operator governing the evolution of fluctuations in the linearized langevin equations. we obtain general expressions for the squeezing and intensity fluctuations spectra. then we theoretically study the squeezing properties of a special dissipative structure, namely, the bright cavity soliton. after reviewing our previous result that in the linear approximation there is a perfectly squeezed mode irrespective of the values of the system parameters, we consider squeezing at the bifurcation points, and the squeezing detection with a plane - wave local oscillator field, also taking into account the effect of the detector size on the level of detectable squeezing.
arxiv:quant-ph/0702070
we fit wmap5 and related data by allowing for a cdm - de coupling and non - zero neutrino masses, simultaneously. we find a significant correlation between these parameters, so that simultaneous higher coupling and \ nu - masses are allowed. furthermore, models with a significant coupling and \ nu - mass are statistically favoured with respect to a cosmology with no coupling and negligible neutrino mass ( our best fits are : c ~ 1 / 2m _ p, m _ \ nu ~ 0. 12ev per flavor ). we use a standard monte carlo markov chain approach, by assuming de to be a scalar field self - interacting through ratra - peebles or sugra potentials.
arxiv:0911.3486
rapid parameter estimation of gravitational waves from binary neutron star coalescence, in particular accurate sky localisation in minutes after the initial detection stage, is crucial for the success of multi - messenger observations. one of the techniques to speed up the parameter estimation, which has been applied for the production analysis of the ligo - virgo collaboration, is reduced order quadrature ( roq ). while it speeds up parameter estimation significantly, the time required is still on the order of hours. exploiting the fact that the parameter - estimation follow - up can be tuned with the information available at the detection stage, we improve the roq technique and develop a new technique, which we designate focused reduced order quadrature ( froq ). we find that froq speeds up the parameter estimation by a factor of $ \ mathcal { o } ( 10 ^ 3 ) $ to $ \ mathcal { o } ( 10 ^ 4 ) $ and enables accurate source properties, such as the sky location of a source, to be provided within several tens of minutes of detection.
arxiv:2007.09108
over the last years, several experimental and theoretical studies of diffusion kinetics on the nanoscale have shown that the time evolution differs from the classical fickian law ( kc = 0. 5 ). however, all work so far has been based on crystalline samples or models. in this letter, we report on the diffusion kinetics of a thin amorphous - si layer into amorphous - ge to account for the rising importance of amorphous materials in nanodevices. employing surface - sensitive techniques, the initial kc was found to be 0. 7 + - 0. 1. moreover, after some monolayers of si dissolved into the ge, kc changes to the generally expected classical fickian law with kc = 0. 5.
arxiv:0902.2046
we examine a stochastic noise process that has a decohering effect on the average evolution of qubits in the quantum register of the solid state quantum computer proposed by kane. we consider the effects of this process on the single qubit operations necessary to perform quantum logical gates and derive an expression for the fidelity of these gates in this system. we then calculate an upper bound on the level of this stochastic noise tolerable in a workable quantum computer.
arxiv:quant-ph/0104055
we show by ab initio calculations that the electron - phonon coupling matrix element m of the radial breathing mode in single - walled carbon nanotubes depends strongly on tube chirality. for nanotubes of the same diameter the coupling strength | m | ^ 2 is up to one order of magnitude stronger for zig - zag than for armchair tubes. for ( n, m ) tubes m depends on the value of ( n - m ) mod 3, which allows one to discriminate semiconducting nanotubes with similar diameter by their raman scattering intensity. we show measured resonance raman profiles of the radial breathing mode which support our theoretical predictions.
arxiv:cond-mat/0408436
we study the problem of learning the transition matrices of a set of markov chains from a single stream of observations on each chain. we assume that the markov chains are ergodic but otherwise unknown. the learner can sample markov chains sequentially to observe their states. the goal of the learner is to sequentially select various chains to learn transition matrices uniformly well with respect to some loss function. we introduce a notion of loss that naturally extends the squared loss for learning distributions to the case of markov chains, and further characterize the notion of being \ emph { uniformly good } in all problem instances. we present a novel learning algorithm that efficiently balances \ emph { exploration } and \ emph { exploitation } intrinsic to this problem, without any prior knowledge of the chains. we provide finite - sample pac - type guarantees on the performance of the algorithm. further, we show that our algorithm asymptotically attains an optimal loss.
arxiv:1905.11128
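the simplest consistent estimator behind problems of this kind, counting observed transitions in a single stream, can be sketched together with the row - wise extension of the squared loss mentioned above ( the code and names are a hypothetical illustration, not the paper's algorithm ) :

```python
def estimate_transition_matrix(stream, n_states):
    """empirical estimate of a markov transition matrix from one stream
    of observed states ( integers 0 .. n_states - 1 )."""
    counts = [[0] * n_states for _ in range(n_states)]
    for s, t in zip(stream, stream[1:]):     # consecutive pairs = transitions
        counts[s][t] += 1
    rows = []
    for row in counts:
        total = sum(row)
        # a state never visited gives no information; fall back to uniform
        rows.append([c / total for c in row] if total else [1.0 / n_states] * n_states)
    return rows

def squared_loss(p_hat, p):
    """squared loss between transition matrices, extending the squared
    loss for learning distributions row by row."""
    return sum((a - b) ** 2 for rh, r in zip(p_hat, p) for a, b in zip(rh, r))
```

on the deterministic alternating stream [ 0, 1, 0, 1, 0, 1 ] this recovers the true transition matrix [ [ 0, 1 ], [ 1, 0 ] ] exactly; the sampling question studied in the paper is which chain to observe next so that this loss shrinks uniformly across all chains.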
this paper examines the cues that typically differentiate phishing emails from genuine emails. the research is conducted in two stages. in the first stage, we identify the cues that actually differentiate between phishing and genuine emails. these are the consistency and personalisation of the message, the perceived legitimacy of links and sender, and the presence of spelling or grammatical irregularities. in the second stage, we identify the cues that participants use to differentiate between phishing and genuine emails. this revealed that participants often use cues that are not good indicators of whether an email is phishing or genuine. this includes the presence of legal disclaimers, the quality of visual presentation, and the positive consequences emphasised in the email. this study has implications for education and training and provides a basis for the design and development of targeted and more relevant training and risk communication strategies.
arxiv:1605.04717
recently, a flexible and stable algorithm was introduced for the computation of 2d unstable manifolds of periodic solutions to systems of ordinary differential equations. the main idea of this approach is to represent orbits in this manifold as the solutions of an appropriate boundary value problem. the boundary value problem is underdetermined, and a one - parameter family of solutions can be found by means of arclength continuation. this family of orbits covers a piece of the manifold. the quality of this covering depends on the way the boundary value problem is discretised, as do the tractability and accuracy of the computation. in this paper, we describe an implementation of the orbit continuation algorithm which relies on multiple shooting and newton - krylov continuation. we show that the number of time integrations necessary for each continuation step scales only with the number of shooting intervals but not with the number of degrees of freedom of the dynamical system. the number of shooting intervals is chosen based on linear stability analysis to keep the conditioning of the boundary value problem in check. we demonstrate our algorithm with two test systems : a low - order model of shear flow and a well - resolved simulation of turbulent plane couette flow.
arxiv:1003.4463
the statistical mechanics of a treelike polymer in a confining volume is relevant to the packaging of the genome in rna viruses. making use of the mapping of the grand partition function of this system onto the statistical mechanics of a hard - core gas in two fewer spatial dimensions and of techniques developed for the evaluation of the equilibrium properties of a one - dimensional hard rod gas, we show how it is possible to determine the density and other key properties of a collection of rooted excluded - volume trees confined between two walls, both in the absence and in the presence of a one - dimensional external potential. we find, somewhat surprisingly, that for key quantities the statistical mechanics of the excluded - volume, randomly branched polymer maps exactly onto corresponding problems for an unrestricted linear polymer.
arxiv:0804.1347
we present a simple, exact and self - consistent cosmology with a phenomenological model of quantum creation of radiation due to decay of the scalar field. the decay drives a non - isentropic inflationary epoch, which exits smoothly to the radiation era, without reheating. the initial vacuum for radiation is a regular minkowski vacuum. the created radiation obeys standard thermodynamic laws, and the total entropy produced is consistent with the accepted value. we analyze the difference between the present model and a model with decaying cosmological constant previously considered.
arxiv:gr-qc/9905105
the crystal nucleation from liquid is in most cases too rare to be accessed within the limited timescales of the conventional molecular dynamics ( md ) simulation. here, we developed a " persistent embryo " method to facilitate crystal nucleation in md simulations by preventing small crystal embryos from melting using external spring forces. we applied this method to the pure ni case for a moderate undercooling where no nucleation can be observed in the conventional md simulation, and obtained a nucleation rate in good agreement with the experimental data. moreover, the method is applied to simulate an even more sluggish event : the nucleation of the b2 phase in a strong glass - forming cu - zr alloy. the nucleation rate was found to be 8 orders of magnitude smaller than in ni at the same undercooling, which well explains the good glass formability of the alloy. thus, our work opens a new avenue to study solidification under realistic experimental conditions via atomistic computer simulation.
arxiv:1709.00085
a falling liquid drop, after impact on a rigid substrate, deforms and spreads, owing to the normal reaction force. subsequently, if the substrate is non - wetting, the drop retracts and then jumps off. as we show here, not only is the impact itself associated with a distinct peak in the temporal evolution of the normal force, but also the jump - off, which was hitherto unknown. we characterize both peaks and elucidate how they relate to the different stages of the drop impact process. the time at which the second peak appears coincides with the formation of a worthington jet, emerging through flow - focusing, and it is independent of the impact velocity. however, the magnitude of this peak is dictated by the drop ' s inertia and surface tension. we show that even low - velocity impacts can lead to a surprisingly high peak in the normal force, namely when a more pronounced singular worthington jet occurs due to the collapse of an air cavity in the drop.
arxiv:2202.02437
homophily, the tendency of individuals to associate with others who share similar traits, has been identified as a major driving force in the formation and evolution of social ties. in many cases, it is not clear if homophily is the result of a socialization process, where individuals change their traits according to the dominance of that trait in their local social networks, or if it results from a selection process, in which individuals reshape their social networks so that their traits match those in the new environment. here we demonstrate the detailed temporal formation of strong homophily in academic achievements of high school and university students. we analyze a unique dataset that contains information about the detailed time evolution of a friendship network of 6, 000 students across 42 months. combining the evolving social network data with the time series of the academic performance ( gpa ) of individual students, we show that academic homophily is a result of selection : students prefer to gradually reorganize their social networks according to their performance levels, rather than adapting their performance to the level of their local group. we find no signs for a pull effect, where a social environment of good performers motivates bad students to improve their performance. we are able to understand the underlying dynamics of grades and networks with a simple model. the lack of a social pull effect in classical educational settings could have important implications for the understanding of the observed persistence of segregation, inequality and social immobility in societies.
arxiv:1606.09082
the role of quantum tunneling effect in the electron accretion current onto a negatively charged grain immersed in isotropic plasma is analyzed, within the quasiclassic approximation, for different plasma electron distribution functions, plasma parameters, and grain sizes. it is shown that this contribution can be small ( negligible ) for relatively large ( micron - sized ) dust grains in plasmas with electron temperatures of the order of a few ev, but becomes important for nano - sized dust grains ( tens to hundreds nm in diameter ) in cold and ultracold plasmas ( electron temperatures ~ tens to hundreds of kelvin ), especially in plasmas with depleted high - energy " tails " in the electron energy distribution.
arxiv:1007.0806
cusums based on the signed sequential ranks of observations are developed for detecting location and scale changes in symmetric distributions. the cusums are distribution free and fully self - starting : given a specified in - control median and nominal in - control average run length, no parametric specification of the underlying distribution is required in order to find the correct control limits. if the underlying distribution is normal with unknown variance, a cusum based on the van der waerden signed rank score produces out - of - control average run lengths that are commensurate with those produced by the standard cusum for a normal distribution with known variance. for heavier tailed distributions, use of a cusum based on the wilcoxon signed rank score is indicated. the methodology is illustrated by application to real data from an industrial environment.
arxiv:1706.03901
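a minimal version of such a distribution - free cusum can be sketched on wilcoxon - type scores built from signed sequential ranks ( this sketch, including the reference value k and the tie handling, is our own simplification, not the paper's exact procedure ) :

```python
def signed_rank_cusum(xs, median0=0.0, k=0.25):
    """one - sided upper cusum on standardized signed sequential ranks
    ( wilcoxon - type score in ( -1, 1 ) ); distribution free and
    self - starting: only the in - control median must be specified."""
    s, path, abs_devs = 0.0, [], []
    for n, x in enumerate(xs, start=1):
        d = x - median0
        abs_devs.append(abs(d))
        rank = sorted(abs_devs).index(abs(d)) + 1        # sequential rank of | d |, ties broken low
        score = (1 if d >= 0 else -1) * rank / (n + 1)   # signed, standardized rank score
        s = max(0.0, s + score - k)                      # tabular cusum recursion
        path.append(s)
    return path
```

observations scattered symmetrically about the in - control median keep the statistic pinned near zero, while an upward shift makes it grow until it crosses a control limit chosen for the nominal in - control average run length.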
given a ( not necessarily continuous ) homomorphism between banach algebras $ \ t \ colon \ a \ to \ b $, an element $ a \ in \ a $ will be said to be b - fredholm ( respectively generalized b - fredholm ) relative to $ \ t $, if $ \ t ( a ) \ in \ b $ is drazin invertible ( respectively koliha - drazin invertible ). in this article, the aforementioned elements will be characterized and their main properties will be studied. in addition, perturbation properties will be also considered.
arxiv:1504.02952
is safe to consume. " in some countries fsms is a legal requirement, which obliges all food production businesses to use and maintain a fsms based on the principles of hazard analysis critical control point ( haccp ). haccp is a management system that addresses food safety through the analysis and control of biological, chemical, and physical hazards in all stages of the food supply chain. the iso 22000 standard specifies the requirements for fsms. = = emerging technologies = = the following technologies, which continue to evolve, have contributed to the innovation and advancement of food engineering practices : = = = three - dimensional printing of food = = = three - dimensional ( 3d ) printing, also known as additive manufacturing, is the process of using digital files to create three dimensional objects. in the food industry, 3d printing of food is used for the processing of food layers using computer equipment. the process of 3d printing is slow, but is improving over time with the goal of reducing costs and processing times. some of the successful food items that have been printed through 3d technology are : chocolate, cheese, cake frosting, turkey, pizza, celery, among others. this technology is continuously improving, and has the potential of providing cost - effective, energy efficient food that meets nutritional stability, safety and variety. = = = biosensors = = = biosensors can be used for quality control in laboratories and in different stages of food processing. biosensor technology is one way in which farmers and food processors have adapted to the worldwide increase in demand for food, while maintaining their food production and quality high. furthermore, since millions of people are affected by food - borne diseases caused by bacteria and viruses, biosensors are becoming an important tool to ensure the safety of food. 
they help track and analyze food quality during several parts of the supply chain : in food processing, shipping and commercialization. biosensors can also help with the detection of genetically modified organisms ( gmos ), to help regulate gmo products. with the advancement of technologies, like nanotechnology, the quality and uses of biosensors are constantly being improved. = = = milk pasteurization by microwave = = = when storage conditions of milk are controlled, milk tends to have a very good flavor. however, oxidized flavor is a problem that affects the taste and safety of milk in a negative way. to prevent the growth of pathogenic bacteria and extend the shelf life of milk, pasteurization processes were developed. microwave
https://en.wikipedia.org/wiki/Food_engineering
we investigate a mechanism to transiently stabilize topological phenomena in long - lived quasi - steady states of isolated quantum many - body systems driven at low frequencies. we obtain an analytical bound for the lifetime of the quasi - steady states which is exponentially large in the inverse driving frequency. within this lifetime, the quasi - steady state is characterized by maximum entropy subject to the constraint of fixed number of particles in the system ' s floquet - bloch bands. in such a state, all the non - universal properties of these bands are washed out, hence only the topological properties persist.
arxiv:1901.08385
bioelectrical interfaces represent a significant evolution in the intersection of nanotechnology and biophysics, offering new strategies for probing and influencing cellular processes. these systems capitalize on the subtle but powerful electric fields within living matter, potentially enabling applications beyond cellular excitability, ranging from targeted cancer therapies to interventions in genetic mechanisms and aging. this perspective article envisions the translation, development and application of next - generation solid - state bioelectrical interfaces and their transformative impact across several critical areas of medical research.
arxiv:2504.00872
a smoothing operation to construct a continuous density field from an observed point - like distribution of galaxies is crucially important for topological or morphological analysis of the large - scale structure, such as the genus statistics or the area statistics ( equivalently, the level crossing statistics ). it has been pointed out that adaptive smoothing filters are more efficient tools to resolve cosmic structures than the traditional spatially fixed filters. we study weakly nonlinear effects caused by two representative adaptive methods often used in smoothed particle hydrodynamics ( sph ) simulations. using the framework of second - order perturbation theory, we calculate the generalized skewness parameters for the adaptive methods in the case of initially power - law fluctuations. then we apply the multidimensional edgeworth expansion method and investigate weakly nonlinear evolution of the genus statistics and the area statistics. isodensity contour surfaces are often parameterized by the volume fraction of the regions above a given density threshold. we also discuss this parameterization method in a perturbative manner.
arxiv:astro-ph/0002315
we have measured resonance spectra in a superconducting microwave cavity with the shape of a three - dimensional generalized bunimovich stadium billiard and analyzed their spectral fluctuation properties. the experimental length spectrum exhibits contributions from periodic orbits of non - generic modes and from unstable periodic orbits of the underlying classical system. it is well reproduced by our theoretical calculations based on the trace formula derived by balian and duplantier for chaotic electromagnetic cavities.
arxiv:nlin/0206028
a geometric $ t $ - spanner on a set of points in euclidean space is a graph containing for every pair of points a path of length at most $ t $ times the euclidean distance between the points. informally, a spanner is $ \ mathcal { o } ( k ) $ - robust if deleting $ k $ vertices only harms $ \ mathcal { o } ( k ) $ other vertices. we show that on any one - dimensional set of $ n $ points, for any $ \ varepsilon > 0 $, there exists an $ \ mathcal { o } ( k ) $ - robust $ 1 $ - spanner with $ \ mathcal { o } ( n ^ { 1 + \ varepsilon } ) $ edges. previously it was only known that $ \ mathcal { o } ( k ) $ - robust spanners with $ \ mathcal { o } ( n ^ 2 ) $ edges exists and that there are point sets on which any $ \ mathcal { o } ( k ) $ - robust spanner has $ \ omega ( n \ log { n } ) $ edges.
arxiv:1803.08719
disasters impact communities through interconnected social, spatial, and physical networks. analyzing network dynamics is crucial for understanding resilience and recovery. we highlight six studies demonstrating how hazards and recovery processes spread through these networks, revealing key phenomena, such as flood exposure, emergent social cohesion, and critical recovery multipliers. this network - centric approach can uncover vulnerabilities, inform interventions, and advance equitable resilience strategies in the face of escalating risks.
arxiv:2502.18730
performance in cross - lingual nlp tasks is impacted by the ( dis ) similarity of languages at hand : e. g., previous work has suggested there is a connection between the expected success of bilingual lexicon induction ( bli ) and the assumption of ( approximate ) isomorphism between monolingual embedding spaces. in this work we present a large - scale study focused on the correlations between monolingual embedding space similarity and task performance, covering thousands of language pairs and four different tasks : bli, parsing, pos tagging and mt. we hypothesize that statistics of the spectrum of each monolingual embedding space indicate how well they can be aligned. we then introduce several isomorphism measures between two embedding spaces, based on the relevant statistics of their individual spectra. we empirically show that 1 ) language similarity scores derived from such spectral isomorphism measures are strongly associated with performance observed in different cross - lingual tasks, and 2 ) our spectral - based measures consistently outperform previous standard isomorphism measures, while being computationally more tractable and easier to interpret. finally, our measures capture complementary information to typologically driven language distance measures, and the combination of measures from the two families yields even higher task performance correlations.
arxiv:2001.11136
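As a hedged illustration of the idea of comparing embedding spaces via spectral statistics (this is my own toy measure, not the paper's exact isomorphism statistics), one can compare the normalised top-k singular value spectra of two embedding matrices; since a rotation of one space leaves its spectrum unchanged, such a measure is invariant to orthogonal alignment:

```python
import numpy as np

def spectral_distance(X, Y, k=10):
    """Toy spectral similarity measure between two embedding matrices:
    L1 distance between their normalised top-k singular value spectra.
    Smaller values suggest the spaces are easier to align (an assumption
    of this sketch, not a claim from the paper)."""
    sx = np.linalg.svd(X, compute_uv=False)[:k]
    sy = np.linalg.svd(Y, compute_uv=False)[:k]
    sx, sy = sx / sx.sum(), sy / sy.sum()
    return float(np.abs(sx - sy).sum())

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
Q, _ = np.linalg.qr(rng.normal(size=(20, 20)))  # random orthogonal map
print(spectral_distance(X, X @ Q))  # ~0: a rotation preserves the spectrum
```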
the " problem of time " in canonical quantum gravity refers to the difficulties involved in defining a hilbert space structure on states - - and local observables on this hilbert space - - for a theory in which the spacetime metric is treated as a quantum field, so no classical metrical or causal structure is present on spacetime. we describe an approach - - much in the spirit of ideas proposed by misner, kuchar and others - - to defining states and local observables in quantum gravity which exploits the analogy between the hamiltonian formulation of general relativity and that of a relativistic particle. in the case of minisuperspace models, a concrete theory is obtained which appears to be mathematically and physically viable, although it contains some radical features with regard to the presence of an " arrow of time ". the viability of this approach in the case of infinitely many degrees of freedom rests on a number of fairly well defined issues, which, however, remain unresolved. as a byproduct of our analysis, the theory of a relativistic particle in curved spacetime is developed.
arxiv:gr-qc/9305024
in this article we review classical and recent results in anomalous diffusion and provide mechanisms useful for the study of the fundamentals of certain processes, mainly in condensed matter physics, chemistry and biology. emphasis will be given to some methods applied in the analysis and characterization of diffusive regimes through the memory function, the mixing condition ( or irreversibility ), and ergodicity. those methods can be used in the study of small - scale systems, ranging in size from single - molecule to particle clusters and including among others polymers, proteins, ion channels and biological cells, whose diffusive properties have received much attention lately.
arxiv:1902.03157
we investigate the effect of turbulence on the collisional growth of µm - sized droplets through high - resolution numerical simulations with well resolved kolmogorov scales, assuming a collision and coalescence efficiency of unity. the droplet dynamics and collisions are approximated using a superparticle approach. in the absence of gravity, we show that the time evolution of the shape of the droplet - size distribution due to turbulence - induced collisions depends strongly on the turbulent energy - dissipation rate, but only weakly on the reynolds number. this can be explained through the energy dissipation rate dependence of the mean collision rate described by the saffman - turner collision model. consistent with the saffman - turner collision model and its extensions, the collision rate increases as the square root of the energy dissipation rate even when coalescence is invoked. the size distribution exhibits power law behavior with a slope of - 3. 7 between a maximum at approximately 10 µm up to about 40 µm. when gravity is invoked, turbulence is found to dominate the time evolution of an initially monodisperse droplet distribution at early times. at later times, however, gravity takes over and dominates the collisional growth. we find that the formation of large droplets is very sensitive to the turbulent energy dissipation rate. this is due to the fact that turbulence enhances the collisional growth between similar - sized droplets at the early stage of raindrop formation. the mean collision rate grows exponentially, which is consistent with the theoretical prediction of continuous collisional growth even when turbulence - generated collisions are invoked. this consistency only reflects the mean effect of turbulence on collisional growth.
arxiv:1711.10062
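The square-root scaling cited above can be illustrated with the classic Saffman-Turner-style mean collision rate, proportional to sqrt(eps/nu) R^3 (the prefactor c ≈ 1.3 ≈ sqrt(8*pi/15) and the specific parameter values below are schematic assumptions of this sketch, not numbers from the paper):

```python
import math

def saffman_turner_rate(eps, nu, R, n1, n2, c=1.3):
    """Schematic Saffman-Turner mean collision rate between droplet
    populations with number densities n1, n2 and collision radius R:
    rate ~ c * sqrt(eps / nu) * R**3 * n1 * n2, where eps is the
    turbulent energy dissipation rate and nu the kinematic viscosity."""
    return c * math.sqrt(eps / nu) * n1 * n2 * R ** 3

# doubling the collision rate requires quadrupling the dissipation rate
r1 = saffman_turner_rate(1e-2, 1.5e-5, 2e-5, 1e8, 1e8)
r2 = saffman_turner_rate(4e-2, 1.5e-5, 2e-5, 1e8, 1e8)
print(r2 / r1)  # ~2.0: the rate scales as sqrt(eps)
```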
neural mapping schemes have become appealing approaches to deliver gap - free satellite - derived products for sea surface tracers. the generalization performance of these learning - based approaches naturally arises as a key challenge. this is particularly true for satellite - derived ocean colour products given the variety of bio - optical variables of interest, as well as the diversity of processes and scales involved. considering region - specific and parameter - specific neural mapping schemes will result in substantial training costs. this study addresses generalization performance of neural mapping schemes to deliver gap - free satellite - derived ocean colour products. we develop a comprehensive experimental framework using real multi - sensor ocean colour datasets for two regions ( the mediterranean sea and the north sea ) and a representative set of bio - optical parameters ( chlorophyll - a concentration, suspended particulate matter concentration, particulate backscattering coefficient ). we consider several neural mapping schemes, and we report excellent generalization performance across regions and bio - optical parameters without any fine - tuning using appropriate dataset - specific normalization procedures. we discuss further how these results provide new insights towards the large - scale deployment of neural schemes for the processing of satellite - derived ocean colour datasets beyond case - study - specific demonstrations.
arxiv:2503.11588
the purpose of this paper is two - fold. first we review in detail the geometric aspects of the swampland program for supersymmetric 4d effective theories using a new and unifying language we dub ` domestic geometry ', the generalization of special k \ " ahler geometry which does not require the underlying manifold to be k \ " ahler or have a complex structure. all 4d sugras are described by domestic geometry. as special k \ " ahler geometries, domestic geometries carry formal brane amplitudes : when the domestic geometry describes the supersymmetric low - energy limit of a consistent quantum theory of gravity, its formal brane amplitudes have the right properties to be actual branes. the main datum of the domestic geometry of a 4d sugra is its gauge coupling, seen as a map from a manifold which satisfies the geometric ooguri - vafa conjectures to the siegel variety ; to understand the properties of the quantum - consistent gauge couplings we discuss several novel aspects of such ` ooguri - vafa ' manifolds, including their liouville properties. our second goal is to present some novel speculation on the extension of the swampland program to non - supersymmetric effective theories of gravity. the idea is that the domestic geometric description of the quantum - consistent effective theories extends, possibly with some qualifications, also to the non - supersymmetric case.
arxiv:2102.03205
in this article, i consider a real scalar field theory and show that, under a bogoliubov transformation in the infinite volume ( thermodynamic ) limit, the transformed hamiltonian is no longer invariant under a suitably defined u ( 1 ) action, unlike before the transformation. we also verify this fact by examining the correlation functions under the action of the u ( 1 ) group. defining field operators associated with particle production phenomena, we further show that the correlation functions of such field operators likewise fail to be u ( 1 ) invariant ; this is a consequence of the non - invariance of the transformed hamiltonian under the u ( 1 ) action. since a bogoliubov transformation in curved spacetime is equivalent to a coordinate transformation, this result directly exhibits particle production under the effect of gravity, as changing coordinates is equivalent to turning on gravity according to einstein ' s equivalence principle in gr. i also show that particle production does not take place out of the vacuum state but can happen out of other many - particle states, and that the vacuum state is not an eigenvector of the hamiltonian operator in the transformed fock space and does not remain the vacuum state under time evolution.
arxiv:1806.01123
ammonium hydrosulphide has long been postulated to exist at least in certain layers of the giant planets. its radiation products may be the reason for the red colour seen on jupiter. several ammonium salts, the products of nh3 and an acid, have previously been detected at comet 67p / churyumov - gerasimenko. the acid h2s is the fifth most abundant molecule in the coma of 67p, followed by nh3. in order to look for the salt nh4 + sh -, we analysed in situ measurements from the rosetta / rosina double focusing mass spectrometer during the rosetta mission. nh3 and h2s appear to be independent of each other when sublimating directly from the nucleus. however, we observe a strong correlation between the two species during dust impacts, clearly pointing to the salt. we find that nh4 + sh - is by far the most abundant salt, more abundant in the dust impacts than even water. we also find all previously detected ammonium salts and, for the first time, ammonium fluoride. the amount of ammonia and acids balance each other, confirming that ammonia is mostly in the form of salt embedded into dust grains. allotropes s2 and s3 are strongly enhanced in the impacts, while h2s2 and its fragment hs2 are not detected, which is most probably the result of radiolysis of nh4 + sh -. this makes a prestellar origin of the salt likely. our findings may explain the apparent depletion of nitrogen in comets and perhaps help to solve the riddle of the missing sulphur in star - forming regions.
arxiv:2208.11396
we prove that proper biharmonic hypersurfaces with constant scalar curvature in euclidean sphere $ \ mathbb s ^ 5 $ must have constant mean curvature. moreover, we also show that there exist no proper biharmonic hypersurfaces with constant scalar curvature in euclidean space $ \ mathbb e ^ 5 $ or hyperbolic space $ \ mathbb h ^ 5 $, which give affirmative partial answers to chen ' s conjecture and generalized chen ' s conjecture.
arxiv:1412.7394
in this paper we address the complexity of solving linear programming problems with a set of differential equations that converge to a fixed point that represents the optimal solution. assuming a probabilistic model, where the inputs are i. i. d. gaussian variables, we compute the distribution of the convergence rate to the attracting fixed point. using the framework of random matrix theory, we derive a simple expression for this distribution in the asymptotic limit of large problem size. in this limit, we find that the distribution of the convergence rate is a scaling function, namely it is a function of one variable that is a combination of three parameters : the number of variables, the number of constraints and the convergence rate, rather than a function of these parameters separately. we also estimate numerically the distribution of computation times, namely the time required to reach a vicinity of the attracting fixed point, and find that it is also a scaling function. using the problem size dependence of the distribution functions, we derive high probability bounds on the convergence rates and on the computation times.
arxiv:cs/0110056
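The idea of solving an optimisation problem with a dynamical system that converges to a fixed point can be sketched on a one-dimensional toy LP (my own illustration under a log-barrier assumption, not the paper's dynamics): minimise c*x subject to x >= 1 by Euler-integrating the gradient flow of the barrier objective c*x - mu*log(x - 1), whose attracting fixed point x* = 1 + mu/c lies near the LP optimum x = 1.

```python
def barrier_flow(c=1.0, mu=1e-3, x0=2.0, dt=1e-3, steps=20000):
    """Euler-discretised gradient flow dx/dt = -(c - mu/(x-1)) for the
    log-barrier relaxation of 'minimise c*x subject to x >= 1'.
    Converges to the fixed point x* = 1 + mu/c, close to the LP optimum."""
    x = x0
    for _ in range(steps):
        x -= dt * (c - mu / (x - 1.0))  # step down the barrier gradient
    return x

print(barrier_flow())  # ~1.001, i.e. 1 + mu/c
```

The paper's analysis concerns the distribution of the convergence rate toward such attracting fixed points for random high-dimensional instances; the sketch only shows the basic mechanism.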
solar forecasting from ground - based sky images has shown great promise in reducing the uncertainty in solar power generation. with more and more sky image datasets open sourced in recent years, the development of accurate and reliable deep learning - based solar forecasting methods has seen a huge growth in potential. in this study, we explore three different training strategies for solar forecasting models by leveraging three heterogeneous datasets collected globally with different climate patterns. specifically, we compare the performance of local models trained individually based on single datasets and global models trained jointly based on the fusion of multiple datasets, and further examine the knowledge transfer from pre - trained solar forecasting models to a new dataset of interest. the results suggest that the local models work well when deployed locally, but significant errors are observed when applied offsite. the global model can adapt well to individual locations at the cost of a potential increase in training efforts. pre - training models on a large and diversified source dataset and transferring to a target dataset generally achieves superior performance over the other two strategies. with 80 % less training data, it can achieve comparable performance as the local baseline trained using the entire dataset.
arxiv:2211.02108
binary hashing is widely used for effective approximate nearest neighbors search. even though various binary hashing methods have been proposed, very few methods are feasible for the extremely high - dimensional features often used in visual tasks today. we propose a novel highly sparse linear hashing method based on pairwise rotations. the encoding cost of the proposed algorithm is $ \ mathrm { o } ( n \ log n ) $ for n - dimensional features, whereas that of the existing state - of - the - art method is typically $ \ mathrm { o } ( n ^ 2 ) $. the proposed method is also remarkably faster in the learning phase. along with the efficiency, the retrieval accuracy is comparable to, and sometimes slightly better than, the state of the art. pairwise rotations used in our method are formulated from an analytical study of the trade - off relationship between quantization error and entropy of binary codes. although these hashing criteria are widely used in previous research, their analytical behavior is rarely studied. all building blocks of our algorithm are based on the analytical solution, and it thus provides a fairly simple and efficient procedure.
arxiv:1501.07422
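A pairwise (Givens) rotation acts on only two coordinates at a time, which is what makes structured products of them cheap to apply. The following is a hypothetical dense sketch of sign hashing after a product of pairwise rotations; the paper's sparse O(n log n) factorisation and its learned rotation angles are not reproduced here, and the angles below are arbitrary:

```python
import numpy as np

def givens(n, i, j, theta):
    """n x n Givens (pairwise) rotation acting on coordinates i and j."""
    G = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    G[i, i] = G[j, j] = c
    G[i, j], G[j, i] = -s, s
    return G

def hash_codes(X, rotations):
    """Binarise features by rotating with a product of pairwise rotations
    and taking signs. Naive dense version for illustration only."""
    n = X.shape[1]
    R = np.eye(n)
    for i, j, theta in rotations:
        R = givens(n, i, j, theta) @ R
    return (X @ R.T > 0).astype(np.uint8)

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 4))
codes = hash_codes(X, [(0, 1, 0.3), (2, 3, 1.1), (0, 2, 0.7)])
print(codes.shape)  # (5, 4)
```

Because each Givens factor is orthogonal, the rotation preserves norms, so the binarisation only redistributes variance across bits rather than distorting distances.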
we present electrical resistivity and ac - susceptibility measurements of gdte $ _ 3 $, tbte $ _ 3 $ and dyte $ _ 3 $ performed under pressure. an upper charge - density - wave ( cdw ) is suppressed at a rate of $ \ mathrm { d } t _ { \ mathrm { cdw, 1 } } / \ mathrm { d } p $ = $ - $ 85 k / gpa. for tbte $ _ 3 $ and dyte $ _ 3 $, a second cdw below $ t _ { \ mathrm { cdw, 2 } } $ increases with pressure until it reaches the $ t _ { \ mathrm { cdw, 1 } } $ ( $ p $ ) line. for gdte $ _ 3 $, the lower cdw emerges as pressure is increased above $ \ sim $ 1 gpa. as these two cdw states are suppressed with pressure, superconductivity ( sc ) appears in the three compounds at lower temperatures. ac - susceptibility experiments performed on tbte $ _ 3 $ provide compelling evidence for bulk sc in the low - pressure region of the phase diagram. we provide measurements of superconducting critical fields and discuss the origin of a high - pressure superconducting phase occurring above 5 gpa.
arxiv:1504.07190
this paper advocates the use of organic priors in classical non - rigid structure from motion ( nrsfm ). by organic priors, we mean invaluable intermediate prior information intrinsic to the nrsfm matrix factorization theory. it is shown that such priors reside in the factorized matrices, and quite surprisingly, existing methods generally disregard them. the paper ' s main contribution is to put forward a simple, methodical, and practical method that can effectively exploit such organic priors to solve nrsfm. the proposed method does not make assumptions other than the popular one on the low - rank shape and offers a reliable solution to nrsfm under orthographic projection. our work reveals that the accessibility of organic priors is independent of the camera motion and shape deformation type. besides that, the paper provides insights into the nrsfm factorization - - both in terms of shape and motion - - and is the first approach to show the benefit of single rotation averaging for nrsfm. furthermore, we outline how to effectively recover motion and non - rigid 3d shape using the proposed organic prior based approach and demonstrate results that outperform prior - free nrsfm performance by a significant margin. finally, we present the benefits of our method via extensive experiments and evaluations on several benchmark datasets.
arxiv:2207.06262
community detection refers to finding densely connected groups of nodes in graphs. in important applications, such as cluster analysis and network modelling, the graph is sparse but outliers and heavy - tailed noise may obscure its structure. we propose a new method for sparsity - aware robust community detection ( sparcode ). starting from a densely connected and outlier - corrupted graph, we first extract a preliminary sparsity improved graph model where we optimize the level of sparsity by mapping the coordinates from different clusters such that the distance of their embedding is maximal. then, undesired edges are removed and the graph is constructed robustly by detecting the outliers using the connectivity of nodes in the improved graph model. finally, fast spectral partitioning is performed on the resulting robust sparse graph model. the number of communities is estimated using modularity optimization on the partitioning results. we compare the performance to popular graph and cluster - based community detection approaches on a variety of benchmark network and cluster analysis data sets. comprehensive experiments demonstrate that our method consistently finds the correct number of communities and outperforms existing methods in terms of detection performance, robustness and modularity score while requiring a reasonable computation time.
arxiv:2011.09196
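The spectral partitioning step at the end of the pipeline can be illustrated on a toy graph (this sketch shows only the standard Fiedler-vector split, not the paper's sparsification and outlier-removal stages): two triangles joined by a single bridge edge are separated by the signs of the Laplacian eigenvector belonging to the second-smallest eigenvalue.

```python
import numpy as np

# two dense triangles joined by one bridge edge
edges = [(0, 1), (0, 2), (1, 2),      # community A
         (3, 4), (3, 5), (4, 5),      # community B
         (2, 3)]                      # bridge
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(1)) - A             # graph Laplacian
vals, vecs = np.linalg.eigh(L)        # eigenvalues in ascending order
fiedler = vecs[:, 1]                  # eigenvector of 2nd-smallest eigenvalue
part = frozenset(np.flatnonzero(fiedler > 0).tolist())
print(sorted(part))                   # one of the two triangles
```

The sign of an eigenvector is arbitrary, so the positive side may be either triangle; either way the split recovers the two communities.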
we have produced an interacting quantum degenerate fermi gas of atoms composed of two spin - states of magnetically trapped $ ^ { 40 } $ k. the relative fermi energies are adjusted by controlling the population in each spin - state. measurements of the thermodynamics reveal the resulting imbalance in the mean energy per particle between the two species, which is as large as a factor of 1. 4 at our lowest temperature. this imbalance of energy comes from a suppression of collisions between atoms in the gas due to the pauli exclusion principle. through measurements of the thermal relaxation rate we have directly observed this pauli blocking as a factor of two reduction in the effective collision cross - section in the quantum degenerate regime.
arxiv:cond-mat/0101445
we investigate the impact of charm mixing on the model - independent gamma measurement using dalitz plot analysis of the three - body d decay from b + - > dk + process, and show that ignoring the mixing at all stages of the analysis is safe up to a sub - degree level of precision. we also find that in the coherent production of d0 - d0 * system in e + e - collisions, the effect of charm mixing is enhanced, and propose a model - independent method to measure charm mixing parameters in the time - integrated dalitz analysis at charm factories.
arxiv:1004.2350
we compute the relative divergence and the subgroup distortion of bestvina - brady subgroups. we also show that for each integer $ n \ geq 3 $, there is a free subgroup of rank $ n $ of some right - angled artin group whose inclusion is not a quasi - isometric embedding. this result answers the question of carr about the minimum rank $ n $ such that some right - angled artin group has a free subgroup of rank $ n $ whose inclusion is not a quasi - isometric embedding. it is well - known that a right - angled artin group $ a _ \ gamma $ is the fundamental group of a graph manifold whenever the defining graph $ \ gamma $ is a tree. we show that the bestvina - brady subgroup $ h _ \ gamma $ in this case is a horizontal surface subgroup.
arxiv:1606.00539
we investigate trilepton final states to probe top anomalous couplings at the large hadron collider. we focus on events originating from the associated production of a single top quark with a z - boson, a channel sensitive to several flavor - changing neutral interactions of top and up / charm quarks. in particular, we explore a way to access simultaneously their anomalous couplings to z - bosons and gluons and derive the discovery potential of trilepton final states to such interactions with 20 fb - 1 of 8 tev collisions. we show that effective coupling strengths of o ( 0. 1 - 1 ) tev - 1 can be reached. equivalently, branching fractions of top quarks into lighter quarks and gluons or z - bosons can be constrained to be below o ( 0. 1 - 1 ) %.
arxiv:1304.5551
we calculate the coupling between a vector resonance and two goldstone bosons in $ su ( 2 ) $ gauge theory with $ n _ f = 2 $ dirac fermions in the fundamental representation. the considered theory can be used to construct a minimal composite higgs model. the coupling is related to the width of the vector resonance and we determine it by simulating the scattering of two goldstone bosons where the resonance is produced. the resulting coupling is $ g _ { \ rm { vpp } } = 7. 8 \ pm 0. 6 $, not far from $ g _ { \ rho \ pi \ pi } \ simeq 6 $ in qcd. this is the first lattice calculation of the resonance properties for a minimal uv completion. this coupling controls the production cross section of the lightest expected resonance at the lhc and enters into other tests of the standard model, from vector boson fusion to electroweak precision tests. our prediction is crucial to constrain the model using lattice input and for understanding the behavior of the vector meson production cross section as a function of the underlying gauge theory. we also extract the coupling $ g _ { \ rm { vpp } } ^ { \ rm { ksrf } } = 9. 4 \ pm 0. 6 $ assuming vector dominance and find that this phenomenological estimate slightly overestimates the value of the coupling.
arxiv:2012.09761
we derive the class of covariant measurements which are optimal according to the maximum likelihood criterion. the optimization problem is fully resolved in the case of pure input states, under the physically meaningful hypotheses of unimodularity of the covariance group and measurability of the stability subgroup. the general result is applied to the case of covariant state estimation for finite dimension, and to the weyl - heisenberg displacement estimation in infinite dimension. we also consider estimation with multiple copies, and compare collective measurements on identical copies with the scheme of independent measurements on each copy. a " continuous - variables " analogue of the measurement of direction of the angular momentum with two anti - parallel spins by gisin and popescu is given.
arxiv:quant-ph/0403083
a new thermometer based on fragment momentum fluctuations is presented. this thermometer exhibited residual contamination from the collective motion of the fragments along the beam axis. for this reason, the transverse direction has been explored. additionally, a mass dependence was observed for this thermometer. this mass dependence may be the result of the fermi momentum of nucleons or of the different properties of the fragments ( binding energy, spin, etc. ), which might be more sensitive to different densities and temperatures of the exploding fragments. we expect some of these aspects to be smaller for protons ( and / or neutrons ) ; consequently, the proton transverse momentum fluctuations were used to investigate the temperature dependence of the source.
arxiv:1004.0021
automatic solutions which enable the selection of the best algorithms for a new problem are commonly found in the literature. one research area which has recently received considerable efforts is collaborative filtering. existing work includes several approaches using metalearning, which relate the characteristics of datasets with the performance of the algorithms. this work explores an alternative approach to tackle this problem. since, in essence, both are recommendation problems, this work uses collaborative filtering algorithms to select collaborative filtering algorithms. our approach integrates subsampling landmarkers, which are a data characterization approach commonly used in metalearning, with a standard collaborative filtering method. the experimental results show that cf4cf competes with standard metalearning strategies in the problem of collaborative filtering algorithm selection.
arxiv:1803.02250
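The core idea of recommending algorithms with a collaborative-filtering step can be sketched as follows (a hedged toy version under my own assumptions, not the paper's CF4CF model): treat a (dataset x algorithm) performance matrix as a rating matrix, characterise a new dataset by cheap subsampling-landmarker scores, and recommend the best algorithm of its nearest neighbour.

```python
import numpy as np

# rows = datasets, columns = algorithms; entries = observed performance
perf = np.array([[0.90, 0.20, 0.50],   # dataset 0: algorithm 0 best
                 [0.30, 0.80, 0.40],   # dataset 1: algorithm 1 best
                 [0.85, 0.25, 0.45]])  # dataset 2: similar to dataset 0
# cheap subsampling-landmarker scores characterising each dataset
landmarkers = np.array([[0.70, 0.10],
                        [0.20, 0.90],
                        [0.65, 0.15]])

def recommend(new_landmarker, landmarkers, perf):
    """1-nearest-neighbour CF: find the most similar known dataset by
    landmarker distance and return its best-performing algorithm index."""
    dists = np.linalg.norm(landmarkers - new_landmarker, axis=1)
    nearest = int(np.argmin(dists))
    return int(np.argmax(perf[nearest]))

print(recommend(np.array([0.68, 0.12]), landmarkers, perf))  # 0
```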
in certain conditions a macroscopic quantum - mechanical scattering may occur, which may lead to a coherent cross - section on a macroscopic scale in a monocrystal. the conditions are satisfied by neutrinos, but not satisfied by other projectiles, with a higher cross - section. this may explain weber - type experiments of neutrino detection by a perfect, stiff sapphire monocrystal. the occurrence of coherence domains for quantum - mechanical scattering and classical diffraction is analyzed, and the force exerted upon a macroscopic target is estimated. it is concluded that neutrinos exhibit a distinctive feature in this respect, due precisely to their very small cross - section.
arxiv:2310.03315
in this paper, we develop two parameter - robust numerical algorithms for the biot model and apply them in brain edema simulations. by introducing an intermediate variable, we derive a multiphysics reformulation of the biot model. based on the reformulation, the biot model is viewed as a generalized stokes subproblem combined with a reaction - diffusion subproblem. solving the two subproblems together or separately leads to a coupled or a decoupled algorithm. we conduct extensive numerical experiments to show that the two algorithms are robust with respect to the physics parameters. the algorithms are applied to study the brain swelling caused by abnormal accumulation of cerebrospinal fluid in injured areas. the effects of key physics parameters on brain swelling are carefully investigated. it is observed that the permeability has the greatest effect on intracranial pressure ( icp ) and tissue deformation ; the young ' s modulus and the poisson ratio do not affect the maximum icp much but do affect the tissue deformation and the developing speed of brain swelling.
arxiv:1906.08802
the purpose of this article is to provide a solution to the $ m $ - fold laplace equation in the half space $ r _ + ^ d $ under certain dirichlet conditions. the solutions we present are a series of $ m $ boundary layer potentials. we give explicit formulas for these layer potentials as linear combinations of powers of the laplacian applied to the dirichlet data, with coefficients determined by certain path counting problems.
arxiv:1305.5063
we investigate the diffuse light ( dl ) content of dark matter haloes in the mass range $ 11. 5 \ leq \ log m _ { halo } \ leq13 $, a range that includes also the dark matter halo of the milky - way, taking advantage of a state - of - the - art semi - analytic model run on the merger trees extracted from a set of high - resolution cosmological simulations. the fraction of dl in such relatively small haloes is found to progressively decrease from the high to the low mass end, in good agreement with analytic ( \ citealt { purcell2007 } ) and numerical results from simulations ( \ citealt { proctor2023, ahvazi2023 } ), and in good agreement also with the fraction of the dl observed in the milky - way ( \ citealt { deason2019 } ) and m31 ( \ citealt { harmsen2017 } ). haloes with different masses have a different efficiency in producing dl : $ \ log m _ { halo } \ simeq 13 $ is found to be the characteristic halo mass where the production of dl is the most efficient, while the overall efficiency decreases at both larger ( \ citealt { contini2024 } ) and smaller scales ( this work ). the dl content in this range of halo mass is the result of stellar stripping due to tidal interaction between satellites and their host ( 95 \ % ) and mergers between satellites and the central galaxy ( 5 \ % ), with pre - processed material ( a sub - channel of mergers and stripping, and thus already included in the 100 \ % ) contributing no more than 8 \ % on average. the halo concentration is the main driver of the dl formation : more concentrated haloes have higher dl fractions that come from stripping of more massive satellites in the high halo mass end, while dwarfs contribute mostly in the low halo mass end.
arxiv:2401.14650
we prove that if a subset of a $ d $ - dimensional vector space over a finite field with $ q $ elements has more than $ q ^ { d - 1 } $ elements, then it determines all the possible directions. if a set has more than $ q ^ k $ elements, it determines a $ k $ - dimensional set of directions. we prove stronger results for sets that are sufficiently random. this result is best possible as the example of a $ k $ - dimensional hyperplane shows. we can view this question as an erd \ h os type problem where a sufficiently large subset of a vector space determines a large number of configurations of a given type. for discrete subsets of $ { \ bbb r } ^ d $, this question has been previously studied by pach, pinchasi and sharir.
arxiv:1010.0749
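The statement can be checked by brute force in a tiny case (my own toy example, not from the paper): in f_3^2, any set with more than q^(d-1) = 3 points determines all q + 1 = 4 directions, where each nonzero difference vector is normalised to a canonical representative (1, m) or (0, 1).

```python
def directions(points, q):
    """Set of directions determined by a point set in F_q^2, with each
    direction in canonical form: (1, slope) if dx != 0, else (0, 1)."""
    dirs = set()
    for (ax, ay) in points:
        for (bx, by) in points:
            dx, dy = (ax - bx) % q, (ay - by) % q
            if (dx, dy) == (0, 0):
                continue
            if dx == 0:
                dirs.add((0, 1))
            else:
                # multiply by dx^{-1} mod q (Python 3.8+ modular inverse)
                dirs.add((1, dy * pow(dx, -1, q) % q))
    return dirs

pts = [(0, 0), (1, 0), (0, 1), (1, 1)]   # 4 > q^(d-1) = 3 points in F_3^2
print(sorted(directions(pts, 3)))        # all q + 1 = 4 directions
```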
we study the impact of nematic alignment on scalar active matter in the disordered phase. we show that nematic torques control the emergent physics of particles interacting via pairwise forces and can either induce or prevent phase separation. the underlying mechanism is a fluctuation - induced renormalization of the mass of the polar field that generically arises from nematic torques. the correlations between the fluctuations of the polar and nematic fields indeed conspire to increase the particle persistence length, contrary to what phenomenological computations predict. this effect is generic and our theory also quantitatively accounts for how nematic torques enhance particle accumulation along confining boundaries and opposes demixing in mixtures of active and passive particles.
arxiv:2301.02568
this work investigates three aspects : ( a ) a network vulnerability as the non - uniform vulnerable - host distribution, ( b ) threats, i. e., intelligent malwares that exploit such a vulnerability, and ( c ) defense, i. e., challenges for fighting the threats. we first study five large data sets and observe consistent clustered vulnerable - host distributions. we then present a new metric, referred to as the non - uniformity factor, which quantifies the unevenness of a vulnerable - host distribution. this metric is essentially the renyi information entropy and better characterizes the non - uniformity of a distribution than the shannon entropy. next, we analyze the propagation speed of network - aware malwares in view of information theory. in particular, we draw a relationship between renyi entropies and randomized epidemic malware - scanning algorithms. we find that the infection rates of malware - scanning methods are characterized by the renyi entropies that relate to the information bits in a non - uniform vulnerable - host distribution extracted by a randomized scanning algorithm. meanwhile, we show that a representative network - aware malware can increase the spreading speed by exactly or nearly a non - uniformity factor when compared to a random - scanning malware at an early stage of malware propagation. this quantifies how much more rapidly the internet can be infected at the early stage when a malware exploits an uneven vulnerable - host distribution as a network - wide vulnerability. furthermore, we analyze the effectiveness of defense strategies on the spread of network - aware malwares. our results demonstrate that counteracting network - aware malwares is a significant challenge for the strategies that include host - based defense and ipv6.
arxiv:0805.0802
we find the minimax rate of convergence in hausdorff distance for estimating a manifold m of intrinsic dimension d embedded in an ambient space r ^ D, given a noisy sample from the manifold. we assume that the manifold satisfies a smoothness condition and that the noise distribution has compact support. we show that the optimal rate of convergence is n ^ { - 2 / ( 2 + d ) }. thus, the minimax rate depends only on the dimension d of the manifold, not on the dimension D of the space in which m is embedded.
arxiv:1007.0549
understanding how young stars gain their masses through disk - to - star accretion is of paramount importance in astrophysics. it affects our knowledge about the early stellar evolution, the disk lifetime and dissipation processes, the way the planets form on the smallest scales, or the connection to macroscopic parameters characterizing star - forming regions on the largest ones, among others. in turn, mass accretion rate estimates depend on the accretion paradigm assumed. for low - mass t tauri stars with strong magnetic fields there is consensus that magnetospheric accretion ( ma ) is the driving mechanism, but the transfer of mass in massive young stellar objects with weak or negligible magnetic fields probably occurs directly from the disk to the star through a hot boundary layer ( bl ). the intermediate - mass herbig ae / be ( haebe ) stars bridge the gap between both previous regimes and are still optically visible during the pre - main sequence phase, thus constituting a unique opportunity to test a possible change of accretion mode from ma to bl. this review deals with our estimates of accretion rates in haebes, critically discussing the different accretion paradigms. it shows that although mounting evidence supports that ma may extend to late - type haes but not to early - type hbes, there is not yet a consensus on the validity of this scenario versus the bl one. based on ma and bl shock modeling, it is argued that the ultraviolet regime could significantly contribute in the future to discriminating between these competing accretion scenarios.
arxiv:2005.01745
A $4^-$-power is a non-empty word of the form $xxxx^-$, where $x^-$ is obtained from $x$ by erasing the last letter. A binary word is called {\em faux-bonacci} if it contains no $4^-$-powers and no factor 11. We show that faux-bonacci words bear the same relationship to the Fibonacci morphism that overlap-free words bear to the Thue-Morse morphism. We prove the analogue of Fife's theorem for faux-bonacci words, and characterize the lexicographically least and greatest infinite faux-bonacci words.
arxiv:2311.12962
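To make the definition concrete, here is a small checker (a hypothetical helper, not from the paper) that tests whether a binary word avoids both forbidden patterns. It uses the observation that a $4^-$-power with $|x| = p$ is exactly a factor of length $4p - 1$ that repeats with period $p$:

```python
def is_faux_bonacci(w: str) -> bool:
    """True iff the binary word w contains no factor '11' and no 4^- power,
    i.e. no factor of the form x x x x' where x' is x minus its last letter.
    For p = |x| = 1 the forbidden factors are '000' and '111'."""
    if "11" in w:
        return False
    n = len(w)
    for p in range(1, n // 3 + 1):                # candidate period |x|
        for i in range(n - (4 * p - 1) + 1):      # candidate start position
            x = w[i : i + p]
            if w[i : i + 4 * p - 1] == (x * 4)[: 4 * p - 1]:
                return False                      # found a 4^- power
    return True
```

For example, `is_faux_bonacci("010010")` is true, while `"000"` (a $4^-$-power with $x = 0$) and `"0101010"` (with $x = 01$) are both rejected.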
The status of four- and six-fermion event generators for Standard Model processes at present and future $e^+e^-$ colliders is briefly reviewed.
arxiv:hep-ph/9911483
3D Gaussian splatting (3DGS) has recently emerged as an innovative and efficient 3D representation technique. While its potential for extended reality (XR) applications is frequently highlighted, its practical effectiveness remains underexplored. In this work, we examine three distinct 3DGS-based approaches for virtual environment (VE) creation, leveraging their unique strengths for efficient and visually compelling scene representation. By conducting a comparative study, we evaluate the feasibility of 3DGS in creating immersive VEs, identify its limitations in XR applications, and discuss future research and development opportunities.
arxiv:2501.09302
The interplay between the structure and dynamics of partially confined Lennard-Jones (LJ) fluids, deep into the supercritical phase, is studied over a wide range of densities in the context of the Frenkel line (FL), which separates rigid liquid-like and non-rigid gas-like regimes in the phase diagram of supercritical fluids. Extensive molecular dynamics simulations carried out at the two ends of the FL (P = 5000 bars, T = 300 K and T = 1500 K) reveal intriguing features in supercritical fluids as a function of the stiffness of the partially confining atomistic walls. The liquid-like regime of an LJ fluid (P = 5000 bars, T = 300 K), mimicking argon, partially confined between walls separated by 10 Å along the z-axis, and otherwise unconstrained, reveals amorphous and liquid-like structural signatures in the radial distribution function parallel to the walls and enhanced self-diffusion as the wall stiffness is decreased. In sharp contrast, in the gas-like regime (P = 5000 bars, T = 1500 K), soft walls lead to increasing structural order hindering self-diffusion. Furthermore, the correlations between structure and self-diffusion are found to be well captured by excess entropy. The rich behavior shown by supercritical fluids under partial confinement, even with simple interatomic potentials, is found to be fairly independent of hydrophilicity and hydrophobicity. The study identifies persisting sub-diffusive features over intermediate time scales, emerging from the strong interplay between density and confinement, that dictate the evolution and stabilization of structures. It is anticipated that these results may help gain a better understanding of the behavior of partially confined complex fluids found in nature.
arxiv:1903.07646
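For reference, the pair interaction underlying such simulations is the standard 12-6 Lennard-Jones potential. The sketch below works in reduced units (epsilon = sigma = 1); these are textbook conventions, not parameters taken from the paper:

```python
# Standard 12-6 Lennard-Jones pair potential in reduced units:
#   U(r) = 4*eps*((sigma/r)^12 - (sigma/r)^6)
# The potential crosses zero at r = sigma and has its minimum of depth
# -eps at r = 2^(1/6) * sigma.

def lj_potential(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair potential energy at separation r."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

r_min = 2 ** (1 / 6)           # separation at the potential minimum
print(lj_potential(1.0))       # 0.0: zero crossing at r = sigma
print(lj_potential(r_min))     # -1.0: well depth in reduced units
```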
We prove a structural theorem that provides a precise local picture of how a sequence of closed embedded minimal hypersurfaces with uniformly bounded index (and volume if the ambient dimension is greater than three) in a Riemannian manifold of dimension at most seven can degenerate. Loosely speaking, our results show that embedded minimal hypersurfaces with bounded index behave qualitatively like embedded stable minimal hypersurfaces, up to controlled errors. Several compactness/finiteness theorems follow from our local picture.
arxiv:1509.06724
Standard bosonization techniques lead to phonon-like excitations in a Luttinger liquid (LL), reflecting the absence of Landau quasiparticles in these systems. Yet in addition to the above excitations, some LLs are known to possess solitonic states carrying fractional quantum numbers (e.g. the spin-1/2 Heisenberg chain). We have reconsidered the zero modes in the low-energy spectrum of the Gaussian boson LL Hamiltonian for both fermionic and bosonic LLs: in the spinless case we find that two elementary excitations carrying fractional quantum numbers allow us to generate all the charge and current excited states of the LL. We explicitly compute the wavefunctions of these two objects and show that one of them can be identified with the 1D version of the Laughlin quasiparticle introduced in the context of the fractional quantum Hall effect. For bosons, the other quasiparticle corresponds to a spinon excitation. The eigenfunctions of Wen's chiral LL Hamiltonian are also derived: they are quite simply the one-dimensional restrictions of the 2D bulk Laughlin wavefunctions.
arxiv:cond-mat/9905020
We look at the following question raised by Koll\'ar and Peskine. (Actually, it is a slightly weaker version of their question.) Let $V_t$ be a family of rank two vector bundles on $\Bbb P^3$. Assume that the general member of the family is a trivial vector bundle. Then, is the special member $V_0$ also a trivial vector bundle? We show that this question is equivalent to the nonexistence of morphisms from $\Bbb P^3 \to \mathcal{X}$, where $\mathcal{X}$ is the infinite Grassmannian associated to SL(2). We further reduce this question to the nonexistence of $\Bbb C^*$-equivariant morphisms from $\Bbb C^3 \setminus \{0\} \to \mathcal{M}_d$ (for any $d > 0$), where $\mathcal{M}_d$ is the Donaldson moduli space of isomorphism classes of rank two vector bundles $\mathcal{V}$ over $\Bbb P^2$ with trivial determinant and with second Chern class $d$, together with a trivialization of $\mathcal{V}_{|\Bbb P^1}$.
arxiv:1202.1267
The hypothesis that the damped Ly-alpha systems (DLAs) are large, galactic disks (Milky Way sized) is tested by confronting predictions of models of the formation and evolution of (large) disk galaxies with observations, in particular the zinc abundance distribution with neutral hydrogen column density found for DLAs. A pronounced mismatch is found, strongly hinting that the majority of DLAs may not be large, galactic disks.
arxiv:astro-ph/9907349
The AdS/CFT correspondence is a realization of the holographic principle in the context of string theory. It is a map between a quantum field theory and a string theory living in one or more extra dimensions. Holography provides new tools to study strongly-coupled quantum field theories. It has important applications in quantum chromodynamics (QCD) and condensed matter (CM) systems, which are usually complicated and strongly coupled. Quantum critical CM theories have scaling symmetries and can be connected to higher-dimensional scale-invariant space-times. The effective holographic theory paradigm may be used to describe the low-energy (IR) holographic dynamics of quantum critical systems via the Einstein-Maxwell-dilaton (EMD) theory. We find the magnetic critical scaling solutions of an EMD theory containing an extra parity-odd term $F \wedge F$. Previous studies in the absence of magnetic fields have shown the existence of quantum critical lines separated by quantum critical points. We find this is also true in the presence of a magnetic field. The critical solutions are characterized by the triplet of critical exponents ($\theta, z, \zeta$), the first two describing the geometry, while the latter describes the charge density.
arxiv:1411.3579