Recent work has established that large informatics graphs, such as social and information networks, have non-trivial tree-like structure when viewed at moderate size scales. Here, we present results from the first detailed empirical evaluation of the use of tree decomposition (TD) heuristics for structure identification and extraction in social graphs. Although TDs have historically been used in structural graph theory and scientific computing, we show that, even with existing TD heuristics developed for those very different areas, TD methods can identify interesting structure in a wide range of realistic informatics graphs. Our main contributions are the following: we show that TD methods can identify structures that correlate strongly with the core-periphery structure of realistic networks, even when using simple greedy heuristics; we show that the peripheral bags of these TDs correlate well with low-conductance communities (when they exist) found using local spectral computations; and we show that several types of large-scale "ground-truth" communities, defined by demographic metadata on the nodes of the network, are well-localized in the large-scale and/or peripheral structures of the TDs. Our other main contributions are the following: we provide detailed empirical results for TD heuristics on toy and synthetic networks, to establish a baseline for better understanding the behavior of the heuristics on more complex real-world networks; and we prove a theorem providing formal justification for the intuition that the only two impediments to low-distortion hyperbolic embedding are high tree-width and long geodesic cycles. Our results suggest future directions for improved TD heuristics that are more appropriate for realistic social graphs.
arxiv:1411.1546
In this paper we present a unified picture concerning the Lie-Trotter method for solving a large class of semilinear problems: nonlinear Schrödinger, Schrödinger-Poisson, Gross-Pitaevskii, etc. This picture includes more general schemes such as Strang and Ruth-Yoshida. The convergence result is presented in suitable Hilbert spaces related to the time regularity of the solution and is based on Lipschitz estimates for the nonlinearity. In addition, with extra requirements both on the regularity of the initial datum and on the nonlinearity, we show the linear convergence of the method.
arxiv:1211.5111
Identifying a general quasi-local notion of energy-momentum and angular momentum would be an important advance in general relativity, with potentially important consequences for mathematical and astrophysical studies in general relativity. In this paper we study a promising approach to this problem, first proposed by Wang and Yau in 2009, based on isometric embeddings of closed surfaces in Minkowski space. We study the properties of the Wang-Yau quasi-local mass in high-accuracy numerical simulations of the head-on collisions of two non-spinning black holes within full general relativity. We discuss the behavior of the Wang-Yau quasi-local mass on constant expansion surfaces and compare its behavior with the irreducible mass. We investigate the time evolution of the Wang-Yau quasi-local mass in numerical examples. In addition, we discuss mathematical subtleties in defining the Wang-Yau mass for marginally trapped surfaces.
arxiv:2308.10906
Impulsive solar energetic particle (SEP) events originate from the energy dissipation process in small solar flares. Anomalous abundances in impulsive SEP events provide evidence of a unique, yet unclear, acceleration mechanism. The pattern of heavy-ion enhancements indicates that the temperature of the accelerated source plasma is low and not flare-like. We examine the solar source of the $^3$He-rich SEP event of 2012 November 20 using Solar Dynamics Observatory (SDO)/Atmospheric Imaging Assembly (AIA) images and investigate its thermal variation. The examined event is associated with recurrent coronal jets. Differential emission measure (DEM) analysis is applied to study the temperature evolution and distribution of the source regions. Preliminary results show that the temperature of the associated solar source ranges between 1.2 and 3.1 MK.
arxiv:1712.07285
In this paper we formulate a corporate bond (CB) pricing model for deriving the term structure of default probabilities (TSDP) and the recovery rate (RR) for each pair of industry factor and credit rating grade; the derived TSDPs and RRs are regarded as what investors imply in forming CB prices in the market at each time. A unique feature of this formulation is that the model allows each firm to run several business lines corresponding to different industry categories, which is typical in reality. In fact, treating all the cross-sectional CB prices simultaneously under a credit correlation structure at each time makes it possible to sort out the overlapping business lines of the firms which issued CBs, and to extract the TSDPs for each pair of individual industry factor and rating grade together with the RRs. The result is applied to the valuation of CDS (credit default swap) contracts and to loan portfolio management in banking.
arxiv:1206.4766
One key challenge in multi-document summarization is to capture the relations among input documents that distinguish multi-document summarization (MDS) from single-document summarization (SDS). Few existing MDS works address this issue. One effective way is to encode document positional information to assist models in capturing cross-document relations. However, existing MDS models, such as Transformer-based models, only consider token-level positional information. Moreover, these models fail to capture sentences' linguistic structure, which inevitably causes confusion in the generated summaries. Therefore, in this paper, we propose document-aware positional encoding and linguistic-guided encoding that can be fused with the Transformer architecture for MDS. For document-aware positional encoding, we introduce a general protocol to guide the selection of document encoding functions. For linguistic-guided encoding, we propose to embed syntactic dependency relations into the dependency relation mask with a simple but effective non-linear encoding learner for feature learning. Extensive experiments show the proposed model can generate summaries with high quality.
arxiv:2209.05929
Let $r, n$ be two natural numbers and let $H(r, n)$ denote the maximal absolute value of the $r$th coefficient of divisors of $x^n - 1$. In this paper, we show that $\sum_{n \leq x} H(r, n)$ is asymptotically equal to $C(r)\, x (\log x)^{2^r - 1}$ for some constant $C(r) > 0$. Furthermore, we give an explicit expression for $C(r)$ in terms of $r$.
arxiv:1710.10491
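The quantity $H(r, n)$ defined in the abstract above is easy to compute for small $n$, since the monic divisors of $x^n - 1$ over the integers are exactly the products of distinct cyclotomic polynomials $\Phi_d$ with $d \mid n$. A brute-force sketch (function names are our own, and this is only a sanity check, not the paper's method):

```python
from itertools import combinations

def poly_mul(p, q):
    # Multiply two coefficient lists (index = degree).
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_divexact(p, d):
    # Exact division of p by a monic polynomial d (both coefficient lists).
    p, q = p[:], [0] * (len(p) - len(d) + 1)
    for i in range(len(q) - 1, -1, -1):
        c = p[i + len(d) - 1]
        q[i] = c
        for j, b in enumerate(d):
            p[i + j] -= c * b
    return q

def cyclotomic(n, _cache={}):
    # Phi_n = (x^n - 1) / prod_{d | n, d < n} Phi_d, computed recursively.
    if n not in _cache:
        p = [-1] + [0] * (n - 1) + [1]          # x^n - 1
        for d in range(1, n):
            if n % d == 0:
                p = poly_divexact(p, cyclotomic(d))
        _cache[n] = p
    return _cache[n]

def H(r, n):
    # Max |coefficient of x^r| over all monic divisors of x^n - 1,
    # i.e. over all products of distinct Phi_d with d | n.
    divs = [d for d in range(1, n + 1) if n % d == 0]
    best = 0
    for k in range(len(divs) + 1):
        for sub in combinations(divs, k):
            p = [1]
            for d in sub:
                p = poly_mul(p, cyclotomic(d))
            if r < len(p):
                best = max(best, abs(p[r]))
    return best
```

For instance, $H(1, 6) = 2$, attained by $(x+1)(x^2+x+1) = x^3 + 2x^2 + 2x + 1$. The subset enumeration is exponential in the number of divisors of $n$, so this is only practical for small $n$.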
We examine the implications of several parameterizations of so-called target mass corrections (TMCs) for the physics of parity-violating deeply inelastic scattering (DIS), especially at high values of the momentum fraction $x$. We consider the role played by perturbative corrections in $\alpha_s$ in modifying TMCs; we explicitly calculate these corrections both at the level of the individual electroweak structure functions (SFs) and in the observables of parity-violating DIS. TMCs augment an inventory of previously studied corrections that become sizable at low $Q^2$ (finite-$Q^2$ corrections), and we give special attention to the effects that might lead to the violation of the approximate equality $R^{\gamma Z} = R^{\gamma}$.
arxiv:1102.1106
Self-supervised learning methods overcome the key bottleneck for building more capable AI: the limited availability of labeled data. However, one of the drawbacks of self-supervised architectures is that the representations they learn are implicit, and it is hard to extract meaningful information about the encoded world states, such as the 3D structure of the visual scene encoded in a depth map. Moreover, in the visual domain such representations only rarely undergo evaluations that may be critical for downstream tasks, such as vision for autonomous cars. Herein, we propose a framework for evaluating visual representations for illumination invariance in the context of depth perception. We develop a new predictive coding-based architecture and a hybrid fully-supervised/self-supervised learning method. We propose a novel architecture that extends the predictive coding approach: the predictive lateral bottom-up and top-down encoder-decoder network (PreludeNet), which explicitly learns to infer and predict depth from video frames. In PreludeNet, the encoder's stack of predictive coding layers is trained in a self-supervised manner, while the predictive decoder is trained in a supervised manner to infer or predict the depth. We evaluate the robustness of our model on a new synthetic dataset, in which lighting conditions (such as overall illumination, and the effect of shadows) can be parametrically adjusted while keeping all other aspects of the world constant. PreludeNet achieves both competitive depth inference performance and next-frame prediction accuracy. We also show how this new network architecture, coupled with the hybrid fully-supervised/self-supervised learning method, achieves a balance between the said performance and invariance to changes in lighting. The proposed framework for evaluating visual representations can be extended to diverse task domains and invariance tests.
arxiv:2207.02972
Negation is a fundamental linguistic concept used by humans to convey what they do not want. Despite this, minimal research has focused on negation within text-guided image editing. This lack of research means that vision-language models (VLMs) for image editing may struggle to understand negation, implying that they may struggle to provide accurate results. One barrier to achieving human-level intelligence is the lack of a standard collection by which research into negation can be evaluated. This paper presents the first large-scale dataset, Negative Instruction (NeIn), for studying negation within instruction-based image editing. Our dataset comprises 366,957 quintuplets in total, i.e., source image, original caption, selected object, negative sentence, and target image, including 342,775 queries for training and 24,182 queries for benchmarking image editing methods. Specifically, we automatically generate NeIn based on a large, existing vision-language dataset, MS-COCO, via two steps: generation and filtering. During the generation phase, we leverage two VLMs, BLIP and InstructPix2Pix (fine-tuned on the MagicBrush dataset), to generate NeIn's samples and the negative clauses that express the content of the source image. In the subsequent filtering phase, we apply BLIP and LLaVA-NeXT to remove erroneous samples. Additionally, we introduce an evaluation protocol to assess negation understanding in image editing models. Extensive experiments using our dataset across multiple VLMs for text-guided image editing demonstrate that even recent state-of-the-art VLMs struggle to understand negative queries.
arxiv:2409.06481
We prove that for any semi-norm $\|\cdot\|$ on $\mathbb{R}^n$ and any symmetric convex body $K$ in $\mathbb{R}^n$,
$$\int_{\partial K} \frac{\|n_x\|^2}{\langle x, n_x\rangle} \leq \frac{1}{|K|}\left(\int_{\partial K} \|n_x\|\right)^2,$$
and characterize the equality cases of this new inequality. The above would also follow from the log-Brunn-Minkowski conjecture, if the latter were proven, and we believe that it may be of independent interest. We furthermore obtain an improvement of this inequality in some cases, involving the Poincaré constant of $K$. The conjectured log-Brunn-Minkowski inequality is a strengthening of the Brunn-Minkowski inequality in the partial case of symmetric convex bodies, equivalent to the validity of the following statement: for all symmetric convex smooth sets $K$ in $\mathbb{R}^n$ and all smooth even $f: \partial K \rightarrow \mathbb{R}$,
$$\int_{\partial K} H_x f^2 - \langle \mathrm{II}^{-1}\nabla_{\partial K} f, \nabla_{\partial K} f\rangle + \frac{f^2}{\langle x, n_x\rangle} \leq \frac{1}{|K|}\left(\int_{\partial K} f\right)^2.$$
In this note, we verify the above with the particular choice of speed function $f(x) = |\langle v, n_x\rangle|$, for all symmetric convex bodies $K$, where $v \in \mathbb{R}^n$ is an arbitrary vector.
arxiv:2004.06103
Studying the physical conditions structuring young circumstellar disks is required for understanding the onset of planet formation. Of particular interest is the protoplanetary disk surrounding the Herbig star MWC 480. The structure and properties of the circumstellar disk of MWC 480 are studied by infrared interferometry and interpreted with a modeling approach. New observations are driving this study, in particular recent Very Large Telescope Interferometer (VLTI)/MIDI data acquired in December 2013. Our one-component disk model could not reproduce all our data simultaneously: the spectral energy distribution, the near-infrared Keck Interferometer data, and the mid-infrared data obtained with the MIDI instrument. In order to explain all measurements, one possibility is to add an asymmetry to our one-component disk model, under the assumption that the structure of the disk of MWC 480 has not varied with time. Several scenarios are tested, and the one considering the presence of an azimuthal bright feature in the inner component of the disk model provides a better fit to the data. (In this study, we assumed the size of the outer disk of MWC 480 to be 20 AU, since most of the near- and mid-IR emission comes from below 20 AU. In our previous study (Jamialahmadi et al. 2015), we adopted an outer radius of 80 AU, which is consistent with the value found by previous studies based on mm observations.)
arxiv:1709.08921
We consider the fundamental problems of determining the rooted and global edge and vertex connectivities (and computing the corresponding cuts) in directed graphs. For rooted (and hence also global) edge connectivity with small integer capacities we give a new randomized Monte Carlo algorithm that runs in time $\tilde{O}(n^2)$. For rooted edge connectivity this is the first algorithm to improve on the $\Omega(n^3)$ time bound in the dense-graph high-connectivity regime. Our result relies on a simple combination of sampling coupled with sparsification that appears new, and could lead to further tradeoffs for directed graph connectivity problems. We extend the edge connectivity ideas to rooted and global vertex connectivity in directed graphs. We obtain a $(1+\epsilon)$-approximation for rooted vertex connectivity in $\tilde{O}(nW/\epsilon)$ time, where $W$ is the total vertex weight (assuming integral vertex weights); in particular this yields an $\tilde{O}(n^2/\epsilon)$ time randomized algorithm for unweighted graphs. This translates to a $\tilde{O}(\kappa n W)$ time exact algorithm, where $\kappa$ is the rooted connectivity. We build on this to obtain similar bounds for global vertex connectivity. Our results complement the known results for these problems in the low-connectivity regime due to the work of Gabow [9] for edge connectivity from 1991, and the very recent work of Nanongkai et al. [24] and Forster et al. [7] for vertex connectivity.
arxiv:2104.07205
Simultaneous localization and mapping (SLAM) techniques can be used to navigate the visually impaired, but the development of robust SLAM solutions for crowded spaces is limited by the lack of realistic datasets. To address this, we introduce InCrowd-VI, a novel visual-inertial dataset specifically designed for human navigation in indoor pedestrian-rich environments. Recorded using Meta Aria Project glasses, it captures realistic scenarios without environmental control. InCrowd-VI features 58 sequences totaling a 5 km trajectory length and 1.5 hours of recording time, including RGB, stereo images, and IMU measurements. The dataset captures important challenges such as pedestrian occlusions, varying crowd densities, complex layouts, and lighting changes. Ground-truth trajectories, accurate to approximately 2 cm, are provided with the dataset, originating from the Meta Aria Project machine perception SLAM service. In addition, a semi-dense 3D point cloud of the scene is provided for each sequence. The evaluation of state-of-the-art visual odometry (VO) and SLAM algorithms on InCrowd-VI revealed severe performance limitations in these realistic scenarios. Under challenging conditions, systems exceeded the required localization accuracy of 0.5 meters and the 1% drift threshold, with classical methods showing drift up to 5-10%. While deep learning-based approaches maintained high pose estimation coverage (>90%), they failed to achieve the real-time processing speeds necessary for walking-pace navigation. These results demonstrate the need for and value of a new dataset to advance SLAM research for visually impaired navigation in complex indoor environments. The dataset and associated tools are publicly available at https://incrowd-vi.cloudlab.zhaw.ch/.
arxiv:2411.14358
across the entire dataset.
arxiv:2311.00072
Motivated by recent experiments showing the buckling of microtubules in cells, we study theoretically the mechanical response of, and force propagation along, elastic filaments embedded in a non-linear elastic medium. We find that, although embedded microtubules still buckle when their compressive load exceeds the critical value $f_c$ found earlier, the resulting deformation is restricted to a penetration depth that depends both on the non-linear material properties of the surrounding cytoskeleton and on the direct coupling of the microtubule to the cytoskeleton. The deformation amplitude depends on the applied load $f$ as $(f - f_c)^{1/2}$. This work shows how the range of compressive force transmission by microtubules can be as large as tens of microns and is governed by the mechanical coupling to the surrounding cytoskeleton.
arxiv:0709.2344
Reddit administrators have generally struggled to prevent or contain such discourse for several reasons, including: (1) the inability of a handful of human administrators to track and react to millions of posts and comments per day, and (2) fear of backlash as a consequence of administrative decisions to ban or quarantine hateful communities. Consequently, as shown in our background research, administrative actions (community bans and quarantines) are often taken in reaction to media pressure, after offensive discourse within a community has spilled into the real world with serious consequences. In this paper, we investigate the feasibility of proactive moderation on Reddit, i.e., proactively identifying communities at risk of committing offenses that previously resulted in bans for other communities. Proactive moderation strategies show promise for two reasons: (1) they have the potential to narrow down the communities that administrators need to monitor for hateful content, and (2) they give administrators a scientific rationale to back their administrative decisions and interventions. Our work shows that communities are constantly evolving in their user base and topics of discourse, and that evolution into hateful or dangerous (i.e., considered bannable by Reddit administrators) communities can often be predicted months ahead of time. This makes proactive moderation feasible. Further, we leverage explainable machine learning to help identify the strongest predictors of evolution into dangerous communities. This provides administrators with insights into the characteristics of communities at risk of becoming dangerous or hateful. Finally, we investigate, at scale, the impact of participation in hateful and dangerous subreddits and the effectiveness of community bans and quarantines on the behavior of members of these communities.
arxiv:1906.11932
Two-dimensional (2D) tin(II) monosulfide (SnS), with strong structural anisotropy, has been proven to be a phosphorene analogue. However, the difficulty of isolating very thin layers of SnS poses challenges for practical utilization. Here, we prepare ultrathin SnS via liquid-phase exfoliation. With transmission electron microscopy, we identify the buckled structure of 2D SnS. We employ temperature-dependent Raman spectroscopy to elucidate electron-phonon interactions, which reveals linear phonon shifts. The active Raman modes of ultrathin SnS exhibit higher sensitivity to temperature than those of other 2D materials. Moreover, we demonstrate strong light-matter interaction in ultrathin SnS using Z-scan and ultrafast spectroscopy. Rich exciton-exciton and coherent exciton-photon interactions, arising from many-particle excited-state effects in ultrathin SnS, enhance the nonlinear optical properties. Our findings highlight the prospects of synthesizing ultrathin anisotropic SnS for better thermoelectric and photonic devices.
arxiv:1811.00209
In this paper, we introduce the adaptive cluster Lasso (ACL) method for variable selection in high-dimensional sparse regression models with strongly correlated variables. To handle correlated variables, the concept of clustering or grouping variables and then pursuing model fitting is widely accepted. When the dimension is very high, finding an appropriate group structure is as difficult as the original problem. The ACL is a three-stage procedure: at the first stage, we use the Lasso (or its adaptive or thresholded version) to do initial selection, and we additionally include those variables which are not selected by the Lasso but are strongly correlated with the variables that are; at the second stage, we cluster the variables based on the reduced set of predictors; and at the third stage, we perform sparse estimation, such as the Lasso on cluster representatives or the group Lasso, based on the structures generated by the clustering procedure. We show that our procedure is consistent and efficient in finding the true underlying population group structure (under the irrepresentable and beta-min conditions). We also study the group selection consistency of our method, and we support the theory using simulated and pseudo-real data examples.
arxiv:1603.03724
With decentralized optimization having increased applications in various domains ranging from machine learning, control, and sensor networks to robotics, its privacy is also receiving increased attention. Existing privacy-preserving approaches for decentralized optimization achieve privacy preservation by patching decentralized optimization with information-technology privacy mechanisms such as differential privacy or homomorphic encryption, which either sacrifice optimization accuracy or incur heavy computation/communication overhead. We propose an inherently privacy-preserving decentralized optimization algorithm that exploits the robustness of decentralized optimization to uncertainties in the optimization dynamics. More specifically, we present a general decentralized optimization framework, based on which we show that privacy can be enabled in decentralized optimization by adding randomness in the optimization parameters. We further show that the added randomness has no influence on the accuracy of optimization, and prove that our inherently privacy-preserving algorithm has $R$-linear convergence when the global objective function is smooth and strongly convex. We also rigorously prove that the proposed algorithm prevents the gradient of a node from being inferable by other nodes. Numerical simulation results confirm the theoretical predictions.
arxiv:2207.05350
We present results for the chemical evolution of the Galactic bulge in the context of an inside-out formation model of the Galaxy. A supernova-driven wind was also included, in analogy with elliptical galaxies. New observations of chemical abundance ratios and the metallicity distribution have been employed to check the model results. We confirm previous findings that the bulge formed on a very short timescale with a quite high star formation efficiency and an initial mass function more skewed toward high masses than the one suitable for the solar neighbourhood. A certain amount of primary nitrogen from massive stars might be required to reproduce the nitrogen data at low and intermediate metallicities.
arxiv:astro-ph/0611650
The choice of membrane for a targeted separation process is usually based on a few requirements. Membranes have to provide enough mass-transfer area to process large amounts of feed stream. The selected membrane has to have high selectivity (rejection) for certain particles; it has to resist fouling and to have high mechanical stability. It also needs to be reproducible and to have low manufacturing costs. The main modeling equation for dead-end filtration at constant pressure drop is Darcy's law:
$$\frac{dV_p}{dt} = Q = \frac{\Delta P}{\mu}\, A \left(\frac{1}{R_m + R}\right),$$
where $V_p$ and $Q$ are the volume of the permeate and its volumetric flow rate, respectively (proportional to the same characteristics of the feed flow), $\mu$ is the dynamic viscosity of the permeating fluid, $A$ is the membrane area, and $R_m$ and $R$ are the respective resistances of the membrane and the growing deposit of foulants. $R_m$ can be interpreted as the membrane's resistance to solvent (water) permeation. This resistance is an intrinsic membrane property and is expected to be fairly constant and independent of the driving force $\Delta P$. $R$ is related to the type of membrane foulant, its concentration in the filtering solution, and the nature of foulant-membrane interactions. Darcy's law allows for calculation of the membrane area for a targeted separation at given conditions. The solute sieving coefficient is defined by the equation
$$S = \frac{c_p}{c_f},$$
where $c_f$ and $c_p$ are the solute concentrations in the feed and permeate, respectively. The hydraulic permeability is defined as the inverse of resistance and is represented by the equation
$$L_p = \frac{J}{\Delta P},$$
where $J$ is the permeate flux, i.e., the volumetric flow rate per unit of membrane area. The solute sieving coefficient and hydraulic permeability allow quick assessment of synthetic membrane performance.

== Membrane separation processes ==

Membrane separation processes have a very important role in the separation industry. Nevertheless, they were not considered technically important
https://en.wikipedia.org/wiki/Membrane_technology
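The three defining equations above translate directly into code. A minimal sketch, where all numerical values are invented for illustration and not taken from any real membrane datasheet:

```python
def permeate_flow(delta_p, mu, area, r_m, r_foulant):
    """Darcy's law: Q = (dP / mu) * A * 1 / (R_m + R)."""
    return (delta_p / mu) * area / (r_m + r_foulant)

def sieving_coefficient(c_permeate, c_feed):
    """S = c_p / c_f (dimensionless)."""
    return c_permeate / c_feed

def hydraulic_permeability(flux, delta_p):
    """L_p = J / dP, with J the permeate flux per unit membrane area."""
    return flux / delta_p

# Example: 1 bar (1e5 Pa) across 1 m^2 of clean membrane (R = 0),
# water-like viscosity 1e-3 Pa*s, assumed resistance R_m = 1e12 1/m.
Q = permeate_flow(delta_p=1e5, mu=1e-3, area=1.0,
                  r_m=1e12, r_foulant=0.0)          # m^3/s
S = sieving_coefficient(c_permeate=0.2, c_feed=1.0)  # i.e. 80% rejection
L_p = hydraulic_permeability(flux=Q / 1.0, delta_p=1e5)
```

As the foulant layer grows, `r_foulant` increases and `Q` drops, which is exactly the flux decline observed in constant-pressure dead-end filtration.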
Recently, it has been shown that there is a trade-off relation between thermodynamic cost and current fluctuations, referred to as the thermodynamic uncertainty relation (TUR). The TUR has been derived for various processes, such as discrete-time Markov jump processes and overdamped Langevin dynamics. For underdamped dynamics, it has recently been reported that some modification is necessary for application of the TUR. In this study, we present a more generalized TUR, applicable to a system driven by a velocity-dependent force in the context of underdamped Langevin dynamics, by extending the theory of Vu and Hasegawa [arXiv:1901.05715]. We show that our TUR accurately describes the trade-off properties of a molecular refrigerator (cold damping), Brownian dynamics in a magnetic field, and an active particle system.
arxiv:1907.06221
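For context, the original steady-state TUR that the abstract generalizes is commonly quoted in the form (this is background knowledge, not a statement of the paper's own bound):

```latex
\frac{\operatorname{Var}(J_\tau)}{\langle J_\tau \rangle^{2}}
  \;\geq\; \frac{2 k_{\mathrm{B}}}{\Sigma_\tau},
```

where $J_\tau$ is a current accumulated over time $\tau$ and $\Sigma_\tau$ is the total entropy production: more dissipation permits smaller relative fluctuations, and vice versa.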
While deep learning has outperformed other methods on various tasks, theoretical frameworks that explain why have not been fully established. To address this issue, we investigate the excess risk of two-layer ReLU neural networks in a teacher-student regression model, in which a student network learns an unknown teacher network through its outputs. In particular, we consider a student network that has the same width as the teacher network and is trained in two phases: first by noisy gradient descent and then by vanilla gradient descent. Our result shows that the student network provably reaches a near-global optimal solution and outperforms any kernel-method estimator (more generally, any linear estimator), including the neural tangent kernel approach, the random feature model, and other kernel methods, in the sense of the minimax optimal rate. The key concept inducing this superiority is the non-convexity of the neural network models. Even though the loss landscape is highly non-convex, the student network adaptively learns the teacher neurons.
arxiv:2205.14818
An equitable graph coloring is a proper vertex coloring of a graph G where the sizes of the color classes differ by at most one. The equitable chromatic number is the smallest number k such that G admits such an equitable k-coloring. We focus on enumerative algorithms for the computation of the equitable chromatic number and propose a general scheme to derive pruning rules for them: we show how the extendability of a partial coloring into an equitable coloring can be modeled via network flows. Thus, we obtain pruning rules which can be checked via flow algorithms. Computational experiments show that the search tree of enumerative algorithms can be significantly reduced in size by these rules and, on most instances, even this naive approach yields a faster algorithm. Moreover, the stability, i.e., the number of solved instances within a given time limit, is greatly improved. Since the execution of flow algorithms at each node of a search tree is time consuming, we derive arithmetic pruning rules (generalized Hall conditions) from the network model. Adding these rules to an enumerative algorithm yields an even larger runtime improvement.
arxiv:1607.08754
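As a toy illustration of the arithmetic pruning idea: in an equitable k-coloring of an n-vertex graph with n = qk + rem, exactly rem classes have q+1 vertices and the rest have q, so a partial coloring can be pruned as soon as its class sizes are incompatible with those targets. The following is our own simplified necessary condition that ignores adjacency entirely, so it is far weaker than the flow-based and generalized Hall rules of the abstract, but shows the flavor of a cheap arithmetic check:

```python
# Toy necessary condition for extending a partial k-coloring to an
# equitable one; it ignores adjacency, so it can only prune, never accept.

def may_extend_equitably(class_sizes, n, k):
    """class_sizes: current sizes of the k (possibly empty) color classes
    of a partial coloring of an n-vertex graph."""
    assert len(class_sizes) == k and sum(class_sizes) <= n
    q, rem = divmod(n, k)
    cap = q + 1 if rem else q        # largest allowed class size
    if any(s > cap for s in class_sizes):
        return False                 # some class is already too large
    big = sum(1 for s in class_sizes if s == q + 1)
    return big <= rem                # at most rem classes may reach q+1
```

In an enumerative search, such a check would be evaluated at every node before branching; the flow model in the paper strengthens it by also accounting for which uncolored vertices can actually join which classes.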
The operators $\Lambda_m$ ($m \in \mathbb{N} \cup \{0\}$) arise when one studies the action of the Beurling-Ahlfors transform on certain radial function subspaces. It is known that the weak-type $(1,1)$ constant of $\Lambda_0$ is equal to $1/\ln(2) \approx 1.44$. We construct examples showing that the weak-type $(1,1)$ constant of $\Lambda_1$ is larger than $1.38$ and that the weak-type $(1,1)$ constant of $\Lambda_m$ does not tend to $1$ as $m \to \infty$. This disproves a conjecture of Gill [Mich. Math. J. 59 (2010), no. 2, 353-363]. We also prove a companion result for the adjoint operators. This is the arXiv version of the paper; it includes some additional discussion in the appendices.
arxiv:2411.09340
Recent years have seen an increased interest in the question of whether the gravitational action of planets could have an influence on the solar dynamo. Without discussing the observational validity of the claimed correlations, we ask for a possible physical mechanism that might link the weak planetary forces with solar dynamo action. We focus on the helicity oscillations that were recently found in simulations of the current-driven, kink-type Tayler instability, which is characterized by an $m = 1$ azimuthal dependence. We show how these helicity oscillations can be resonantly excited by some $m = 2$ perturbation that reflects a tidal oscillation. Specifically, we speculate that the 11.07-year tidal oscillation induced by the Venus-Earth-Jupiter system may lead to a 1:1 resonant excitation of the oscillation of the alpha-effect. Finally, in the framework of a reduced, zero-dimensional alpha-omega dynamo model, we recover a 22.14-year cycle of the solar dynamo.
arxiv:1511.09335
We started high-precision photometric monitoring observations at the AIU Jena observatory in Grossschwabhausen near Jena in fall 2006, using a 25 cm Cassegrain telescope equipped with a CCD camera mounted piggyback on a 90 cm telescope. To test the attainable photometric precision, we observed stars with known transiting planets, and we could recover all planetary transits we observed. We monitored the parent star of the transiting planet TrES-2 over a longer period in Grossschwabhausen: between March and November 2007, seven different transits and almost a complete orbital period were analyzed. Overall, 3423 exposures of the TrES-2 parent star (57.05 h of observation in total) were taken in 31 nights. Here, we present our methods and the resulting light curves. Using these observations we could improve the orbital parameters of the system.
arxiv:0812.3549
We consider the $d$-dimensional nonlinear Schrödinger equation under periodic boundary conditions: $-i\dot u = -\Delta u + V(x)*u + \epsilon \frac{\partial F}{\partial \bar u}(x, u, \bar u)$, $u = u(t,x)$, $x \in \mathbb{T}^d$, where $V(x) = \sum \hat V(a) e^{i\langle a,x\rangle}$ is an analytic function with $\hat V$ real, and $F$ is a real analytic function in $\Re u$, $\Im u$ and $x$. (This equation is a popular model for the 'real' NLS equation, where instead of the convolution term $V*u$ we have the potential term $Vu$.) For $\epsilon = 0$ the equation is linear and has time-quasi-periodic solutions $u$, $$u(t,x) = \sum_{a \in \mathcal{A}} \hat u(a) e^{i(|a|^2 + \hat V(a))t} e^{i\langle a,x\rangle} \quad (|\hat u(a)| > 0),$$ where $\mathcal{A}$ is any finite subset of $\mathbb{Z}^d$. We shall treat $\omega_a = |a|^2 + \hat V(a)$, $a \in \mathcal{A}$, as free parameters in some domain $U \subset \mathbb{R}^{\mathcal{A}}$. This is a Hamiltonian system in infinitely many degrees of freedom, degenerate but with external parameters, and we shall describe a KAM theory which, under general conditions, has the following consequence: if $|\epsilon|$ is sufficiently small, then there is a large subset $U'$ of $U$ such that for all $\omega \in U'$ the solution $u$ persists as a time-quasi-periodic solution which has all Lyapunov exponents equal to zero and whose linearized equation is reducible to constant coefficients.
arxiv:0709.2393
The solar neighborhood is the closest and most easily studied sample of the Galactic interstellar medium, an understanding of which is essential for models of star formation and galaxy evolution. Observations of an unexpectedly intense diffuse flux of easily absorbed 1/4 keV X-rays, coupled with the discovery that interstellar space within ~100 pc of the Sun is almost completely devoid of cool absorbing gas, led to a picture of a "local cavity" filled with X-ray-emitting hot gas, dubbed the Local Hot Bubble. This model was recently upset by suggestions that the emission could instead be readily produced within the solar system by heavy solar wind ions charge-exchanging with neutral H and He in interplanetary space, potentially removing the major piece of evidence for the existence of million-degree gas within the Galactic disk. Here we report results showing that the total solar wind charge exchange contribution is 40% +/- 5% (stat) +/- 5% (sys) of the 1/4 keV flux in the Galactic plane. The fact that the measured flux is not dominated by charge exchange supports the notion of a million-degree hot bubble of order 100 pc in extent surrounding the Sun.
arxiv:1407.7539
We compute the quark number susceptibilities in two-flavor QCD for staggered fermions by adding the chemical potential as a Lagrange multiplier for the point-split number density term. Since fewer quark propagators are required at any order, this method leads to faster computations. We propose a subtraction procedure to remove the inherent undesired lattice terms and check that it works well by comparing our results with existing ones in which the elimination of these terms is analytically guaranteed. We also show that the ratios of susceptibilities are robust, opening a door to better estimates of the location of the QCD critical point through the computation of the tenth- and twelfth-order baryon number susceptibilities without significant additional computational overhead.
arxiv:1112.5428
The stochastic Hodgkin-Huxley neurons considered in this paper replace the time-constant deterministic input $a\,dt$ of the classical deterministic model by increments $\vartheta\,dt + dX_t$ of a stochastic process: $X$ is Ornstein-Uhlenbeck with volatility $\sigma > 0$ and back-driving force $\tau > 0$, and we call $\vartheta > 0$ the signal. We have ergodicity and strong laws of large numbers for various functionals of the process, and characterize 'quiet behaviour' and 'regular spiking' as events whose probability depends on the parameters $(\tau, \sigma)$ and on the signal $\vartheta$. The notions of quiet behaviour and regular spiking allow for a construction of circuits of interacting stochastic Hodgkin-Huxley neurons, combining excitation with inhibition according to a block structure along the circuit, on which self-organized rhythmic oscillations can be observed.
arxiv:2203.16160
In this paper we study problems of drawing graphs in the plane using edge-length constraints and angle optimization. Specifically, we consider the problem of maximizing the minimum angle, the MMA problem. We solve the MMA problem using a spring-embedding approach in which two forces are applied to the vertices of the graph: a force optimizing edge lengths and a force optimizing angles. We solve analytically the problem of computing an optimal displacement of a graph vertex that optimizes the angles between the edges incident to it when the degree of the vertex is at most three, and we apply a numerical approach for computing the forces applied to vertices of higher degree. We implemented our algorithm in Java and present drawings of some graphs.
arxiv:1211.4927
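The objective in the abstract above, the minimum angle between edges incident to a vertex, can be computed directly from a straight-line drawing. A small sketch under our own naming conventions (not code from the paper):

```python
import math

def min_incident_angle(pos, v, neighbors):
    """Smallest angle (in radians) between consecutive edges incident to
    vertex v, given a dict `pos` mapping vertices to (x, y) positions."""
    x, y = pos[v]
    # direction of each incident edge, sorted around the vertex
    dirs = sorted(math.atan2(pos[u][1] - y, pos[u][0] - x) for u in neighbors)
    # angular gaps between consecutive directions, including the wrap-around gap
    gaps = [b - a for a, b in zip(dirs, dirs[1:])]
    gaps.append(2 * math.pi - (dirs[-1] - dirs[0]))
    return min(gaps)
```

A spring embedder for the MMA problem would, at each iteration, add to the usual edge-length force a displacement that increases this quantity at each vertex.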
We develop a linear one-sex dynamical model of human population reproduction through marriage. In our model, we assume that a woman can repeatedly marry and divorce, but only married women produce children. The iterative marriage process is formulated as a three-state compartmental model, described by a system of McKendrick equations with a duration-specific birth rate by age at marriage. In order to see the effect of changes in nuptiality on fertility, new formulae for the reproduction indices are given. In particular, the total fertility rate (TFR) is considered as the product of the indices of marriage and marital fertility. Using the vital statistics of Japan, we show that our model can provide a reasonable estimate of the Japanese TFR and a possible future scenario for it.
arxiv:2505.01727
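The decomposition mentioned above, TFR written as the product of a marriage index and a marital-fertility index, can be illustrated with a toy computation. The numbers and function names below are hypothetical illustrations, not the paper's Japanese estimates:

```python
def tfr_from_asfr(asfr, age_width=5.0):
    """Conventional total fertility rate: the sum of age-specific
    fertility rates multiplied by the width of each age interval."""
    return age_width * sum(asfr)

def tfr_from_indices(marriage_index, marital_fertility_index):
    """TFR expressed as the product of a nuptiality index and a
    marital-fertility index, as in the decomposition described above."""
    return marriage_index * marital_fertility_index
```

The point of the decomposition is that a change in nuptiality (the first factor) and a change in marital fertility (the second) can be attributed separately to a change in TFR.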
Given a group with at least two more generators than relations, we give an effective estimate of the minimal index of a subgroup with a nonabelian free quotient. We show that the index is bounded by a polynomial in the length of the relator word. We also provide a lower bound on the index.
arxiv:0905.2713
This paper presents improved constraints on the low-mass stellar initial mass function (IMF) of the Boötes I (Boo I) ultrafaint dwarf galaxy, based on our analysis of recent deep imaging from the Hubble Space Telescope. Candidate stellar members of Boo I were identified in the photometric catalog produced from these data using a Bayesian approach, informed by complementary archival imaging data for the Hubble Ultra Deep Field. Additionally, the existence of earlier-epoch data for the fields in Boo I allowed us to derive proper motions for a subset of the sources and thus identify and remove likely Milky Way stars. We were also able to determine the absolute proper motion of Boo I, and our result is in agreement with, but completely independent of, the measurement(s) by Gaia. The best-fitting parameter values of three different forms of the low-mass IMF were then obtained through forward modeling of the color-magnitude data for likely Boo I member stars within an approximate Bayesian computation Markov chain Monte Carlo algorithm. The best-fitting single power-law IMF slope is $\alpha = -1.95_{-0.28}^{+0.32}$, while the best-fitting broken power-law slopes are $\alpha_1 = -1.67_{-0.57}^{+0.48}$ and $\alpha_2 = -2.57_{-1.04}^{+0.93}$. The best-fitting lognormal characteristic mass and width parameters are $\mathrm{m}_{\mathrm{c}} = 0.17_{-0.11}^{+0.05}\,\mathcal{M}_\odot$ and $\sigma = 0.49_{-0.20}^{+0.13}$. These broken power-law and lognormal IMF parameters for Boo I are consistent with published results for stars within the Milky Way, so it is plausible that Boötes I and the Milky Way are populated by the same stellar IMF.
arxiv:2209.10461
A subfamily $\mathcal{G} \subseteq \mathcal{F} \subseteq 2^{[n]}$ of sets is a non-induced (weak) copy of a poset $P$ in $\mathcal{F}$ if there exists a bijection $i: P \rightarrow \mathcal{G}$ such that $p \le_P q$ implies $i(p) \subseteq i(q)$. If, in addition, $p \le_P q$ holds if and only if $i(p) \subseteq i(q)$, then $\mathcal{G}$ is an induced (strong) copy of $P$ in $\mathcal{F}$. We consider the minimum number $sat(n, P)$ [resp. $sat^*(n, P)$] of sets that a family $\mathcal{F} \subseteq 2^{[n]}$ can have without containing a non-induced [induced] copy of $P$ while being maximal with respect to this property, i.e., the addition of any $G \in 2^{[n]} \setminus \mathcal{F}$ creates a non-induced [induced] copy of $P$. We prove for any finite poset $P$ that $sat(n, P) \le 2^{|P|-2}$, a bound independent of the size $n$ of the ground set. For induced copies of $P$, there is a dichotomy: for any poset $P$, either $sat^*(n, P) \le K_P$ for some constant depending only on $P$, or $sat^*(n, P) \ge \log_2 n$. We classify several posets according to this dichotomy, and also show better upper and lower bounds on $sat(n, P)$ and $sat^*(n, P)$ for specific classes of posets. Our main new tool is a special ordering of the sets based on the colexicographic order. It turns out that if $P$ is given, processing the sets in this order and adding the sets greedily into our family whenever this does not ruin non-induced [induced] $P$
arxiv:2003.04282
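The colexicographic order on finite sets, the key tool mentioned above, has a one-line characterization: compare sets by the sum of $2^i$ over their elements, which compares them by their largest differing element. A brief sketch (function names are ours, not the paper's):

```python
import itertools

def colex_key(s):
    """Colexicographic rank of a finite set of nonnegative integers:
    sorting by this key orders sets by their largest differing element,
    which is exactly the colex order."""
    return sum(1 << i for i in s)

def subsets_in_colex(ground):
    """All subsets of `ground` (an iterable of ints) in colex order."""
    elems = sorted(ground)
    subsets = [frozenset(c) for r in range(len(elems) + 1)
               for c in itertools.combinations(elems, r)]
    return sorted(subsets, key=colex_key)
```

The greedy procedure described in the abstract processes $2^{[n]}$ in exactly this order, adding each set to the family unless doing so would create a (non-induced or induced) copy of $P$.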
We consider the enhancement of SL(2,R) to a Virasoro algebra in a system of N particles on AdS$_2$. We restrict our discussion to the case of non-interacting particles, and argue that they must be treated as fermions. We find operators $L_n$ whose commutators on the ground state $|vac\rangle$ satisfy relations that are reminiscent of a $c = 1$ Virasoro algebra, provided $N \geq n \geq -N$. The same relations also hold on the states $L_{-k}|vac\rangle$ if $(N-k) \geq n \geq -(N-k)$. The conditions $L_n^\dagger = L_{-n}$ and $L_k|vac\rangle = 0$ for $k \geq 1$ are also satisfied.
arxiv:hep-th/0104142
Gauss-Bonnet gravity provides one of the most promising frameworks for studying curvature corrections to the Einstein action in supersymmetric string theories, while avoiding ghosts and keeping the field equations second order. Although Schwarzschild-type solutions for Gauss-Bonnet black holes have long been known, the Kerr-Gauss-Bonnet metric is missing. In this paper, a five-dimensional Gauss-Bonnet approximation is analytically derived for spinning black holes, and the related thermodynamical properties are briefly outlined.
arxiv:0712.3546
We have investigated the electronic states and spin polarization of half-metallic ferromagnetic CrO$_2$(100) epitaxial films by bulk-sensitive spin-resolved photoemission spectroscopy, focusing on the non-quasiparticle (NQP) states derived from electron-magnon interactions. We found that the averaged values of the spin polarization are approximately 100% and 40% at 40 K and 300 K, respectively, consistent with a previously reported result [H. Fujiwara et al., Appl. Phys. Lett. 106, 202404 (2015)]. At 100 K, a peculiar spin depolarization was observed at the Fermi level ($E_F$), which is supported by theoretical calculations predicting NQP states. This suggests the possible appearance of NQP states in CrO$_2$. We also compare the temperature dependence of our spin polarizations with that of the magnetization.
arxiv:1711.01781
It is shown that the sum of the stress-energy tensors of the electromagnetic and gravitational fields, the acceleration field and the pressure field inside a stationary uniform spherical body vanishes within the framework of the relativistic uniform model. This fact significantly simplifies the solution of the equation for the metric in the covariant theory of gravitation (CTG). The metric tensor components are calculated inside the body, and on its surface they are combined with the components of the external metric tensor; this also allows us to determine exactly one of the two unknown coefficients in the metric outside the body. Comparison of the CTG metric with the Reissner-Nordström metric of the general theory of relativity shows their difference, which is a consequence of the difference between the equations for the metric and of a different understanding of the essence of the cosmological constant.
arxiv:2110.00342
In this paper we deal with the multiplicity and concentration of positive solutions for the following fractional Schrödinger-Kirchhoff type equation \begin{equation*} M\left(\frac{1}{\varepsilon^{3-2s}} \iint_{\mathbb{R}^{6}} \frac{|u(x)-u(y)|^{2}}{|x-y|^{3+2s}}\,dx\,dy + \frac{1}{\varepsilon^{3}} \int_{\mathbb{R}^{3}} V(x) u^{2}\,dx\right) [\varepsilon^{2s}(-\Delta)^{s} u + V(x) u] = f(u) \quad \mbox{in } \mathbb{R}^{3}, \end{equation*} where $\varepsilon > 0$ is a small parameter, $s \in (\frac{3}{4}, 1)$, $(-\Delta)^{s}$ is the fractional Laplacian, $M$ is a Kirchhoff function, $V$ is a continuous positive potential and $f$ is a superlinear continuous function with subcritical growth. Using penalization techniques and Ljusternik-Schnirelmann theory, we investigate the relation between the number of positive solutions and the topology of the set where the potential attains its minimum.
arxiv:1705.00702
Computational grids that couple geographically distributed resources are becoming the de facto computing platform for solving large-scale problems in science, engineering, and commerce. Software to enable grid computing has been written primarily for Unix-class operating systems, severely limiting the ability to effectively utilize the computing resources of the vast majority of desktop computers, i.e., those running variants of the Microsoft Windows operating system. Addressing Windows-based grid computing is particularly important from the software industry's viewpoint, where interest in grids is emerging rapidly. Microsoft's .NET Framework has become near-ubiquitous for implementing commercial distributed systems for Windows-based platforms, positioning it as the ideal platform for grid computing in this context. In this paper we present Alchemi, a .NET-based grid computing framework that provides the runtime machinery and programming environment required to construct desktop grids and develop grid applications. It allows flexible application composition by supporting an object-oriented grid application programming model in addition to a grid job model. Cross-platform support is provided via a web services interface, and a flexible execution model supports dedicated and non-dedicated (voluntary) execution by grid nodes.
arxiv:cs/0402017
Efficient medical image segmentation aims to provide accurate pixel-wise predictions for medical images with a lightweight implementation framework. However, lightweight frameworks generally fail to achieve superior performance and suffer from poor generalization on cross-domain tasks. In this paper, we explore generalizable knowledge distillation for the efficient segmentation of cross-domain medical images. Considering the domain gaps between different medical datasets, we propose model-specific alignment networks (MSAN) to obtain domain-invariant representations, and design a customized alignment consistency training (ACT) strategy to promote MSAN training. Building on the domain-invariant representative vectors in MSAN, we propose two generalizable knowledge distillation schemes for cross-domain distillation: dual contrastive graph distillation (DCGD) and domain-invariant cross distillation (DICD). Specifically, in DCGD, two types of implicit contrastive graphs are designed to represent the intra-coupling and inter-coupling semantic correlations from the perspective of data distribution. In DICD, the domain-invariant semantic vectors from the two models (i.e., teacher and student) are leveraged to cross-reconstruct features via the header exchange of MSAN, which improves the generalization of both the encoder and the decoder in the student model. Furthermore, a metric named Fréchet semantic distance (FSD) is tailored to verify the effectiveness of the regularized domain-invariant features. Extensive experiments on liver and retinal-vessel segmentation datasets demonstrate the superiority of our method in terms of performance and generalization on lightweight frameworks.
arxiv:2207.12995
Non-locality is crucial for understanding the plastic flow of an amorphous material, and has been successfully described by the fluidity, along with a cooperativity length scale $\xi$. We demonstrate, by applying the scaling hypothesis to the yielding transition, that non-local effects in non-uniform stress configurations can be explained within the framework of critical phenomena. From the scaling description, scaling relations between different exponents are derived, and collapses of strain-rate profiles are obtained in both shear-driven and pressure-driven flow. We find that the cooperative length in non-local flow is governed by the same correlation length as in finite-dimensional homogeneous flow, excluding the mean-field exponents. We also show that non-locality affects the finite-size scaling of the yield stress, especially the large finite-size effects observed in pressure-driven flow. Our theoretical results are well supported by the elasto-plastic model and by experimental data.
arxiv:1607.07290
We investigate the statistics of phase fluctuations of an acoustic wave propagating through a turbulent flow in a line-of-sight (LOS) configuration. Experiments are performed on a closed von Kármán swirling flow whose boundaries are maintained at a constant temperature. In particular, we analyze the root mean square (RMS) and the power spectral density (PSD) of the phase fluctuations. A model is developed, and analytical predictions for these quantities obtained using geometrical acoustics are shown to be in agreement with the experimental observations.
arxiv:2211.09264
Management of a portfolio that includes an illiquid asset is an important problem of modern mathematical finance. One way to model illiquidity, among others, is to build an optimization problem and assume that one of the assets in the portfolio cannot be sold until a certain finite, infinite or random moment of time. This approach gives rise to a family of models that are actively studied at the moment. Working in Merton's optimal consumption framework with continuous time, we consider an optimization problem for a portfolio with an illiquid, a risky and a risk-free asset. Our goal in this paper is to carry out a complete Lie group analysis of the PDEs describing the value function and the investment and consumption strategies for a portfolio with an illiquid asset that is sold at an exogenous random moment of time with a prescribed liquidation time distribution. Problems of this type lead to three-dimensional nonlinear Hamilton-Jacobi-Bellman (HJB) equations. Such equations are not only tedious for analytical methods but are also quite challenging from a numerical point of view. To reduce the three-dimensional problem to a two-dimensional one, or even to an ODE, one usually uses some substitutions, yet the methods used to find such substitutions are rarely discussed by the authors. We find the admitted Lie algebra for a broad class of liquidation time distributions in the cases of HARA and log utility functions and formulate corresponding theorems for all these cases. We use the found Lie algebras to obtain reductions of the studied equations. Several similar substitutions were used in other papers before, whereas others are new to our knowledge. This method makes it possible to provide a complete set of non-equivalent substitutions and reduced equations.
arxiv:1512.06295
We present a gradient-based theoretical framework for predicting hydrogen-assisted fracture in elastic-plastic solids. The novelty of the model lies in the combination of: (i) stress-assisted diffusion of solute species, (ii) strain gradient plasticity, and (iii) a hydrogen-sensitive phase field fracture formulation, inspired by first-principles calculations. The theoretical model is numerically implemented using a mixed finite element formulation, and several boundary value problems are addressed to gain physical insight and showcase model predictions. The results reveal the critical role of plastic strain gradients in rationalizing decohesion-based arguments and capturing the transition to brittle fracture observed in hydrogen-rich environments. Large crack-tip stresses are predicted, which in turn raise the hydrogen concentration and reduce the fracture energy. The computation of the steady-state fracture toughness as a function of the cohesive strength shows that cleavage fracture can be predicted in otherwise ductile metals using sensible values for the material parameters and the hydrogen concentration. In addition, we compute crack growth resistance curves in a wide variety of scenarios and demonstrate that the model can appropriately capture the sensitivity to the plastic length scales, the fracture length scale, the loading rate and the hydrogen concentration. Model predictions are also compared with fracture experiments on a modern ultra-high-strength steel, AerMet100. A promising agreement is observed with experimental measurements of the threshold stress intensity factor $K_{th}$ over a wide range of applied potentials.
arxiv:2007.07093
We formulate noncommutative three-dimensional (3d) gravity by making use of its connection with 3d Chern-Simons theory. In the Euclidean sector, we consider the particular example of topology $T^2 \times \mathbb{R}$ and show that the 3d black hole solves the noncommutative equations. We then consider the black hole on a constant U(1) background and show that the black hole charges (mass and angular momentum) are modified by the presence of this background.
arxiv:hep-th/0104264
This work is devoted to the development of an information system, ISANM, intended for teaching students and applicable in the work of engineers and researchers for automating the analysis of colloidal structure morphology. The ISANM system integrates various analysis methods and is designed to prioritize usability, catering primarily to users rather than software developers. This paper outlines the scope of the subject area, reviews selected methods for morphology analysis, and presents the scripts developed for implementing these methods in Python. In the future, new methods and tools will gradually be added to the system for chemical engineers, physicists, and materials scientists. We hope that the system implementation methodology described here will be useful in the implementation of other projects related to training chemical engineers and beyond. The system core is implemented in C#, utilizing the .NET Framework and MS SQL Server, and is developed within the Microsoft Visual Studio 2019 environment on Windows 10 using a client-server architecture. The system has been deployed on a VDS server running the Ubuntu 20.04 distribution, with MS SQL Server as the database management system. The system can be accessed at https://isanm.space. Geometric analysis of colloidal structures is significant in applications such as photonic crystals for optoelectronics and microelectronics, functional coatings in materials science, and biosensors for medical and environmental applications, which demonstrates the importance of digitalization in this area to improve the quality of student education.
arxiv:2411.05674
In this paper, we consider a general subclass of analytic and bi-univalent functions in the open unit disk in the complex plane. Making use of the Chebyshev polynomials, we obtain an upper bound estimate for the second Hankel determinant for this function class.
arxiv:1702.06826
We introduce a deconstructed model that incorporates both Higgsless and top-color mechanisms. The model alleviates the typical tension in Higgsless models between obtaining the correct top quark mass and keeping $\Delta\rho$ small. It does so by singling out the top quark mass generation as arising from a Yukawa coupling to an effective top-Higgs which develops a small vacuum expectation value, while electroweak symmetry breaking results largely from a Higgsless mechanism. As a result, the heavy partners of the SM fermions can be light enough to be seen at the LHC.
arxiv:1002.2376
In the large-$N$ limit of $d = 4$, $\mathcal{N} = 4$ gauge theory, the dual AdS spacetime becomes flat. We identify a gauge theory correlator whose large-$N$ limit is the flat-spacetime S-matrix.
arxiv:hep-th/9901076
In this paper, we address the discrete-time average consensus problem in strongly connected directed graphs, where nodes exchange information over unreliable, error-prone communication links. We enhance the robustified ratio consensus algorithm by exploiting features of the (hybrid) automatic repeat request ((H)ARQ) protocol used for error control of data transmissions, in order to allow the nodes to reach asymptotic average consensus even when information is exchanged over error-prone directional networks. This strategy, apart from handling time-varying information delays induced by retransmissions of erroneous packets, can also handle the packet drops that occur when a predefined packet retransmission limit is exceeded. Invoking the (H)ARQ protocol allows nodes to: (a) exploit the incoming error-free acknowledgement feedback to initially acquire or later update their out-degree, (b) know whether a packet has arrived or not, and (c) determine a local upper bound on the delays imposed by the retransmission limit. By augmenting the network's corresponding weight matrix, we show that nodes utilizing our proposed (H)ARQ ratio consensus algorithm can reach asymptotic average consensus over unreliable networks, while improving their convergence speed and maintaining low values in their local buffers compared to the current state of the art.
arxiv:2209.14699
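The baseline idea that the abstract builds on, ratio consensus, is simple: each node iterates two linear updates in parallel and reports their ratio. A minimal error-free sketch in our own notation (the paper's algorithm additionally handles retransmission delays and packet drops, which this sketch omits):

```python
def ratio_consensus(out_neighbors, x0, iters=200):
    """Error-free ratio consensus on a strongly connected digraph.
    Node j splits its (x, y) mass equally among itself and its
    out-neighbors (column-stochastic weights); every ratio x_i / y_i
    then converges to the average of the initial values x0."""
    n = len(x0)
    x, y = list(x0), [1.0] * n
    for _ in range(iters):
        nx, ny = [0.0] * n, [0.0] * n
        for j in range(n):
            share = 1.0 / (len(out_neighbors[j]) + 1)  # +1 for the self-loop
            for i in out_neighbors[j] + [j]:
                nx[i] += share * x[j]
                ny[i] += share * y[j]
        x, y = nx, ny
    return [xi / yi for xi, yi in zip(x, y)]
```

The key property exploited here is that the weights are column stochastic, so the sums of x and y are conserved, which is why the ratio converges to the exact average; the (H)ARQ machinery in the paper is what preserves this mass conservation when packets are delayed or dropped.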
I used the 2.4 m Hiltner telescope at MDM Observatory in an attempt to measure trigonometric parallaxes for 14 cataclysmic variable stars. The techniques are described in detail. In the best cases the parallax uncertainties are below 1 mas, and significant parallaxes are found for most of the program stars. A Bayesian method which combines the parallaxes with proper motions and absolute magnitude constraints is developed and used to derive distance estimates and confidence intervals. The most precise distance derived here is for WZ Sge, for which I find 43.3 (+1.6, -1.5) pc. Six Luyten Half-Second stars with previous precise parallax measurements were re-measured to test the techniques, and good agreement is found.
arxiv:astro-ph/0308516
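The Bayesian step described above, combining a parallax likelihood with prior information into a distance posterior, can be sketched on a grid. This is an illustrative toy (a Gaussian parallax likelihood and a user-supplied prior), not the paper's full model, which also folds in proper motions and absolute-magnitude constraints:

```python
import math

def distance_posterior(plx_mas, sigma_mas, d_grid_pc, prior):
    """Normalized grid posterior over distance d (in pc), for a measured
    parallax plx ~ N(1000/d, sigma) in mas, combined with a prior on d."""
    post = [math.exp(-0.5 * ((plx_mas - 1000.0 / d) / sigma_mas) ** 2) * p
            for d, p in zip(d_grid_pc, prior)]
    z = sum(post)  # normalizing constant
    return [q / z for q in post]
```

For a WZ Sge-like parallax of about 23.1 mas with a 1 mas uncertainty and a flat prior, the posterior mode lands near 43 pc, consistent with the distance quoted in the abstract.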
We introduce a new Stata package called summclust that summarizes the cluster structure of a dataset for linear regression models with clustered disturbances. The key unit of observation for such a model is the cluster. We therefore propose cluster-level measures of leverage, partial leverage, and influence, and show how to compute them quickly in most cases. The measures of leverage and partial leverage can be used as diagnostic tools to identify datasets and regression designs in which cluster-robust inference is likely to be challenging. The measures of influence can provide valuable information about how the results depend on the data in the various clusters. We also show how to calculate two jackknife variance matrix estimators efficiently as a byproduct of our other computations. These estimators, which are already available in Stata, are generally more conservative than conventional variance matrix estimators. The summclust package computes all the quantities that we discuss.
arxiv:2205.03288
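Cluster-level leverage, one of the diagnostics described above, measures how much each cluster's block of regressor rows contributes to the fit; the values sum to the number of regressors. A small numpy sketch of the idea (illustrative only; summclust itself is the Stata implementation):

```python
import numpy as np

def cluster_leverages(X, clusters):
    """Leverage of cluster g: L_g = tr(X_g (X'X)^{-1} X_g'), where X_g
    holds the rows of X belonging to g. The L_g are nonnegative and sum
    to k, the number of columns of X."""
    XtX_inv = np.linalg.inv(X.T @ X)
    labels = np.asarray(clusters)
    return {g: float(np.trace(X[labels == g] @ XtX_inv @ X[labels == g].T))
            for g in set(clusters)}
```

A cluster whose leverage is far above the average k/G (for G clusters) dominates the fit, which is exactly the situation in which cluster-robust inference tends to be unreliable.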
The study of mutual entropy (information) and capacity in classical systems was carried out extensively after Shannon by several authors, such as Kolmogorov and Gelfand. In quantum systems, there have been several definitions of the mutual entropy for classical input and quantum output. In 1983, the author defined the fully quantum mechanical mutual entropy by means of the relative entropy of Umegaki, and extended it to general quantum systems via the relative entropy of Araki and Uhlmann. When the author introduced the quantum mutual entropy, he did not indicate that it contains other definitions of the mutual entropy, including the classical one, so there exist several misunderstandings regarding the use of the mutual entropy (information) to compute the capacity of quantum channels. Therefore, in this note we point out that our quantum mutual entropy generalizes the others, and where the misuse occurs.
arxiv:quant-ph/9806042
The concept of residual migration to zero offset is introduced for the case in which a prestack migration and a subsequent residual moveout analysis have been performed for a seismic survey and a new depth model has not yet been determined. Travel times of reflected events for individual traces are determined for diffraction points situated along horizons of analysis. These events are downward continued into the model used for the migration of the survey. The NIP-wave theorem can then be applied to events exhibiting small subsurface offsets in order to determine residual radii of curvature to be used in updating the seismic velocity model by seismic stripping. Alternatively, it is suggested to construct an aplanat and hence determine a focus point of the reflected event without knowledge of the true velocity model. The distance of the estimated point of focus from the observed zero-offset response of the migrated reflector serves as an indication of the focusing of the considered event at the particular subsurface offset, and can be used as a measure of the necessity of a subsequent remigration of the survey with a new velocity model. A summation over the first Fresnel zone at the migrated zero-offset position is suggested as an alternative to conventional stacking, leading to an improvement of the signal-to-noise ratio. Applications to model computations and to a seismic survey over an overthrust structure show promising results regarding the applicability of the suggested method.
arxiv:2111.09371
a quadratic nodal point ( qnp ) in two dimensions has so far been reported only in nonmagnetic materials and in the absence of spin - orbit coupling. here, by first - principles calculations and symmetry analysis, we predict a stable qnp near the fermi level in a two - dimensional kagome metal - organic framework material, cr $ _ 3 $ ( hab ) $ _ 2 $, which features noncollinear antiferromagnetic ordering and sizable spin - orbit coupling. effective k. p and lattice models are constructed to capture such magnetic qnps. besides the qnp, we find cr $ _ 3 $ ( hab ) $ _ 2 $ also hosts six magnetic linear nodal points protected by mirror as well as $ c _ { 2z } t $ symmetry. properties associated with these nodal points, such as topological edge states and quantized optical absorbance, are discussed.
arxiv:2310.11810
the last decade has seen a significant increase in the number of studies devoted to wave turbulence. many deal with water waves, as modeling of ocean waves has historically motivated the development of weak turbulence theory, which addresses the dynamics of a random ensemble of weakly nonlinear waves in interaction. recent advances in experiments have shown that this theoretical picture is too idealized to capture experimental observations. while gravity dominates much of the oceanic spectrum, waves observed in the laboratory are in fact gravity - capillary waves, due to the restricted size of wave basins. this richer physics induces many interleaved physical effects far beyond the theoretical framework, notably in the vicinity of the gravity - capillary crossover. these include dissipation, finite - system size effects, and finite nonlinearity effects. simultaneous space - and - time resolved techniques, now available, open the way for a much more advanced analysis of these effects.
arxiv:2107.04015
forecasting, to estimate future events, is crucial for business and decision - making. this paper proposes qxeai, a methodology that produces a probabilistic forecast by utilizing a quantum - like evolutionary algorithm based on training a quantum - like logic decision tree and a classical value tree on a small number of related time series. we demonstrate how the application of our quantum - like evolutionary algorithm to forecasting can overcome the challenges faced by classical and other machine learning approaches. by using three real - world datasets ( dow jones index, retail sales, gas consumption ), we show how our methodology produces accurate forecasts while requiring little to no manual work.
arxiv:2405.03701
features of rheological laws applied to solid - like granular materials are recalled and confronted with microscopic approaches via discrete numerical simulations. we give examples of model systems with very similar equilibrium stress transport properties - - the much - studied force chains and force distribution - - but qualitatively different strain responses to stress increments. results on the stability of elastoplastic contact networks lead to the definition of two different rheological regimes, according to whether a macroscopic fragility property ( propensity to rearrange under arbitrarily small stress increments in the thermodynamic limit ) applies. possible consequences are discussed.
arxiv:0901.2846
our recently developed variant of variationally optimized perturbation ( opt ), in particular consistently incorporating renormalization group properties ( rgopt ), is adapted to the calculation of the qcd spectral density of the dirac operator and the related chiral quark condensate $ \ langle \ bar q q \ rangle $ in the chiral limit, for $ n _ f = 2 $ and $ n _ f = 3 $ massless quarks. the results of successive sequences of approximations at two -, three -, and four - loop orders of this modified perturbation exhibit a remarkable stability. we obtain $ \ langle \ bar q q \ rangle ^ { 1 / 3 } _ { n _ f = 2 } ( 2 \, { \ rm gev } ) = - ( 0. 833 - 0. 845 ) \ bar \ lambda _ 2 $, and $ \ langle \ bar q q \ rangle ^ { 1 / 3 } _ { n _ f = 3 } ( 2 \, { \ rm gev } ) = - ( 0. 814 - 0. 838 ) \ bar \ lambda _ 3 $ where the range spanned by the first and second numbers ( respectively four - and three - loop order results ) defines our theoretical error, and $ \ bar \ lambda _ { n _ f } $ is the basic qcd scale in the $ \ overline { ms } $ - scheme. we obtain a moderate suppression of the chiral condensate when going from $ n _ f = 2 $ to $ n _ f = 3 $. we compare these results with some other recent determinations from other nonperturbative methods ( mainly lattice and spectral sum rules ).
arxiv:1506.07506
we investigate the minimization of the energy per point $ e \ _ f $ among $ d $ - dimensional bravais lattices, depending on the choice of pairwise potential equal to a radially symmetric function $ f ( | x | ^ 2 ) $. we formulate criteria for minimality and non - minimality of some lattices for $ e \ _ f $ at fixed scale based on the sign of the inverse laplace transform of $ f $ when $ f $ is a superposition of exponentials, beyond the class of completely monotone functions. we also construct a family of non - completely monotone functions having the triangular lattice as the unique minimizer of $ e \ _ f $ at any scale. for lennard - jones type potentials, we reduce the minimization problem among all bravais lattices to a minimization over the smaller space of unit - density lattices and we establish a link to the maximum kissing problem. new numerical evidence for the optimality of particular lattices for all the exponents is also given. we finally design one - well potentials $ f $ such that the square lattice has lower energy $ e \ _ f $ than the triangular one. many open questions are also presented.
arxiv:1806.02233
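the energy $ e \ _ f $ discussed above can be approximated numerically by a truncated lattice sum. the sketch below is illustrative only : the lennard - jones - type potential $ f ( r ^ 2 ) = r ^ { - 12 } - 2 r ^ { - 6 } $, the unit - density normalization, and the cutoff are my own toy choices, not the paper's setup.

```python
import math

def lattice_energy(u1, u2, f, cutoff=20):
    """E_f = (1/2) * sum of f(|p|^2) over nonzero lattice points p = m*u1 + n*u2,
    truncated to |m|, |n| <= cutoff (a sketch; serious computations need
    careful convergence control)."""
    total = 0.0
    for m in range(-cutoff, cutoff + 1):
        for n in range(-cutoff, cutoff + 1):
            if m == 0 and n == 0:
                continue
            x = m * u1[0] + n * u2[0]
            y = m * u1[1] + n * u2[1]
            total += f(x * x + y * y)
    return 0.5 * total

def lj(r2):
    # Lennard-Jones-type potential f(r^2) = r^-12 - 2 r^-6
    return r2 ** -6 - 2.0 * r2 ** -3

square = ((1.0, 0.0), (0.0, 1.0))                # unit-density square lattice
a = math.sqrt(2.0 / math.sqrt(3.0))              # unit-density triangular lattice
tri = ((a, 0.0), (a / 2.0, a * math.sqrt(3.0) / 2.0))

e_square = lattice_energy(*square, lj)
e_tri = lattice_energy(*tri, lj)
```

in this toy computation the triangular lattice comes out with the lower (more negative) energy of the two, consistent with its expected role for lennard - jones - type interactions at this density.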
simulation has become an essential component of designing and developing scientific experiments. the conventional procedural approach to coding simulations of complex experiments is often error - prone, hard to interpret, and inflexible, making it hard to incorporate changes such as algorithm updates, experimental protocol modifications, and looping over experimental parameters. we present mmodel, a framework designed to accelerate the writing of experimental simulation packages. mmodel uses a graph - theory approach to represent the experiment steps and can rewrite its own code to implement modifications, such as adding a loop to vary simulation parameters systematically. the framework aims to avoid duplication of effort, increase code readability and testability, and decrease development time.
arxiv:2304.03421
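the graph - based execution idea described above can be illustrated with a minimal sketch. this is a hypothetical toy pipeline, not mmodel's actual api : each experiment step is a node with named inputs, and results of earlier steps feed later ones.

```python
def run_pipeline(steps, inputs):
    """Execute experiment steps in order; 'steps' is a list of
    (name, func, input_names), and each result is stored under 'name'
    so later steps can consume it. (Toy sketch: assumes steps are
    already topologically sorted; mmodel itself works on a full graph
    representation and can rewrite its own code.)"""
    values = dict(inputs)
    for name, func, needs in steps:
        values[name] = func(*(values[n] for n in needs))
    return values

# hypothetical two-step "experiment": build a signal, then compute its power
steps = [
    ("signal", lambda amp, freq: amp * freq, ("amplitude", "frequency")),
    ("power", lambda s: s ** 2, ("signal",)),
]
result = run_pipeline(steps, {"amplitude": 2.0, "frequency": 3.0})
```

looping over an experimental parameter then amounts to calling run_pipeline once per parameter value, which is the kind of modification mmodel automates by rewriting the graph.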
convolutional neural network ( cnn ) has achieved state - of - the - art performance in many different visual tasks. learned from a large - scale training dataset, cnn features are much more discriminative and accurate than hand - crafted features. moreover, cnn features are also transferable among different domains. on the other hand, traditional dictionary - based features ( such as bow and spm ) contain much more local discriminative and structural information, which is implicitly embedded in the images. to further improve the performance, in this paper, we propose to combine cnn with dictionary - based models for scene recognition and visual domain adaptation. specifically, based on the well - tuned cnn models ( e. g., alexnet and vgg net ), two dictionary - based representations are further constructed, namely the mid - level local representation ( mlr ) and the convolutional fisher vector representation ( cfv ). in mlr, an efficient two - stage clustering method, i. e., weighted spatial and feature space spectral clustering on the parts of a single image followed by clustering all representative parts of all images, is used to generate a class - mixture or a class - specific part dictionary. after that, the part dictionary is used to operate on the multi - scale image inputs for generating the mid - level representation. in cfv, a multi - scale and scale - proportional gmm training strategy is utilized to generate fisher vectors based on the last convolutional layer of the cnn. by integrating the complementary information of mlr, cfv and the cnn features of the fully connected layer, state - of - the - art performance can be achieved on scene recognition and domain adaptation problems. an interesting finding is that our proposed hybrid representation ( from vgg net trained on imagenet ) is also strongly complementary with googlenet and / or vgg - 11 ( trained on place205 ).
arxiv:1601.07977
in this paper we provide an analytical procedure for explicit calculation of the left and right invariant vector fields and one - forms on su ( n ) manifold. the calculations are based on the coset parametrization of su ( n ) group. the results enable us to calculate the invariant measure or haar measure on the group. as an illustrative example, we calculate invariant vector fields and one - forms on su ( 2 ) group.
arxiv:1003.2708
language models for code ( codelms ) have emerged as powerful tools for code - related tasks, outperforming traditional methods and standard machine learning approaches. however, these models are susceptible to security vulnerabilities, drawing increasing research attention from domains such as software engineering, artificial intelligence, and cybersecurity. despite the growing body of research focused on the security of codelms, a comprehensive survey in this area remains absent. to address this gap, we systematically review 67 relevant papers, organizing them based on attack and defense strategies. furthermore, we provide an overview of commonly used language models, datasets, and evaluation metrics, and highlight open - source tools and promising directions for future research in securing codelms.
arxiv:2410.15631
generative ai technologies are gaining unprecedented popularity, causing a mix of excitement and apprehension through their remarkable capabilities. in this paper, we study the challenges associated with deploying synthetic data, a subfield of generative ai. our focus centers on enterprise deployment, with an emphasis on privacy concerns caused by the vast amount of personal and highly sensitive data. we identify 40 + challenges and systematize them into five main groups - - i ) generation, ii ) infrastructure & architecture, iii ) governance, iv ) compliance & regulation, and v ) adoption. additionally, we discuss a strategic and systematic approach that enterprises can employ to effectively address the challenges and achieve their goals by establishing trust in the implemented solutions.
arxiv:2307.04208
we study a parabolic differential equation whose solution represents the atom dislocation in a crystal for a general type of peierls - nabarro model with possibly long range interactions and an external stress. differently from the previous literature, we treat here the case in which such dislocation is not the superpositions of transitions all occurring with the same orientations ( i. e. opposite orientations are allowed as well ).
arxiv:1407.0620
the interaction of electron - hole pairs with lattice vibrations exhibits a wealth of intriguing physical phenomena. the kohn anomaly is a renowned example where electron - phonon coupling leads to non - analytic phonon dispersion at specific momentum nesting the fermi surface. here we report evidence of another type of phonon anomaly discovered by low temperature raman spectroscopy in bilayer graphene where the charge density is modulated by the electric field effect. this anomaly, arising from charge - tunable modulations of particle - hole pairs that are resonantly coupled to lattice vibrations, is predicted to exhibit a logarithmic divergence in the long - wavelength optical - phonon energy. in a non - uniform bilayer of graphene, the logarithmic divergence is abated by charge density inhomogeneity, leaving as a vestige an anomalous phonon softening. the observed softening marks the first confirmation of the phonon anomaly as a key signature of the resonant deformation - potential electron - phonon coupling. the high sensitivity of the phonon softening to charge density non - uniformity opens significant avenues to explore the interplay between fundamental interactions and disorder in the atomic layers.
arxiv:0712.3879
we present 0. 035 arcsec ( ~ 200 pc ) resolution imaging of the 158 um [ cii ] line and the underlying dust continuum of the z = 6. 9 quasar j234833. 34 - 305410. 0. the 18 h alma observations reveal extremely compact emission ( diameter ~ 1 kpc ) that is consistent with a simple, almost face - on, rotation - supported disk with a significant velocity dispersion of ~ 160 km / s. the gas mass in just the central 200 pc is ~ 4x10 ^ 9 m _ sun, about a factor two higher than that of the central supermassive black hole. consequently we do not resolve the black hole ' s sphere of influence, and find no kinematic signature of the central supermassive black hole. kinematic modeling of the [ cii ] line shows that the dynamical mass at large radii is consistent with the gas mass, leaving little room for a significant mass contribution by stars and / or dark matter. the toomre - q parameter is less than unity throughout the disk, and thus is conducive to star formation, consistent with the high infrared luminosity of the system. the dust in the central region is optically thick, at a temperature > 132 k. using standard scaling relations of dust heating by star formation, this implies an unprecedented high star formation rate density of > 10 ^ 4 m _ sun / yr / kpc ^ 2. such a high number can still be explained with the eddington limit for star formation under certain assumptions, but could also imply that the central supermassive black hole contributes to the heating of the dust in the central 110 pc.
arxiv:2201.06396
reinforcement learning ( rl ) has demonstrated a great potential for automatically solving decision - making problems in complex uncertain environments. rl proposes a computational approach that allows learning through interaction in an environment with stochastic behavior, where agents take actions to maximize some cumulative short - term and long - term rewards. some of the most impressive results have been shown in game playing, where agents exhibited superhuman performance in games like go or starcraft 2, which led to its gradual adoption in many other domains, including cloud computing. therefore, rl appears as a promising approach for autoscaling in the cloud, since it is possible to learn transparent ( with no human intervention ), dynamic ( no static plans ), and adaptable ( constantly updated ) resource management policies to execute applications. these are three important distinctive aspects to consider in comparison with other widely used autoscaling policies that are defined in an ad - hoc way or statically computed, as in solutions based on meta - heuristics. autoscaling exploits the cloud elasticity to optimize the execution of applications according to given optimization criteria, which demands deciding when and how to scale computational resources up / down, and how to assign them to the upcoming processing workload. such actions have to be taken considering that the cloud is a dynamic and uncertain environment. motivated by this, many works apply rl to the autoscaling problem in the cloud. in this work, we exhaustively survey those proposals from major venues, and uniformly compare them based on a set of proposed taxonomies. we also discuss open problems and prospective research in the area.
arxiv:2001.09957
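as a concrete ( and deliberately tiny ) illustration of the rl - for - autoscaling idea surveyed above, the sketch below trains a tabular q - learning agent whose state is the current number of vms. the load model, costs, and constants are all invented for the toy and do not come from any surveyed system.

```python
import random

def train(episodes=2000, load=2, max_vms=4, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning: actions scale down (-1), keep (0), or scale up (+1);
    the reward penalises per-VM cost plus a heavy SLA penalty for unmet load."""
    rng = random.Random(seed)
    actions = (-1, 0, 1)
    q = {(s, a): 0.0 for s in range(1, max_vms + 1) for a in actions}
    for _ in range(episodes):
        s = rng.randint(1, max_vms)
        for _ in range(10):
            if rng.random() < eps:            # epsilon-greedy exploration
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda x: q[(s, x)])
            s2 = min(max(s + a, 1), max_vms)  # clamp the VM count
            r = -0.5 * s2 - 5.0 * max(load - s2, 0)
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, x)] for x in actions)
                                  - q[(s, a)])
            s = s2
    return q

q = train()
# greedy policy: scale up when under-provisioned, down when over-provisioned
greedy = {s: max((-1, 0, 1), key=lambda a: q[(s, a)]) for s in range(1, 5)}
```

on this toy mdp the learned greedy policy scales up from an under - provisioned state and scales down from over - provisioned ones, illustrating the transparent, dynamic, adaptable policies the survey discusses.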
firstly, we shall introduce the so - called snapping out walsh ' s brownian motion and present its relation with walsh ' s brownian motion. then the stiff problem related to walsh ' s brownian motion will be described and we shall build a phase transition for it. the snapping out walsh ' s brownian motion corresponds to the so - called semi - permeable pattern of this stiff problem.
arxiv:1805.08158
fast - converging algorithms are a contemporary requirement in reinforcement learning. in the context of linear function approximation, the magnitude of the smallest eigenvalue of the key matrix is a major factor reflecting the convergence speed. traditional value - based rl algorithms focus on minimizing errors. this paper introduces a variance minimization ( vm ) approach for value - based rl instead of error minimization. based on this approach, we propose two objectives, the variance of bellman error ( vbe ) and the variance of projected bellman error ( vpbe ), and derive the vmtd, vmtdc, and vmetd algorithms. we provide proofs of their convergence and of the optimal - policy invariance of variance minimization. experimental studies validate the effectiveness of the proposed algorithms.
arxiv:2411.06396
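the difference between error minimization and variance minimization can be seen on a toy chain mdp. the chain, rewards, and value estimates below are invented for illustration : the variance of the bellman error is invariant to shifting all value estimates by a constant, while the mean squared bellman error is not.

```python
GAMMA = 0.9
# Toy 3-state deterministic chain: s -> s+1 with reward 1; state 2 absorbing.
transitions = {0: 1, 1: 2, 2: 2}
reward = {0: 1.0, 1: 1.0, 2: 0.0}

def bellman_errors(v):
    # delta(s) = r(s) + gamma * v(s') - v(s)
    return [reward[s] + GAMMA * v[transitions[s]] - v[s] for s in transitions]

def msbe(v):
    # mean squared Bellman error (the traditional error-minimization target)
    d = bellman_errors(v)
    return sum(e * e for e in d) / len(d)

def vbe(v):
    # variance of Bellman error (the VM-style objective)
    d = bellman_errors(v)
    m = sum(d) / len(d)
    return sum((e - m) ** 2 for e in d) / len(d)

v = {0: 0.5, 1: 0.2, 2: 0.0}
v_shift = {s: val + 10.0 for s, val in v.items()}
# A constant shift changes every delta by the same amount -(1 - gamma) * c,
# so the variance is unchanged while the mean squared error moves.
assert abs(vbe(v) - vbe(v_shift)) < 1e-9
assert abs(msbe(v) - msbe(v_shift)) > 1e-6
```

this shift - invariance is a toy analogue of the optimal - policy invariance claimed for the variance - minimization objectives.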
the main aim of this paper is to investigate $ \ left ( h _ { p }, l _ { p } \ right ) $ and $ \ left ( h _ { p }, l _ { p, \ infty } \ right ) $ type inequalities for maximal operators of riesz logarithmic means of one - dimensional vilenkin - fourier series.
arxiv:1410.7975
we present vivace, the virac variable classification ensemble, a catalogue of variable stars extracted from an automated classification pipeline for the vista variables in the v \ ' ia l \ ' actea ( vvv ) infrared survey of the galactic bar / bulge and southern disc. our procedure utilises a two - stage hierarchical classifier to first isolate likely variable sources using simple variability summary statistics and training sets of non - variable sources from the gaia early third data release, and then classify candidate variables using more detailed light curve statistics and training labels primarily from ogle and vsx. the methodology is applied to point - spread - function photometry for $ \ sim490 $ million light curves from the virac v2 astrometric and photometric catalogue resulting in a catalogue of $ \ sim1. 4 $ million likely variable stars, of which $ \ sim39, 000 $ are high - confidence ( classification probability $ > 0. 9 $ ) rr lyrae ab stars, $ \ sim8000 $ rr lyrae c / d stars, $ \ sim187, 000 $ detached / semi - detached eclipsing binaries, $ \ sim18, 000 $ contact eclipsing binaries, $ \ sim1400 $ classical cepheid variables and $ \ sim2200 $ type ii cepheid variables. comparison with ogle - 4 suggests a completeness of around $ 90 \, \ % $ for rrab and $ \ lesssim60 \ % $ for rrc / d, and a misclassification rate for known rr lyrae stars of around $ 1 \ % $ for the high confidence sample. 
we close with two science demonstrations of our new vivace catalogue : first, a brief investigation of the spatial and kinematic properties of the rr lyrae stars within the disc / bulge, demonstrating the spatial elongation of bar - bulge rr lyrae stars is in the same sense as the more metal - rich red giant population whilst having a slower rotation rate of $ \ sim40 \, \ mathrm { km \, s } ^ { - 1 } \ mathrm { kpc } ^ { - 1 } $ ; and secondly, an investigation of the gaia edr3 parallax zeropoint using contact eclipsing binaries across the galactic disc plane and bulge.
arxiv:2110.15371
cataclysmic variable sdss j143317. 78 + 101123. 3 with p = 0. 054241 d ( 78. 1 - min ) was suspected to be a possible dwarf nova of wz sge type with a sub - stellar donor, but without detected outbursts so far. checking the newly available data from the atlas survey revealed an outburst of at least 6 magnitudes in september 2020, thus confirming the dwarf nova nature of this object with a brown dwarf secondary. other projects and individual observers had stopped their monitoring of this target several days before the outburst. this finding strengthens the value of observing the twilight zone by professional surveys and amateurs.
arxiv:2103.16179
this paper introduces dan +, a new multi - domain corpus and annotation guidelines for danish nested named entities ( nes ) and lexical normalization to support research on cross - lingual cross - domain learning for a less - resourced language. we empirically assess three strategies to model the two - layer named entity recognition ( ner ) task. we compare transfer capabilities from german versus in - language annotation from scratch. we examine language - specific versus multilingual bert, and study the effect of lexical normalization on ner. our results show that 1 ) the most robust strategy is multi - task learning which is rivaled by multi - label decoding, 2 ) bert - based ner models are sensitive to domain shifts, and 3 ) in - language bert and lexical normalization are the most beneficial on the least canonical data. our results also show that an out - of - domain setup remains challenging, while performance on news plateaus quickly. this highlights the importance of cross - domain evaluation of cross - lingual transfer.
arxiv:2105.11301
we apply the method of principal value resummation of large momentum - dependent radiative corrections to the calculation of the drell - yan cross section. we sum all next - to - leading logarithms and provide numerical results for the resummed exponent and the corresponding hard scattering function.
arxiv:hep-ph/9407293
we introduce a general model of trapping for random walks on graphs. we give the possible scaling limits of these randomly trapped random walks on $ \ mathbb { z } $. these scaling limits include the well - known fractional kinetics process, the fontes - isopi - newman singular diffusion as well as a new broad class we call spatially subordinated brownian motions. we give sufficient conditions for convergence and illustrate these on two important examples.
arxiv:1302.7227
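a randomly trapped random walk of the kind modelled above can be simulated directly. in the toy sketch below the trap at each visit holds the walker for an exponential time ; heavy - tailed trap depths, which produce the fractional - kinetics - type scaling limits, would replace the exponential.

```python
import random

def trapped_walk(steps, trap_mean=1.0, seed=0):
    """Simulate a simple random walk on Z where each visited site holds the
    walker for a random 'trap' time before the next unbiased jump.
    (Toy instance of the general trapping model; exponential trap times are
    an illustrative choice.)"""
    rng = random.Random(seed)
    pos, t = 0, 0.0
    trajectory = [(t, pos)]          # list of (time, position) pairs
    for _ in range(steps):
        t += rng.expovariate(1.0 / trap_mean)  # time spent in the trap
        pos += rng.choice((-1, 1))             # unbiased jump on Z
        trajectory.append((t, pos))
    return trajectory

traj = trapped_walk(1000)
```

plotting position against the accumulated trap time ( rather than the jump count ) is what reveals the time - changed, subordinated behaviour behind the scaling limits discussed above.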
irreversible entropy production ( iep ) plays an important role in quantum thermodynamic processes. here we investigate the geometrical bounds of iep in nonequilibrium thermodynamics by exemplifying a system coupled to a squeezed thermal bath subject to dissipation and dephasing, respectively. we find that the geometrical bounds of the iep always shift in contrary ways under dissipation and dephasing : the lower bound becomes tighter under dephasing, while the upper bound becomes tighter under dissipation. however, either under dissipation or under dephasing, we may reduce both the critical time of the iep itself and the critical time of the bounds for reaching equilibrium by harvesting the benefits of squeezing effects, in which case the value of the iep, quantifying the degree of thermodynamic irreversibility, also becomes smaller. therefore, due to the nonequilibrium nature of the squeezed thermal bath, the system - bath interaction energy has a prominent impact on the iep, leading to tightness of its bounds. our results do not contradict the second law of thermodynamics, since squeezing of the bath is involved as an available resource, which can improve the performance of quantum thermodynamic devices.
arxiv:2204.08260
in this work, we study the existence, multiplicity and concentration of positive solutions for the following class of quasilinear problem : \ [ - \ delta _ { \ phi } u + v ( \ epsilon x ) \ phi ( \ vert u \ vert ) u = f ( u ) \ quad \ mbox { in } \ quad \ mathbb { r } ^ { n }, \ ] where $ \ phi ( t ) = \ int _ { 0 } ^ { \ vert t \ vert } \ phi ( s ) sds $ is a n - function, $ \ delta _ { \ phi } $ is the $ \ phi $ - laplacian operator, $ \ epsilon $ is a positive parameter, $ n \ geq 2 $, $ v : \ mathbb { r } ^ { n } \ rightarrow \ mathbb { r } $ is a continuous function and $ f : \ mathbb { r } \ rightarrow \ mathbb { r } $ is a $ c ^ { 1 } $ - function.
arxiv:1506.05331
two - dimensional boron monolayers have been reported to be metallic in both previous theoretical predictions and experimental observations. here, however, by first - principles calculations with the quasi - particle g0w0 approach, we find for the first time a family of boron monolayers with a novel semiconducting property. we demonstrate that the vanished metallicity, characterized by the pz - derived bands crossing the fermi level, is attributed to the motif of a triple - hexagonal - vacancy, with which various semiconducting boron monolayers are designed to realize band - gap engineering for potential applications in electronic devices. the semiconducting boron monolayers in our predictions are expected to be synthesized on proper substrates, owing to stabilities similar to those of the monolayers observed experimentally.
arxiv:1707.09736
online social networks provide a platform for sharing information and free expression. however, these networks are also used for malicious purposes, such as distributing misinformation and hate speech, selling illegal drugs, and coordinating sex trafficking or child exploitation. this paper surveys the state of the art in keeping online platforms and their users safe from such harm, also known as the problem of preserving integrity. this survey comes from the perspective of having to combat a broad spectrum of integrity violations at facebook. we highlight the techniques that have been proven useful in practice and that deserve additional attention from the academic community. instead of discussing the many individual violation types, we identify key aspects of the social - media eco - system, each of which is common to a wide variety of violation types. furthermore, each of these components represents an area for research and development, and the innovations that are found can be applied widely.
arxiv:2009.10311
informally, the erd \ h { o } s - hajnal conjecture ( the eh - conjecture for short ) asserts that if a sufficiently large host clique on $ n $ vertices is edge - coloured avoiding a copy of some fixed edge - coloured clique, then there is a large homogeneous set of size $ n ^ \ beta $ for some positive $ \ beta $, where a set of vertices is homogeneous if it does not induce all the colours. this conjecture, if true, claims that imposing local conditions on edge - partitions of cliques results in a global structural consequence such as a large homogeneous set, a set avoiding all edges of some part. while this conjecture has attracted a lot of attention, it is still open even for two colours. in this note, we reduce the multicolour eh - conjecture to the case when the number of colours used in a host clique is either the same as in the forbidden pattern or one more. we exhibit a non - monotonicity behaviour of homogeneous sets in coloured cliques with forbidden patterns by showing that allowing an extra colour in the host graph could actually decrease the size of a largest homogeneous set.
arxiv:2311.03249
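the notion of a homogeneous set used above ( a vertex set whose induced edges miss at least one colour ) can be made concrete with a brute - force search ; the 2 - colouring below is an arbitrary toy example, not one from the paper.

```python
from itertools import combinations

def largest_homogeneous(n, colour, num_colours):
    """Size of a largest S subset of {0..n-1} whose induced edges use fewer
    than num_colours colours. colour[(i, j)] with i < j gives the colour of
    edge ij. Brute force, feasible only for tiny n."""
    best = 1  # a single vertex induces no edges, hence is homogeneous
    for size in range(2, n + 1):
        for S in combinations(range(n), size):
            used = {colour[(i, j)] for i, j in combinations(S, 2)}
            if len(used) < num_colours:
                best = max(best, size)
    return best

# 2-coloured K4: edge ij gets colour 0 iff i + j is even, else colour 1
colour = {(i, j): (i + j) % 2 for i, j in combinations(range(4), 2)}
h = largest_homogeneous(4, colour, 2)
```

on this 2 - coloured k4 every triangle already uses both colours, so the largest homogeneous set has size 2.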
we re - fit for the neutrino mass - squared difference $ \ delta m ^ 2 $ in the threefold maximal ( i. e. tri - maximal ) mixing scenario using recent chooz and super - k data, taking account of matter effects in the earth. while matter effects have little influence on reactor experiments and proposed long - baseline accelerator experiments with $ l \ simlt 1000 km $, they are highly significant for atmospheric experiments, suppressing naturally $ \ nu _ e $ mixing and enhancing $ \ nu _ { \ mu } - \ nu _ { \ tau } $ mixing, so as to effectively remove the experimental distinction between threefold maximal and twofold maximal $ \ nu _ { \ mu } - \ nu _ { \ tau } $ mixing. threefold maximal mixing is fully consistent with the chooz and super - k data and the best - fit value for the neutrino mass - squared difference is $ \ delta m ^ 2 = ( 0. 98 \ pm ^ { 0. 30 } _ { 0. 23 } ) \ times 10 ^ { - 3 } ev ^ 2 $.
arxiv:hep-ph/9904297
the recent results ( pardo & poretti 1997, a & a 324, 121 ; poretti & pardo 1997, a & a 324, 133 ) obtained on the frequency content of double - mode cepheid light curves and the properties of their fourier parameters are reviewed. some points briefly discussed in previous papers ( no third periodicity, methodological aspects of true - peak detection, the action of the cross - coupling terms and the impact on theoretical models ) are described.
arxiv:astro-ph/9711024
the nonlinear scalar field model of space - time film ( born - - infeld type nonlinear scalar field model ) is considered. its spherically symmetrical solution is obtained. this solution gives the class of moving solitary solutions or solitons with the lorentz transformation. we consider the distant interaction between such spheroidal solitons or spherons. this interaction is caused by the nonlinearity of the model. starting from the static configuration with two spherons, we show that the interaction under investigation is similar to the electromagnetic one.
arxiv:1804.09022
abstract meaning representation ( amr ) provides much information about a sentence, such as semantic relations, coreferences, and named entity relations, in one representation. however, research on amr parsing for indonesian sentences is fairly limited. in this paper, we develop a system that aims to parse an indonesian sentence using a machine learning approach. based on the work of zhang et al., our system consists of three steps : pair prediction, label prediction, and graph construction. pair prediction uses a dependency parsing component to get the edges between the words for the amr. the result of pair prediction is passed to the label prediction process, which uses a supervised learning algorithm to predict the label between the edges of the amr. we used a simple - sentence dataset gathered from articles and news sentences. our model achieved a smatch score of 0. 820 on the simple - sentence test data.
arxiv:2103.03730
braces were introduced by w. rump in 2006 as an algebraic system related to the quantum yang - baxter equation. in 2017, l. guarnieri and l. vendramin defined for the same purposes a more general notion of a skew left brace. recently, l. guo, h. lang, y. sheng [ arxiv : 2009. 03492 ] gave a definition of what is a rota - baxter operator on a group. we connect these two notions as follows. it is shown that every rota - baxter group gives rise to a skew left brace. moreover, every skew left brace can be injectively embedded into a rota - baxter group. when the additive group of a skew left brace is complete, then this brace is induced by a rota - baxter group. we interpret some notions of the theory of skew left braces in terms of rota - baxter operators.
arxiv:2105.00428
given a graph $ g $, a vertex switch of $ v \ in v ( g ) $ results in a new graph where neighbors of $ v $ become nonneighbors and vice versa. this operation gives rise to an equivalence relation over the set of labeled digraphs on $ n $ vertices. the equivalence class of $ g $ with respect to the switching operation is commonly referred to as $ g $ ' s switching class. the algebraic and combinatorial properties of switching classes have been studied in depth ; however, they have not been studied as thoroughly from an algorithmic point of view. the intent of this work is to further investigate the algorithmic properties of switching classes. in particular, we show that switching classes can be used to asymptotically speed up several super - linear unweighted graph algorithms. the current techniques for speeding up graph algorithms are all somewhat involved insofar as they employ sophisticated pre - processing, data - structures, or use " word tricks " on the ram model to achieve at most a $ o ( \ log ( n ) ) $ speed up for sufficiently dense graphs. our methods are much simpler and can result in super - polylogarithmic speedups. in particular, we achieve better bounds for diameter, transitive closure, bipartite maximum matching, and general maximum matching.
arxiv:1408.4900
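the vertex - switch operation defined above is easy to implement directly ; the sketch below is a toy, undirected version, and also checks that switching is an involution.

```python
def switch(edges, v, vertices):
    """Return the edge set after switching v: edges at v are complemented,
    all other edges are left untouched."""
    new_edges = {e for e in edges if v not in e}
    for u in vertices:
        if u != v and frozenset((u, v)) not in edges:
            new_edges.add(frozenset((u, v)))
    return new_edges

# path 0-1-2: vertex 1 is adjacent to both other vertices, so switching it
# deletes both of its edges and adds none, leaving the empty graph
vertices = {0, 1, 2}
edges = {frozenset((0, 1)), frozenset((1, 2))}
switched = switch(edges, 1, vertices)
assert switched == set()
# switching the same vertex twice restores the original graph
assert switch(switched, 1, vertices) == edges
```

the involution property is what makes the operation generate an equivalence relation, and hence the switching classes studied in the paper.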
many macroeconomic policy questions may be assessed in a case study framework, where the time series of a treated unit is compared to a counterfactual constructed from a large pool of control units. i provide a general framework for this setting, tailored to predict the counterfactual by minimizing a tradeoff between underfitting ( bias ) and overfitting ( variance ). the framework nests recently proposed structural and reduced form machine learning approaches as special cases. furthermore, difference - in - differences with matching and the original synthetic control are restrictive cases of the framework, in general not minimizing the bias - variance objective. using simulation studies i find that machine learning methods outperform traditional methods when the number of potential controls is large or the treated unit is substantially different from the controls. equipped with a toolbox of approaches, i revisit a study on the effect of economic liberalisation on economic growth. i find effects for several countries where no effect was found in the original study. furthermore, i inspect how a systemically important bank responds to increasing capital requirements by using a large pool of banks to estimate the counterfactual. finally, i assess the effect of a changing product price on product sales using a novel scanner dataset.
arxiv:1803.00096
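The bias-variance tradeoff described above can be illustrated with a hypothetical sketch in which the counterfactual is a weighted combination of control units, fit to the treated unit's pre-treatment series with a ridge penalty on the weights; the function, its names, and the plain gradient-descent solver are our own illustration, not the paper's estimator:

```python
def counterfactual_weights(controls, treated, lam, steps=5000, lr=0.01):
    """Fit weights w so that sum_j w[j] * controls[j] tracks the treated
    unit's pre-treatment series, trading mean squared fit error (bias)
    against a ridge penalty lam * ||w||^2 on the weights (variance).
    controls: list of control series, each a list of T pre-treatment values.
    treated:  list of T pre-treatment values for the treated unit.
    """
    n, T = len(controls), len(treated)
    w = [0.0] * n
    for _ in range(steps):
        # residuals of the current weighted fit at each pre-treatment period
        resid = [sum(w[j] * controls[j][t] for j in range(n)) - treated[t]
                 for t in range(T)]
        for j in range(n):
            grad = (2.0 * sum(resid[t] * controls[j][t] for t in range(T)) / T
                    + 2.0 * lam * w[j])
            w[j] -= lr * grad
    return w
```

With `lam = 0` and a treated unit that exactly matches one control, the weights concentrate on that control; increasing `lam` shrinks all weights, accepting bias in exchange for lower variance.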
We consider the problem of causal effect estimation with an unobserved confounder, where we observe a single proxy variable that is associated with the confounder. Although it has been shown that the recovery of an average causal effect is impossible in general from a single proxy variable, we show that causal recovery is possible if the outcome is generated deterministically. This generalizes existing work on causal methods with a single proxy variable to the continuous treatment setting. We propose two kernel-based methods for this setting: the first based on the two-stage regression approach, and the second based on a maximum moment restriction approach. We prove that both approaches can consistently estimate the causal effect, and we empirically demonstrate that we can successfully recover the causal effect on challenging synthetic benchmarks.
arxiv:2308.04585
We associate to every positive braid a braid monodromy group, generalizing the geometric monodromy group of an isolated plane curve singularity. If the closure of the braid is a knot, we identify the corresponding group with a framed mapping class group. In particular, this gives a well-defined knot invariant. As an application, we obtain that the geometric monodromy group of an irreducible singularity is determined by the genus and the Arf invariant of the associated knot.
arxiv:2111.08150
This paper is concerned with the existence of positive solutions of second-order impulsive differential equations with integral boundary conditions on an infinite interval. As an application, an example is given to demonstrate our main results.
arxiv:2005.14660
Machine learning has made remarkable progress in a wide range of fields. In many scenarios, learning is performed on datasets involving sensitive information, in which privacy protection is essential for learning algorithms. In this work, we study pure private learning in the agnostic model, a framework reflecting the learning process in practice. We examine the number of users required under item-level (where each user contributes one example) and user-level (where each user contributes multiple examples) privacy and derive several improved upper bounds. For item-level privacy, our algorithm achieves a near-optimal bound for general concept classes. We extend this to the user-level setting, rendering a tighter upper bound than the one proved by Ghazi et al. (2023). Lastly, we consider the problem of learning thresholds under user-level privacy and present an algorithm with a nearly tight user complexity.
arxiv:2407.20640
The pair correlation function is a fundamental spatial point process characteristic that, given the intensity function, determines second-order moments of the point process. Non-parametric estimation of the pair correlation function is a typical initial step of a statistical analysis of a spatial point pattern. Kernel estimators are popular but, especially for clustered point patterns, suffer from bias for small spatial lags. In this paper we introduce a new orthogonal series estimator. The new estimator is consistent and asymptotically normal according to our theoretical and simulation results. Our simulations further show that the new estimator can outperform the kernel estimators, in particular for Poisson and clustered point processes.
arxiv:1702.01736
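For contrast with the orthogonal series estimator proposed above, the kind of kernel estimator the abstract refers to can be sketched as follows. This is an assumed textbook-style form with no edge correction (one source of the small-lag bias mentioned), not the paper's estimator, and the names are our own:

```python
import math

def pcf_kernel(points, r_values, h, intensity, area):
    """Naive kernel estimator of the pair correlation function g(r)
    for a planar point pattern observed in a window of the given area.
    points: list of (x, y) locations; h: Epanechnikov bandwidth;
    intensity: expected number of points per unit area.
    """
    n = len(points)
    # every ordered pair i != j contributes one inter-point distance
    dists = [math.dist(points[i], points[j])
             for i in range(n) for j in range(n) if i != j]
    est = []
    for r in r_values:
        # kernel-smoothed count of pairs at distance approximately r
        k = sum(0.75 / h * (1.0 - ((r - d) / h) ** 2)
                for d in dists if abs(r - d) < h)
        est.append(k / (2.0 * math.pi * r * intensity ** 2 * area))
    return est
```

For a pattern with no pairs within bandwidth of a given lag the estimate is exactly zero there, which hints at the instability at small $r$ that motivates alternatives such as the orthogonal series approach.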
In this work we study arrangements of $k$-dimensional subspaces $V_1, \ldots, V_n \subset \mathbb{C}^\ell$. Our main result shows that, if every pair $V_a, V_b$ of subspaces is contained in a dependent triple (a triple $V_a, V_b, V_c$ contained in a $2k$-dimensional space), then the entire arrangement must be contained in a subspace whose dimension depends only on $k$ (and not on $n$). The theorem holds under the assumption that $V_a \cap V_b = \{0\}$ for every pair (otherwise it is false). This generalizes the Sylvester-Gallai theorem (or Kelly's theorem for complex numbers), which proves the $k = 1$ case. Our proof also handles arrangements in which many pairs (instead of all) appear in dependent triples, generalizing the quantitative results of Barak et al. [BDWY-PNAS]. One of the main ingredients in the proof is a strengthening of a theorem of Barthe [Bar98] (from the $k = 1$ to the $k > 1$ case), proving the existence of a linear map that makes the angles between pairs of subspaces large on average. Such a mapping can be found unless there is an obstruction in the form of a low-dimensional subspace intersecting many of the spaces in the arrangement (in which case one can use a different argument to prove the main theorem).
arxiv:1412.0795
Domar has given a condition that ensures the existence of the largest subharmonic minorant of a given function. Later, Rippon pointed out that a modification of Domar's argument in fact gives a better result. Using our previous, rather general and flexible, modification of Domar's original argument, we extend their results both to the subharmonic and quasinearly subharmonic settings.
arxiv:1102.1136