| text | source |
|---|---|
We survey the salient features and problems of conformal and superconformal mechanics and portray some of their developments over the past decade. Both classical and quantum issues of single- and multi-particle systems are covered. | arxiv:1112.1947 |
We suggest using the half-width rule to estimate the $1/N_c$ errors in hadronic models containing resonances. We show simple consequences for the analysis of meson Regge trajectories, the hadron resonance gas at finite temperature, and generalized hadronic form factors. | arxiv:1210.7153 |
Artificial intelligence (AI) in healthcare, especially in white blood cell cancer diagnosis, is hindered by two primary challenges: the lack of large-scale labeled datasets for white blood cell (WBC) segmentation and outdated segmentation methods. These challenges inhibit the development of more accurate and modern techniques to diagnose cancers relating to white blood cells. To address the first challenge, a semi-supervised learning framework should be devised to capitalize efficiently on the scarce labeled data available. In this work, we address this issue by proposing a novel self-training pipeline incorporating FixMatch. Self-training is a technique that uses a model trained on labeled data to generate pseudo-labels for the unlabeled data and then re-trains on both. FixMatch is a consistency-regularization algorithm that enforces the model's robustness against variations in the input image. We find that incorporating FixMatch in the self-training pipeline improves performance in the majority of cases. Our best performance was achieved with the self-training scheme with consistency on the DeepLab-V3 architecture with ResNet-50, reaching 90.69%, 87.37%, and 76.49% on the Zheng 1, Zheng 2, and LISC datasets, respectively. | arxiv:2401.07278 |
We consider two particles performing continuous-time nearest-neighbor random walk on $\mathbb{Z}$ and interacting with each other when they are at neighboring positions. Typical examples are two particles in the partial exclusion process or in the inclusion process. We provide an exact formula for the Laplace-Fourier transform of the transition probabilities of the two-particle dynamics. From this we derive a general scaling limit result, which shows that the possible scaling limits are coalescing Brownian motions, reflected Brownian motions, and sticky Brownian motions. In particle systems with duality, the solution of the dynamics of two dual particles provides relevant information. We apply the exact formula to the symmetric inclusion process, which is self-dual, in the condensation regime. We thus obtain two results. First, by computing the time-dependent covariance of the particle occupation number at two lattice sites, we characterize the time-dependent coarsening in infinite volume when the process is started from a homogeneous product measure. Second, we identify the limiting variance of the density field in the diffusive scaling limit, relating it to the local time of sticky Brownian motion. | arxiv:1711.11283 |
Rapidly rotating black holes are known to develop instabilities in the presence of a sufficiently light boson, a process which becomes efficient when the boson's Compton wavelength is roughly the size of the black hole. This phenomenon, known as black hole superradiance, generates an exponentially growing boson cloud at the expense of the rotational energy of the black hole. For astrophysical black holes with $M \sim \mathcal{O}(10)\, M_\odot$, the superradiant condition is achieved for bosons with $m_b \sim \mathcal{O}(10^{-11})\,{\rm eV}$; intriguingly, photons traversing the intergalactic medium (IGM) acquire an effective mass (due to their interactions with the ambient plasma) which naturally resides in this range. The implications of photon superradiance, i.e. the evolution of the superradiant photon cloud and ambient plasma in the presence of scattering and particle production processes, have yet to be thoroughly investigated. Here, we enumerate and discuss a number of different processes capable of quenching the growth of the photon cloud, including particle interactions with the ambient electrons and back-reactions on the effective mass (arising e.g. from thermal effects, pair production, ionization of the local background, and modifications to the dispersion relation from strong electric fields). This work naturally serves as a guide in understanding how interactions may allow light exotic bosons to evade superradiant constraints. | arxiv:2009.10075 |
FarView is an early-stage concept for a large, low-frequency radio observatory, manufactured in situ on the lunar far side using metals extracted from the lunar regolith. It consists of 100,000 dipole antennas in compact subarrays distributed over a large area but with empty space between subarrays in a core-halo structure. FarView covers a total area of ~200 km$^2$, has a dense core within the inner ~36 km$^2$, and a ~power-law falloff of antenna density out to ~14 km from the center. With this design, it is relatively easy to identify multiple viable build sites on the lunar far side. The science case for FarView emphasizes the unique capabilities to probe the unexplored cosmic Dark Ages, identified by the 2020 Astrophysics Decadal Survey as the discovery area for cosmology. FarView will deliver power spectra and tomographic maps tracing the evolution of the universe from before the birth of the first stars to the beginning of Cosmic Dawn, and potentially provide unique insights into dark matter, early dark energy, neutrino masses, and the physics of inflation. What makes FarView feasible and affordable in the timeframe of the 2030s is that it is manufactured in situ, utilizing space industrial technologies. This in-situ manufacturing architecture utilizes Earth-built equipment that is transported to the lunar surface to extract metals from the regolith and will use those metals to manufacture most of the array components: dipole antennas, power lines, and silicon solar cell power systems. This approach also enables a long functional lifetime, by permitting servicing and repair of the observatory. The full 100,000-dipole FarView observatory will take 4-8 years to build, depending on the realized performance of the manufacturing elements and the lunar delivery scenario. | arxiv:2404.03840 |
We explore the use of symmetry-adapted perturbation theory (SAPT) as a simple and efficient means to compute interaction energies between large molecular systems with a hybrid method combining NISQ-era quantum and classical computers. From the one- and two-particle reduced density matrices of the monomer wavefunctions obtained by the variational quantum eigensolver (VQE), we compute SAPT contributions to the interaction energy [SAPT(VQE)]. At first order, this energy yields the electrostatic and exchange contributions for non-covalently bound systems. We empirically find from ideal statevector simulations that the SAPT(VQE) interaction energy components display orders of magnitude lower absolute errors than the corresponding VQE total energies. Therefore, even with coarsely optimized low-depth VQE wavefunctions, we still obtain sub-kcal/mol accuracy in the SAPT interaction energies. In SAPT(VQE), the quantum requirements, such as qubit count and circuit depth, are lowered by performing computations on the separate molecular systems. Furthermore, active spaces allow large systems containing thousands of orbitals to be reduced to an orbital set small enough to perform the quantum portions of the computations. We benchmark SAPT(VQE) (with the VQE component simulated by ideal statevector simulators) against a handful of small multi-reference dimer systems and the iron-center-containing human cancer-relevant protein lysine-specific demethylase 5 (KDM5A). | arxiv:2110.01589 |
The ductile fracture process in porous metals due to growth and coalescence of micron-scale voids is not only affected by the imposed stress state but also by the distribution of the voids and the material size effect. The objective of this work is to understand the interaction of the inter-void spacing (or ligaments) and the resultant gradient-induced material size effect on void coalescence for a range of imposed stress states. To this end, three-dimensional finite element calculations of unit cell models with a discrete void embedded in a strain gradient enhanced material matrix are performed. The calculations are carried out for a range of initial inter-void ligament sizes and imposed stress states characterised by fixed values of the stress triaxiality and the Lode parameter. Our results show that in the absence of strain gradient effects on the material response, decreasing the inter-void ligament size results in an increase in the propensity for void coalescence. However, in a strain gradient enhanced material matrix, the strain gradients harden the material in the inter-void ligament and decrease the effect of inter-void ligament size on the propensity for void coalescence. | arxiv:2011.04937 |
Reinforcement learning from human feedback (RLHF) has emerged as a popular paradigm for aligning models with human intent. Typically, RLHF algorithms operate in two phases: first, use human preferences to learn a reward function, and second, align the model by optimizing the learned reward via reinforcement learning (RL). This paradigm assumes that human preferences are distributed according to reward, but recent work suggests that they instead follow the regret under the user's optimal policy. Thus, learning a reward function from feedback is not only based on a flawed assumption of human preference, but also leads to unwieldy optimization challenges that stem from policy gradients or bootstrapping in the RL phase. Because of these optimization challenges, contemporary RLHF methods restrict themselves to contextual bandit settings (e.g., as in large language models) or limit observation dimensionality (e.g., state-based robotics). We overcome these limitations by introducing a new family of algorithms for optimizing behavior from human feedback using the regret-based model of human preferences. Using the principle of maximum entropy, we derive Contrastive Preference Learning (CPL), an algorithm for learning optimal policies from preferences without learning reward functions, circumventing the need for RL. CPL is fully off-policy, uses only a simple contrastive objective, and can be applied to arbitrary MDPs. This enables CPL to elegantly scale to high-dimensional and sequential RLHF problems while being simpler than prior methods. | arxiv:2310.13639 |
Collisions between interstellar gas clouds are potentially an important mechanism for triggering star formation. This is because they are able to rapidly generate large masses of dense gas. Observationally, cloud collisions are often identified in position-velocity (PV) space through bridging features between intensity peaks, usually of CO emission. Using a combination of hydrodynamical simulations, time-dependent chemistry, and radiative transfer, we produce synthetic molecular line observations of overlapping clouds that are genuinely colliding, and overlapping clouds that are just chance superpositions. Molecules tracing denser material than CO, such as NH$_3$ and HCN, reach peak intensity ratios of $0.5$ and $0.2$ with respect to CO in the 'bridging feature' region of PV space for genuinely colliding clouds. For overlapping clouds that are just chance superpositions, the peak NH$_3$ and HCN intensities are co-located with the CO intensity peaks. This represents a way of confirming cloud collisions observationally, and distinguishing them from chance alignments of unrelated material. | arxiv:2106.10298 |
Randomized coordinate descent (RCD) methods are state-of-the-art algorithms for training linear predictors via minimizing regularized empirical risk. When the number of examples ($n$) is much larger than the number of features ($d$), a common strategy is to apply RCD to the dual problem. On the other hand, when the number of features is much larger than the number of examples, it makes sense to apply RCD directly to the primal problem. In this paper we provide the first joint study of these two approaches when applied to L2-regularized ERM. First, we show through a rigorous analysis that for dense data, the above intuition is precisely correct. However, we find that for sparse and structured data, primal RCD can significantly outperform dual RCD even if $d \ll n$, and vice versa, dual RCD can be much faster than primal RCD even if $n \ll d$. Moreover, we show that, surprisingly, a single sampling strategy minimizes both the (bound on the) number of iterations and the overall expected complexity of RCD. Note that the latter complexity measure also takes into account the average cost of the iterations, which depends on the structure and sparsity of the data, and on the sampling strategy employed. We confirm our theoretical predictions using extensive experiments with both synthetic and real data sets. | arxiv:1605.08982 |
We provide, in the R-parity violating supersymmetric standard model, a comprehensive analysis of sneutrino minimization from the one-loop effective scalar potential, and also of one-loop renormalized neutrino masses and mixing, by calculating the effective neutrino mass matrix in the weak basis. Applying our results to theories with gauge-mediated supersymmetry breaking, we show how atmospheric and solar neutrino oscillations can be accommodated simultaneously in this framework. It is observed that the one-loop correction to sneutrino vacuum expectation values leads to a significant effect on the determination of the neutrino masses and mixing. | arxiv:hep-ph/9909429 |
In this paper, we shall prove that all actions of LERF groups on sets are sofic. As a corollary, we obtain that a large class of generalized wreath products are sofic. | arxiv:2402.17150 |
We extend the theory of non-thermal fixed points to the case of anomalously slow universal scaling dynamics according to the sine-Gordon model. This entails the derivation of a kinetic equation for the momentum occupancy of the scalar field from a non-perturbative two-particle irreducible effective action, which re-sums a series of closed loop chains akin to a large-$N$ expansion at next-to-leading order. The resulting kinetic equation is analyzed for possible scaling solutions in space and time that are characterized by a set of universal scaling exponents and encode self-similar transport to low momenta. Assuming the momentum occupancy distribution to exhibit a scaling form, we can determine the exponents by identifying the dominating contributions to the scattering integral and power counting. If the field exhibits strong variations across many wells of the cosine potential, the scattering integral is dominated by the scattering of many quasiparticles, such that the momentum of each single participating mode is only weakly constrained. Remarkably, in this case, in contrast to wave-turbulent cascades, which correspond to local transport in momentum space, our results suggest that kinetic scattering here is dominated by rather non-local processes corresponding to a spatial containment in position space. The corresponding universal correlation functions in momentum and position space corroborate this conclusion. Numerical simulations performed in accompanying work yield scaling properties close to the ones predicted here. | arxiv:2212.01163 |
A new class of methods is introduced for solving the Kohn-Sham equations of density functional theory, based on constructing a mapping dynamically between the Kohn-Sham system and an auxiliary system. The resulting auxiliary density functional equation is solved implicitly for the density response, eliminating the instabilities that arise in conventional techniques for simulations of large, metallic, or inhomogeneous systems. The auxiliary system is not required to be fermionic, and an example bosonic auxiliary density functional is presented which captures the key aspects of the fermionic Kohn-Sham behaviour. This bosonic auxiliary scheme is shown to provide good performance for a range of bulk materials, and a substantial improvement in the scaling of the calculation with system size for a variety of simulation systems. | arxiv:1503.01420 |
Water segmentation is critical to disaster response and water resource management. Authorities may employ high-resolution photography to monitor rivers, lakes, and reservoirs, allowing for more proactive management in agriculture, industry, and conservation. Deep learning has improved flood monitoring by allowing models like CNNs, U-Nets, and transformers to handle large volumes of satellite and aerial data. However, these models usually have significant processing requirements, limiting their usage in real-time applications. This research proposes upgrading the SegFormer model for water segmentation through data augmentation with datasets such as ADE20K and RIWA to boost generalization. We examine how inductive bias affects attention-based models and find that SegFormer performs better on bigger datasets. To further demonstrate the role of data augmentation, low-rank adaptation (LoRA) is used to lower processing complexity while preserving accuracy. We show that the proposed Habaek model outperforms current models in segmentation, with an intersection over union (IoU) ranging from 0.91986 to 0.94397. In terms of F1-score, recall, accuracy, and precision, Habaek performs better than rival models, indicating its potential for real-world applications. This study highlights the need to enhance architectures and incorporate datasets for effective water segmentation. | arxiv:2410.15794 |
We introduce information bearing systems (IBRS) as an abstraction of many logical systems. We define a general semantics for IBRS, and show that IBRS generalize in a natural way preferential semantics and solve open representation problems. | arxiv:0808.3075 |
In this work, we investigate a multi-source multicast network aided by an arbitrary number of relays, where it is assumed that no direct link is available for each source-destination (S-D) pair. The aim is to find the fundamental limit on the maximal common multicast throughput of all source nodes when resource allocation is available. A transmission protocol employing the compute-and-forward (CPF) relaying strategy is proposed. We also adjust the methods in the literature for obtaining the integer network-constructed coefficient matrix (a naive method, a local optimal method, as well as a global optimal method) to fit the general topology with an arbitrary number of relays. Two transmission scenarios are addressed. The first is delay-stringent transmission, where each message must be delivered within one slot. The second is delay-tolerant transmission, where no delay constraint is imposed. The associated optimization problems to maximize the short-term and long-term common multicast throughput are formulated and solved, and the optimal allocations of power and time slots are presented. Performance comparisons show that the CPF strategy outperforms the conventional decode-and-forward (DF) strategy. It is also shown that with more relays, the CPF strategy performs even better due to the increased diversity. Finally, by simulation, it is observed that for a large network in the relatively high SNR regime, CPF with the local optimal method for the network-constructed matrix can perform close to that with the global optimal method. | arxiv:1406.1081 |
The production mechanism of light (anti)nuclei in heavy-ion collisions has been extensively studied experimentally and theoretically. Two competing (anti)nucleosynthesis models are typically used to describe light (anti)nuclei yields and their ratios to other hadrons in heavy-ion collisions: the statistical hadronization model (SHM) and the nucleon coalescence model. The possibility to distinguish these phenomenological models calls for new experimental observables. Given their large baryon number, light (anti)nuclei have a high sensitivity to the baryon chemical potential ($\mu_{\rm B}$) of the system created in the collision. In this talk, the first measurement of event-by-event antideuteron number fluctuations in heavy-ion collisions is presented and compared with expectations of the SHM and coalescence model. In addition, the antinuclei-to-nuclei ratios are used to obtain a measurement of $\mu_{\rm B}$ in heavy-ion collisions with unprecedented precision. | arxiv:2209.05369 |
The largest structures in the cosmic web probe the dynamical nature of dark energy through their integrated Sachs-Wolfe imprints. In the strength of the signal, typical cosmic voids have shown good consistency with the expectation $A_{\rm ISW} = \Delta T^{\rm data}/\Delta T^{\rm theory} = 1$, given the substantial cosmic variance. Discordantly, large-scale hills in the gravitational potential, or supervoids, have shown excess signals. In this study, we mapped out 87 new supervoids in the total 5000 deg$^2$ footprint of the Dark Energy Survey at $0.2 < z < 0.9$ to probe these anomalous claims. We found an excess imprinted profile with amplitude $A_{\rm ISW} \approx 4.1 \pm 2.0$. The combination with independent BOSS data reveals an ISW imprint of supervoids at the $3.3\sigma$ significance level with an enhanced amplitude $A_{\rm ISW} \approx 5.2 \pm 1.6$. The tension with $\Lambda$CDM predictions is equivalent to $2.6\sigma$ and remains unexplained. | arxiv:1811.07812 |
We present a comprehensive theoretical study of the magnetic field dependence of the near-field radiative heat transfer (NFRHT) between two parallel plates. We show that when the plates are made of doped semiconductors, the near-field thermal radiation can be severely affected by the application of a static magnetic field. We find that irrespective of its direction, the presence of a magnetic field reduces the radiative heat conductance, and dramatic reductions of up to 700% can be found with fields of about 6 T at room temperature. We show that this striking behavior is due to the fact that the magnetic field radically changes the nature of the NFRHT. The field not only affects the electromagnetic surface waves (both plasmons and phonon polaritons) that normally dominate the near-field radiation in doped semiconductors, but it also induces hyperbolic modes that progressively dominate the heat transfer as the field increases. In particular, we show that when the field is perpendicular to the plates, the semiconductors become ideal hyperbolic near-field emitters. More importantly, by changing the magnetic field, the system can be continuously tuned from a situation where the surface waves dominate the heat transfer to a situation where hyperbolic modes completely govern the near-field thermal radiation. We show that this high tunability can be achieved with accessible magnetic fields and very common materials like n-doped InSb or Si. Our study paves the way for an active control of NFRHT and opens the possibility to study unique hyperbolic thermal emitters without the need to resort to complicated metamaterials. | arxiv:1506.06060 |
Let $A$ be a finite rank torsion-free abelian group. Then there exist direct decompositions $A = B \oplus C$ where $B$ is completely decomposable and $C$ has no rank 1 direct summand. In such a decomposition $B$ is unique up to isomorphism and $C$ is unique up to near-isomorphism. | arxiv:1701.02460 |
Large language models have gained significant popularity because of their ability to generate human-like text and potential applications in various fields, such as software engineering. Large language models for code are commonly trained on large unsanitised corpora of source code scraped from the internet. The content of these datasets is memorised and can be extracted by attackers with data extraction attacks. In this work, we explore memorisation in large language models for code and compare the rate of memorisation with large language models trained on natural language. We adopt an existing benchmark for natural language and construct a benchmark for code by identifying samples that are vulnerable to attack. We run both benchmarks against a variety of models and perform a data extraction attack. We find that large language models for code are vulnerable to data extraction attacks, like their natural language counterparts. From the training data that was identified to be potentially extractable, we were able to extract 47% from a CodeGen-Mono-16B code completion model. We also observe that models memorise more as their parameter count grows, and that their pre-training data are also vulnerable to attack. We also find that data carriers are memorised at a higher rate than regular code or documentation, and that different model architectures memorise different samples. Data leakage has severe outcomes, so we urge the research community to further investigate the extent of this phenomenon using a wider range of models and extraction techniques in order to build safeguards to mitigate this issue. | arxiv:2312.11658 |
We present an efficient computational framework to quantify the impact of individual observations in four-dimensional variational data assimilation. The proposed methodology uses first- and second-order adjoint sensitivity analysis, together with matrix-free algorithms, to obtain low-rank approximations of the observation impact matrix. We illustrate the application of this methodology to important applications such as data pruning and the identification of faulty sensors for a two-dimensional shallow water test system. | arxiv:1307.5076 |
The discovery of ultra-high-energy neutrinos, with energies above 100 PeV, may soon be within reach of upcoming neutrino telescopes. We present a robust framework to compute the statistical significance of point-source discovery via the detection of neutrino multiplets. We apply it to the radio array component of IceCube-Gen2. To identify a source with $3\sigma$ significance, IceCube-Gen2 will need to detect a triplet, at best, and an octuplet, at worst, depending on whether the source is steady-state or transient, and on its position in the sky. The discovery, or absence, of sources significantly constrains the properties of the source population. | arxiv:2207.11940 |
We consider regulated curves in a Banach bundle whose projection on the basis is continuous with regulated derivative. We build a Banach manifold structure on the set of such curves. This result was previously obtained for the case of a strong Riemannian Banach manifold and absolutely continuous curves in arXiv:1612.02604. The essential argument used was the existence of a "local addition" on such a manifold. Our proof holds for any Banach manifold. In the second part of the paper, problems of controllability are discussed. | arxiv:2112.14690 |
Script knowledge is critical for humans to understand the broad daily tasks and routine activities in the world. Recently, researchers have explored large-scale pre-trained language models (PLMs) to perform various script-related tasks, such as story generation, temporal ordering of events, future event prediction, and so on. However, how well the PLMs capture script knowledge is still not well studied. To answer this question, we design three probing tasks: inclusive sub-event selection, starting sub-event selection, and temporal ordering, to investigate the capabilities of PLMs with and without fine-tuning. The three probing tasks can further be used to automatically induce a script for each main event given all the possible sub-events. Taking BERT as a case study, by analyzing its performance on script induction as well as on each individual probing task, we conclude that the stereotypical temporal knowledge among sub-events is well captured in BERT, whereas the inclusive or starting sub-event knowledge is barely encoded. | arxiv:2204.10176 |
We apply methods from algebraic geometry to study uniform matrix product states. Our main results concern the topology of the locus of tensors expressed as uMPS, their defining equations, and identifiability. By an interplay of theorems from algebra, geometry, and quantum physics, we answer several questions and conjectures posed by Critch, Morton, and Hackbusch. | arxiv:1904.07563 |
We present spatially resolved photometric and spectroscopic observations of two wide brown dwarf binaries uncovered by the SIMP near-infrared proper motion survey. The first pair (SIMP J1619275+031350AB) has a separation of 0.691" (15.2 AU) and components T2.5 + T4.0, at the cooler end of the ill-understood J-band brightening. The system is unusual in that the earlier-type primary is bluer in J-Ks than the later-type secondary, whereas the reverse is expected for binaries in the late-L to T dwarf range. This remarkable color reversal can possibly be explained by very different cloud properties between the two components. The second pair (SIMP J1501530-013506AB) consists of an L4.5 + L5.5 (separation 0.96", 30-47 AU) with a surprisingly large flux ratio ($\Delta J = 1.79$ mag) considering the similar spectral types of its components. The large flux ratio could be explained if the primary is itself an equal-luminosity binary, which would make it one of the first known triple brown dwarf systems. Adaptive optics observations could not confirm this hypothesis, but it remains a likely one, which may be verified by high-resolution near-infrared spectroscopy. These two systems add to the handful of known brown dwarf binaries amenable to resolved spectroscopy without the aid of adaptive optics and constitute prime targets to test brown dwarf atmosphere models. | arxiv:1107.0768 |
Vapor condensation is extensively used in applications that demand the exchange of a substantial amount of heat energy or the vapor-liquid phase conversion. In conventional condensers, the condensate removal from a subcooled surface is caused by gravity. This restricts the use of such condensers in space applications or in horizontal orientations. The current study demonstrates a proof of concept of a novel plate-type condenser platform for passively removing condensate from a horizontally oriented surface to the surrounding wicking reservoir without gravity. The condensing surface is engineered with patterned wettabilities, which enables the continuous migration of condensate from the inner region of the condenser surface to the side edges via a surface energy gradient. The surrounding wicking reservoir facilitates the continuous absorption of condensate from the side edges. The condensation dynamics on different substrates with patterned wettabilities are investigated, and their condensation heat transfer performance is compared. The continuous migration of condensate drops from a superhydrophobic to a superhydrophilic area can rejuvenate the nucleation sites in the superhydrophobic area, resulting in increased heat transport. The condenser design with engineered wettability described above can be used for temperature and humidity management applications in space. | arxiv:2305.19070 |
classically, bézout's theorem says that an intersection of hypersurfaces in a projective space is rationally equivalent to a number of copies of a smaller projective space, the number depending on the degrees of the hypersurfaces. we give a generalization of that result to the context of $c_2$-equivariant hypersurfaces in $c_2$-equivariant linear projective space, expressing the intersection as a linear combination of equivariant schubert varieties. | arxiv:2312.00559 |
high - dimensional count data poses significant challenges for statistical analysis, necessitating effective methods that also preserve explainability. we focus on a low rank constrained variant of the poisson log - normal model, which relates the observed data to a latent low - dimensional multivariate gaussian variable via a poisson distribution. variational inference methods have become a gold standard solution for inferring such a model. while computationally efficient, they usually lack theoretical statistical guarantees with respect to the model. to address this issue we propose a projected stochastic gradient scheme that directly maximizes the log - likelihood. we prove the convergence of the proposed method when using importance sampling to estimate the gradient. specifically, we obtain a rate of convergence of $o(t^{-1/2} + n^{-1})$, with $t$ the number of iterations and $n$ the number of monte carlo draws. the latter follows from a novel descent lemma for non - convex $l$-smooth objective functions and biased random gradient estimates. we also demonstrate numerically the efficiency of our solution compared to its variational competitor. our method not only scales with respect to the number of observed samples but also provides access to the desirable properties of the maximum likelihood estimator. | arxiv:2410.00476 |
this study extends a prior investigation of limit shapes for partitions of integers, which was based on analysis of sums of geometric random variables. here we compute limit shapes for grand canonical gibbs ensembles of partitions of sets, which lead to the sums of poisson random variables. under mild monotonicity assumptions, we study all possible scenarios arising from different asymptotic behaviors of the energy, and also compute local limit shape profiles for cases in which the limit shape is a step function. | arxiv:2008.01851 |
high - dimensional piecewise stationary graphical models represent a versatile class for modelling time - varying networks arising in diverse application areas, including biology, economics, and social sciences. there has been recent work on offline detection and estimation of regime changes in the topology of sparse graphical models. however, the online setting remains largely unexplored, despite its high relevance to applications in sensor networks and other engineering monitoring systems, as well as financial markets. to that end, this work introduces a novel scalable online algorithm for detecting an unknown number of abrupt changes in the inverse covariance matrix of sparse gaussian graphical models with small delay. the proposed algorithm is based upon monitoring the conditional log - likelihood of all nodes in the network and can be extended to a large class of continuous and discrete graphical models. we also investigate asymptotic properties of our procedure under certain mild regularity conditions on the graph size, sparsity level, number of samples, and pre - and post - change topology of the network. numerical experiments on both synthetic and real data illustrate the good performance of the proposed methodology in terms of both computational and statistical efficiency across numerous experimental settings. | arxiv:1806.07870 |
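the monitoring idea in the abstract above can be illustrated with a one - dimensional stand - in : a cusum - style statistic that tracks how far the log - likelihood of incoming samples falls below its pre - change expectation, and flags a change once the gap accumulates past a threshold. this is only a sketch of the general principle, not the paper's algorithm ( which monitors the conditional log - likelihoods of all nodes of a sparse gaussian graphical model ) ; the univariate gaussian model and the `drift` and `threshold` values are illustrative assumptions.

```python
import math

def cusum_loglik_monitor(stream, mu0, sigma, drift=0.5, threshold=8.0):
    """Flag an abrupt change by monitoring the log-likelihood of each new
    sample under the pre-change model N(mu0, sigma^2)."""
    # expected per-sample log-likelihood before the change:
    # E[-(x - mu0)^2 / (2 sigma^2)] = -1/2
    expected = -0.5 * math.log(2 * math.pi * sigma ** 2) - 0.5
    stat = 0.0
    for t, x in enumerate(stream):
        loglik = -0.5 * math.log(2 * math.pi * sigma ** 2) \
                 - (x - mu0) ** 2 / (2 * sigma ** 2)
        # accumulate how far below its expectation the log-likelihood runs;
        # the drift term keeps the statistic at zero under the null
        stat = max(0.0, stat + (expected - loglik) - drift)
        if stat > threshold:
            return t  # detection time (with small delay)
    return None
```

on a stream whose mean jumps from 0 to 5 at index 50, the monitor fires at the first post - change sample, while a stream with no change is never flagged.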
we have measured the column density distribution function, f ( n ), at z = 0 using 21 - cm hi emission from galaxies selected from a blind hi survey. f ( n ) is found to be smaller and flatter at z = 0 than indicated by high - redshift measurements of damped lyman - alpha ( dla ) systems, consistent with the predictions of hierarchical galaxy formation. the derived dla number density per unit redshift, dn / dz = 0. 058, is in moderate agreement with values calculated from low - redshift qso absorption line studies. we use two different methods to determine the types of galaxies which contribute most to the dla cross - section : comparing the power law slope of f ( n ) to theoretical predictions and analysing contributions to dn / dz. we find that comparison of the power law slope cannot rule out spiral discs as the dominant galaxy type responsible for dla systems. analysis of dn / dz however, is much more discriminating. we find that galaxies with log m _ hi < 9. 0 make up 34 % of dn / dz ; irregular and magellanic types contribute 25 % ; galaxies with surface brightness > 24 mag arcsec ^ { - 2 } account for 22 % and sub - l * galaxies contribute 45 % to dn / dz. we conclude that a large range of galaxy types give rise to dla systems, not just large spiral galaxies as previously speculated. | arxiv:astro-ph/0305010 |
echoing recent calls to counter reliability and robustness concerns in machine learning via multiverse analysis, we present presto, a principled framework for mapping the multiverse of machine - learning models that rely on latent representations. although such models enjoy widespread adoption, the variability in their embeddings remains poorly understood, resulting in unnecessary complexity and untrustworthy representations. our framework uses persistent homology to characterize the latent spaces arising from different combinations of diverse machine - learning methods, ( hyper ) parameter configurations, and datasets, allowing us to measure their pairwise ( dis ) similarity and statistically reason about their distributions. as we demonstrate both theoretically and empirically, our pipeline preserves desirable properties of collections of latent representations, and it can be leveraged to perform sensitivity analysis, detect anomalous embeddings, or efficiently and effectively navigate hyperparameter search spaces. | arxiv:2402.01514 |
privacy - preservation policies are guidelines formulated to protect data providers' private data. previous privacy - preservation methodologies have addressed privacy in settings where data are permanently stored in repositories and disconnected from changing data - provider privacy preferences. this becomes evident as data moves to another data repository. hence, the need for data providers to control and flexibly update their existing privacy preferences in response to changing data usage remains an open problem. this paper proposes a blockchain - based methodology for preserving data providers' private and sensitive data. the research proposes to tightly couple each data provider's private attribute data element with its privacy preferences and a data - accessor data element into a privacy tuple. the implementation presents a framework of a tightly - coupled relational database and blockchains. this delivers a secure, tamper - resistant, and query - efficient platform for data management and query processing. the evaluation analysis from the implementation validates efficient processing of privacy - aware queries on the privacy infrastructure. | arxiv:2408.11263 |
in an imaginary conversation with guido altarelli, i express my views on the status of particle physics beyond the standard model and its future prospects. | arxiv:1710.07663 |
this paper considers the physical realizability condition for multi - level quantum systems having polynomial hamiltonian and multiplicative coupling with respect to several interacting boson fields. specifically, it generalizes a recent result the authors developed for two - level quantum systems. for this purpose, the algebra of su ( n ) was incorporated. as a consequence, the obtained condition is given in terms of the structure constants of su ( n ). | arxiv:1208.3516 |
self - propelled bacteria are marvels of nature with a potential to power dynamic materials and microsystems of the future. the challenge is in commanding their chaotic behavior. by dispersing swimming bacillus subtilis in a liquid - crystalline environment with spatially - varying orientation of the anisotropy axis, we demonstrate control over the distribution of bacteria, geometry and polarity of their trajectories. bacteria recognize subtle differences in liquid crystal deformations, engaging in bipolar swimming in regions of pure splay and bend but switching to unipolar swimming in mixed splay - bend regions. they differentiate topological defects, heading towards defects of positive topological charge and avoiding negative charges. sensitivity of bacteria to pre - imposed orientational patterns represents a new facet of the interplay between hydrodynamics and topology of active matter. | arxiv:1611.06286 |
benefiting from cloud computing, today ' s early - stage quantum computers can be remotely accessed via the cloud services, known as quantum - as - a - service ( qaas ). however, it poses a high risk of data leakage in quantum machine learning ( qml ). to run a qml model with qaas, users need to locally compile their quantum circuits including the subcircuit of data encoding first and then send the compiled circuit to the qaas provider for execution. if the qaas provider is untrustworthy, the subcircuit to encode the raw data can be easily stolen. therefore, we propose a co - design framework for preserving the data security of qml with the qaas paradigm, namely pristiq. by introducing an encryption subcircuit with extra secure qubits associated with a user - defined security key, the security of data can be greatly enhanced. and an automatic search algorithm is proposed to optimize the model to maintain its performance on the encrypted quantum data. experimental results on simulation and the actual ibm quantum computer both prove the ability of pristiq to provide high security for the quantum data while maintaining the model performance in qml. | arxiv:2404.13475 |
in this paper, we give sharp estimates for the degree of symmetry and the semi - simple degree of symmetry of certain four - dimensional fiber bundles by virtue of the rigidity theorem of harmonic maps due to schoen and yau. as a corollary of this estimate, we compute the degree of symmetry and the semi - simple degree of symmetry of $\mathbb{C}P^2 \times V$, where $V$ is a closed smooth manifold admitting a real analytic riemannian metric of non - positive curvature. in addition, by the albanese map, we obtain the sharp estimate of the degree of symmetry of a compact smooth manifold with some restrictions on its one - dimensional cohomology. | arxiv:math/0505646 |
in this article we review the observation, due originally to dwork, that the zeta - function of an arithmetic variety, defined originally over the field with p elements, is a superdeterminant. we review this observation in the context of a one - parameter family of quintic threefolds, and study the zeta - function as a function of the parameter $\phi$. owing to cancellations, the superdeterminant of an infinite matrix reduces to the ( ordinary ) determinant of a finite matrix, $u(\phi)$, corresponding to the action of the frobenius map on certain cohomology groups. the parameter - dependence of $u(\phi)$ is given by a relation $u(\phi) = e^{-1}(\phi^p)\, u(0)\, e(\phi)$ with $e(\phi)$ a wronskian matrix formed from the periods of the manifold. the periods are defined by series that converge for $|\phi|_p < 1$. the values of $\phi$ that are of interest are those for which $\phi^p = \phi$, so, for nonzero $\phi$, we have $|\phi|_p = 1$. we explain how the process of p - adic analytic continuation applies to this case. the matrix $u(\phi)$ breaks up into submatrices of rank 4 and rank 2, and we are able from this perspective to explain some of the observations that have been made previously by numerical calculation. | arxiv:0705.2056 |
we study relativistic hydrodynamics in the linear regime, based on mori's projection operator method. in relativistic hydrodynamics, it is considered that an ambiguity about the fluid velocity arises from the choice of a local rest frame : the landau and eckart frames. we find that the difference between the frames is not the choice of the local rest frame, but rather that of dynamic variables in the linear regime. we derive hydrodynamic equations in both frames by the projection operator method. we show that a natural derivation gives the linearized landau equation. also, we find that, even for the eckart frame, the slow dynamics is actually described by the dynamic variables for the landau frame. | arxiv:1210.1313 |
we explore the possibility of calibrating massive cluster ellipticals as cosmological standard rods. the method is based on the fundamental plane relation combined with a correction for luminosity evolution which is derived from the $mg-\sigma$ relation. principal caveats and sources of major errors are briefly discussed. we apply the described procedure to nine elliptical galaxies in two clusters at $z = 0.375$ and derive constraints on the cosmological model. for the best fitting $\lambda$-free cosmological model we obtain : $q_o \approx 0.1$, with 90 % confidence limits being $0 < q_o < 0.7$ ( the lower limit being due to the presence of matter in the universe ). if the inflationary scenario applies ( i. e., space has flat geometry ), then, for the best fitting model, matter and $\lambda$ contribute about equally to the critical cosmic density ( i. e. $\omega_m \approx \omega_\lambda \approx 0.5$ ). with 90 % confidence $\omega_\lambda$ should be smaller than 0.9. | arxiv:astro-ph/9711278 |
given a sequence $ ( t _ 1, t _ 2,... ) $ of random $ d \ times d $ matrices with nonnegative entries, suppose there is a random vector $ x $ with nonnegative entries, such that $ \ sum _ { i \ ge 1 } t _ i x _ i $ has the same law as $ x $, where $ ( x _ 1, x _ 2,... ) $ are i. i. d. copies of $ x $, independent of $ ( t _ 1, t _ 2,... ) $. then ( the law of ) $ x $ is called a fixed point of the multivariate smoothing transform. similar to the well - studied one - dimensional case $ d = 1 $, a function $ m $ is introduced, such that the existence of $ \ alpha \ in ( 0, 1 ] $ with $ m ( \ alpha ) = 1 $ and $ m ' ( \ alpha ) \ le 0 $ guarantees the existence of nontrivial fixed points. we prove the uniqueness of fixed points in the critical case $ m ' ( \ alpha ) = 0 $ and describe their tail behavior. this complements recent results for the non - critical multivariate case. moreover, we introduce the multivariate analogue of the derivative martingale and prove its convergence to a non - trivial limit. | arxiv:1409.7220 |
we report a systematic study of transport properties of nanosystems with charge density waves. we demonstrate how the presence of density waves modifies the current - voltage characteristics. on the other hand, we show that the density waves themselves are strongly affected by the applied voltage. this self - consistent problem is solved within the formalism of the nonequilibrium green functions. the conventional charge density waves occur only for specific, periodically distributed ranges of the voltage. apart from the low voltage regime, they are incommensurate and the corresponding wave vectors decrease discontinuously when the voltage increases. | arxiv:cond-mat/0605142 |
we report on the first fully differential calculation for double higgs boson production through gluon fusion in hadron collisions up to next - to - next - to - leading order ( nnlo ) in qcd perturbation theory. the calculation is performed in the heavy - top limit of the standard model, and in the phenomenological results we focus on pp collisions at 14 tev. we present differential distributions through nnlo for various observables including the transverse - momentum and rapidity distributions of the two higgs bosons. nnlo corrections are at the level of 10 % - 25 % with respect to the next - to - leading order ( nlo ) prediction with a residual scale uncertainty of 5 % - 15 % and an overall mild phase - space dependence. only at nnlo the perturbative expansion starts to converge yielding overlapping scale uncertainty bands between nnlo and nlo in most of the phase - space. the calculation includes nlo predictions for pp - > hh + jet + x. corrections to the corresponding distributions exceed 50 % with a residual scale dependence of 20 % - 30 %. | arxiv:1606.09519 |
this paper proposes a new class of mass or energy conservative numerical schemes for the generalized benjamin - ono ( bo ) equation on the whole real line with arbitrarily high - order accuracy in time. the spatial discretization is achieved by the pseudo - spectral method with the rational basis functions, which can be implemented by the fast fourier transform ( fft ) with the computational cost $ \ mathcal { o } ( n \ log ( n ) ) $. by reformulating the spatial discretized system into the different equivalent forms, either the spatial semi - discretized mass or energy can be preserved exactly under the continuous time flow. combined with the symplectic runge - kutta, with or without the scalar auxiliary variable reformulation, the fully discrete energy or mass conservative scheme can be constructed with arbitrarily high - order temporal accuracy, respectively. our numerical results show the conservation of the proposed schemes, and also the superior accuracy and stability to the non - conservative ( leap - frog ) scheme. | arxiv:2108.12975 |
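a toy illustration of the conservation property that symplectic runge - kutta methods provide : the implicit midpoint rule ( the simplest gauss method ) preserves the quadratic energy of a harmonic oscillator exactly, up to round - off. this is a sketch of the general mechanism only, not the paper's scheme for the benjamin - ono equation ; the oscillator, step size, and function names are assumptions for illustration.

```python
def midpoint_step(q, p, dt, omega2=1.0):
    """One step of the implicit midpoint rule for the harmonic oscillator
    q'' = -omega2 * q; for this linear system the implicit stage equations
    can be solved in closed form."""
    a = dt / 2.0
    den = 1.0 + a * a * omega2
    qm = (q + a * p) / den            # midpoint value of q
    pm = (p - a * omega2 * q) / den   # midpoint value of p
    return q + dt * pm, p - dt * omega2 * qm

def energy(q, p, omega2=1.0):
    return 0.5 * p * p + 0.5 * omega2 * q * q
```

iterating the step many times leaves the energy unchanged to machine precision, in contrast to non - conservative schemes whose energy drifts.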
artificial intelligence ( ai ) models deployed in production frequently face challenges in maintaining their performance in non - stationary environments. this issue is particularly noticeable in medical settings, where temporal dataset shifts often occur. these shifts arise when the distributions of training data differ from those of the data encountered during deployment over time. further, new labeled data to continuously retrain ai is not typically available in a timely manner due to data access limitations. to address these challenges, we propose a proactive self - adaptive ai approach, or pro - adaptive, where we model the temporal trajectory of ai parameters, allowing us to short - term forecast parameter values. to this end, we use polynomial spline bases, within an extensible functional data analysis framework. we validate our methodology with a logistic regression model addressing prior probability shift, covariate shift, and concept shift. this validation is conducted on both a controlled simulated dataset and a publicly available real - world covid - 19 dataset from mexico, with various shifts occurring between 2020 and 2024. our results indicate that this approach enhances the performance of ai against shifts compared to baseline stable models trained at different time distances from the present, without requiring updated training data. this work lays the foundation for pro - adaptive ai research against dynamic, non - stationary environments, being compatible with data protection, in resilient ai production environments for health. | arxiv:2504.21565 |
the technology acceptance model is essential to analyze the factors affecting customers' behavior towards online food delivery services. it is also a widely adopted theoretical model to demonstrate the acceptance of new technology fields. the foundation of tam is a series of concepts that clarifies and predicts people's behaviors with their beliefs, attitudes, and behavioral intention. in tam, perceived ease of use and perceived usefulness, considered general beliefs, play a more vital role than salient beliefs in attitudes toward utilizing a particular technology. = = alternative models = = the mpt model : independent of tam, scherer developed the matching person and technology model in 1986 as part of her national science foundation - funded dissertation research. the mpt model is fully described in her 1993 text, " living in the state of stuck ", now in its 4th edition. the mpt model has accompanying assessment measures used in technology selection and decision - making, as well as outcomes research on differences among technology users, non - users, avoiders, and reluctant users. the hmsam : tam has been effective for explaining many kinds of systems use ( i. e. e - learning, learning management systems, webportals, etc. ) ( fathema, shannon, ross, 2015 ; fathema, ross, witte, 2014 ). however, tam is not ideally suited to explain adoption of purely intrinsic or hedonic systems ( e. g., online games, music, learning for pleasure ). thus, an alternative model to tam, called the hedonic - motivation system adoption model ( hmsam ), was proposed for these kinds of systems by lowry et al. hmsam is designed to improve the understanding of hedonic - motivation system ( hms ) adoption. hms are systems used primarily to fulfill users' intrinsic motivations, such as for online gaming, virtual worlds, online shopping, learning / education, online dating, digital music repositories, social networking, online pornography, gamified systems, and for general gamification. 
instead of a minor tam extension, hmsam is an hms - specific system acceptance model based on an alternative theoretical perspective, which is in turn grounded in flow - based cognitive absorption ( ca ). hmsam may be especially useful in understanding gamification elements of systems use. extended tam : several studies have proposed extensions of the original tam ( davis, 1989 ) by adding external variables to it, with the aim of exploring the effects of external factors on users' attitude, behavioral intention, and actual use of technology. several factors have been examined so far. | https://en.wikipedia.org/wiki/Technology_acceptance_model |
we summarize the main results from our scuba survey of lyman - break galaxies ( lbgs ) at z ~ 3. analysis of our sample of lbgs reveals a mean flux of $s_{850} = 0.6 \pm 0.2$ mjy, while simple models of emission based on the uv properties predict a mean flux about twice as large. known populations of lbgs are expected to contribute flux to the weak sub - mm source portion of the far - ir background, but are not likely to comprise the bright source ( $s_{850} > 5$ mjy ) end of the scuba - detected source count. the detection of the lbg, westphal - mm8, at 1.9 mjy suggests that deeper observations of individual lbgs in our sample could uncover detections at similar levels, consistent with our uv - based predictions. by the same token, many sub - mm selected sources with $s_{850} < 2$ mjy could be lbgs. the data are also consistent with the far - ir / $\beta$ relation holding at z = 3. | arxiv:astro-ph/0009152 |
sum rules provide an important characterization of electromagnetic and weak transitions in atomic nuclei. we focus on the non - energy - weighted sum rule ( newsr ), or total strength, and the energy - weighted sum rule ( ewsr ) ; the ratio of the ewsr to the newsr is the centroid or average energy of transition strengths from a nuclear initial state to all allowed final states. these sum rules can be expressed as expectation values of operators, in the case of the ewsr a double commutator. while most prior applications of the double commutator have been to special cases, we derive general formulas for matrix elements of both operators in a shell - model framework ( occupation space ), given the input matrix elements for the nuclear hamiltonian and for the transition operator. with these new formulas, we easily evaluate centroids of transition strength functions, with no need to calculate daughter states. we apply this simple tool to a number of nuclides and demonstrate that the sum rules follow smooth secular behavior as a function of initial energy, as well as compare the electric dipole ( e1 ) sum rule against the famous thomas - reiche - kuhn version. we also find surprising systematic behaviors for ground - state electric quadrupole ( e2 ) centroids in the $sd$-shell. | arxiv:1710.03187 |
in this work, we introduce the law of closest approach which is derived from the properties of conic orbits and can be considered an addendum to the laws of kepler. it states that on the closest approach, the distance between the objects is minimal and the velocity vector is perpendicular to the position vector with maximum speed. the ratio of twice the kinetic energy to the negative potential energy is equal to the eccentricity plus one. the advantage of this law is that both speed and position are at extremum making the calculation of the eccentricity more robust. | arxiv:2409.09097 |
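the stated ratio can be checked numerically from the vis - viva equation : at perihelion $r_p = a(1-e)$ and $v_p^2 = \mu(1+e)/(a(1-e))$, so $2\,ke/(-pe) = v_p^2 r_p/\mu = 1+e$. the sketch below assumes a unit gravitational parameter and a bound elliptical orbit ; the function name is chosen here for illustration.

```python
import math

def closest_approach_ratio(a, e, mu=1.0):
    """At the closest approach (perihelion) of a Kepler orbit with
    semi-major axis a, eccentricity e, and gravitational parameter mu,
    return 2*KE / (-PE), which the law above states equals e + 1."""
    r_p = a * (1.0 - e)                      # perihelion distance
    v_p2 = mu * (1.0 + e) / (a * (1.0 - e))  # vis-viva speed squared there
    kinetic = 0.5 * v_p2                     # per unit mass
    potential = -mu / r_p                    # per unit mass
    return 2.0 * kinetic / (-potential)
```

for any $0 \le e < 1$ the ratio evaluates to $e + 1$, independent of the semi - major axis.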
many structured prediction and reasoning tasks can be framed as program synthesis problems, where the goal is to generate a program in a domain - specific language ( dsl ) that transforms input data into the desired output. unfortunately, purely neural approaches, such as large language models ( llms ), often fail to produce fully correct programs in unfamiliar dsls, while purely symbolic methods based on combinatorial search scale poorly to complex problems. motivated by these limitations, we introduce a hybrid approach, where llm completions for a given task are used to learn a task - specific, context - free surrogate model, which is then used to guide program synthesis. we evaluate this hybrid approach on three domains, and show that it outperforms both unguided search and direct sampling from llms, as well as existing program synthesizers. | arxiv:2405.15880 |
the alqueva multi - purpose project ( efma ) is a massive abduction and storage infrastructure system in the alentejo, which has a water quality monitoring network with almost thousands of water quality stations distributed across three subsystems : alqueva, pedrogão, and ardila. identification of pollution sources in complex infrastructure systems, such as the efma, requires recognition of water flow direction and delimitation of areas being drained to specific sampling points. the transfer channels in the efma infrastructure artificially connect several water bodies that do not share drainage basins, which further complicates the interpretation of water quality data because the water does not flow exclusively downstream and is not restricted to specific basins. the existing user - friendly gis tools do not facilitate the exploration and visualisation of water quality data in spatial - temporal dimensions, such as defining temporal relationships between monitoring campaigns, nor do they allow the establishment of topological and hydrological relationships between different sampling points. this thesis work proposes a framework capable of aggregating many types of information in a gis environment and visualising large water quality - related datasets, together with a graph data model to integrate and relate water quality between monitoring stations and land use. the graph model makes it possible to exploit the relationship between water quality in a watercourse and reservoirs associated with infrastructures. the graph data model and the developed framework demonstrated encouraging results and proved preferable to relational databases. | arxiv:2402.04884 |
we explore possibilities of collapse and star formation in population iii objects exposed to the external ultraviolet background ( uvb ) radiation. assuming spherical symmetry, we solve self - consistently radiative transfer of photons, non - equilibrium h2 chemistry, and gas hydrodynamics. although the uvb does suppress the formation of low mass objects, the negative feedback turns out to be weaker than previously suggested. in particular, the cut - off scale of collapse drops significantly below the virial temperature $10^4$ k at weak uv intensities, due to both self - shielding of the gas and h2 cooling. clouds above this cut - off tend to contract highly dynamically, further promoting self - shielding and h2 formation. for plausible radiation intensities and spectra, the collapsing gas can cool efficiently to temperatures well below $10^4$ k before becoming rotationally supported, and the final h2 fraction reaches $10^{-3}$. our results imply that star formation can take place in low mass objects collapsing in the uvb. the threshold baryon mass for star formation is $\sim 10^9$ solar masses for clouds collapsing at redshifts $z \lesssim 3$, but drops significantly at higher redshifts. in a conventional cold dark matter universe, the latter coincides roughly with that of the $1\sigma$ density fluctuations. objects near and above this threshold can thus constitute ' building blocks ' of luminous structures, and we discuss their links to dwarf spheroidal / elliptical galaxies and faint blue objects. these results suggest that the uvb can play a key role in regulating the star formation history of the universe. | arxiv:astro-ph/0105293 |
within a simple model context, the sensitivity and stability of the thermohaline circulation to finite amplitude perturbations is studied. a new approach is used to tackle this nonlinear problem. the method is based on the computation of the so - called conditional nonlinear optimal perturbation ( cnop ) which is a nonlinear generalization of the linear singular vector approach ( lsv ). it is shown that linearly stable thermohaline circulation states can become nonlinearly unstable and the properties of the perturbations with optimal nonlinear growth are determined. an asymmetric nonlinear response to perturbations exists with respect to the sign of finite amplitude freshwater perturbations, on both thermally dominated and salinity dominated thermohaline flows. this asymmetry is due to the nonlinear interaction of the perturbations through advective processes. | arxiv:physics/0702083 |
a novel method for robust estimation, called graph - cut ransac, gc - ransac in short, is introduced. to separate inliers and outliers, it runs the graph - cut algorithm in the local optimization ( lo ) step which is applied when a so - far - the - best model is found. the proposed lo step is conceptually simple, easy to implement, globally optimal and efficient. gc - ransac is shown experimentally, both on synthesized tests and real image pairs, to be more geometrically accurate than state - of - the - art methods on a range of problems, e. g. line fitting, homography, affine transformation, fundamental and essential matrix estimation. it runs in real - time for many problems at a speed approximately equal to that of the less accurate alternatives ( in milliseconds on standard cpu ). | arxiv:1706.00984 |
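the lo - ransac template described above can be sketched for line fitting. the graph - cut inlier selection is the paper's contribution and is not reproduced here ; as a stand - in, the lo step below simply refits the model to the current inliers by orthogonal regression whenever a so - far - the - best model is found. all names, the threshold, and the iteration budget are illustrative assumptions.

```python
import math
import random

def fit_line(p, q):
    # line through two points as (a, b, c) with a*x + b*y + c = 0, a^2 + b^2 = 1
    (x1, y1), (x2, y2) = p, q
    a, b = y2 - y1, x1 - x2
    n = math.hypot(a, b)
    a, b = a / n, b / n
    return a, b, -(a * x1 + b * y1)

def refit_orthogonal(pts):
    # LO-step stand-in: total least squares (orthogonal regression) on inliers
    mx = sum(x for x, _ in pts) / len(pts)
    my = sum(y for _, y in pts) / len(pts)
    sxx = sum((x - mx) ** 2 for x, _ in pts)
    syy = sum((y - my) ** 2 for _, y in pts)
    sxy = sum((x - mx) * (y - my) for x, y in pts)
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)  # direction of the line
    a, b = -math.sin(theta), math.cos(theta)      # unit normal
    return a, b, -(a * mx + b * my)

def lo_ransac_line(points, threshold, iters=200, seed=0):
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        model = fit_line(*rng.sample(points, 2))
        inliers = [p for p in points
                   if abs(model[0] * p[0] + model[1] * p[1] + model[2]) < threshold]
        if len(inliers) > len(best_inliers):
            # local optimization, applied only to so-far-the-best models
            # (GC-RANSAC runs graph cut here instead of a plain refit)
            model = refit_orthogonal(inliers)
            inliers = [p for p in points
                       if abs(model[0] * p[0] + model[1] * p[1] + model[2]) < threshold]
            best_model, best_inliers = model, inliers
    return best_model, best_inliers
```

on ten collinear points plus three gross outliers, the refit recovers the exact line and the inlier set excludes all outliers.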
markov models have been widely used to represent and analyse user web navigation data. in previous work we have proposed a method to dynamically extend the order of a markov chain model and a complementary method for assessing the predictive power of such a variable length markov chain. herein, we review these two methods and propose a novel method for measuring the ability of a variable length markov model to summarise user web navigation sessions up to a given length. while the summarisation ability of a model is important to enable the identification of user navigation patterns, the ability to make predictions is important in order to foresee the next link choice of a user after following a given trail so as, for example, to personalise a web site. we present an extensive experimental evaluation providing strong evidence that prediction accuracy increases linearly with summarisation ability. | arxiv:cs/0606115 |
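a minimal sketch of the idea of a variable - length markov model for navigation trails : count every context up to a maximum order and, at prediction time, back off from the longest matching context to shorter ones. this illustrates the general technique only, not the authors' dynamic order - extension algorithm ; the class and parameter names are assumptions.

```python
from collections import defaultdict

class VLMChain:
    """Variable-length Markov model of navigation sessions: counts every
    context up to max_order and predicts via the longest matching context."""
    def __init__(self, max_order=3):
        self.max_order = max_order
        self.counts = defaultdict(lambda: defaultdict(int))

    def fit(self, sessions):
        for s in sessions:
            for i, page in enumerate(s):
                # register this page under every context of length 0..max_order
                for k in range(0, min(self.max_order, i) + 1):
                    ctx = tuple(s[i - k:i])
                    self.counts[ctx][page] += 1

    def predict(self, trail):
        trail = tuple(trail[-self.max_order:])
        for k in range(len(trail), -1, -1):  # back off to shorter contexts
            ctx = trail[len(trail) - k:]
            if ctx in self.counts:
                nxt = self.counts[ctx]
                return max(nxt, key=nxt.get)
        return None
```

after fitting on a few sessions, the model predicts the most frequent continuation of the longest context it has seen, falling back to shorter contexts for unseen trails.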
the richness of the universe teaches us modesty and guides us to search for both primitive and intelligent forms of life elsewhere without prejudice. | arxiv:1706.05959 |
in this paper, we are concerned with the isolated singular solutions of semi - linear elliptic equations involving the hardy - leray potential \begin{equation} \label{0} -\Delta u + \frac{\mu}{|x|^2}\, u = u^p \quad \text{in } \Omega \setminus \{0\}, \qquad u = 0 \quad \text{on } \partial\Omega. \end{equation} we classify the isolated singularities and obtain the existence and the stability of positive solutions of ( \ref{0} ). our results are based on the study of a nonhomogeneous hardy problem in a new distributional sense. | arxiv:1706.01793 |
the concept of an exciton as a quasiparticle that represents collective excited states was originally adapted from solid - state physics and has been successfully applied to molecular aggregates by relying on the well - established limits of the wannier exciton and the frenkel exciton. however, the study of excitons in more complex chemical systems and solid materials over the past two decades has made it clear that simple concepts based on wannier or frenkel excitons are not sufficient to describe detailed excitonic behavior, especially in nano - structured solid materials, multichromophoric macromolecules, and complex molecular aggregates. in addition, important effects such as vibronic coupling, the influence of charge - transfer ( ct ) components, spin - state interconversion, and electronic correlation, which had long been studied but not fully understood, have turned out to play a central role in many systems. this has motivated new experimental approaches and theoretical studies of increasing sophistication. this article provides an overview of works addressing these issues that were published for a special topic of the journal of chemical physics on " excitons : energetics and spatio - temporal dynamics " and discusses their implications. | arxiv:2111.06460 |
the present paper provides exact mathematical expressions for the high - order moments of spiking activity in a recurrently - connected network of linear hawkes processes. it extends previous studies that have explored the case of a ( linear ) hawkes network driven by deterministic intensity functions to the case of a stimulation by external inputs ( rate functions or spike trains ) with arbitrary correlation structure. our approach describes the spatio - temporal filtering induced by the afferent and recurrent connectivities ( with arbitrary synaptic response kernels ) using operators acting on the input moments. this algebraic viewpoint provides intuition about how the network ingredients shape the input - output mapping for moments, as well as cumulants. we also show using numerical simulation that our results hold for neurons with refractoriness implemented by self - inhibition, provided the corresponding negative feedback for each neuron only mildly alters its mean firing probability. | arxiv:1810.09520 |
we explore two saddle point inflationary scenarios in the context of higher order corrections related to different generalisations of general relativity. firstly, we deal with jordan frame starobinsky potential, for which we identify a portion of a parameter space of inflection point inflation, which can accommodate all the experimental results. secondly, we analyse higgs inflation and more specifically the influence of non - renormalisible terms on the standard quartic potential. all results were verified with the planck 2015 data. | arxiv:1509.00031 |
we conjecture a topology changing transition in m - theory on a non - compact asymptotically conical spin ( 7 ) manifold, where a 5 - sphere collapses and a cp ( 2 ) bolt grows. we argue that the transition may be understood as the condensation of m5 - branes wrapping the 5 - sphere. upon reduction to ten dimensions, it has a physical interpretation as a transition of d6 - branes lying on calibrated submanifolds of flat space. in yet another guise, it may be seen as a geometric transition between two phases of type iia string theory on a g _ 2 holonomy manifold with either wrapped d6 - branes, or background ramond - ramond flux. this is the first non - trivial example of a topology changing transition with only 1 / 16 supersymmetry. | arxiv:hep-th/0207244 |
we propose to use quantum interferences to improve the accuracy of the measurement of the free fall acceleration g of antihydrogen in the gbar experiment. this method uses most antiatoms prepared in the experiment and it is simple in its principle as interferences between gravitational quantum states are readout without transitions between them. we use a maximum likelihood method for estimating the value of g and assess the accuracy of this estimation by a monte - carlo simulation. we find that the accuracy is improved by approximately three orders of magnitude with respect to the classical timing technique planned for the current design of the experiment. | arxiv:1903.10788 |
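The abstract above estimates g by a maximum likelihood fit to simulated data. The toy sketch below shows the same estimation pattern on hypothetical data: it simulates noisy free-fall times for a known g, then recovers g by maximising a Gaussian log-likelihood over a grid. The drop height, noise level, and grid are invented for illustration and have nothing to do with the GBAR apparatus.

```python
import math
import random

def log_likelihood(g, times, h=0.5, sigma=0.01):
    """Gaussian log-likelihood of measured fall times for a candidate g."""
    mu = math.sqrt(2 * h / g)  # ideal fall time from height h
    return sum(-0.5 * ((t - mu) / sigma) ** 2 for t in times)

random.seed(0)
g_true, h, sigma = 9.81, 0.5, 0.01
times = [math.sqrt(2 * h / g_true) + random.gauss(0, sigma) for _ in range(500)]

# maximum likelihood estimate by grid search over candidate g values
grid = [9.0 + 0.001 * k for k in range(2001)]   # 9.000 .. 11.000
g_hat = max(grid, key=lambda g: log_likelihood(g, times, h, sigma))
print(round(g_hat, 2))
```

In practice the accuracy of such an estimator is assessed exactly as the paper does: repeat the simulation many times (a Monte-Carlo study) and look at the spread of the recovered estimates.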
we show that apparently conflicting results on the time - variation of the measured cl - 36 decay rates can be readily understood as arising from solar variability beyond the simple earth - sun distance model, and hence are not an indication of instrumental effects. | arxiv:1210.3334 |
negative sampling ( ns ) is widely used in knowledge graph embedding ( kge ), which aims to generate negative triples to make a positive - negative contrast during training. however, existing ns methods are unsuitable when multi - modal information is considered in kge models. they are also inefficient due to their complex design. in this paper, we propose modality - aware negative sampling ( mans ) for multi - modal knowledge graph embedding ( mmkge ) to address the mentioned problems. mans could align structural and visual embeddings for entities in kgs and learn meaningful embeddings to perform better in multi - modal kge while keeping lightweight and efficient. empirical results on two benchmarks demonstrate that mans outperforms existing ns methods. meanwhile, we make further explorations about mans to confirm its effectiveness. | arxiv:2304.11618 |
in the present work, we consider weakly - singular integral equations arising from linear second - order strongly - elliptic pde systems with constant coefficients, including, e. g., linear elasticity. we introduce a general framework for optimal convergence of adaptive galerkin bem. we identify certain abstract properties for the underlying meshes, the corresponding mesh - refinement strategy, and the ansatz spaces that guarantee convergence at optimal algebraic rate of an adaptive algorithm driven by the weighted - residual error. these properties are satisfied, e. g., for discontinuous piecewise polynomials on simplicial meshes as well as certain ansatz spaces used for isogeometric analysis. technical contributions include local inverse estimates for the ( non - local ) boundary integral operators associated to the pde system. | arxiv:2004.07762 |
in the previous works of the authors, a step - by - step algorithm fop which uses any fixed order of points in the projective plane $ \ mathrm { pg } ( 2, q ) $ is proposed to construct small complete arcs. in each step, the algorithm adds to a current arc the first point in the fixed order not lying on the bisecants of the arc. the algorithm is based on the intuitive postulate that $ \ mathrm { pg } ( 2, q ) $ contains a sufficient number of relatively small complete arcs. also, in the previous papers, it is shown that the type of order on the points of $ \ mathrm { pg } ( 2, q ) $ is not relevant. a complete lexiarc in $ \ mathrm { pg } ( 2, q ) $ is a complete arc obtained by the algorithm fop using the lexicographical order of points. in this work, we collect and analyze the sizes of complete lexiarcs in the following regions : \ begin { align * } & \ textbf { all } ~ q \ le 321007, ~ q \ mbox { prime power } ; \\ & 15 \ mbox { sporadic $ q $ ' s in the interval } [ 323761 \ ldots 430007 ], \ mbox { see ( 1. 10 ) }. \ end { align * } in the work [ 9 ], the smallest known sizes of complete arcs in $ \ mathrm { pg } ( 2, q ) $ are collected for all $ q \ leq 160001 $, $ q $ prime power. the sizes of complete arcs, collected in this work and in [ 9 ], provide the following upper bounds on the smallest size $ t _ { 2 } ( 2, q ) $ of a complete arc in the projective plane $ \ mathrm { pg } ( 2, q ) $ : \ begin { align * } t _ { 2 } ( 2, q ) & < 0. 998 \ sqrt { 3q \ ln q } < 1. 729 \ sqrt { q \ ln q } & \ mbox { for } & & 7 & \ le q \ le 160001 ; \\ t _ { 2 } ( 2, q ) & < 1. 05 \ sqrt { 3q \ ln q } < 1. 819 \ sqrt { q \ ln q } & \ mbox { for } & & 7 \ end { align * } | arxiv:1404.0469 |
we calculate the first relativistic corrections to the kompaneets equation for the evolution of the photon frequency distribution brought about by compton scattering. the lorentz invariant boltzmann equation for electron - photon scattering is first specialized to isotropic electron and photon distributions, the squared scattering amplitude and the energy - momentum conserving delta function are each expanded to order v ^ 4 / c ^ 4, averages over the directions of the electron and photon momenta are then carried out, and finally an integration over the photon energy yields our fokker - planck equation. the kompaneets equation, which involves only first - and second - order derivatives with respect to the photon energy, results from the order v ^ 2 / c ^ 2 terms, while the first relativistic corrections of order v ^ 4 / c ^ 4 introduce third - and fourth - order derivatives. we emphasize that our result holds when neither the electrons nor the photons are in thermal equilibrium ; two effective temperatures characterize a general, non - thermal electron distribution. when the electrons are in thermal equilibrium our relativistic fokker - planck equation is in complete agreement with the most recent published results, but both disagree with older work. | arxiv:1201.5606 |
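For context, the lowest-order (non-relativistic) Kompaneets equation that these corrections extend has the standard textbook form, written for the photon occupation number $n(x)$ with dimensionless frequency $x = h\nu/k_B T_e$ and Compton parameter $y$; this is the classical equation, not the paper's corrected one:

$$
\frac{\partial n}{\partial y} \;=\; \frac{1}{x^{2}}\,\frac{\partial}{\partial x}\!\left[\, x^{4}\left( \frac{\partial n}{\partial x} + n + n^{2} \right) \right],
\qquad y = \int \frac{k_{B}T_{e}}{m_{e}c^{2}}\; n_{e}\,\sigma_{T}\, c\; dt .
$$

The bracketed term contains only first-order derivatives, so the right-hand side involves at most second-order derivatives in $x$, which is exactly the structure the abstract attributes to the order $v^2/c^2$ terms.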
in this paper, the squared eigenfunction symmetries for the btl and ctl hierarchies are explicitly constructed with the suitable modification of the ones for the tl hierarchy, by considering the btl and ctl constraints. also the connections with the corresponding additional symmetries are investigated : the squared eigenfunction symmetry generated by the wave function can be viewed as the generating function for the additional symmetries. | arxiv:1302.3070 |
in this work, we investigate the problem of model - agnostic zero - shot classification ( ma - zsc ), which refers to training non - specific classification architectures ( downstream models ) to classify real images without using any real images during training. recent research has demonstrated that generating synthetic training images using diffusion models provides a potential solution to address ma - zsc. however, the performance of this approach currently falls short of that achieved by large - scale vision - language models. one possible explanation is a potential significant domain gap between synthetic and real images. our work offers a fresh perspective on the problem by providing initial insights that ma - zsc performance can be improved by improving the diversity of images in the generated dataset. we propose a set of modifications to the text - to - image generation process using a pre - trained diffusion model to enhance diversity, which we refer to as our $ \ textbf { bag of tricks } $. our approach shows notable improvements in various classification architectures, with results comparable to state - of - the - art models such as clip. to validate our approach, we conduct experiments on cifar10, cifar100, and eurosat, which is particularly difficult for zero - shot classification due to its satellite image domain. we evaluate our approach with five classification architectures, including resnet and vit. our findings provide initial insights into the problem of ma - zsc using diffusion models. all code will be available on github. | arxiv:2302.03298 |
text as an abbreviation of " for all " or " for every ". ∃ 1. denotes existential quantification and is read " there exists... such that ". if e is a logical predicate, ∃x e means that there exists at least one value of x for which e is true. 2. often used in plain text as an abbreviation of " there exists ". ∃! denotes uniqueness quantification, that is, ∃!x p means " there exists exactly one x such that p ( is true ) ". in other words, ∃!x p ( x ) is an abbreviation of ∃x ( p ( x ) ∧ ¬∃y ( p ( y ) ∧ y ≠ x ) ). ⇒ 1. denotes material conditional, and is read as " implies ". if p and q are logical predicates, p ⇒ q means that if p is true, then q is also true. thus, p ⇒ q is logically equivalent with q ∨ ¬p. 2. often used in plain text as an abbreviation of " implies ". ⇔ 1. denotes logical equivalence, and is read " is equivalent to " or " if and only if ". if p and q are logical predicates, p ⇔ q is thus an abbreviation of ( p ⇒ q ) ∧ ( q ⇒ p ), or of ( p ∧ q ) ∨ ( ¬p ∧ ¬q ). 2. often used in plain text as an abbreviation of " if and only if ". ⊤ ( tee ) 1. denotes the logical predicate always true. 2. denotes also the truth value | https://en.wikipedia.org/wiki/Glossary_of_mathematical_symbols |
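The quantifiers in the glossary entry above have direct finite-domain counterparts in code: over a finite set, ∀ is `all`, ∃ is `any`, and ∃! is "exactly one witness". The sketch below is an illustrative finite-domain reading only; the logical symbols themselves quantify over arbitrary (possibly infinite) domains.

```python
def forall(pred, xs):
    """Finite-domain reading of ∀x∈xs. pred(x)."""
    return all(pred(x) for x in xs)

def exists(pred, xs):
    """Finite-domain reading of ∃x∈xs. pred(x)."""
    return any(pred(x) for x in xs)

def exists_unique(pred, xs):
    """∃!x∈xs. pred(x): exactly one witness, matching the expansion
    ∃x ( p(x) ∧ ¬∃y ( p(y) ∧ y ≠ x ) )."""
    return sum(1 for x in xs if pred(x)) == 1

xs = range(10)
print(forall(lambda x: x < 10, xs))                        # True
print(exists(lambda x: x == 7, xs))                        # True
print(exists_unique(lambda x: x % 7 == 0 and x > 0, xs))   # True (only 7)
```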
in a series of recent works, boyd, diaconis, and their co - authors have introduced a semidefinite programming approach for computing the fastest mixing markov chain on a graph of allowed transitions, given a target stationary distribution. in this paper, we show that standard mixing - time analysis techniques - - variational characterizations, conductance, canonical paths - - can be used to give simple, nontrivial lower and upper bounds on the fastest mixing time. to test the applicability of this idea, we consider several detailed examples including the glauber dynamics of the ising model - - and get sharp bounds. | arxiv:math/0409429 |
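The mixing-time quantity bounded in the abstract above can be computed exactly for small chains by iterating the transition matrix and tracking total-variation distance to the stationary distribution. The sketch below does this for a lazy random walk on a 4-cycle; the example chain and the threshold are illustrative choices, not taken from the paper.

```python
def step(dist, P):
    """One step of the chain: push a distribution through P."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

def tv_distance(p, q):
    """Total-variation distance between two distributions."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def mixing_time(P, pi, eps=0.2):
    """Smallest t at which the worst-case (over starting states)
    TV distance to the stationary distribution pi drops below eps."""
    n = len(P)
    dists = [[1.0 if j == i else 0.0 for j in range(n)] for i in range(n)]
    t = 0
    while max(tv_distance(d, pi) for d in dists) > eps:
        dists = [step(d, P) for d in dists]
        t += 1
    return t

# lazy random walk on a 4-cycle; stationary distribution is uniform
P = [[0.5 if j == i else 0.25 if abs(i - j) in (1, 3) else 0.0
      for j in range(4)] for i in range(4)]
pi = [0.25] * 4
print(mixing_time(P, pi))  # 2
```

The fastest-mixing-chain problem of Boyd and Diaconis asks which choice of edge weights (here the 0.5/0.25 entries) minimises this quantity, subject to the fixed stationary distribution.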
recognising continuous emotions and action unit ( au ) intensities from face videos requires a spatial and temporal understanding of expression dynamics. existing works primarily rely on 2d face appearances to extract such dynamics. this work focuses on a promising alternative based on parametric 3d face shape alignment models, which disentangle different factors of variation, including expression - induced shape variations. we aim to understand how expressive 3d face shapes are in estimating valence - arousal and au intensities compared to the state - of - the - art 2d appearance - based models. we benchmark four recent 3d face alignment models : expnet, 3ddfa - v2, deca, and emoca. in valence - arousal estimation, expression features of 3d face models consistently surpassed previous works and yielded an average concordance correlation of. 739 and. 574 on sewa and avec 2019 ces corpora, respectively. we also study how 3d face shapes performed on au intensity estimation on bp4d and disfa datasets, and report that 3d face features were on par with 2d appearance features in aus 4, 6, 10, 12, and 25, but not the entire set of aus. to understand this discrepancy, we conduct a correspondence analysis between valence - arousal and aus, which points out that accurate prediction of valence - arousal may require the knowledge of only a few aus. | arxiv:2207.01113 |
the analogy between yetter ' s deformation theory for ( lax ) monoidal functors and gerstenhaber ' s deformation theory for associative algebras is solidified by showing that under reasonable conditions the category of functors with an action of a lax monoidal functor is abelian, that an analogue of the hochschild cohomology of an algebra with coefficients in a bimodule exists for monoidal functors, and is given by right derived functors. the deformation cohomology of a monoidal natural transformation is shown to be a special case. | arxiv:math/0112307 |
cascaded phase shifters ( cpss ) based on silicon photonics integrated chips play important roles in quantum information processing tasks. owing to an increase in the scale of silicon photonics chips, the time required to calibrate various cpss has increased. we propose a pairwise scan method for rapidly calibrating cpss by introducing equivalent mach zehnder interferometer structures and a reasonable constraint of the initial relative phase. the calibration can be nearly completed when the scanning process is finished, and only a little calculation is required. to achieve better performance, the key components, thermal optical phase shifter and 2 * 2 50 / 50 multimode interference coupler, were simulated and optimized to prevent thermal crosstalk and ensure a good balance. a 6 - cpss structure in a packaged silicon photonics chip under different temperature was used to verify the rapid pairwise scan method, and a fidelity of 99. 97 % was achieved. | arxiv:2412.03951 |
pi ^ + \ pi ^ - ) _ { \ sigma ( f _ 0 ) } ( k ^ + k ^ - ) _ { f _ 0 } $, and $ b _ s ^ 0 \ to ( k ^ + k ^ - ) _ { f _ 0 } ( k ^ + k ^ - ) _ { f _ 0 } $ could also be detected at the relevant experiments, if the $ f _ 0 \ to k ^ + k ^ - $ could be identified from the $ \ phi \ to k ^ + k ^ - $ clearly. | arxiv:2110.01217 |
interpretability of deep learning ( dl ) systems is gaining attention in medical imaging to increase experts ' trust in the obtained predictions and facilitate their integration in clinical settings. we propose a deep visualization method to generate interpretability of dl classification tasks in medical imaging by means of visual evidence augmentation. the proposed method iteratively unveils abnormalities based on the prediction of a classifier trained only with image - level labels. for each image, initial visual evidence of the prediction is extracted with a given visual attribution technique. this provides localization of abnormalities that are then removed through selective inpainting. we iteratively apply this procedure until the system considers the image as normal. this yields augmented visual evidence, including less discriminative lesions which were not detected at first but should be considered for final diagnosis. we apply the method to grading of two retinal diseases in color fundus images : diabetic retinopathy ( dr ) and age - related macular degeneration ( amd ). we evaluate the generated visual evidence and the performance of weakly - supervised localization of different types of dr and amd abnormalities, both qualitatively and quantitatively. we show that the augmented visual evidence of the predictions highlights the biomarkers considered by experts for diagnosis and improves the final localization performance. it results in a relative increase of 11. 2 + / - 2. 0 % per image regarding sensitivity averaged at 10 false positives / image on average, when applied to different classification tasks, visual attribution techniques and network architectures. this makes the proposed method a useful tool for exhaustive visual support of dl classifiers in medical imaging. | arxiv:1910.07373 |
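The iterative unveiling loop described in the abstract above (attribute, inpaint, re-classify, repeat until normal) can be sketched with toy stand-ins: a "classifier" that flags images with enough hot pixels, an "attribution" that returns the strongest pixel, and "inpainting" that zeroes it out. Everything here is a hypothetical surrogate for the real DL classifier, visual attribution technique, and selective inpainting model.

```python
def classify(image, threshold=2):
    """Toy classifier: image is 'abnormal' if it has enough hot pixels."""
    return sum(1 for v in image if v > 0) >= threshold

def most_salient(image):
    """Toy visual attribution: index of the strongest pixel."""
    return max(range(len(image)), key=lambda i: image[i])

def unveil(image):
    """Iteratively remove salient evidence until the image reads normal,
    accumulating all abnormality locations found along the way."""
    image = list(image)
    evidence = []
    while classify(image):
        i = most_salient(image)
        evidence.append(i)
        image[i] = 0            # 'selective inpainting' of the found lesion
    return evidence

print(unveil([0, 3, 0, 5, 0, 1]))  # [3, 1]: strongest lesion first
```

Note how the loop recovers the weaker lesion at index 1, which a single attribution pass dominated by index 3 could miss; that is the "augmented visual evidence" idea.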
rental assistance programs provide individuals with financial assistance to prevent housing instabilities caused by evictions and avert homelessness. since these programs operate under resource constraints, they must decide who to prioritize. typically, funding is distributed by a reactive or first - come - first serve allocation process that does not systematically consider risk of future homelessness. we partnered with allegheny county, pa to explore a proactive allocation approach that prioritizes individuals facing eviction based on their risk of future homelessness. our ml system that uses state and county administrative data to accurately identify individuals in need of support outperforms simpler prioritization approaches by at least 20 % while being fair and equitable across race and gender. furthermore, our approach would identify 28 % of individuals who are overlooked by the current process and end up homeless. beyond improvements to the rental assistance program in allegheny county, this study can inform the development of evidence - based decision support tools in similar contexts, including lessons about data needs, model design, evaluation, and field validation. | arxiv:2403.12599 |
while solid - state devices offer naturally reliable hardware for modern classical computers, thus far quantum information processors resemble vacuum tube computers in being neither reliable nor scalable. strongly correlated many body states stabilized in topologically ordered matter offer the possibility of naturally fault tolerant computing, but are both challenging to engineer and coherently control and cannot be easily adapted to different physical platforms. we propose an architecture which achieves some of the robustness properties of topological models but with a drastically simpler construction. quantum information is stored in the symmetry - protected degenerate ground states of spin - 1 chains, while quantum gates are performed by adiabatic non - abelian holonomies using only single - site fields and nearest - neighbor couplings. gate operations respect the symmetry, and so inherit some protection from noise and disorder from the symmetry - protected ground states. | arxiv:1103.5076 |
we study a propositional variant of hoare logic that can be used for reasoning about programs that exhibit both angelic and demonic nondeterminism. we work in an uninterpreted setting, where the meaning of the atomic actions is specified axiomatically using hypotheses of a certain form. our logical formalism is entirely compositional and it subsumes the non - compositional formalism of safety games on finite graphs. we present sound and complete hoare - style calculi that are useful for establishing partial - correctness assertions, as well as for synthesizing implementations. the computational complexity of the hoare theory of dual nondeterminism is investigated using operational models, and it is shown that the theory is complete for exponential time. | arxiv:1606.09110 |
we characterize, for the first time, the average extended emission in multiple lines ( [ oii ], [ oiii ], and hbeta ) around a statistical sample of 560 galaxies at z ~ 0. 25 - 0. 85. by stacking the multi unit spectroscopic explorer ( muse ) 3d data from two large surveys, the muse analysis of gas around galaxies ( magg ) and the muse ultra deep field ( mudf ), we detect significant [ oii ] emission out to ~ 40 kpc, while [ oiii ] and hbeta emission is detected out to ~ 30 kpc. via comparisons with the nearby average stellar continuum emission, we find that the line emission at 20 - 30 kpc likely arises from the disk - halo interface. combining our results with that of our previous study at z ~ 1, we find that the average [ oii ] surface brightness increases independently with redshift over z ~ 0. 4 - 1. 3 and with stellar mass over m * ~ 10 ^ { 6 - 12 } msun, which is likely driven by the star formation rate as well as the physical conditions of the gas. by comparing the observed line fluxes with photoionization models, we find that the ionization parameter declines with distance, going from log q ( cm / s ) ~ 7. 7 at < = 5 kpc to ~ 7. 3 at 20 - 30 kpc, which reflects a weaker radiation field in the outer regions of galaxies. the gas - phase metallicity shows no significant variation over 30 kpc, with a metallicity gradient of ~ 0. 003 dex / kpc, which indicates an efficient mixing of metals on these scales. alternatively, there could be a significant contribution from shocks and diffuse ionized gas to the line emission in the outer regions. | arxiv:2409.02182 |
let p be a prime, k a p - adic field, g a nilpotent, uniform pro - p group. we prove that all faithful, primitive ideals in the iwasawa algebra kg are controlled by the centraliser of the second term in the upper central series for g. | arxiv:1909.07857 |
survival analysis ( sa ) models have been widely studied in mining electronic health records ( ehrs ), particularly in forecasting the risk of critical conditions for prioritizing high - risk patients. however, their vulnerability to adversarial attacks is much less explored in the literature. developing black - box perturbation algorithms and evaluating their impact on state - of - the - art survival models brings two benefits to medical applications. first, it can effectively evaluate the robustness of models in pre - deployment testing. also, exploring how subtle perturbations would result in significantly different outcomes can provide counterfactual insights into the clinical interpretation of model prediction. in this work, we introduce survattack, a novel black - box adversarial attack framework leveraging subtle clinically compatible, and semantically consistent perturbations on longitudinal ehrs to degrade survival models ' predictive performance. we specifically develop a greedy algorithm to manipulate medical codes with various adversarial actions throughout a patient ' s medical history. then, these adversarial actions are prioritized using a composite scoring strategy based on multi - aspect perturbation quality, including saliency, perturbation stealthiness, and clinical meaningfulness. the proposed adversarial ehr perturbation algorithm is then used in an efficient sa - specific strategy to attack a survival model when estimating the temporal ranking of survival urgency for patients. to demonstrate the significance of our work, we conduct extensive experiments, including baseline comparisons, explainability analysis, and case studies. the experimental results affirm our research ' s effectiveness in illustrating the vulnerabilities of patient survival models, model interpretation, and ultimately contributing to healthcare quality. | arxiv:2412.18706 |
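The greedy, composite-scored action selection in the abstract above can be illustrated on toy data: candidate perturbation actions on a code sequence are ranked by model impact minus a stealthiness penalty, and the best one is applied. The risk model, the actions, and the 0.1 penalty weight below are all invented for illustration and are not the SurvAttack scoring functions.

```python
def model_risk(seq):
    """Toy survival-risk surrogate: weighted count of risky codes."""
    weights = {"A": 3, "B": 1, "C": 2}
    return sum(weights.get(c, 0) for c in seq)

def attack_score(seq, action):
    """Composite score: change in model output minus a stealth penalty."""
    perturbed = action["apply"](seq)
    impact = abs(model_risk(perturbed) - model_risk(seq))
    return impact - 0.1 * action["cost"]

actions = [
    {"name": "drop_A", "cost": 1, "apply": lambda s: [c for c in s if c != "A"]},
    {"name": "dup_C",  "cost": 2, "apply": lambda s: s + ["C"]},
]

seq = ["A", "B", "C"]
best = max(actions, key=lambda a: attack_score(seq, a))
print(best["name"])  # drop_A: larger impact at lower stealth cost
```

The full framework additionally constrains actions to be clinically compatible and semantically consistent, and repeats this greedy choice across a patient's longitudinal record.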
due to the fractal nature of retinal blood vessels, the retinal fractal dimension is a natural parameter for researchers to explore and has garnered interest as a potential diagnostic tool. this review aims to summarize the current scientific evidence regarding the relationship between fractal dimension and retinal pathology and thus assess the clinical value of retinal fractal dimension. following the prisma guidelines, a literature search for research articles was conducted in several internet databases ( embase, pubmed, web of science, scopus ). this yielded 28 studies included in the final review, which were analyzed via meta - analysis to determine whether the fractal dimension changes significantly in retinal disease versus normal individuals. | arxiv:2101.08815 |
this paper introduces function alignment, a novel theory of mind and intelligence that is both intuitively compelling and structurally grounded. it explicitly models how meaning, interpretation, and analogy emerge from interactions among layered representations, forming a coherent framework capable not only of modeling minds but also of serving as a blueprint for building them. one of the key theoretical insights derived from function alignment is bounded interpretability, which provides a unified explanation for previously fragmented ideas in cognitive science, such as bounded rationality, symbol grounding, and analogy - making. beyond modeling, the function alignment framework bridges disciplines often kept apart, linking computational architecture, psychological theory, and even contemplative traditions such as zen. rather than building on any philosophical systems, it offers a structural foundation upon which multiple ways of understanding the mind may be reconstructed. | arxiv:2503.21106 |
classical elasticity is concerned with bodies that can be modeled as smooth manifolds endowed with a reference metric that represents local equilibrium distances between neighboring material elements. the elastic energy associated with a configuration of a body in classical elasticity is the sum of local contributions that arise from a discrepancy between the actual metric and the reference metric. in contrast, the modeling of defects in solids has traditionally involved extra structure on the material manifold, notably torsion to quantify the density of dislocations and non - metricity to represent the density of point defects. we show that all the classical defects can be described within the framework of classical elasticity using tensor fields that only assume a metric structure. specifically, bodies with singular defects can be viewed as affine manifolds ; both disclinations and dislocations are captured by the monodromy that maps curves that surround the loci of the defects into affine transformations. finally, we show that two dimensional defects with trivial monodromy are purely local in the sense that if we remove from the manifold a compact set that contains the locus of the defect, the punctured manifold can be isometrically embedded in euclidean space. | arxiv:1306.1624 |
in general, a similarity threshold ( i. e., a vigilance parameter ) for a node learning process in adaptive resonance theory ( art ) - based algorithms has a significant impact on clustering performance. in addition, an edge deletion threshold in a topological clustering algorithm plays an important role in adaptively generating well - separated clusters during a self - organizing process. in this paper, we propose a new parameter - free art - based topological clustering algorithm capable of continual learning by introducing parameter estimation methods. experimental results with synthetic and real - world datasets show that the proposed algorithm has superior clustering performance to the state - of - the - art clustering algorithms without any parameter pre - specifications. | arxiv:2305.01507 |
the linear motor driving the target for the muon ionisation cooling experiment has been redesigned to improve its reliability and performance. a new coil - winding technique is described which produces better magnetic alignment and improves heat transport out of the windings. improved field - mapping has allowed the more precise construction to be demonstrated, and an enhanced controller exploits the full features of the hardware, enabling increased acceleration and precision. the new user interface is described and analysis of performance data to monitor friction is shown to allow quality control of bearings and a measure of the ageing of targets during use. | arxiv:1603.07143 |
this paper applied the functional structural model greenlab to adult chinese pine trees ( pinus tabulaeformis carr. ). basic hypotheses of the model were validated such as constant allometry rules, relative sink relationships and topology simplification. to overcome the limitations raised by the complexity of tree structure for collecting experimental data, a simplified pattern of tree description was introduced and compared with the complete pattern for the computational time and the parameter accuracy. the results showed that this simplified pattern was well adapted to fit adult trees with greenlab. | arxiv:1012.3277 |
we address questions of logic and expressibility in the context of random rooted trees. infiniteness of a rooted tree is not expressible as a first order sentence, but is expressible as an existential monadic second order sentence ( emso ). on the other hand, finiteness is not expressible as an emso. for a broad class of random tree models, including galton - watson trees with offspring distributions that have full support, we prove the stronger statement that finiteness does not agree up to a null set with any emso. we construct a finite tree and a non - null set of infinite trees that cannot be distinguished from each other by any emso of given parameters. this is proved via set - pebble ehrenfeucht games ( where an initial colouring round is followed by a given number of pebble rounds ). | arxiv:1706.06192 |
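The finiteness question for Galton-Watson trees mentioned in the abstract above is governed by a classical computation: the extinction probability is the smallest fixed point of the offspring probability generating function $f(s) = \sum_k p_k s^k$, obtainable by iterating $s \leftarrow f(s)$ from $s = 0$. The sketch below performs that iteration for an illustrative offspring distribution with full support on {0, 2}.

```python
def extinction_probability(p, iters=200):
    """Smallest fixed point of the offspring pgf f(s) = sum_k p_k s^k,
    found by the standard iteration s <- f(s) starting from s = 0."""
    s = 0.0
    for _ in range(iters):
        s = sum(pk * s ** k for k, pk in enumerate(p))
    return s

# offspring distribution p_0 = 1/4, p_2 = 3/4 (mean 3/2 > 1: supercritical)
q = extinction_probability([0.25, 0.0, 0.75])
print(round(q, 4))  # 0.3333 = 1/3, the smaller root of 0.75 s^2 - s + 0.25
```

A supercritical tree is infinite with probability $1 - q > 0$, which is exactly why "infinite" versus "finite" is a non-null distinction for the random tree models the paper studies.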
dust constitutes only about one percent of the mass of circumstellar disks, yet it is of crucial importance for the modeling of planet formation, disk chemistry, radiative transfer and observations. the initial growth of dust from sub - micron sized grains to planetesimals and also the radial transport of dust in disks around young stars is the topic of this thesis. circumstellar dust is subject to radial drift, vertical settling, turbulent mixing, collisional growth, fragmentation and erosion. we approach this subject from three directions : analytical calculations, numerical simulations, and comparison to observations. we describe the physical and numerical concepts that go into a model which is able to simulate the radial and size evolution of dust in a gas disk which is viscously evolving over several million years. the resulting dust size distributions are compared to our analytical predictions and a simple recipe for obtaining steady - state dust size distributions is derived. with the numerical model at hand, we show that grain fragmentation can explain the fact that circumstellar disks are observed to be dust - rich for several million years. finally, we investigate the challenges that observations present to the theory of grain evolution, namely that grains of millimeter sizes are observed at large distances from the star. we have found that under the assumption that radial drift is ineffective, we can reproduce some of the observed spectral indices and fluxes. fainter objects point towards a reduced dust - to - gas ratio or lower dust opacities. | arxiv:1107.3466 |
We consider partial theta series associated with periodic sequences of coefficients, of the form $\theta(\tau) := \sum_{n>0} n^\nu f(n) e^{i\pi n^2 \tau/m}$, with $\nu$ a non-negative integer and $f : \mathbb{Z} \rightarrow \mathbb{C}$ an $m$-periodic function. Such a function is analytic in the half-plane $\{\operatorname{Im}(\tau) > 0\}$, and as $\tau$ tends non-tangentially to any $\alpha \in \mathbb{Q}$, a formal power series appears in the asymptotic behaviour of $\theta(\tau)$, depending on the parity of $\nu$ and $f$. We discuss the summability and resurgence properties of these series by means of explicit formulas for their formal Borel transforms, and the consequences for the modularity properties of $\theta$, or its "quantum modularity" properties in the sense of Zagier's recent theory. The discrete Fourier transform of $f$ plays an unexpected role and leads to a number-theoretic analogue of Écalle's "bridge equations". The motto is: (quantum) modularity = Stokes phenomenon + discrete Fourier transform. | arxiv:2112.15223
An interlock is a feature that makes the state of two mechanisms or functions mutually dependent. It may consist of any electrical or mechanical devices or systems. In most applications, an interlock is used to help prevent damage to the machine or to the operator handling the machine. For example, elevators are equipped with an interlock that prevents the moving elevator from opening its doors and prevents the stationary elevator (with open doors) from moving. Interlocks may include sophisticated elements such as curtains of infrared beams, photodetectors, simple switches, and locks. An interlock can also be a computer containing an interlocking computer program with digital or analogue electronics. == Trapped-key interlocking == Trapped-key interlocking is a method of ensuring safety in industrial environments by forcing the operator through a predetermined sequence using a defined selection of keys, locks, and switches. It is called trapped-key because it works by releasing and trapping keys in a predetermined sequence. After the control or power has been isolated, a key is released that can be used to grant access to individual or multiple doors. Below is an example of what a trapped-key interlock transfer block would look like; this is a part of a trapped-key interlocking system. In order to obtain the keys in this system, a key must be inserted and turned (like the key at the bottom of the system in the picture). Once the key is turned, the operator may retrieve the remaining keys that will be used to open other doors. Once all keys are returned, the operator is allowed to take out the original key from the beginning. The key will not turn unless the remaining keys are put back in place. Another example is an electric kiln. To prevent access to the inside of an electric kiln, a trapped-key system may be used to interlock a disconnecting switch and the kiln door.
While the switch is turned on, the key is held by the interlock attached to the disconnecting switch. To open the kiln door, the switch is first opened, which releases the key. The key can then be used to unlock the kiln door. While the key is removed from the switch interlock, a plunger from the interlock mechanically prevents the switch from closing. Power cannot be re-applied to the kiln until the kiln door is locked, releasing the key, and the key is then returned to the disconnecting switch interlock. A similar two-part interlock system | https://en.wikipedia.org/wiki/Interlock_(engineering)
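The kiln example above can be sketched as a small state machine in which one key is shared between the disconnecting switch and the door lock, making power and door access mutually exclusive. This is a minimal illustrative sketch; the class and method names are hypothetical and do not correspond to any real interlock product or standard.

```python
class TrappedKeyInterlock:
    """Sketch of a trapped-key interlock: one key shared between a
    disconnecting switch and a kiln door, so the door can only be
    opened while power is isolated."""

    def __init__(self):
        self.switch_on = True      # power applied; key trapped in switch
        self.door_locked = True    # door closed and locked
        self.key_location = "switch"

    def open_switch(self):
        # Isolating power releases the key from the switch interlock.
        if self.switch_on:
            self.switch_on = False
            self.key_location = "free"

    def unlock_door(self):
        # The freed key becomes trapped in the door lock while the door is open.
        if self.key_location == "free":
            self.door_locked = False
            self.key_location = "door"

    def lock_door(self):
        # Locking the door releases the key again.
        if self.key_location == "door":
            self.door_locked = True
            self.key_location = "free"

    def close_switch(self):
        # A plunger mechanically blocks the switch unless the key has
        # been returned, i.e. the door is locked again.
        if self.key_location == "free" and self.door_locked:
            self.switch_on = True
            self.key_location = "switch"
        else:
            raise RuntimeError("switch mechanically blocked: key not returned")
```

The forced sequence falls out of the key's location: `open_switch()` must precede `unlock_door()`, and `close_switch()` fails until `lock_door()` has returned the key.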
One-sided exact categories are obtained via a weakening of a Quillen exact category. Such one-sided exact categories are homologically similar to Quillen exact categories: a one-sided exact category $\mathcal{E}$ can be (essentially uniquely) embedded into its exact hull $\mathcal{E}^{\textrm{ex}}$; this embedding induces a derived equivalence $\textbf{D}^b(\mathcal{E}) \to \textbf{D}^b(\mathcal{E}^{\textrm{ex}})$. Whereas it is well known that Quillen's obscure axioms are redundant for exact categories, some one-sided exact categories are known not to satisfy the corresponding obscure axiom. In fact, we show that the failure of the obscure axiom is controlled by the embedding of $\mathcal{E}$ into its exact hull $\mathcal{E}^{\textrm{ex}}$. In this paper, we introduce three versions of the obscure axiom (these versions coincide when the category is weakly idempotent complete) and establish equivalent homological properties, such as the snake lemma and the nine lemma. We show that a one-sided exact category admits a closure under each of these obscure axioms, each of which preserves the bounded derived category up to triangle equivalence. | arxiv:2010.11293
A specially designed and produced edge filter with pronounced nonlinear effects is carefully characterized. The nonlinear effects are estimated at intensities close to laser-induced damage. | arxiv:1711.08192
The evaluation of a multifaceted program against extreme poverty in different developing countries gave encouraging results, but with important heterogeneity between countries. This master's thesis proposes to study this heterogeneity with a Bayesian hierarchical analysis. The analysis we carry out with two different hierarchical models leads to a very low amount of pooling of information between countries, indicating that this observed heterogeneity should be interpreted mostly as true heterogeneity, and not as sampling error. We analyze the first-order behavior of our hierarchical models in order to understand what leads to this very low amount of pooling. We try to give this work a didactic approach, with an introduction to Bayesian analysis and an explanation of the different modeling and computational choices of our analysis. | arxiv:2109.06759
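The notion of "amount of pooling" in a hierarchical analysis can be illustrated with the standard normal-normal partial-pooling formula: each country estimate is shrunk toward the grand mean by a factor that depends on the between-country variance $\tau^2$ relative to the within-country standard errors. This is a generic textbook sketch, not the thesis's actual models or data; the function name and all numbers are illustrative.

```python
def pooled_estimates(y, sigma, tau):
    """Partial pooling in a normal-normal hierarchical model.

    y      : per-country effect estimates
    sigma  : their standard errors
    tau    : between-country standard deviation
    """
    # Precision-weighted grand mean.
    w = [1.0 / (s**2 + tau**2) for s in sigma]
    mu = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    # Shrinkage factor lambda_j = tau^2 / (tau^2 + sigma_j^2):
    # lambda_j -> 1 (no pooling) as tau grows large,
    # lambda_j -> 0 (complete pooling) as tau -> 0.
    lam = [tau**2 / (tau**2 + s**2) for s in sigma]
    return [l * yj + (1 - l) * mu for l, yj in zip(lam, y)]
```

A large estimated $\tau$ relative to the sampling errors yields shrinkage factors near one, i.e. very little pooling — the situation the abstract describes, where observed heterogeneity is read as true heterogeneity rather than sampling error.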