There is wide observational evidence that electron velocity distribution functions (eVDFs) observed in the solar wind generally present enhanced tails and field-aligned skewness. These properties may induce the excitation of electromagnetic perturbations through the whistler heat-flux instability (WHFI), which may contribute to a non-collisional regulation of the electron heat-flux values observed in the solar wind via wave-particle interactions. Recently, a new way to model the solar wind eVDF has been proposed: the core-strahlo model. This representation consists of a bi-Maxwellian core plus a skew-kappa distribution, representing the halo and strahl electrons as a single skewed distribution. The core-strahlo model is able to reproduce the main features of the eVDF in the solar wind (thermal core, enhanced tails, and skewness), with the advantage that the asymmetry is controlled by only one parameter. In this work we use linear kinetic theory to analyze the effect of solar wind electrons described by the core-strahlo model on the excitation of the parallel-propagating WHFI. We use parameters relevant to the solar wind and focus our attention on the effect on the linear stability introduced by different values of the core-to-strahlo density and temperature ratios, which are known to vary throughout the heliosphere. We also obtain the stability threshold for this instability as a function of the electron beta and the skewness parameter, which is a better indicator of instability than the heat-flux macroscopic moment, and present threshold conditions for the instability that can be compared with observational data.
arxiv:2204.08928
Prediction models are traditionally optimized independently from their use in the asset allocation decision-making process. We address this shortcoming and present a framework for integrating regression prediction models in a mean-variance optimization (MVO) setting. Closed-form analytical solutions are provided for the unconstrained and equality-constrained MVO cases. For the general inequality-constrained case, we make use of recent advances in neural-network architecture for efficient optimization of batch quadratic programs. To our knowledge, this is the first rigorous study of integrating prediction in a mean-variance portfolio optimization setting. We present several historical simulations using both synthetic and global futures data to demonstrate the benefits of the integrated approach.
arxiv:2102.09287
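For context, the unconstrained case mentioned above admits a textbook closed form: with predicted returns mu_hat, covariance Sigma, and a risk-aversion parameter gamma, the optimal weights are w* = (1/gamma) Sigma^{-1} mu_hat. The sketch below illustrates this standard result in NumPy; it is a generic illustration, not necessarily the exact formulation used in the paper.

```python
import numpy as np

def mvo_unconstrained(mu_hat, Sigma, gamma=1.0):
    """Unconstrained mean-variance weights: w* = (1/gamma) * Sigma^{-1} mu_hat.
    Solving the linear system avoids forming the explicit inverse."""
    return np.linalg.solve(Sigma, mu_hat) / gamma

# Toy two-asset example with hypothetical predicted returns and covariance.
mu_hat = np.array([0.05, 0.03])
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])
w = mvo_unconstrained(mu_hat, Sigma, gamma=2.0)
```

The integrated approach in the paper would train the regression producing mu_hat jointly with this decision step rather than in isolation.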
We consider ($d+n+1$)-dimensional solutions of Einstein gravity with constant negative curvature. Regular solutions of this type are expected to be dual to the ground states of ($d+n$)-dimensional holographic CFTs on $AdS_d \times S^n$. Their only dimensionless parameter is the ratio of the radii of curvature of $AdS_d$ and $S^n$. The same solutions may also be dual to $(d-1)$-dimensional conformal defects in holographic QFT$_{d+n}$. We solve the gravity equations with an associated conifold ansatz, and we classify all solutions, both singular and regular, by a combination of analytical and numerical techniques. There are no solutions, regular or singular, with two boundaries along the holographic direction. Out of the infinite class of regular solutions, only one is diffeomorphic to $AdS_{d+n+1}$ and another to $AdS_d \times AdS_{n+1}$. For the regular solutions, we compute the on-shell action as a function of the relevant parameters.
arxiv:2309.04880
We prove a fixed point theorem for a family of Banach spaces, notably $L^1$ and its non-commutative analogues. Several applications are given, e.g. the optimal solution to the "derivation problem" studied since the 1960s.
arxiv:1012.1488
This work investigates the capability of future lepton colliders to determine the sign of the gauge-Higgs coupling through the vector boson fusion (VBF) $ZH$ process. This channel offers a model-independent way to probe the sign of the gauge-Higgs coupling. Its sensitivity to interference effects and universal coupling with new physics makes it particularly effective. We show that a high-energy lepton collider such as CLIC can fully determine the sign of the gauge-Higgs coupling in a model-independent way and with high confidence.
arxiv:2408.14536
In collaborative machine learning, data valuation, i.e., evaluating the contribution of each client's data to the machine learning model, has become a critical task for incentivizing and selecting positive data contributions. However, existing studies often assume that clients engage in data valuation truthfully, overlooking the practical motivation for clients to exaggerate their contributions. To expose this threat, this paper introduces the first data overvaluation attack, enabling strategic clients to have their data significantly overvalued. Furthermore, we propose a truthful data valuation metric, named Truth-Shapley. Truth-Shapley is the unique metric that guarantees some promising axioms for data valuation while ensuring that clients' optimal strategy is to perform truthful data valuation. Our experiments demonstrate the vulnerability of existing data valuation metrics to the data overvaluation attack and validate the robustness and effectiveness of Truth-Shapley.
arxiv:2502.00494
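For background, the classical Shapley value that such metrics build on can be computed exactly for small client sets. The sketch below is a generic illustration using a toy "set coverage" utility as a stand-in for model quality; it is not the Truth-Shapley metric itself, which is defined in the paper.

```python
import itertools
import math

def shapley_values(clients, utility):
    """Exact Shapley value of each client: the average marginal contribution
    of the client over all coalitions of the other clients."""
    n = len(clients)
    values = {c: 0.0 for c in clients}
    for c in clients:
        others = [x for x in clients if x != c]
        for r in range(n):
            for S in itertools.combinations(others, r):
                weight = math.factorial(r) * math.factorial(n - r - 1) / math.factorial(n)
                values[c] += weight * (utility(set(S) | {c}) - utility(set(S)))
    return values

# Toy utility: number of distinct data points covered by the coalition.
data = {"A": {1, 2}, "B": {2, 3}, "C": {4}}
u = lambda S: len(set().union(*(data[c] for c in S))) if S else 0
vals = shapley_values(list(data), u)
```

Here client C always contributes exactly one new point, so its Shapley value is 1, and by the efficiency axiom the values sum to u({A, B, C}) = 4. An overvaluation attack would manipulate the reported utilities to inflate one client's share.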
Numbers are often approximated by floating-point numbers (a sequence of some fixed number of digits of a given base, scaled by an integer exponent of that base), thus it is common to store an expression that denotes the real number so as not to lose precision. However, the equality of two real numbers given by an expression is known to be undecidable (specifically, real numbers defined by expressions involving the integers, the basic arithmetic operations, the logarithm and the exponential function). In other words, there cannot exist any algorithm for deciding such an equality (see Richardson's theorem).

=== Equivalence relation ===

An equivalence relation is a mathematical relation that generalizes the idea of similarity or sameness. It is defined on a set X as a binary relation ∼ that satisfies the three properties: reflexivity, symmetry, and transitivity. Reflexivity means that every element in X is equivalent to itself (a ∼ a for all a ∈ X). Symmetry requires that if one element is equivalent to another, the reverse also holds (a ∼ b ⟹ b ∼ a). Transitivity ensures that if one element is equivalent to a second, and the second to a third, then the first is equivalent to the third (a ∼ b and b ∼ c ⟹ a ∼ c). These properties are enough to partition a set into disjoint equivalence classes. Conversely, every partition defines an equivalence relation. The equivalence relation of equality is a special case: if restricted to a given set S, it is the strictest possible equivalence relation on S; specifically, equality partitions a set into equivalence classes consisting of all singleton sets.
Other equivalence relations, since they're less restrictive, generalize equality by identifying elements based on shared properties or transformations, such as congruence in modular arithmetic or similarity in geometry.

==== Congruence relation ====

In abstract algebra, a congruence relation extends the idea of an equivalence relation to include the operation-application property. That is, given a set X,
https://en.wikipedia.org/wiki/Equality_(mathematics)
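The partition property described above is easy to demonstrate concretely. The sketch below, a minimal illustration, groups a set into equivalence classes for congruence modulo 3, and shows that equality yields only singleton classes.

```python
def equivalence_classes(X, rel):
    """Partition X into the classes of an equivalence relation.
    rel(a, b) -> bool must be reflexive, symmetric, and transitive."""
    classes = []
    for a in X:
        for cls in classes:
            if rel(a, cls[0]):   # transitivity: comparing to one member suffices
                cls.append(a)
                break
        else:
            classes.append([a])
    return classes

X = list(range(10))
cong_mod_3 = lambda a, b: (a - b) % 3 == 0   # congruence modulo 3
parts = equivalence_classes(X, cong_mod_3)
# Equality is the strictest equivalence relation: every class is a singleton.
singletons = equivalence_classes(X, lambda a, b: a == b)
```

Congruence modulo 3 partitions {0, ..., 9} into three classes, while equality produces ten singleton classes, matching the text's characterization of equality as the strictest case.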
With a significant increase in area throughput, massive MIMO has become an enabling technology for fifth generation (5G) wireless mobile communication systems. Although prototypes have been built, an openly available dataset of channel impulse responses to verify assumptions, e.g. regarding channel sparsity, is not yet available. In this paper, we introduce a novel channel sounder architecture, capable of measuring multi-antenna and multi-subcarrier channel state information (CSI) at different frequency bands, antenna geometries and propagation environments. The channel sounder has been verified by evaluation of channel data from first measurements. Such datasets can be used to study various deep-learning (DL) techniques in different applications, e.g., for indoor user positioning in three dimensions, as is done in this paper. Not only do we achieve an accuracy better than 75 cm for line of sight (LOS), comparable to state-of-the-art conventional positioning techniques, but we also obtain similar precision for the more challenging case of non-line of sight (NLOS). Further extensive indoor/outdoor measurement campaigns will provide a more comprehensive open CSI dataset, tagged with positions, for the scientific community to further test various algorithms.
arxiv:1810.04126
In this work, we have studied theoretically the effects of gold adsorption on the Al(001) surface, using the {\it ab initio} pseudo-potential method in the framework of density functional theory. Having found the hollow sites of the Al(001) surface to be the most preferred adsorption sites, we have investigated the effects of Au adsorption at different coverages ($\theta$ = 0.11, 0.25, 0.50, 0.75, 1.00 ML) on the geometry, adsorption energy, surface dipole moment, and work function of the Al(001) surface. The results show that, even though the work function of the Al substrate increases with Au coverage, the surface dipole moment decreases as the coverage changes from $\theta = 0.11$ ML to $\theta = 0.25$ ML. We have explained this behavior by analyzing the electronic and ionic charge distributions. Furthermore, by studying the diffusion of Au atoms into the substrate, we have shown that at room temperature the diffusion rate of Au atoms into the substrate is negligible, but on increasing the temperature to about 200 $^\circ$C the Au atoms diffuse significantly into the substrate, in agreement with experiment.
arxiv:0708.3200
Large language models often necessitate grounding on external knowledge to generate faithful and reliable answers. Yet even with the correct groundings in the reference, they can ignore them and rely on wrong groundings or their inherent biases to hallucinate when users, being largely unaware of the specifics of the stored information, pose questions that might not directly correlate with the retrieved groundings. In this work, we formulate this knowledge alignment problem and introduce MixAlign, a framework that interacts with both the human user and the knowledge base to obtain and integrate clarifications on how the user question relates to the stored information. MixAlign employs a language model to achieve automatic knowledge alignment and, if necessary, further enhances this alignment through human user clarifications. Experimental results highlight the crucial role of knowledge alignment in boosting model performance and mitigating hallucination, with improvements of up to 22.2% and 27.1%, respectively. We also demonstrate the effectiveness of MixAlign in improving knowledge alignment by producing high-quality, user-centered clarifications.
arxiv:2305.13669
A free industry-grade education tool is developed for bulk-power-system reliability assessment. The software architecture is illustrated using a high-level flowchart. The three main algorithms of this tool, i.e., sequential Monte Carlo simulation, unit preventive maintenance scheduling, and optimal-power-flow-based load shedding, are introduced. The input and output formats are described in detail, including the roles of different data cards and the categorization of results. Finally, an example case study is conducted on a five-area system to demonstrate the effectiveness and efficiency of this tool.
arxiv:2301.09579
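To give a flavor of the Monte Carlo reliability assessment mentioned above, the sketch below estimates loss-of-load probability (LOLP) for a tiny generation system. It is a simplified hourly state-sampling illustration, not the tool's full sequential simulation (which would also sample unit up/down durations, maintenance schedules, and OPF-based load shedding); all capacities and outage rates are hypothetical.

```python
import random

def lolp_state_sampling(unit_capacities, unit_for, load, hours=10000, seed=1):
    """Crude Monte Carlo LOLP estimate: each hour, every unit is
    independently on forced outage with probability given by its FOR."""
    rng = random.Random(seed)
    loss_hours = 0
    for _ in range(hours):
        available = sum(c for c, q in zip(unit_capacities, unit_for)
                        if rng.random() >= q)
        if available < load:
            loss_hours += 1
    return loss_hours / hours

# Three 100 MW units, each with a 10% forced outage rate, serving 150 MW:
# load is lost whenever two or more units are out (analytically about 2.8%).
lolp = lolp_state_sampling([100, 100, 100], [0.1, 0.1, 0.1], 150)
```

A full sequential simulation would draw exponentially distributed time-to-failure and time-to-repair instead of independent hourly states, which is what lets it capture chronological effects such as maintenance windows.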
Nitrogen-vacancy magnetic microscopy is employed in quenching mode as a non-invasive, high-resolution tool to investigate the morphology of isolated skyrmions in ultrathin magnetic films. The skyrmion size and shape are found to be strongly affected by local pinning effects and magnetic field history. Micromagnetic simulations including static disorder, based on a physical model of grain-to-grain thickness variations, reproduce all experimental observations and reveal the key role of disorder and magnetic history in the stabilization of skyrmions in ultrathin magnetic films. This work opens the way to an in-depth understanding of skyrmion dynamics in real, disordered media.
arxiv:1709.06027
In today's data-driven world, algorithms operating on vertically distributed datasets are crucial due to the increasing prevalence of large-scale, decentralized data storage. These algorithms enhance data privacy by processing data locally, reducing the need for data transfer and minimizing exposure to breaches. They also improve scalability, as they can handle vast amounts of data spread across multiple locations without requiring centralized access. Top-k queries have been studied extensively under this lens, and are particularly suitable in applications involving healthcare, finance, and IoT, where data is often sensitive and distributed across various sources. Classical top-k algorithms are based on the availability of two kinds of access to sources: sorted access, i.e., a sequential scan of the dataset in its internal sort order, one tuple at a time; and random access, which provides all the information available at a data source for a tuple whose id is known. However, in scenarios where data retrieval costs are high, data is streamed in real time, or data simply comes from external sources that only offer sorted access, random access may become impractical or impossible due to latency issues or data access constraints. Fortunately, a long tradition of algorithms designed for the "no random access" (NRA) scenario exists for classical top-k queries. Yet, these do not cover the recent advances in ranking queries, which propose hybridizations of top-k queries (which are preference-aware and control the output size) and skyline queries (which are preference-agnostic and have uncontrolled output size). The non-dominated flexible skyline (ND) is one such proposal. We introduce an algorithm for computing ND in the NRA scenario, prove its correctness and optimality within its class, and provide an experimental evaluation covering a wide range of cases, with both synthetic and real datasets.
arxiv:2412.15468
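To illustrate the "no random access" idea the abstract builds on, the sketch below implements a minimal classical NRA top-1 search (in the style of Fagin-style algorithms, not the paper's ND algorithm): each source is consumed only by sorted access, and the algorithm maintains lower and upper score bounds until the best object is certain.

```python
def nra_top1(lists):
    """Minimal NRA top-1 for summed scores, using sorted access only.
    lists: one list per source of (object_id, score), sorted by descending
    score; assumed long enough for the search to terminate."""
    seen = {}        # object_id -> {source index: score seen there}
    m = len(lists)
    depth = 0
    while True:
        frontier = []                      # last score read from each source
        for i, lst in enumerate(lists):
            oid, s = lst[depth]
            seen.setdefault(oid, {})[i] = s
            frontier.append(s)
        depth += 1
        # lower bound: seen scores only; upper bound: unseen filled with frontier
        def lower(o): return sum(seen[o].values())
        def upper(o): return sum(seen[o].get(i, frontier[i]) for i in range(m))
        best = max(seen, key=lower)
        # stop when 'best' beats every other seen object's upper bound and
        # any completely unseen object (bounded by the frontier sum)
        if lower(best) >= sum(frontier) and \
           all(lower(best) >= upper(o) for o in seen if o != best):
            return best, lower(best)

lists = [[("a", 0.9), ("b", 0.6), ("c", 0.1)],
         [("a", 0.8), ("b", 0.7), ("c", 0.2)]]
top = nra_top1(lists)
```

Here "a" is certified as top-1 after a single round of sorted accesses, since its lower bound already dominates every possible unseen score. The paper's contribution is extending this bounding style of reasoning to the non-dominated flexible skyline.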
In this paper, we present the results of a study of the effect of the nuclear medium on the charged-current induced quasielastic lepton production (CCQE) and the incoherent and coherent one-pion production (CC1$\pi^+$) processes from $^{12}C$ in the $\nu_\mu$ energy region of 0.4-3 GeV. The theoretical results are compared with the recent experimental results for the ratio of the charged-current $\nu_\mu$ induced one-pion production cross section to the quasielastic lepton production cross section reported by the K2K collaboration. We also present results for the angular and momentum distributions of the leptons and pions produced in these processes.
arxiv:0808.2103
Networks are designed with functionality, security, performance, and cost in mind. Tools exist to check or optimize individual properties of a network. These properties may conflict, so it is not always possible to run these tools in series to find a configuration that meets all requirements. This leads to network administrators manually searching for a configuration. This need not be the case. In this paper, we introduce a layered framework for optimizing network configuration for functional and security requirements. Our framework is able to output configurations that meet reachability, bandwidth, and risk requirements. Each layer of our framework optimizes over a single property. A lower layer can constrain the search problem of a higher layer, allowing the framework to converge on a joint solution. Our approach has the most promise for software-defined networks, which can easily reconfigure their logical configuration. Our approach is validated with experiments over the fat-tree topology, which is commonly used in data center networks. Search terminates in 1-5 minutes in experiments. Thus, our solution can propose new configurations for short-term events such as defending against a focused network attack.
arxiv:1902.05988
Echocardiography is the most commonly used imaging modality in cardiac assessment due to its non-invasive nature, real-time capability, and cost-effectiveness. Despite its advantages, most clinical echocardiograms provide only two-dimensional views, limiting the ability to fully assess cardiac anatomy and function in three dimensions. While three-dimensional echocardiography exists, it often suffers from reduced resolution, limited availability, and higher acquisition costs. To overcome these challenges, we propose S2MNet, a deep learning framework that reconstructs continuous and high-fidelity 3D heart models by integrating six slices of routinely acquired 2D echocardiogram views. Our method has three advantages. First, it avoids the difficulties of training data acquisition by simulating six 2D echocardiogram images from the corresponding slices of a given 3D heart mesh. Second, we introduce a deformation-field-based method, which avoids spatial discontinuities or structural artifacts in 3D echocardiogram reconstructions. We validate our method using clinically collected echocardiograms and demonstrate that our estimated left ventricular volume, a key clinical indicator of cardiac function, is strongly correlated with the doctor-measured GLPS, a clinical measurement that should show a negative correlation with LVE according to medical theory. This association confirms the reliability of our proposed 3D reconstruction method.
arxiv:2505.06105
An approximate sparse recovery system in the ℓ_1 norm formally consists of parameters n, k, epsilon, an m-by-n measurement matrix Phi, and a decoding algorithm D. Given a vector x, where x_k denotes the optimal k-term approximation to x, the system approximates x by x_hat = D(Phi x), which must satisfy ||x_hat - x||_1 <= (1 + epsilon) ||x - x_k||_1. Among the goals in designing such systems are minimizing m and the runtime of D. We consider the "forall" model, in which a single matrix Phi is used for all signals x. All previous algorithms that use the optimal number m = O(k log(n/k)) of measurements require superlinear time Omega(n log(n/k)). In this paper, we give the first algorithm for this problem that uses the optimum number of measurements (up to a constant factor) and runs in sublinear time o(n) when k = o(n), assuming access to a data structure requiring space and preprocessing O(n).
arxiv:1012.1886
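The benchmark quantity in the guarantee above, the optimal k-term approximation x_k, is simply x with all but its k largest-magnitude entries zeroed out. The sketch below computes it and the resulting ℓ_1 tail error; this is the error term that any valid decoding must stay within a (1 + epsilon) factor of.

```python
import numpy as np

def best_k_term(x, k):
    """Optimal k-term approximation: keep the k largest-magnitude entries."""
    idx = np.argsort(np.abs(x))[-k:]
    xk = np.zeros_like(x)
    xk[idx] = x[idx]
    return xk

x = np.array([10.0, -7.0, 0.3, 0.1, -0.2, 0.05])
xk = best_k_term(x, 2)
tail = np.sum(np.abs(x - xk))   # ||x - x_k||_1, the benchmark error
```

For this x, the two dominant entries are kept and the tail error is 0.65; a recovery system with k = 2 must return an x_hat with ||x_hat - x||_1 at most (1 + epsilon) times that value.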
In speaker verification, the extraction of voice representations is mainly based on the residual neural network (ResNet) architecture. ResNet is built upon convolution layers, which learn filters to capture local spatial patterns across the input and generate feature maps that jointly encode spatial and channel information. Unfortunately, all feature maps in a convolution layer are learnt independently (the convolution layer does not exploit the dependencies between feature maps) and locally. This problem was first tackled in image processing. A channel attention mechanism, called squeeze-and-excitation (SE), has recently been proposed for convolution layers and applied to speaker verification. This mechanism re-weights the information extracted across feature maps. In this paper, we first propose an original qualitative study of the influence and role of the SE mechanism applied to the speaker verification task at different stages of the ResNet, and then evaluate several SE architectures. We finally propose to improve the SE approach with a new pooling variant based on the concatenation of mean- and standard-deviation-pooling. Results show that applying SE only to the first stages of the ResNet better captures speaker information for the verification task, and that significant discrimination gains on the VoxCeleb1-E, VoxCeleb1-H and SITW evaluation tasks are obtained using the proposed pooling variant.
arxiv:2109.05977
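The SE re-weighting described above is compact enough to sketch directly. The NumPy version below is a generic illustration of the standard SE block (squeeze by global average pooling, excitation by a bottleneck MLP with a sigmoid gate), with hypothetical weight shapes; it is not the paper's specific architecture or its mean-plus-standard-deviation pooling variant.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def squeeze_excite(feature_maps, w1, w2):
    """Squeeze-and-Excitation channel re-weighting.
    feature_maps: (C, H, W); w1: (C//r, C); w2: (C, C//r) for reduction r."""
    # Squeeze: global average pooling collapses each map to one scalar.
    z = feature_maps.mean(axis=(1, 2))             # shape (C,)
    # Excitation: bottleneck MLP + sigmoid yields per-channel gates in (0, 1).
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))      # shape (C,)
    # Re-weight each feature map by its gate.
    return feature_maps * s[:, None, None]

rng = np.random.default_rng(0)
C, r = 8, 2
fmap = rng.standard_normal((C, 4, 4))
w1 = rng.standard_normal((C // r, C))
w2 = rng.standard_normal((C, C // r))
out = squeeze_excite(fmap, w1, w2)
```

Because the gates lie strictly in (0, 1), the block can only attenuate channels relative to each other, which is how it injects cross-channel dependencies that plain convolutions lack.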
We developed a general method for evaluating the energy spectrum evolution of relativistic charged particles that have undergone small quantum losses, such as ionization losses when electrons pass through matter and radiation losses in periodic fields. These processes are characterized by a small magnitude of the recoil quantum compared with the particle's initial energy. The 'detector' function for an arbitrary recoil spectrum is derived in addition to the straggling function. These functions are determined by the average number of scattering events undergone by the particle over the length of the detector (ionization losses) or the periodic field, and by the specific spectrum of the recoil. Moments for both the straggling function and the detector function are derived. The minimum average number of recoils, above which both functions depend only upon the mean energy of the recoil quantum, is estimated to be about 10 recoil events.
arxiv:1711.09571
Using polarisation labelling spectroscopy, we performed a detailed analysis of the level structure of the excited electronic states of the $^{39}$KCs molecule in the excitation energy interval between 17500 cm$^{-1}$ and 18600 cm$^{-1}$ above the $v = 0$ level of the $X^1\Sigma^+$ ground state. We prove that the observed states are strongly coupled by spin-orbit interaction above 18200 cm$^{-1}$, as manifested by numerous perturbations in the recorded spectra. The spectra are interpreted with the guidance of accurate electronic structure calculations on KCs, including potential energy curves, transition electric dipole moments, and a representation of the spin-orbit interaction with a quasi-diabatic effective Hamiltonian approach. The agreement between theory and experiment is found to be remarkable, clearly discriminating among the available theoretical data. This study confirms the suitability of polarisation labelling spectroscopy for analysing highly-excited electronic molecular states that present a dense level structure.
arxiv:2207.02484
Here, we present our modelling (Ryde & Eriksson, 2002, A&A 386, 874) of the 2.6-3.7 micron spectrum of the red semiregular variable R Doradus observed with the Short-Wavelength Spectrometer on board the Infrared Space Observatory. We will also present the entire spectrum of R Dor up to 5 microns based on our model photosphere, in order to show which molecules are important for the emergent spectrum.
arxiv:astro-ph/0210188
We report on the observation of a large anisotropy in the rethermalization dynamics of an ultracold dipolar Fermi gas driven out of equilibrium. Our system consists of an ultracold sample of strongly magnetic $^{167}$Er fermions, spin-polarized in the lowest Zeeman sublevel. In this system, elastic collisions arise purely from universal dipolar scattering. Based on cross-dimensional rethermalization experiments, we observe a strong anisotropy of the scattering, which manifests itself in a large angular dependence of the thermal relaxation dynamics. Our result is in very good agreement with recent theoretical predictions. Furthermore, we measure the rethermalization rate as a function of temperature for different angles and find that the suppression of collisions by Pauli blocking is not influenced by the dipole orientation.
arxiv:1405.1537
This document presents TLS and how to make it secure enough as of spring 2014. Of course, all the information given here will rot with time. Protocols currently considered secure will be cracked and replaced with better versions. Fortunately, we will see that there are ways to assess the current security of your setup, but this explains why you may have to read beyond this document to get up-to-date knowledge on TLS security. We will first introduce the TLS protocol and its underlying components: X.509 certificates, ciphers, and protocol versions. Next, we will have a look at TLS hardening for web servers, and how to plug various vulnerabilities: CRIME, BREACH, BEAST, session renegotiation, Heartbleed, and others. We will finally see how the know-how acquired on hardening web servers can be applied to other protocols and tools such as Dovecot, Sendmail, SquirrelMail, Roundcube, and OpenVPN. We assume you already maintain services that use TLS and have basic TCP/IP network knowledge. Some information will also be useful for application developers.
arxiv:1407.2168
In this work, we discuss theoretical findings on the common feature describing nonradiating sources, based on equivalent sources from which it is possible to derive cloaking devices and anapole mode conditions. Starting from the differential form of Maxwell's equations expressed in terms of equivalent electromagnetic sources, we derive two unique compact conditions. By specifying the nature of these passively induced or actively impressed current density sources in certain volumes and/or on predefined surfaces, we derive theoretical results consistent with the literature on nonradiating particles, cloaking devices, and anapole mode structures, through peculiar destructive interactions between volumetric-volumetric, volumetric-surface and surface-surface equivalent sources.
arxiv:1806.04134
The effects of the residual proton-neutron interactions on bandcrossing features are studied by means of shell model calculations for nucleons in a high-j intruder orbital. The presence of an odd nucleon shifts the frequency of the alignment of two nucleons of the other kind along the axis of rotation. It is shown that the anomalous delayed crossing observed in nuclei with aligning neutrons and protons occupying the same intruder subshell can be partly attributed to these residual interactions.
arxiv:nucl-th/9806094
Universal quantum gates are the core elements in quantum information processing. We design two schemes to realize the more general (SWAP)$^{1/m}$ and controlled-(SWAP)$^{1/m}$ gates (for integer $m \geq 1$) by directing flying single photons to solid-state quantum dots. The parameter $m$ is easily controlled by adjusting two quarter-wave plates and one half-wave plate. Additional computational qubits are not required to construct the two gates. Evaluations of the gates indicate that our proposals are feasible with current experimental technology.
arxiv:2001.03428
The cosine-$\lambda$ transform, denoted $\mathcal{C}^\lambda$, is a family of integral transforms defined on the sphere and on the Grassmann manifolds $\textrm{Gr}(p, \mathbb{K}^n) = \textrm{SU}(n, \mathbb{K})/\textrm{S}(\textrm{U}(p, \mathbb{K}) \times \textrm{U}(n-p, \mathbb{K}))$, where $\mathbb{K}$ is $\mathbb{R}$, $\mathbb{C}$, or the skew field $\mathbb{H}$ of quaternions. The family $\mathcal{C}^\lambda$ extends meromorphically in $\lambda$ to the complex plane with poles at (among other values) $\lambda = -1, \ldots, -p$. In this paper we normalize $\mathcal{C}^\lambda$ and evaluate it at those poles. The result is a series of integral transforms on the Grassmannians that we can view as partial cosine-Funk transforms. The transform that arises at $\lambda = -p$ is the natural analog of the Funk transform in this setting.
arxiv:1411.4089
An extremely high-resolution (>10^5), high-S/N (>10^3) solar spectrum has been used to measure 15 very weak first-overtone (Delta v = 2) infrared OH lines, resulting in a low solar oxygen abundance of A(O) ~ 8.6 when MARCS, 3D, and spatially and temporally averaged 3D model atmospheres are used. A higher abundance is obtained with the Kurucz (A(O) ~ 8.7) and Holweger & Muller (A(O) ~ 8.8) model atmospheres. The low solar oxygen abundance obtained in this work is in good agreement with a recent 3D analysis of [OI], OI, OH fundamental (Delta v = 1) vibration-rotation and OH pure rotation lines (Asplund et al. 2004). The present result brings further support for a low solar metallicity, and although using a low solar abundance with OPAL opacities ruins the agreement between the calculated and the helioseismically measured depth of the solar convection zone, recent results from the OP project show that the opacities near the base of the solar convection zone are larger than previously thought, bringing further confidence to a low solar oxygen abundance.
arxiv:astro-ph/0407366
Multi-UAV systems are safety-critical, and guarantees must be made to ensure that no unsafe configurations occur. Hamilton-Jacobi (HJ) reachability is ideal for analyzing such safety-critical systems; however, its direct application is limited to small-scale systems of no more than two vehicles due to an exponentially scaling computational complexity. Previously, the sequential path planning (SPP) method, which assigns strict priorities to vehicles, was proposed; SPP allows multi-vehicle path planning to be done with a linearly scaling computational complexity. However, the previous formulation assumed that there are no disturbances and that every vehicle has perfect knowledge of higher-priority vehicles' positions. In this paper, we make SPP more practical by providing three different methods to account for disturbances in dynamics and imperfect knowledge of higher-priority vehicles' states. Each method makes different assumptions about information sharing. We demonstrate our proposed methods in simulations.
arxiv:1603.05208
Intelligent tutoring systems (ITSs) are effective in helping students learn; further research could make them even more effective. Particularly desirable is research into how students learn with these systems, how these systems best support student learning, and what learning sciences principles are key in ITSs. CTAT+TutorShop provides a full-stack integrated platform that facilitates a complete research lifecycle with ITSs, which includes using ITS data to discover learner challenges, to identify opportunities for system improvements, and to conduct experimental studies. The platform includes authoring tools to support and accelerate the development of ITSs, which provide automatic data logging in a format compatible with DataShop, an independent site that supports the analysis of ed-tech log data to study student learning. Among the many technology platforms that exist to support learning sciences research, CTAT+TutorShop may be the only one that offers researchers the possibility to author elements of ITSs, or whole ITSs, as part of designing studies. This platform has been used to develop and conduct an estimated 147 research studies which have run in a wide variety of laboratory and real-world educational settings, including K-12 and higher education, and have addressed a wide range of research questions. This paper presents five case studies of research conducted on the CTAT+TutorShop platform, and summarizes what has been accomplished and what is possible for future researchers. We reflect on the distinctive elements of this platform that have made it so effective in facilitating a wide range of ITS research.
arxiv:2502.10395
quantisation on spaces with curvature, multiple connectedness and non orientability is obtained. the geodesic length spectrum for the laplacian operator is extended to solve the schroedinger operator. homotopy fundamental group representations are used to obtain a direct sum of hilbert spaces, with a holonomy method for the non simply connected manifolds. the covering spaces of isometric and hence isospectral manifolds are used to obtain the representation of states on orientable and non orientable spaces. problems of deformations of the operators and the domains are discussed. possible applications of the geometric and topological effects in physics are mentioned.
arxiv:quant-ph/0211039
this paper introduces a non - parametric estimation algorithm designed to effectively estimate the joint distribution of model parameters with application to population pharmacokinetics. our research group has previously developed the non - parametric adaptive grid ( npag ) algorithm, which, while accurate, explores parameter space using an ad - hoc method to suggest new support points. in contrast, the non - parametric optimal design ( npod ) algorithm uses a gradient approach to suggest new support points, which reduces the amount of time spent evaluating non - relevant points and thereby the overall number of cycles required to reach convergence. in this paper, we demonstrate that the npod algorithm achieves similar solutions to npag across two datasets, while being significantly more efficient in both the number of cycles required and overall runtime. given the importance of developing robust and efficient algorithms for determining drug doses quickly in pharmacokinetics, the npod algorithm represents a valuable advancement in non - parametric modeling. further analysis is needed to determine which algorithm performs better under specific conditions.
arxiv:2502.15848
this tutorial aims to establish connections between polynomial modular multiplication over a ring to circular convolution and discrete fourier transform ( dft ). the main goal is to extend the well - known theory of dft in signal processing ( sp ) to other applications involving polynomials in a ring such as homomorphic encryption ( he ). he allows any third party to operate on the encrypted data without decrypting it in advance. since most he schemes are constructed from the ring - learning with errors ( r - lwe ) problem, efficient polynomial modular multiplication implementation becomes critical. any improvement in the execution of these building blocks would have significant consequences for the global performance of he. this lecture note describes three approaches to implementing long polynomial modular multiplication using the number theoretic transform ( ntt ) : zero - padded convolution ; convolution without zero - padding, also referred to as negative wrapped convolution ( nwc ) ; and low - complexity nwc ( lc - nwc ).
arxiv:2306.12519
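a minimal sketch of the first two approaches above, using the complex fft in place of a true number theoretic transform over a ring ( an assumption for illustration only ; the function names are ours ) :

```python
import numpy as np

def negacyclic_zero_padded(a, b):
    """multiply a * b mod (x^n + 1) via a length-2n zero-padded convolution."""
    n = len(a)
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    # full product of degree 2n-2, computed as a cyclic convolution of length 2n
    d = np.fft.ifft(np.fft.fft(a, 2 * n) * np.fft.fft(b, 2 * n)).real
    # fold: x^n = -1, so coefficient k of the product picks up -d[k + n]
    return np.round(d[:n] - d[n:2 * n]).astype(int)

def negacyclic_nwc(a, b):
    """same product via negative wrapped convolution: pre-weight by psi^i
    (psi a primitive 2n-th root of unity), do a length-n cyclic convolution,
    then unweight -- no zero-padding needed."""
    n = len(a)
    a = np.asarray(a, dtype=complex)
    b = np.asarray(b, dtype=complex)
    psi = np.exp(1j * np.pi / n)          # psi^n = -1
    w = psi ** np.arange(n)
    c = np.fft.ifft(np.fft.fft(a * w) * np.fft.fft(b * w)) / w
    return np.round(c.real).astype(int)
```

both functions agree ; the nwc variant works on transforms of length n instead of 2n, which is the efficiency gain the note discusses.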
traditional process mining considers only one single case notion and discovers and analyzes models based on this. however, a single case notion is often not a realistic assumption in practice. multiple case notions might interact and influence each other in a process. object - centric process mining introduces the techniques and concepts to handle multiple case notions. so far, such event logs have been standardized and novel process model discovery techniques were proposed. however, notions for evaluating the quality of a model are missing. these are necessary to enable future research on improving object - centric discovery and providing an objective evaluation of model quality. in this paper, we introduce a notion for the precision and fitness of an object - centric petri net with respect to an object - centric event log. we give a formal definition and accompany this with an example. furthermore, we provide an algorithm to calculate these quality measures. we discuss our precision and fitness notion based on an event log with different models. our precision and fitness notions are an appropriate way to generalize quality measures to the object - centric setting since we are able to consider multiple case notions, their dependencies and their interactions.
arxiv:2110.05375
given a morse 2 - function $ f : x ^ 4 \ to s ^ 2 $, we give minimal conditions on the fold curves and fibers so that $ x ^ 4 $ and $ f $ can be reconstructed from a certain combinatorial diagram attached to $ s ^ 2 $. additional remarks are made in other dimensions.
arxiv:1202.3487
the advancement of large language models ( llms ) has significantly impacted biomedical natural language processing ( nlp ), enhancing tasks such as named entity recognition, relation extraction, event extraction, and text classification. in this context, the deepseek series of models have shown promising potential in general nlp tasks, yet their capabilities in the biomedical domain remain underexplored. this study evaluates multiple deepseek models ( distilled - deepseek - r1 series and deepseek - llms ) across four key biomedical nlp tasks using 12 datasets, benchmarking them against state - of - the - art alternatives ( llama3 - 8b, qwen2. 5 - 7b, mistral - 7b, phi - 4 - 14b, gemma - 2 - 9b ). our results reveal that while deepseek models perform competitively in named entity recognition and text classification, challenges persist in event and relation extraction due to precision - recall trade - offs. we provide task - specific model recommendations and highlight future research directions. this evaluation underscores the strengths and limitations of deepseek models in biomedical nlp, guiding their future deployment and optimization.
arxiv:2503.00624
a family t of digraphs is a complete set of obstructions for a digraph h if for an arbitrary digraph g the existence of a homomorphism from g to h is equivalent to the non - existence of a homomorphism from any member of t to g. a digraph h is said to have tree duality if there exists a complete set of obstructions t consisting of orientations of trees. we show that if h has tree duality, then its arc graph delta h also has tree duality, and we derive a family of tree obstructions for delta h from the obstructions for h. furthermore we generalise our result to right adjoint functors on categories of relational structures. we show that these functors always preserve tree duality, as well as polynomial csps and the existence of near - unanimity functions.
arxiv:0805.2978
the super - storm of november 20, 2003 was associated with a high speed coronal mass ejection which originated in the noaa ar 10501 on november 18. this coronal mass ejection had severe terrestrial consequences leading to a geomagnetic storm with dst index of - 472 nt, the strongest of the current solar cycle. in this paper, we attempt to understand the factors that led to the coronal mass ejection on november 18. we have also studied the evolution of the photospheric magnetic field of noaa ar 10501, the source region of this coronal mass ejection. for this purpose, the mdi line - of - sight magnetograms and vector magnetograms from solar flare telescope, mitaka, obtained during november, 17 - 19, 2003 were analysed. in particular, quantitative estimates of the temporal variation in magnetic flux, energy and magnetic field gradient were obtained for the source active region. the evolution of these quantities was studied for the 3 - day period with an objective to understand the pre - flare configuration leading up to the moderate flare which was associated with the geo - effective coronal mass ejection. we also examined the chromospheric images recorded in h - alpha from udaipur solar observatory to compare the flare location with regions of different magnetic field and energy. our observations provide evidence that the flare associated with the cme occurred at a location marked by high magnetic field gradient which led to release of free energy stored in the active region.
arxiv:0812.5046
building large - scale superconducting quantum computers requires two complementary elements : scalable wiring techniques and multiplex architectures. in our previous work [ b \ ' ejanin et al., phys. rev. applied 6, 044010 ( 2016 ) ], we have introduced and characterized a truly vertical interconnect named the quantum socket. in this paper, we exercise the quantum socket using high - coherence flux - tunable xmon transmon qubits. in particular, we test potential qubit heating and one - qubit gate performance. we observe no heating effects and time - stable gate fidelities in excess of 99. 9 %. we then propose and experimentally characterize a demultiplexed gate technique based on flux pulses and a common continuous drive signal : demuxyz. we discuss demuxyz ' s working principle, show its operation, and perform quantum process tomography on a selection of one - qubit gates to confirm proper operation. we obtain fidelities around 93 % likely limited by flux - pulse imperfections. we finally discuss future solutions for wiring integration as well as improvements to the demuxyz technique.
arxiv:2211.00143
this project aims to produce the next volume of machine - generated poetry, a complex art form that can be structured and unstructured, and carries depth in the meaning between the lines. gpoet - 2 is based on fine - tuning a state - of - the - art natural language model ( i. e. gpt - 2 ) to generate limericks, typically humorous structured poems consisting of five lines with an aabba rhyming scheme. with a two - stage generation system utilizing both forward and reverse language modeling, gpoet - 2 is capable of freely generating limericks in diverse topics while following the rhyming structure without any seed phrase or a posteriori constraints. based on the automated generation process, we explore a wide variety of evaluation metrics to quantify " good poetry, " including syntactical correctness, lexical diversity, and subject continuity. finally, we present a collection of 94 categorized limericks that rank highly on the explored " good poetry " metrics to provoke human creativity.
arxiv:2205.08847
the uniformity of the decomposition law, for a family f of lie algebras which includes the exceptional lie algebras, of the tensor powers ad ^ n of their adjoint representations ad is now well - known. this paper uses it to embark on the development of a unified tensor calculus for the exceptional lie algebras. it deals explicitly with all the tensors that arise at the n = 2 stage, obtaining a large body of systematic information about their properties and identities satisfied by them. some results at the n = 3 level are obtained, including a simple derivation of the dimension and casimir eigenvalue data for all the constituents of ad ^ 3. this is vital input data for treating the set of all tensors that enter the picture at the n = 3 level, following a path already known to be viable for a _ 1. the special way in which the lie algebra d _ 4 conforms to its place in the family f alongside the exceptional lie algebras is described.
arxiv:math-ph/0212047
the nvidia gpu architecture has introduced new computing elements such as the \ textit { tensor cores }, which are special processing units dedicated to perform fast matrix - multiply - accumulate ( mma ) operations and accelerate \ textit { deep learning } applications. in this work we present the idea of using tensor cores for a different purpose such as the parallel arithmetic reduction problem, and propose a new gpu tensor - core based algorithm as well as analyze its potential performance benefits in comparison to a traditional gpu - based one. the proposed method encodes the reduction of $ n $ numbers as a set of $ m \ times m $ mma tensor - core operations ( for nvidia ' s volta architecture $ m = 16 $ ) and takes advantage of the fact that each mma operation takes just one gpu cycle. when analyzing the cost under a simplified gpu computing model, the result is that the new algorithm manages to reduce a problem of $ n $ numbers in $ t ( n ) = 5 \ log _ { m ^ 2 } ( n ) $ steps with a speedup of $ s = \ frac { 4 } { 5 } \ log _ 2 ( m ^ 2 ) $.
arxiv:1903.03640
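a minimal numpy sketch of the encoding idea above ( an illustration, not the gpu implementation : ordinary matmuls stand in for tensor - core mma operations, and the function name is ours ). multiplying an m x m data block by all - ones matrices on both sides replicates the block sum, so each round shrinks the problem by a factor of m ^ 2, matching the log _ { m ^ 2 } ( n ) step count :

```python
import numpy as np

def mma_reduce(x, m=16):
    """sum n numbers using only m x m matrix products, mimicking the
    tensor-core encoding: (ones @ v @ ones)[0, 0] is the sum of block v."""
    ones = np.ones((m, m))
    x = np.asarray(x, dtype=float)
    while x.size > 1:
        # pad with zeros so the data splits evenly into m*m blocks
        pad = (-x.size) % (m * m)
        x = np.concatenate([x, np.zeros(pad)])
        blocks = x.reshape(-1, m, m)
        # each block collapses to its total in two mma-like products
        x = np.array([(ones @ v @ ones)[0, 0] for v in blocks])
    return x[0]
```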
a weak mixed distributive law ( also called weak entwining structure ) in a 2 - category consists of a monad and a comonad, together with a 2 - cell relating them in a way which generalizes a mixed distributive law due to beck. we show that a weak mixed distributive law can be described as a compatible pair of a monad and a comonad, in 2 - categories extending, respectively, the 2 - category of comonads and the 2 - category of monads. based on this observation, we define a 2 - category whose 0 - cells are weak mixed distributive laws. in a 2 - category k which admits eilenberg - moore constructions both for monads and comonads, and in which idempotent 2 - cells split, we construct a fully faithful 2 - functor from this 2 - category of weak mixed distributive laws to k ^ { 2 x 2 }.
arxiv:1009.3454
[ abridged ] the origin of the diffuse hard x - ray ( 2 - 10 kev ) emission from starburst galaxies is a long - standing problem. we suggest that synchrotron emission of 10 - 100 tev electrons and positrons ( e + / - ) can contribute to this emission, because starbursts have strong magnetic fields. we consider three sources of e + / - at these energies : ( 1 ) primary electrons directly accelerated by supernova remnants ; ( 2 ) pionic secondary e + / - created by inelastic collisions between cr protons and gas nuclei in the dense isms of starbursts ; ( 3 ) pair e + / - produced between the interactions between 10 - 100 tev gamma - rays and the intense far - infrared ( fir ) radiation fields of starbursts. we create one - zone steady - state models of the cr population in the galactic center ( r < = 112 pc ), ngc 253, m82, and arp 220 ' s nuclei, assuming a power law injection spectrum for electrons and protons. we compare these models to extant radio and gev and tev gamma - ray data for these starbursts, and calculate the diffuse synchrotron x - ray and inverse compton ( ic ) luminosities of these starbursts. if the primary electron spectrum extends to ~ pev energies and has a proton / electron injection ratio similar to the galactic value, we find that synchrotron contributes 2 - 20 % of their unresolved, diffuse hard x - ray emission. inverse compton emission is likewise a minority of the unresolved x - ray emission in these starbursts, from 0. 1 % in the galactic center to 10 % in arp 220 ' s nuclei. we also model generic starbursts, including submillimeter galaxies, in the context of the fir - - x - ray relation, finding that synchrotron contributes up to 2 % in the densest starbursts with our fiducial assumptions. neutrino and tev gamma - ray data can further constrain the synchrotron x - ray emission of starbursts. our models do not constrain hard synchrotron x - ray emission from any additional hard components of primary e + / - from sources like pulsars in starbursts.
arxiv:1010.3030
it has been argued that a certain large $ n $ matrix model may provide a non - perturbative definition of $ m $ - theory. this model is the truncation to $ 0 + 1 $ dimensions of ten - dimensional supersymmetric yang - mills theory. it is crucial to this identification that terms with four derivatives in the effective action for the quantum mechanics should not be renormalized. we offer a perturbative proof of this result.
arxiv:hep-th/9610246
we propose quasiperiodic heterostructures associated with the tessellations of the unit disk by regular hyperbolic triangles. we present explicit construction rules and explore some of the properties exhibited by these geometric - based systems.
arxiv:0807.2619
we describe electrical transport in ideal single - layer graphene at zero applied bias. there is a crossover from collisionless transport at frequencies larger than k _ b t / hbar ( t is the temperature ) to collision - dominated transport at lower frequencies. the d. c. conductivity is computed by the solution of a quantum boltzmann equation. due to a logarithmic singularity in the collinear scattering amplitude ( a consequence of relativistic dispersion in two dimensions ) quasi - particles and quasi - holes moving in the same direction tend to an effective equilibrium distribution whose parameters depend on the direction of motion. this property allows us to find the non - equilibrium distribution functions and the quantum critical conductivity exactly to leading order in 1 / | ln ( alpha ) | where alpha is the coupling constant characterizing the coulomb interactions.
arxiv:0802.4289
we examined the transverse momentum spectra of various identified particles, across different multiplicity classes in proton - proton collisions at a center - of - mass energy of $ \ sqrt { s } $ = 7 tev. utilizing the tsallis and hagedorn models, parameters relevant to the bulk properties of nuclear matter were extracted. both models exhibit good agreement with experimental data. in our analyses, we observed a consistent decrease in the effective temperature for the tsallis model and the kinetic or thermal freeze - out temperature for the hagedorn model, as we transition from higher multiplicity ( class - i ) to lower multiplicity ( class - x ). additionally, the transverse flow velocity experiences a decline from class - i to class - x. the normalization constant, which represents the multiplicity of produced particles, is observed to decrease as we move towards higher multiplicity classes. while the effective and kinetic freeze - out temperatures, as well as the transverse flow velocity, show a mild dependency on multiplicity for lighter particles, this relationship becomes more pronounced for heavier particles. various particle species are observed to undergo decoupling from the fireball at distinct temperatures : lighter particles exhibit lower temperatures, while heavier ones show higher temperatures, thereby supporting the concept of multiple freeze - out scenarios. moreover, we identified a positive correlation between the kinetic freeze - out temperature and transverse flow velocity, a scenario where particles experience stronger collective motion at higher freeze - out temperature. the reason for this positive correlation is that as the multiplicity increases, more energy is transferred into the system. this heightened energy causes greater excitation and pressure within the system, leading to a rapid expansion.
arxiv:2402.08535
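for illustration, one common tsallis parametrization of the transverse momentum spectrum can be evaluated as follows ( an assumption on our part — the analysis above may use a variant with a different normalization ). in the limit of large non - extensivity parameter n it reduces to the boltzmann exponential, which is the usual consistency check :

```python
import numpy as np

def tsallis_spectrum(pt, mass, T, n, norm=1.0):
    """one common tsallis form of the invariant yield (illustrative, not
    necessarily the exact parametrization used in the paper):
    norm * pt * (1 + (mt - mass)/(n*T))**(-n), with mt the transverse mass."""
    mt = np.sqrt(pt**2 + mass**2)
    return norm * pt * (1 + (mt - mass) / (n * T)) ** (-n)
```

as n grows, ( 1 + x / n ) ^ { - n } approaches exp ( - x ), recovering the thermal ( boltzmann ) spectrum with effective temperature t.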
in the current framework, the standard parametrization of our universe is the so - called lambda cold dark matter ( { \ lambda } cdm ) model. recently, risaliti & lusso ( 2019 ) have shown a ~ 4 { \ sigma } tension with the { \ lambda } cdm model through a model - independent parametrization of a hubble diagram of supernovae ia ( sne ia ) from the jla survey and quasars. model - independent approaches and independent samples over a wide redshift range are key to testing this tension and any possible systematics. here we present an analysis of a combined hubble diagram of sne ia, quasars, and gamma - ray bursts ( grbs ) to check the agreement of the quasar and grb cosmological parameters at high redshifts ( z > 2 ) and to test the concordance flat { \ lambda } cdm model with improved statistical accuracy. we build a hubble diagram with sne ia from the pantheon sample ( scolnic et al. 2018 ), quasars from the risaliti & lusso ( 2019 ) sample, and grbs from the demianski et al. ( 2017a ) sample, where quasars are standardised through the observed non - linear relation between their ultraviolet and x - ray emission and grbs through the correlation between the spectral peak energy and the isotropic - equivalent radiated energy ( the so - called " amati relation " ). we fit the data with cosmographic models consisting of a fourth - order logarithmic polynomial and a fifth - order linear polynomial, and compare the results with the expectations from a flat { \ lambda } cdm model. we confirm the tension between the best fit cosmographic parameters and the { \ lambda } cdm model at ~ 4 { \ sigma } with sne ia and quasars, at ~ 2 { \ sigma } with sne ia and grbs, and at > 4 { \ sigma } with the whole sne ia + quasars + grb data set. the completely independent high - redshift hubble diagrams of quasars and grbs are fully consistent with each other, strongly suggesting that the deviation from the standard model is not due to unknown systematic effects but to new physics.
arxiv:1907.07692
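a sketch of the model - independent cosmographic step described above : expanding the distance as a polynomial in x = log10 ( 1 + z ) and fitting its coefficients. this is an illustration only — the actual analysis fits calibrated distance moduli with full covariances and compares against the flat lambda - cdm expectation :

```python
import numpy as np

def fit_log_polynomial(z, dl, order=4):
    """fit a luminosity-distance-like quantity dl as an order-n polynomial
    in x = log10(1 + z); returns coefficients from lowest to highest order."""
    x = np.log10(1 + np.asarray(z, dtype=float))
    return np.polynomial.polynomial.polyfit(x, np.asarray(dl, dtype=float), order)
```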
a key challenge of dialog systems research is to effectively and efficiently adapt to new domains. a scalable paradigm for adaptation necessitates the development of generalizable models that perform well in few - shot settings. in this paper, we focus on the intent classification problem which aims to identify user intents given utterances addressed to the dialog system. we propose two approaches for improving the generalizability of utterance classification models : ( 1 ) observers and ( 2 ) example - driven training. prior work has shown that bert - like models tend to attribute a significant amount of attention to the [ cls ] token, which we hypothesize results in diluted representations. observers are tokens that are not attended to, and are an alternative to the [ cls ] token as a semantic representation of utterances. example - driven training learns to classify utterances by comparing to examples, thereby using the underlying encoder as a sentence similarity model. these methods are complementary ; improving the representation through observers allows the example - driven model to better measure sentence similarities. when combined, the proposed methods attain state - of - the - art results on three intent prediction datasets ( \ textsc { banking77 }, \ textsc { clinc150 }, \ textsc { hwu64 } ) in both the full data and few - shot ( 10 examples per intent ) settings. furthermore, we demonstrate that the proposed approach can transfer to new intents and across datasets without any additional training.
arxiv:2010.08684
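a toy sketch of the example - driven idea described above ( our own illustration ; the actual model trains a bert - like encoder end - to - end rather than using a fixed encoder ) : classify an utterance by its similarity to encoded examples of each intent :

```python
import numpy as np

def example_driven_predict(encode, utterance, examples):
    """illustrative nearest-example intent classification: `encode` maps a
    string to a vector; `examples` maps intent -> list of example utterances.
    returns the intent of the most cosine-similar example."""
    u = encode(utterance)
    best_intent, best_sim = None, -np.inf
    for intent, texts in examples.items():
        for t in texts:
            e = encode(t)
            sim = (u @ e) / (np.linalg.norm(u) * np.linalg.norm(e))
            if sim > best_sim:
                best_intent, best_sim = intent, sim
    return best_intent
```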
existing novel object 6d pose estimation methods typically rely on cad models or dense reference views, which are both difficult to acquire. using only a single reference view is more scalable, but challenging due to large pose discrepancies and limited geometric and spatial information. to address these issues, we propose a single - reference - based novel object 6d ( sinref - 6d ) pose estimation method. our key idea is to iteratively establish point - wise alignment in the camera coordinate system based on state space models ( ssms ). specifically, iterative camera - space point - wise alignment can effectively handle large pose discrepancies, while our proposed rgb and points ssms can capture long - range dependencies and spatial information from a single view, offering linear complexity and superior spatial modeling capability. once pre - trained on synthetic data, sinref - 6d can estimate the 6d pose of a novel object using only a single reference view, without requiring retraining or a cad model. extensive experiments on six popular datasets and real - world robotic scenes demonstrate that we achieve on - par performance with cad - based and dense reference view - based methods, despite operating in the more challenging single reference setting. code will be released at https : / / github. com / cnjianliu / sinref - 6d.
arxiv:2503.05578
we present a classification of asymptotically flat, supersymmetric black hole and soliton solutions of five - dimensional minimal supergravity that admit a single axial symmetry which ` commutes ' with the supersymmetry. this includes the first examples of five - dimensional black hole solutions with exactly one axial killing field that are smooth on and outside the horizon. the solutions have similar properties to the previously studied class with biaxial symmetry, in particular, they have a gibbons - hawking base and the harmonic functions must be of multi - centred type with the centres corresponding to the connected components of the horizon or fixed points of the axial symmetry. we find a large moduli space of black hole and soliton spacetimes with non - contractible 2 - cycles and the horizon topologies are $ s ^ 3 $, $ s ^ 1 \ times s ^ 2 $ and lens spaces $ l ( p, 1 ) $.
arxiv:2206.11782
easily computable lower and upper bounds are found for the sum of catalan numbers. the lower bound is proven to be tighter than the upper bound, which was previously stated only as an asymptotic bound. the average of these bounds is proven to be an upper bound as well, and empirically it is shown that the average improves on the previous upper bound by a factor greater than ( 9 / 2 ).
arxiv:1601.04223
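the bound formulas themselves are not quoted in the abstract, but the quantity they control — the partial sums of catalan numbers — is easy to compute exactly via the standard recurrence c _ { k + 1 } = c _ k * 2 ( 2k + 1 ) / ( k + 2 ) :

```python
def catalan_partial_sums(n):
    """exact values of s_k = c_0 + c_1 + ... + c_k for k = 0..n, where c_k
    is the k-th catalan number (1, 1, 2, 5, 14, 42, ...)."""
    c, total, sums = 1, 0, []
    for k in range(n + 1):
        total += c
        sums.append(total)
        c = c * 2 * (2 * k + 1) // (k + 2)   # exact integer recurrence
    return sums
```

these exact sums are what any proposed lower / upper bound pair can be checked against numerically.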
we show that superfluidity can be used to prevent thermalisation in a nonlinear floquet system. generically, periodic driving boils an interacting system to a featureless infinite temperature state. fast driving is a known strategy to postpone floquet heating with a large but always finite boiling time. in contrast, using a nonlinear periodically - driven system on a lattice, we show the existence of a continuous class of initial states which do not thermalise at all. this absence of thermalisation is associated to the existence and persistence of a stable superflow motion.
arxiv:2207.06951
the equitable coloring problem is a variant of the graph coloring problem where the sizes of two arbitrary color classes differ in at most one unit. this additional condition, called equity constraints, arises naturally in several applications. due to the hardness of the problem, current exact algorithms cannot solve large - sized instances. such instances must be addressed only via heuristic methods. in this paper we present a tabu search heuristic for the equitable coloring problem. this algorithm is an adaptation of the dynamic tabucol version of galinier and hao. in order to satisfy equity constraints, new local search criteria are given. computational experiments are carried out in order to find the best combination of parameters involved in the dynamic tenure of the heuristic. finally, we show the good performance of our heuristic over known benchmark instances.
arxiv:1405.7020
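the equity constraint defined above is straightforward to check ; a minimal sketch ( our own helper for illustration, not part of the tabu search code ) :

```python
from collections import Counter

def is_equitable_coloring(edges, coloring):
    """check the definition from the abstract: a proper coloring whose color
    classes differ in size by at most one unit."""
    # properness: no edge joins two vertices of the same color
    if any(coloring[u] == coloring[v] for u, v in edges):
        return False
    sizes = Counter(coloring.values()).values()
    return max(sizes) - min(sizes) <= 1
```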
anticancer peptides ( acps ) are a group of peptides that exhibit antineoplastic properties. the utilization of acps in cancer prevention can present a viable substitute for conventional cancer therapeutics, as they possess a higher degree of selectivity and safety. recent scientific advancements have generated interest in peptide - based therapies, which offer the advantage of efficiently treating intended cells without negatively impacting normal cells. however, as the number of peptide sequences continues to increase rapidly, developing a reliable and precise prediction model becomes a challenging task. in this work, our motivation is to advance an efficient model for categorizing anticancer peptides employing the consolidation of word embedding and deep learning models. first, word2vec and fasttext are evaluated as word embedding techniques for the purpose of extracting peptide sequences. then, the outputs of the word embedding models are fed into the deep learning approaches cnn, lstm, and bilstm. to demonstrate the contribution of the proposed framework, extensive experiments are carried out on widely - used datasets in the literature, acps250 and independent. experiment results show that the usage of the proposed model enhances classification accuracy when compared to the state - of - the - art studies. the proposed combination, fasttext + bilstm, exhibits 92. 50 % accuracy for the acps250 dataset, and 96. 15 % accuracy for the independent dataset, thereby establishing a new state of the art.
arxiv:2309.12058
if the universe has more than 4 - dimensions, the tev scale gravity theories predict formation of microscopic black holes due to interaction of ultra high energy neutrinos coming from some extragalactic origin with the nucleons present in the earth ' s atmosphere. the decay of these black holes can generate high multiplicity events which can be detected through neutrino telescopes. ultra high energy neutrinos can also produce events without the formation of black holes which can be distinguished from the black hole events depending on their topological structure. in this work we study the effects of non - standard interaction on the production of these shower events. we find that new physics has inconsequential impact on the number of events produced through the generation of black holes. for events produced without the formation of black holes, new physics can only provide a marginal deviation. therefore a large enhancement in the number of shower events over the standard model prediction can provide unambiguous signatures of tev scale gravity in the form of microscopic black hole production.
arxiv:2202.02775
we apply ideas of dijkgraaf and witten on three - dimensional topological quantum field theory to arithmetic curves, that is, the spectra of rings of integers in algebraic number fields. in the first three sections, we define classical chern - simons actions on spaces of galois representations. in the subsequent sections, we give formulas for computation in a small class of cases and point towards some arithmetic applications.
arxiv:1609.03012
the modulation of an optical lattice potential that breaks time - reversal symmetry enables the realization of complex tunneling amplitudes in the corresponding tight - binding model. for a superfluid fermi gas in a triangular lattice potential with complex tunnelings the pairing function acquires a complex phase, so the frustrated magnetism of fermions can be realized. a bose - fermi mixture of bosonic molecules and unbound fermions in the lattice also shows interesting behavior. due to boson - fermion coupling, the fermions become slaved by the bosons and the corresponding pairing function takes the complex phase determined by bosons. in the presence of bosons the fermi system can reveal both gap and gapless superfluidity.
arxiv:1112.0972
a review of the main phenomena related with the linear optical properties of isolated and supported metal nanoparticles is presented. the extinction, absorption and scattering efficiencies are calculated using the mie theory and the discrete dipole approximation. the origin of the optical spectra is discussed in terms of the size, shape and environment for each nanoparticle. the main optical features of each nanoparticle are identified, showing the tremendous potential of optical spectroscopy as a tool of characterization.
arxiv:cond-mat/0411570
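in the small - particle ( dipole / quasistatic ) limit of mie theory, the textbook extinction formula already captures the plasmon resonance at epsilon ≈ - 2 epsilon _ m that shapes these spectra. a sketch ( a standard approximation for illustration, not the full mie / dda computation of the review ) :

```python
import numpy as np

def quasistatic_extinction(radius, eps_particle, eps_medium, wavelength):
    """dipole-limit extinction cross-section of a small sphere:
    c_ext ~ k * Im(alpha), with the quasistatic polarizability
    alpha = 4*pi*r^3 * (eps - eps_m) / (eps + 2*eps_m).
    resonant when Re(eps) is close to -2*eps_m (the frohlich condition)."""
    k = 2 * np.pi * np.sqrt(eps_medium) / wavelength
    alpha = 4 * np.pi * radius**3 * (eps_particle - eps_medium) / (eps_particle + 2 * eps_medium)
    return k * np.imag(alpha)
```

the resonance condition makes the extinction sharply peaked, which is why the spectra are so sensitive to size, shape and environment.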
using results of laboratory experiments, direct numerical simulations, geomagnetic and solar observations, it is shown that high moments of helicity distribution can dominate power spectra of the magnetic field generated by the magnetohydrodynamic ( mhd ) dynamo in swirling turbulence even for the cases with zero global helicity. the notion of helical distributed chaos has been used for this purpose.
arxiv:2112.14213
this paper presents an approach for grounding phrases in images which jointly learns multiple text - conditioned embeddings in a single end - to - end model. in order to differentiate text phrases into semantically distinct subspaces, we propose a concept weight branch that automatically assigns phrases to embeddings, whereas prior works predefine such assignments. our proposed solution simplifies the representation requirements for individual embeddings and allows the underrepresented concepts to take advantage of the shared representations before feeding them into concept - specific layers. comprehensive experiments verify the effectiveness of our approach across three phrase grounding datasets, flickr30k entities, referit game, and visual genome, where we obtain a ( resp. ) 4 %, 3 %, and 4 % improvement in grounding performance over a strong region - phrase embedding baseline.
arxiv:1711.08389
we study the metric corresponding to a three - dimensional coset space $ so ( 4 ) / so ( 3 ) $ in the lattice setting. with the use of three integers $ n _ 1, n _ 2 $, and $ n _ 3 $, and a length scale, $ l _ { \ mu } $, the continuous metric is transformed into a discrete space. the numerical outcomes are compared with the continuous ones. the singularity of the black hole is explored and different domains are studied.
arxiv:2311.00406
let $ ( h _ 0, h _ 1, \ ldots, h _ s ) $ with $ h _ s \ ne0 $ be the $ h $ - vector of the broken circuit complex of a series - parallel network $ m $. let $ g $ be a graph whose cycle matroid is $ m $. we give a formula for the difference $ h _ { s - 1 } - h _ 1 $ in terms of an ear decomposition of $ g $. a number of applications of this formula are provided, including several bounds for $ h _ { s - 1 } - h _ 1 $, a characterization of outerplanar graphs, and a solution to a conjecture on $ a $ - graphs posed by fenton. we also prove that $ h _ { s - 2 } \ geq h _ 2 $ when $ s \ geq 4 $.
arxiv:1404.1728
we have designed and built a large - throughput dual channel photometer, diabolo. this photometer is dedicated to the observation of millimetre continuum diffuse sources, and in particular, of the sunyaev - zel ' dovich effect and of anisotropies of the 3k background. we describe the optical layout and filtering system of the instrument, which uses two bolometric detectors for simultaneous observations in two frequency channels at 1. 2 and 2. 1 mm. the bolometers are cooled to a working temperature of 0. 1 k provided by a compact dilution cryostat. the photometric and angular responses of the instrument are measured in the laboratory. first astronomical light was detected in march 1995 at the focus of the new millimetre and infrared testa grigia observatory ( mito ) telescope. the established sensitivity of the system is 7 mk_rj s^{1/2}. for a typical map of at least 10 beams, with one hour of integration per beam, one can achieve rms values of y_sz ~ 7 x 10^-5 and a 3k background anisotropy delta t / t ~ 7 x 10^-5, in winter conditions. we also report on a novel bolometer ac readout circuit which allows, for the first time, total power measurements on the sky. this technique alleviates ( but does not forbid ) the use of chopping with a secondary mirror. this technique and the dilution fridge concept will be used in future scan - modulated space instruments like the esa planck mission project.
arxiv:astro-ph/9912385
in all finite coxeter types but $ i _ 2 ( 12 ) $, $ i _ 2 ( 18 ) $ and $ i _ 2 ( 30 ) $, we classify simple transitive $ 2 $ - representations for the quotient of the $ 2 $ - category of soergel bimodules over the coinvariant algebra which is associated to the two - sided cell that is the closest one to the two - sided cell containing the identity element. it turns out that, in most of the cases, simple transitive $ 2 $ - representations are exhausted by cell $ 2 $ - representations. however, in coxeter types $ i _ 2 ( 2k ) $, where $ k \ geq 3 $, there exist simple transitive $ 2 $ - representations which are not equivalent to cell $ 2 $ - representations.
arxiv:1605.01373
the five - quark ( 5q ) picture of lambda ( 1405 ) is studied using quenched lattice qcd with an exotic 5q operator of n \ bar { k } type. to discriminate mere n \ bar { k } and \ sigma \ pi scattering states, a hybrid boundary condition ( hbc ), a flavor - dependent boundary condition, is imposed on the quark fields along the spatial directions. a 5q mass m _ { 5q } \ simeq 1. 89 gev is obtained after the chiral extrapolation to the physical quark mass region, which is too heavy to be identified with lambda ( 1405 ). then, lambda ( 1405 ) seems neither a pure 3q state nor a pure 5q state, and therefore we present an interesting possibility that lambda ( 1405 ) is a mixed state of 3q and 5q states.
arxiv:0707.0079
proctor ' s work on staircase plane partitions yields an enumeration of lozenge tilings of a halved hexagon on the triangular lattice. rohatgi later extended this tiling enumeration to a halved hexagon with a triangle cut off from the boundary. in the previous paper, the author proved a common generalization of proctor ' s and rohatgi ' s results by enumerating lozenge tilings of a halved hexagon in the case an array of an arbitrary number of triangles has been removed from a non - staircase side. in this paper, we consider the other case when the array of triangles has been removed from the staircase side of the halved hexagon. our result also implies an explicit formula for the number of tilings of a hexagon with an array of triangles missing on the symmetry axis.
arxiv:1709.02071
in a model independent way, scalar and pseudoscalar neutral higgs boson production with a photon in the tree process $ \ mu ^ + \ mu ^ - \ to h ^ 0 \ gamma $ is considered. for the standard model and minimal supersymmetric standard model cases numerical estimates are obtained. the model independent flavour changing higgs boson production in the tree processes $ e ^ + e ^ -, \ mu ^ + e ^ - \ to h ^ 0 _ { fc } \ gamma $ is also considered.
arxiv:hep-ph/9709501
the tendencies to phase - separation and stripe formation of the t - j model on planes and four - leg ladders have been here reexamined including hole hopping terms t ', t ' ' beyond nearest - neighbor sites. the motivation for this study is the growing evidence that such terms are needed for a quantitative description of the cuprates. using a variety of computational techniques it is concluded that the stripe tendencies considerably weaken when experimentally realistic t ' < 0, t ' ' > 0 for hole - doped cuprates are considered. however, a small t ' > 0 actually enhances the stripe formation.
arxiv:cond-mat/9809411
our analysis is aimed at characterizing the properties of the integrated spectrum of active galactic nuclei ( agns ) such as the ubiquity of the fe k { \ alpha } emission in agns and the dependence of the spectral parameters on the x - ray luminosity and redshift. we selected 2646 point sources from the 2xmm catalogue at high galactic latitude ( | bii | > 25 degrees ) and with the sum of epic - pn and epic - mos 0. 2 - 12 kev counts greater than 1000. redshifts were obtained for 916 sources from the ned. the final sample consists of 507 agn. individual source spectra have been summed in the observed frame to compute the integrated spectra in different redshift and luminosity bins over the range 0 < z < 5. detailed analysis of these spectra has been performed. we find that the narrow fe k { \ alpha } line at 6. 4 kev is significantly detected up to z = 1. the line equivalent width decreases with increasing x - ray luminosity in the 2 - 10 kev band ( " it effect " ). the anti - correlation is characterized by the relation log ( ewfe ) = ( 1. 66 + / - 0. 09 ) + ( - 0. 43 + / - 0. 07 ) log ( lx, 44 ), where ewfe is the rest frame equivalent width of the neutral iron k { \ alpha } line in ev and lx, 44 is the 2 - 10 kev x - ray luminosity in units of 10 ^ { 44 } erg s ^ { - 1 }. the equivalent width is nearly independent of redshift up to z ~ 0. 8 with an average value of 101 + / - 40 ( rms dispersion ) ev in the luminosity range 43. 5 < = loglx < = 44. 5. our analysis also confirmed the hardening of the spectral indices at low luminosities implying a dependence of obscuration on luminosity. we confirm that the neutral narrow fe k { \ alpha } line is an almost ubiquitous feature of agns. we find compelling evidence for the " it effect " over a redshift interval larger than probed in any previous study. we detect no evolution of the average rest frame equivalent width of the fe k { \ alpha } line with redshift.
arxiv:1005.0289
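the quoted anti - correlation can be evaluated directly; a minimal sketch (the function name and return units are ours, the best - fit coefficients are those stated above):

```python
import math

def ew_fe(lx_erg_s):
    """rest-frame fe k-alpha equivalent width (ev) from the best-fit
    anti-correlation log(ewfe) = 1.66 - 0.43 * log(lx / 1e44 erg/s)."""
    lx44 = lx_erg_s / 1e44
    return 10 ** (1.66 - 0.43 * math.log10(lx44))
```

for lx = 10^44 erg/s this gives 10^1.66 ev (~46 ev), and the predicted equivalent width decreases by a factor 10^0.43 per decade of luminosity.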
in the paper we construct a fully convolutional gan model, locogan, whose latent space is given by noise - like images of possibly different resolutions. the learning is local, i. e. we process not the whole noise - like image, but sub - images of a fixed size. as a consequence locogan can produce images of arbitrary dimensions, e. g. for the lsun bedroom data set. another advantage of our approach comes from the fact that we use position channels, which allow the generation of fully periodic ( e. g. cylindrical panoramic images ) or almost periodic, " infinitely long " images ( e. g. wall - papers ).
arxiv:2002.07897
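the abstract does not spell out the position - channel encoding; the sketch below shows one standard way such channels can be made fully periodic along one axis (a sin / cos angular encoding), which is an assumption for illustration only, not necessarily the paper's exact construction.

```python
import numpy as np

def position_channels(height, width, periodic=True):
    """build coordinate channels to append to the noise-like input.
    with periodic=true the horizontal coordinate is encoded as
    (sin, cos) of an angle, so column 0 and column `width` coincide,
    allowing seamless cylindrical / wall-paper-like generation."""
    ys = np.broadcast_to(np.linspace(0.0, 1.0, height)[:, None], (height, width))
    if periodic:
        theta = 2 * np.pi * np.arange(width) / width
        chans = [ys,
                 np.broadcast_to(np.sin(theta)[None, :], (height, width)),
                 np.broadcast_to(np.cos(theta)[None, :], (height, width))]
    else:
        xs = np.broadcast_to(np.linspace(0.0, 1.0, width)[None, :], (height, width))
        chans = [ys, xs]
    return np.stack(chans, axis=0)

pc = position_channels(4, 8)
```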
we study limited strategic leadership. a collection of subsets covering the leader ' s action space determines her commitment opportunities. we characterize the outcomes resulting from all possible commitment structures of this kind. if the commitment structure is an interval partition, then the leader ' s payoff is bounded by her stackelberg and cournot payoffs. under general commitment structures the leader may obtain a payoff that is less than her lowest cournot payoff. we apply our results to a textbook duopoly model and elicit the commitment structures leading to consumer - and producer - optimal outcomes.
arxiv:2205.05546
extending the functionality of the transformer model to accommodate longer sequence lengths has become a critical challenge. this extension is crucial not only for improving tasks such as language translation and long - context processing but also for enabling novel applications like chatbots, code generation, and multimedia content creation. the primary obstacle is the self - attention mechanism, which scales quadratically with sequence length in terms of computation time and memory requirements. longlora proposed shifted sparse attention ( s \ ( ^ 2 \ ) - attn ), effectively enabling context extension and leading to non - trivial computation savings with similar performance to fine - tuning with vanilla attention. however, longlora is still not as efficient as vanilla attention, reaching only 39 \ % of the perplexity improvement compared to full attention. this inefficiency is due to the cyclic shift applied within different attention head patterns, causing either chaos in the attention head structure or unnecessary information exchange between token groups. to address these issues, we propose \ textbf { sinklora }, which features better work partitioning. specifically, ( 1 ) we developed sf - attn with a segmentation and reassembly algorithm to proportionally return cyclically shifted groups of attention heads to their un - shifted state together with global attention of " sink attention tokens ", achieving 92 \ % of the perplexity improvement compared to full attention after fine tuning, and ( 2 ) applied a sota kv cache compression algorithm h $ _ 2 $ o to accelerate inference. furthermore, we conducted supervised fine - tuning with sinklora using a self collected longalpaca - plus dataset. all our code, models, datasets, and demos are available at \ url { https : / / github. com / dexter - gt - 86 / sinklora }.
arxiv:2406.05678
using the chandra / acis - i detector, we have identified an x - ray source ( cxou 132619. 7 - 472910. 8 ) in the globular cluster ngc 5139 with a thermal spectrum identical to that observed from transiently accreting neutron stars in quiescence. the absence of intensity variability on timescales as short as 4 seconds ( < 25 % rms variability ) and as long as 5 years ( < 50 % variability ) supports the identification of this source as a neutron star, most likely maintained at a high effective temperature ( approximately 1e6 k ) by transient accretion from a binary companion. the ability to spectrally identify quiescent neutron stars in globular clusters ( where the distance and interstellar column densities are known ) opens up new opportunities for precision neutron star radius measurements.
arxiv:astro-ph/0105405
falling continues to be a significant risk factor for older adults and other mobility limited individuals. monitoring and maintaining clear, tripping hazard free pathways in living spaces is invaluable in helping people live independently and safely in their home. this paper proposes and demonstrates a microsoft hololens - based technology for monitoring the open walking areas in living spaces. the system is based on the 3d mesh produced by the hololens augmented reality device. several algorithms were implemented and evaluated to handle the noisy raw mesh and help identify all possible floor spaces. the resulting 3d model of the home was then processed to find united states occupational safety and health administration ( osha ) approved pathways through the homes. the long term goal of these technologies is to monitor clutter and clear walkways in living spaces over time. the information they generate shall be provided to the residents and their caregivers during environmental home assessments. it will inform them about how well the home is being maintained so that proactive interventions may be taken before a fall occurs.
arxiv:2209.12393
we describe an experimental protocol for introducing spin - dependent lattice structure in a cold atomic fermi gas using lasers. it can be used to realize hubbard models whose hopping parameters depend on spin and whose interaction strength can be controlled with an external magnetic field. we suggest that exotic superfluidities will arise in this framework. an especially interesting possibility is a class of states that support coexisting superfluid and normal components, even at zero temperature. the quantity of normal component varies with external parameters. we discuss some aspects of the quantum phase transition that arises at the point where it vanishes.
arxiv:cond-mat/0404478
we give branching formulas from $ so ( 7, \ mathbb { c } ) $ to $ \ mathfrak { g } _ 2 $ for generalized verma modules attached to $ \ mathfrak { g } _ 2 $ - compatible parabolic subalgebras of $ so ( 7, \ mathbb { c } ) $, and branching formulas from $ \ mathfrak { g } _ 2 $ to $ sl ( 3, \ mathbb { c } ) $ for generalized verma modules attached to $ sl ( 3, \ mathbb { c } ) $ - compatible parabolic subalgebras of $ \ mathfrak { g } _ 2 $ respectively, under some assumptions on the parameters of generalized verma modules.
arxiv:1310.5646
maxwell ' s demons work by rectifying thermal fluctuations. they are not expected to function at macroscopic scales where fluctuations become negligible and dynamics become deterministic. we propose an electronic implementation of an autonomous maxwell ' s demon that indeed stops working in the regular macroscopic limit as the dynamics becomes deterministic. however, we find that if the power supplied to the demon is scaled up appropriately, the deterministic limit is avoided and the demon continues to work. the price to pay is a decreasing thermodynamic efficiency. our work suggests that novel strategies may be found in nonequilibrium settings to bring to the macroscale non - trivial effects so far only observed at microscopic scales.
arxiv:2204.09466
experimental information on the trilinear higgs boson self - coupling $ \ kappa _ 3 $ and the quartic self - coupling $ \ kappa _ 4 $ will be crucial for gaining insight into the shape of the higgs potential and the nature of the electroweak phase transition. while higgs pair production processes provide access to $ \ kappa _ 3 $, triple higgs production processes, despite their small cross sections, will provide valuable complementary information on $ \ kappa _ 3 $ and first experimental constraints on $ \ kappa _ 4 $. we investigate triple higgs boson production at the hl - lhc, employing efficient graph neural network methodologies to maximise the statistical yield. we show that it will be possible to establish bounds on the variation of both couplings from the hl - lhc analyses that significantly go beyond the constraints from perturbative unitarity. we also discuss the prospects for the analysis of triple higgs production at future high - energy lepton colliders operating at the tev scale.
arxiv:2312.04646
we introduce a model to design reflectors that take into account the inverse square law for radiation. we prove existence of solutions, both in the near and far field cases, when the input and output energies are prescribed.
arxiv:1305.7024
a fundamental problem of cosmic ray ( cr ) physics is the determination of the average properties of galactic crs outside the solar system. starting from cos - b data in the 1980 ' s, gamma - ray observations of molecular clouds in the gould belt above the galactic plane have been used to deduce the galactic cr energy spectrum. we reconsider this problem in view of the improved precision of observational data which in turn require a more precise treatment of photon production in proton - proton scatterings. we show that the spectral shape $ dn / dp \ propto p ^ { - 2. 85 } $ of cr protons as determined by the pamela collaboration in the energy range 80 gev < pc < 230 gev is consistent with the photon spectra from molecular clouds observed with fermi - lat down to photon energies e \ sim 1 - 2 gev. adding a break of the cr flux at 3 gev, caused by a corresponding change of the diffusion coefficient, improves further the agreement in the energy range 0. 2 - 3 gev.
arxiv:1206.4705
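the spectral shape with a break at 3 gev can be written as a broken power law; only the high - energy index p^{-2.85} and the break position come from the text above, while the below - break slope and normalisation are illustrative placeholders.

```python
def proton_flux(p_gev, norm=1.0, p_break=3.0, gamma_low=2.0, gamma_high=2.85):
    """broken power law dn/dp for cr protons: index gamma_high above the
    break (as measured by pamela), gamma_low below it (placeholder value),
    matched so the flux is continuous at p_break."""
    if p_gev < p_break:
        return norm * p_gev ** (-gamma_low)
    # continuity factor makes both branches agree at p_break
    return norm * p_break ** (gamma_high - gamma_low) * p_gev ** (-gamma_high)
```

above the break the flux ratio between two momenta is fixed by the pamela index alone, e.g. proton_flux(100)/proton_flux(10) = 10^{-2.85}.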
detection of a signal hidden by noise within a time series is an important problem in many astronomical searches, i. e. for light curves containing the contributions of periodic / semi - periodic components due to rotating objects and all other astrophysical time - dependent phenomena. one of the most popular tools for use in such studies is the " periodogram ", whose use in an astronomical context is often not trivial. the " optimal " statistical properties of the periodogram are lost in the case of irregular sampling of signals, which is a common situation in astronomical experiments. parts of these properties are recovered by the " lomb - scargle " ( ls ) technique, but at the price of theoretical difficulties, that can make its use unclear, and of algorithms that require the development of dedicated software if a fast implementation is necessary. such problems would be irrelevant if the ls periodogram could be used to significantly improve the results obtained by approximated but simpler techniques. in this work we show that in many astronomical applications simpler techniques provide results similar to those obtainable with the ls periodogram. the meaning of the " nyquist frequency " is also discussed in the case of irregular sampling.
arxiv:1301.4826
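a minimal numpy sketch of the simpler alternative alluded to above: the classical ( schuster ) periodogram evaluated on an arbitrary frequency grid, which handles irregular sampling directly, though without the statistical optimality that lomb - scargle restores. the simulated light curve and its parameters are placeholders.

```python
import numpy as np

def schuster_periodogram(t, y, freqs):
    """classical periodogram |sum y exp(-i*2*pi*f*t)|^2 / n on an arbitrary
    frequency grid; valid for irregularly sampled data."""
    y = y - y.mean()
    phase = 2 * np.pi * freqs[:, None] * t[None, :]
    return ((y * np.cos(phase)).sum(axis=1) ** 2 +
            (y * np.sin(phase)).sum(axis=1) ** 2) / len(t)

# toy irregularly sampled light curve with a periodic component
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 100.0, 400))   # irregular sampling times
f_true = 0.7                                # cycles per unit time
y = np.sin(2 * np.pi * f_true * t) + 0.3 * rng.standard_normal(t.size)

freqs = np.linspace(0.05, 2.0, 4000)
f_best = freqs[np.argmax(schuster_periodogram(t, y, freqs))]
```

despite the irregular sampling, the periodogram peak recovers the injected frequency, illustrating why the simpler technique often suffices in practice.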
we present a comparative computer simulation study of the phase diagrams and anomalous behavior of two - dimensional ( $ 2d $ ) and quasi - two - dimensional ( $ q2d $ ) classical particles interacting with each other through isotropic core - softened potential which is used for a qualitative description of the anomalous behavior of water and some other liquids. we have shown that at the low density part of the phase diagram an increase in the width of the confining slit pore leads to a considerable decrease in the melting temperature while at high densities the melting temperature is almost unchanged.
arxiv:1906.05780
in this letter we consider the problem of screening of external charge in graphene from the cyclic rg flow viewpoint. the analogy with conformal calogero model is used to suggest the interpretation of the tower of resonant states as tower of efimov states.
arxiv:1312.7399
slime mould physarum polycephalum is a large single cell with intriguingly smart behaviour. the slime mould shows outstanding abilities to adapt its protoplasmic network to varying environmental conditions. the slime mould can solve tasks of computational geometry, image processing, logics and arithmetics when data are represented by configurations of attractants and repellents. we attempt to map behavioural patterns of the slime mould onto the cognitive control versus schizotypy spectrum phase space and thus interpret the slime mould ' s activity in terms of creativity.
arxiv:1304.2050
we propose a framework that can incrementally expand the explanatory temporal logic rule set to explain the occurrence of temporal events. leveraging the temporal point process modeling and learning framework, the rule content and weights will be gradually optimized until the likelihood of the observational event sequences is optimal. the proposed algorithm alternates between a master problem, where the current rule set weights are updated, and a subproblem, where a new rule is searched and included to best increase the likelihood. the formulated master problem is convex and relatively easy to solve using continuous optimization, whereas the subproblem requires searching the huge combinatorial rule predicate and relationship space. to tackle this challenge, we propose a neural search policy to learn to generate the new rule content as a sequence of actions. the policy parameters will be trained end - to - end using the reinforcement learning framework, where the reward signals can be efficiently queried by evaluating the subproblem objective. the trained policy can be used to generate new rules in a controllable way. we evaluate our methods on both synthetic and real healthcare datasets, obtaining promising results.
arxiv:2308.06094
let i = < f _ 1,..., f _ m > be a zero dimensional radical ideal in q [ x _ 1,..., x _ n ]. assume that we are given approximations { z _ 1,..., z _ k } in c ^ n for the common roots v ( i ) = { xi _ 1,..., xi _ k }. in this paper we show how to construct and certify the rational entries of hermite matrices for i from the approximate roots { z _ 1,..., z _ k }. when i is non - radical, we give methods to construct and certify hermite matrices for the radical of i from approximate roots. furthermore, we use signatures of these hermite matrices to give rational certificates of non - negativity of a given polynomial over a ( possibly positive dimensional ) real variety, as well as certificates that there is a real root within an epsilon distance from a given point z in q ^ n.
arxiv:2110.10313
work done to uncover the knowledge encoded within pre - trained language models relies on annotated corpora or human - in - the - loop methods. however, these approaches are limited in terms of scalability and the scope of interpretation. we propose using a large language model, chatgpt, as an annotator to enable fine - grained interpretation analysis of pre - trained language models. we discover latent concepts within pre - trained language models by applying agglomerative hierarchical clustering over contextualized representations and then annotate these concepts using chatgpt. our findings demonstrate that chatgpt produces accurate and semantically richer annotations compared to human - annotated concepts. additionally, we showcase how gpt - based annotations empower interpretation analysis methodologies of which we demonstrate two : probing frameworks and neuron interpretation. to facilitate further exploration and experimentation in the field, we make available a substantial conceptnet dataset ( tcn ) comprising 39, 000 annotated concepts.
arxiv:2305.13386
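the concept - discovery step (agglomerative hierarchical clustering over contextualized representations) can be sketched as follows; the toy embeddings, function name, and cluster count are placeholders, and the chatgpt annotation step is external to this code.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def discover_latent_concepts(reps, n_concepts):
    """group token representations into candidate 'concepts' via
    agglomerative (ward) clustering; each resulting cluster would then
    be sent to an llm annotator for a label."""
    z = linkage(reps, method="ward")        # bottom-up merge tree
    return fcluster(z, t=n_concepts, criterion="maxclust")

rng = np.random.default_rng(1)
# toy stand-in for contextual embeddings: two well-separated groups
reps = np.vstack([rng.normal(0.0, 0.1, (20, 8)),
                  rng.normal(3.0, 0.1, (20, 8))])
labels = discover_latent_concepts(reps, 2)
```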
we consider a ` color density matrix ' in gauge theory. we argue that it systematically resums large logarithms originating from wide - angle soft radiation, sometimes referred to as non - global logarithms, to all logarithmic orders. we calculate its anomalous dimension at leading - and next - to - leading order. combined with a conformal transformation known to relate this problem to shockwave scattering in the regge limit, this is used to rederive the next - to - leading order balitsky - fadin - kuraev - lipatov equation ( including its nonlinear generalization, the so - called balitsky - jimwlk equation ), finding perfect agreement with the literature. exponentiation of divergences to all logarithmic orders is demonstrated. the possibility of obtaining the evolution equation ( and bfkl ) to three - loop is discussed.
arxiv:1501.03754
we study the jet kinematics of the blazar s5 0716 + 714 during its active state in 2003 - 2004 with multi - epoch vlbi observations. aims. we present a kinematic analysis of the large - scale ( 0 - 12 mas ) jet of 0716 + 714, based on the results of six epochs of vlba monitoring at 5 ghz. additionally, we compare kinematic results obtained with two imaging methods based on different deconvolution algorithms. the blazar 0716 + 714 has a diffuse large - scale jet and a very faint bright compact core. experiments with simulated data showed that the conventional data reduction procedure based on the clean deconvolution algorithm does not perform well in restoring this type of structure. this might be the reason why previous kinematic studies of this source yielded ambiguous results. in order to obtain accurate kinematics of this source, we independently applied two imaging techniques to the raw data : the conventional method, based on difference mapping, which uses clean deconvolution, and the generalized maximum entropy method ( gmem ) realized in the vlbimager package developed at the pulkovo observatory in russia. the results of both methods give us a consistent kinematic scenario : the large - scale jet of 0716 + 714 is diffuse and stationary. differences between the inner ( 0 - 1 mas ) and outer ( 1 - 12 mas ) regions of the jet in brightness and velocity of the components could be explained by the bending of the jet, which causes the angle between the jet direction and the line of sight to change from ~ 5 deg to ~ 11 deg. for the source 0716 + 714 both methods worked at the limit of their capability.
arxiv:1102.0409
this paper presents a proximal bundle ( pb ) framework based on a generic bundle update scheme for solving the hybrid convex composite optimization ( hcco ) problem and establishes a common iteration - complexity bound for any variant belonging to it. as a consequence, iteration - complexity bounds for three pb variants based on different bundle update schemes are obtained in the hcco context for the first time and in a unified manner. while two of the pb variants are universal ( i. e., their implementations do not require parameters associated with the hcco instance ), the other newly ( as far as the authors are aware of ) proposed one is not but has the advantage that it generates simple, namely one - cut, bundle models. the paper also presents a universal adaptive pb variant ( which is not necessarily an instance of the framework ) based on one - cut models and shows that its iteration - complexity is the same as the two aforementioned universal pb variants.
arxiv:2110.01084
a recent family of techniques, dubbed lightweight fine - tuning methods, facilitates parameter - efficient transfer learning by updating only a small set of additional parameters while keeping the parameters of the pretrained language model frozen. while these methods are proven effective, there are no existing studies on whether and how such knowledge of the downstream fine - tuning approach should affect the pretraining stage. in this work, we show that taking the ultimate choice of fine - tuning method into consideration boosts the performance of parameter - efficient fine - tuning. by relying on optimization - based meta - learning using maml with certain modifications for our distinct purpose, we prime the pretrained model specifically for parameter - efficient fine - tuning, resulting in gains of up to 1. 7 points on cross - lingual ner fine - tuning. our ablation settings and analyses further reveal that the tweaks we introduce in maml are crucial for the attained gains.
arxiv:2205.12453
mirroring the success of masked language models, vision - and - language counterparts like vilbert, lxmert and uniter have achieved state of the art performance on a variety of multimodal discriminative tasks like visual question answering and visual grounding. recent work has also successfully adapted such models towards the generative task of image captioning. this begs the question : can these models go the other way and generate images from pieces of text? our analysis of a popular representative from this model family - lxmert - finds that it is unable to generate rich and semantically meaningful imagery with its current training setup. we introduce x - lxmert, an extension to lxmert with training refinements including : discretizing visual representations, using uniform masking with a large range of masking ratios and aligning the right pre - training datasets to the right objectives which enables it to paint. x - lxmert ' s image generation capabilities rival state of the art generative models while its question answering and captioning abilities remains comparable to lxmert. finally, we demonstrate the generality of these training refinements by adding image generation capabilities into uniter to produce x - uniter.
arxiv:2009.11278
isomorphisms are constructed between generalized schur algebras in different degrees. the construction covers both the classical case ( of general linear groups over infinite fields of arbitrary characteristic ) and the quantized case ( in type $ a $, for any non - zero value of the quantum parameter $ q $ ). the construction does not depend on the characteristic of the underlying field or the choice of $ q \ neq 0 $. the proof combines a combinatorial construction with comodule structures and ringel duality. applications range from equivalences of categories to results on the structure and cohomology of schur algebras to identities of decomposition numbers and also of $ p $ - kostka numbers, in both cases reproving and generalizing row and column removal rules.
arxiv:0708.4019
two - band superconductors exhibit a distinct phase characterized by two correlation lengths, one smaller and the other larger than the magnetic field penetration length. this regime was coined type - 1. 5 superconductivity, with several unconventional properties, such as vortex clustering. however, a fully microscopic solution for vortex clusters has remained challenging due to computational complexities beyond quasiclassical models. this work presents numerical solutions obtained in a fully self - consistent two - band bogoliubov - de gennes model. we show the presence of discrepant correlation lengths leading to vortex clustering in two - band superconductors.
arxiv:2404.11491
spectral density of current fluctuations in a short ballistic superconducting quantum point contact is calculated for arbitrary bias voltages $ v $. contrary to a common opinion that the supercurrent flow in josephson junctions is coherent process with no fluctuations, we find extremely large current noise that is { \ em caused } by the supercurrent coherence. an unusual feature of the noise, besides its magnitude, is its voltage dependence : the noise decreases with increasing $ v $, despite the fact that the dc current grows steadily with $ v $. at finite voltages the noise can be qualitatively understood as the shot noise of the large charge quanta of magnitude $ 2 \ delta / v $ equal to the charge transferred during one period of josephson oscillations.
arxiv:cond-mat/9511128
measurements of multi - particle azimuthal correlations ( cumulants ) for charged particles in p - pb and pb - pb collisions are presented. they help address the question of whether there is evidence for global, flow - like, azimuthal correlations in the p - pb system. comparisons are made to measurements from the larger pb - pb system, where such evidence is established. in particular, the second harmonic two - particle cumulants are found to decrease with multiplicity, characteristic of a dominance of few - particle correlations in p - pb collisions. however, when a $ | \ delta \ eta | $ gap is placed to suppress such correlations, the two - particle cumulants begin to rise at high - multiplicity, indicating the presence of global azimuthal correlations. the pb - pb values are higher than the p - pb values at similar multiplicities. in both systems, the second harmonic four - particle cumulants exhibit a transition from positive to negative values when the multiplicity increases. the negative values allow for a measurement of $ v _ { 2 } \ { 4 \ } $ to be made, which is found to be higher in pb - pb collisions at similar multiplicities. the second harmonic six - particle cumulants are also found to be higher in pb - pb collisions. in pb - pb collisions, we generally find $ v _ { 2 } \ { 4 \ } \ simeq v _ { 2 } \ { 6 \ } \ neq 0 $ which is indicative of a bessel - gaussian function for the $ v _ { 2 } $ distribution. for very high - multiplicity pb - pb collisions, we observe that the four - and six - particle cumulants become consistent with 0. finally, third harmonic two - particle cumulants in p - pb and pb - pb are measured. these are found to be similar for overlapping multiplicities, when a $ | \ delta \ eta | > 1. 4 $ gap is placed.
arxiv:1406.2474
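for reference, the standard two - and four - particle cumulant definitions assumed here (not spelled out in the text above) show why a negative four - particle cumulant permits a real - valued flow estimate $v_n\{4\}$:

```latex
\begin{align}
  c_n\{2\} &= \langle\langle e^{in(\varphi_1-\varphi_2)} \rangle\rangle, &
  v_n\{2\} &= \sqrt{c_n\{2\}},\\
  c_n\{4\} &= \langle\langle e^{in(\varphi_1+\varphi_2-\varphi_3-\varphi_4)} \rangle\rangle
             - 2\,c_n\{2\}^2, &
  v_n\{4\} &= \left(-c_n\{4\}\right)^{1/4},
\end{align}
```

where the double brackets denote averages over particle combinations and events; the transition of $c_2\{4\}$ from positive to negative values with increasing multiplicity is exactly what makes $v_2\{4\}$ measurable.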
a genus one labeled circle tree is a tree with its vertices on a circle, such that together they can be embedded in a surface of genus one, but not of genus zero. we define an e - reduction process whereby a special type of subtree, called an e - graph, is collapsed to an edge. we show that genus is invariant under e - reduction. our main result is a classification of genus one labeled circle trees through e - reduction. using this we prove a modified version of a conjecture of david hough, namely, that the number of genus one labeled circle trees on $ n $ vertices is divisible by $ n $, or, if not, by $ n / 2 $. moreover, we explicitly characterize when each of these possibilities occurs.
arxiv:math/0509407
this study concerns iterated function systems ( ifs ) of similarities on $ \ mathbb r $ that satisfy the weak separation property ( wsp ). we explore whether this implies the finite type property. we look into the simplest case, under the condition that the closed interval [ 0, 1 ] is an attractor of the ifs. whether this condition is strictly necessary remains unknown.
arxiv:2103.10364