Comet 157P is a faint object with a history of being prone to unfortunate situations, circumstances, and/or coincidences. Several weeks after its 1978 discovery the comet disappeared and remained lost for twenty-five years. Rediscovered in 2003 as a new comet, it was about 500 times brighter than in 1978, caught apparently in one of its outbursts. The comet was not detected 200 days after its 2016 perihelion, being fainter than mag 20, but 80 days later it was mag 16, gradually fading back to mag 20 over a period of four months. The comet did not miss the opportunity to have a close encounter with Jupiter, having approached it to less than 0.3 AU on 2020 February 10. The 2017 outburst or surge of activity appears to have accompanied an event of nuclear fragmentation. The birth of a second companion is dated to the months following the Jupiter encounter. The series of weird episodes culminated near the 2022 perihelion, when one companion brightened to become observable for two weeks, and after another two weeks the other flared up to be seen for the next two weeks. Unnoticed, this incredible coincidence fooled some experts into believing that a single object, designated 157P-B, was involved, even though its orbit left large residuals. I now offer representative fragmentation solutions for the two companions, the mean residuals amounting to ±0".4 and ±1".0, respectively.
arxiv:2309.01923
We study the structure of Brownian loop-soup clusters in two dimensions. Among other things, we obtain the following decomposition of the clusters with critical intensity: when one conditions a loop-soup cluster on its outer boundary $\gamma$ (which is known to be an SLE(4)-type loop), then the union of all excursions away from $\gamma$ by all the Brownian loops in the loop-soup that touch $\gamma$ is distributed exactly like the union of all excursions of a Poisson point process of Brownian excursions in the domain enclosed by $\gamma$. A related result that we derive and use is that the couplings of the Gaussian free field (GFF) with CLE(4) via level lines (by Miller-Sheffield), of the square of the GFF with loop-soups via occupation times (by Le Jan), and of the CLE(4) with loop-soups via loop-soup clusters (by Sheffield and Werner) can be made to coincide. An instrumental role in our proof of this fact is played by Lupu's description of CLE(4) as limits of discrete loop-soup clusters.
arxiv:1509.01180
The extraction of process models from text refers to the problem of turning the information contained in unstructured textual process descriptions into a formal representation, i.e., a process model. Several automated approaches have been proposed to tackle this problem, but they are highly heterogeneous in scope and underlying assumptions, i.e., they differ in input, target output, and the data used in their evaluation. As a result, it is currently unclear how well existing solutions are able to solve the model-extraction problem and how they compare to each other. We overcome this issue by comparing 10 state-of-the-art approaches for model extraction in a systematic manner, covering both qualitative and quantitative aspects. The qualitative evaluation compares the analyses of the primary studies on: (1) the main characteristics of each solution; (2) the type of process model elements extracted from the input data; (3) the experimental evaluation performed to evaluate the proposed framework. The results show a heterogeneity of techniques, extracted elements, and conducted evaluations that are often impossible to compare. To overcome this difficulty, we propose a quantitative comparison of the tools proposed by the papers on the unifying task of process model entity and relation extraction, so as to be able to compare them directly. The results show three distinct groups of tools in terms of performance, with no tool obtaining very good scores and all showing serious limitations. Moreover, the proposed evaluation pipeline can be considered a reference task on a well-defined dataset and metrics that can be used to compare new tools. The paper also presents a reflection on the results of the qualitative and quantitative evaluations, and on the limitations and challenges that the community needs to address in the future to produce significant advances in this area.
arxiv:2110.03754
In this work, we give a novel general approach for distribution testing. We describe two techniques: our first technique gives sample-optimal testers, while our second technique gives matching sample lower bounds. As a consequence, we resolve the sample complexity of a wide variety of testing problems. Our upper bounds are obtained via a modular reduction-based approach. Our approach yields optimal testers for numerous problems by using a standard $\ell_2$-identity tester as a black box. Using this recipe, we obtain simple estimators for a wide range of problems, encompassing most problems previously studied in the TCS literature, namely: (1) identity testing to a fixed distribution, (2) closeness testing between two unknown distributions (with equal/unequal sample sizes), (3) independence testing (in any number of dimensions), (4) closeness testing for collections of distributions, and (5) testing histograms. For all of these problems, our testers are sample-optimal, up to constant factors. With the exception of (1), ours are the {\em first sample-optimal testers for the corresponding problems}. Moreover, our estimators are significantly simpler to state and analyze compared to previous results. As an application of our reduction-based technique, we obtain the first {\em nearly instance-optimal} algorithm for testing equivalence between two {\em unknown} distributions. Moreover, our technique naturally generalizes to other metrics beyond the $\ell_1$-distance. Our lower bounds are obtained via a direct information-theoretic approach: given a candidate hard instance, our proof proceeds by bounding the mutual information between appropriate random variables. While this is a classical method in information theory, prior to our work it had not been used in distribution property testing.
arxiv:1601.05557
We explore the electronic ground states of Bernal-stacked multilayer graphenes using the Hartree-Fock mean-field approximation and the full-parameter band model. We find that the electron-electron interaction tends to open a band gap in multilayer graphenes from bilayer to 8-layer, while the nature of the insulating ground state sensitively depends on the band parameter $\gamma_2$, which is responsible for the semimetallic nature of graphite. In 4-layer graphene, particularly, the ground state assumes an odd-spatial-parity staggered phase at $\gamma_2 = 0$, while an increasing, finite value of $\gamma_2$ stabilizes a different state with even parity, where the electrons are attracted to the top layer and the bottom layer. The two phases are topologically distinct insulating states with different Chern numbers, and they can be distinguished by spin or valley Hall conductivity measurements. Multilayers with more than five layers also exhibit similar ground states with potential minima at the outermost layers, although the opening of a gap in the spectrum as a whole is generally more difficult than in the 4-layer case because of the larger number of energy bands overlapping at the Fermi energy.
arxiv:1705.03725
How can individual agents coordinate their actions to achieve a shared objective in distributed systems? This challenge spans economic, technical, and sociological domains, each confronting scalability, heterogeneity, and conflicts between individual and collective goals. In economic markets, a common currency facilitates coordination, raising the question of whether such mechanisms can be applied in other contexts. This paper explores this idea within social media platforms, where social support (likes, shares, comments) acts as a currency that shapes content production and sharing. We investigate two key questions: (1) can social support serve as an effective coordination tool, and (2) what role do influencers play in content creation and dissemination? Our formal analysis shows that social support can coordinate user actions similarly to money in economic markets. Influencers serve dual roles, aggregating content and acting as information proxies, guiding content producers in large markets. While imperfections in information lead to a "price of influence" and suboptimal outcomes, this price diminishes as markets grow, improving social welfare. These insights provide a framework for understanding coordination in distributed environments, with applications in both sociological systems and multi-agent AI systems.
arxiv:2410.04619
We consider discrete subgroups of the group of orientation-preserving isometries of the $m$-dimensional hyperbolic space, whose limit set is an $(m-1)$-dimensional real sphere, acting on the $n$-dimensional complex projective space for $n \geq m$, via an embedding from the group of orientation-preserving isometries of the $m$-dimensional hyperbolic space into the group of holomorphic isometries of the $n$-dimensional complex hyperbolic space. We describe the Kulkarni limit set of any of these subgroups under the embedding as a real semi-algebraic set. Also, we show that the Kulkarni region of discontinuity can have only one or three connected components. We use Sylvester's law of inertia when $n = m$. In the other cases, we use suitable projections of the $n$-dimensional complex projective space to the $m$-dimensional complex projective space.
arxiv:2305.00153
We present time-resolved, far-ultraviolet (FUV) spectroscopy and photometry of the 1.1-day eclipsing binary system AKO 9 in the globular cluster 47 Tucanae. The FUV spectrum of AKO 9 is blue and exhibits prominent C IV and He II emission lines. The spectrum broadly resembles that of long-period cataclysmic variables in the Galactic field. Combining our time-resolved FUV data with archival optical photometry of 47 Tuc, we refine the orbital period of AKO 9 and define an accurate ephemeris for the system. We also place constraints on several other system parameters, using a variety of observational constraints. We find that all of the empirical evidence is consistent with AKO 9 being a long-period dwarf nova in which mass transfer is driven by the nuclear expansion of a sub-giant donor star. We therefore conclude that AKO 9 is the first spectroscopically confirmed cataclysmic variable in 47 Tuc. We also briefly consider AKO 9's likely formation and ultimate evolution. Regarding the former, we find that the system was almost certainly formed dynamically, either via tidal capture or in a 3-body encounter. Regarding the latter, we show that AKO 9 will probably end its CV phase by becoming a detached double-WD system or by exploding in a Type Ia supernova.
arxiv:astro-ph/0309168
Let $n, k \in \mathbb{N}$ and let $p_n$ denote the $n$th prime number. We define $p_n^{(k)}$ recursively as $p_n^{(1)} := p_n$ and $p_n^{(k)} = p_{p_n^{(k-1)}}$, that is, $p_n^{(k)}$ is the $p_n^{(k-1)}$th prime. In this note we give answers to some questions and prove a conjecture posed by Miska and Tóth in their recent paper concerning subsequences of the sequence of prime numbers. In particular, we establish explicit upper and lower bounds for $p_n^{(k)}$. We also study the behaviour of the counting functions of the sequences $(p_n^{(k)})_{k=1}^{\infty}$ and $(p_k^{(k)})_{k=1}^{\infty}$.
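The recursion is easy to evaluate directly for small indices; a minimal sketch, using sympy's `prime` function (which returns the $n$-th prime, with prime(1) == 2):

```python
# Minimal sketch of the recursion p_n^(k): iterate the "n-th prime" map k times.
from sympy import prime

def iterated_prime(n: int, k: int) -> int:
    """Return p_n^(k), where p_n^(1) = p_n and p_n^(k) = p_{p_n^(k-1)}."""
    result = n
    for _ in range(k):
        result = prime(result)
    return result

# Example: p_1^(1) = 2, p_1^(2) = p_2 = 3, p_1^(3) = p_3 = 5, p_1^(4) = p_5 = 11
print([iterated_prime(1, k) for k in range(1, 5)])  # [2, 3, 5, 11]
```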
arxiv:1909.12139
Quantum annealing (QA) has been proposed as a quantum-enhanced optimization heuristic exploiting tunneling. Here, we demonstrate how finite-range tunneling can provide a considerable computational advantage. For a crafted problem designed to have tall and narrow energy barriers separating local minima, the D-Wave 2X quantum annealer achieves significant runtime advantages relative to simulated annealing (SA). For instances with 945 variables, this results in a time-to-99%-success-probability that is $\sim 10^8$ times faster than SA running on a single processor core. We also compared physical QA with quantum Monte Carlo (QMC), an algorithm that emulates quantum tunneling on classical processors. We observe a substantial constant overhead against physical QA: D-Wave 2X again runs up to $\sim 10^8$ times faster than an optimized implementation of QMC on a single core. We note that there exist heuristic classical algorithms that can solve most instances of Chimera-structured problems in a timescale comparable to the D-Wave 2X. However, we believe that such solvers will become ineffective for the next generation of annealers currently being designed. To investigate whether finite-range tunneling will also confer an advantage for problems of practical interest, we conduct numerical studies on binary optimization problems that cannot yet be represented on quantum hardware. For random instances of the number partitioning problem, we find numerically that QMC, as well as other algorithms designed to simulate QA, scale better than SA. We discuss the implications of these findings for the design of next-generation quantum annealers.
arxiv:1512.02206
We present deep, near-infrared images of the circumbinary disk surrounding the pre-main-sequence binary star GG Tau A, obtained with NICMOS aboard the Hubble Space Telescope. The spatially resolved proto-planetary disk scatters roughly 1.5% of the stellar flux, with a near-to-far side flux ratio of ~1.4, independent of wavelength, and colors that are comparable to the central source; all of these properties are significantly different from the earlier ground-based observations. New Monte Carlo scattering simulations of the disk emphasize that the general properties of the disk, such as disk flux, near-side to far-side flux ratio, and integrated colors, can be approximately reproduced using ISM-like dust grains, without the presence of either circumstellar disks or large dust grains, as had previously been suggested. A single-parameter phase function is fitted to the observed azimuthal variation in disk flux, providing a lower limit on the median grain size of 0.23 micron. Our analysis, in comparison to previous simulations, shows that the major limitation to the study of grain growth in T Tauri disk systems through scattered light lies in the uncertain ISM dust grain properties. Finally, we use the 9-year baseline of astrometric measurements of the binary to solve the complete orbit, assuming that the binary is coplanar with the circumbinary ring. We find that the estimated 1-sigma range on the disk inner edge to semi-major axis ratio, 3.2 < R_in/a < 6.7, is larger than that estimated by previous SPH simulations of binary-disk interactions.
arxiv:astro-ph/0204465
We present here a new robotic telescope called TRAPPIST (TRAnsiting Planets and PlanetesImals Small Telescope). Equipped with a high-quality CCD camera mounted on a 0.6-meter lightweight optical tube, TRAPPIST was installed in April 2010 at the ESO La Silla Observatory (Chile), and is now beginning its scientific program. The science goal of TRAPPIST is the study of planetary systems through two approaches: the detection and study of exoplanets, and the study of comets. We describe here the objectives of the project and the hardware, and we present some of the first results obtained during the commissioning phase.
arxiv:1101.5807
Data augmentation greatly increases the amount of data obtained from labeled data, saving on the expense and labor of data collection and labeling. We present a new approach for data augmentation called nine-dot MLS (ND-MLS). This approach is based on the idea of image deformation: images are deformed based on control points, which are calculated by ND-MLS. The method can generate over 2000 images for one existing dataset in a short time. To verify this data augmentation method, extensive tests were performed covering the three main tasks of computer vision, namely classification, detection, and segmentation. The results show that (1) in classification, with 10 images per category used for training, VGGNet can obtain 92% top-1 accuracy on the MNIST dataset of handwritten digits with ND-MLS. On the Omniglot dataset, few-shot accuracy usually decreases as the number of character categories increases; however, the ND-MLS method has stable performance and obtains 96.5% top-1 accuracy with ResNet on 100 different handwritten character classification tasks. (2) In segmentation, starting from only ten original images, DeepLab obtains 93.5%, 85%, and 73.3% mIoU(10) on the bottle, horse, and grass test datasets, respectively, while the cat test dataset reaches 86.7% mIoU(10) with the SegNet model. (3) With only 10 original images from each category in object detection, YOLOv4 obtains 100% and 97.2% on bottle and horse detection, respectively, while the cat dataset reaches 93.6% with YOLOv3. In summary, ND-MLS can perform well on classification, object detection, and semantic segmentation tasks using only a few data.
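The paper's ND-MLS control-point computation is not reproduced here, but the overall deform-by-control-points pipeline can be sketched with a generic piecewise-affine warp; the 3x3 grid of source points and the random jitter below are illustrative stand-ins for the nine-dot scheme:

```python
# Illustrative control-point image deformation (a generic piecewise-affine warp
# standing in for ND-MLS; the 3x3 "nine-dot" grid and jitter scale are assumptions).
import numpy as np
from skimage import data
from skimage.transform import PiecewiseAffineTransform, warp

image = data.camera()
rows, cols = image.shape

# Nine control points on a 3x3 grid, in (x, y) = (col, row) order.
src = np.array([[c, r] for r in np.linspace(0, rows - 1, 3)
                       for c in np.linspace(0, cols - 1, 3)])
# Randomly displace the control points to deform the image.
rng = np.random.default_rng(0)
dst = src + rng.normal(scale=0.03 * cols, size=src.shape)

tform = PiecewiseAffineTransform()
tform.estimate(src, dst)
augmented = warp(image, tform)  # one new training sample per random displacement
```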
arxiv:2208.11532
We show explicitly, by using astrophysical data plus reasonable assumptions for the bulk viscosity in the cosmic fluid, how the magnitude of this viscosity may be high enough to drive the fluid from its position in the quintessence region at present time $t = 0$ across the barrier $w = -1$ into the phantom region in the late universe. The phantom barrier is accordingly not a sharp mathematical divide, but rather a fuzzy concept. We also calculate the limiting forms of various thermodynamical quantities, including the rate of entropy production, for a dark energy fluid near the future Big Rip singularity.
arxiv:1509.03489
We study an explicit description of semibricks and 2-term simple-minded collections over preprojective algebras of type $A$ via arc diagrams. We provide a bijection between the set of noncrossing arc diagrams (resp. the set of double arc diagrams), which is in bijective correspondence with elements of the symmetric group, and the set of semibricks (resp. the set of 2-term simple-minded collections) over the algebra. Moreover, we define a mutation and a partial order on the set of double arc diagrams. In particular, we obtain a poset isomorphism between the symmetric group and the set of 2-term simple-minded collections. As an application of our results, we study semibricks of some quotient algebras of the preprojective algebras of type $A$, and we reprove some important results shown by other authors.
arxiv:2010.04353
We show that extending an embedding of a graph $\Gamma$ in a surface to an embedding of a Hamiltonian supergraph can be blocked by certain planar subgraphs but, for some subdivisions of $\Gamma$, Hamiltonian extensions must exist.
arxiv:2303.08306
Two discretizations of the vector nonlinear Schrödinger (NLS) equation are studied. One of these discretizations, referred to as the symmetric system, is a natural vector extension of the scalar integrable discrete NLS equation. The other discretization, referred to as the asymmetric system, has an associated linear scattering pair. General formulae for soliton solutions of the asymmetric system are presented. Formulae for a constrained class of solutions of the symmetric system may be obtained. Numerical studies support the hypothesis that the symmetric system has general soliton solutions.
arxiv:solv-int/9810014
The Biham-Middleton-Levine (BML) traffic model, a cellular automaton with east-bound and north-bound cars moving by turns on a square lattice, has been an underpinning model in the study of collective behaviour of cars, pedestrians, and even internet packets. Contrary to initial beliefs that the model exhibits a sharp phase transition from freely flowing to fully jammed, it has been reported that it shows intermediate stable phases, where jams and freely flowing traffic coexist, but there is no clear understanding of their origin. Here, we analyze the model as an anisotropic system with a preferred fluid direction (north-east) and find that it exhibits two differentiated phase transitions: one when the system is longer in the flow direction (longitudinal) and another when it is longer perpendicular to it (transversal). The critical densities where these transitions occur enclose the density interval of the intermediate states and can be approximated by mean-field analysis, all derived from the anisotropic exponent relating the longitudinal and transversal correlation lengths. Thus, we arrive at the interesting result that the puzzling intermediate states in the original model are just a superposition of these two different behaviours of the phase transition, solving along the way most mysteries behind the BML model, which turns out to be a paradigmatic example of such anisotropic critical systems.
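The update rule of the model is simple to state: on alternating half-steps, every east-bound car advances one cell to the right if that cell is empty, then every north-bound car advances one cell up. A minimal sketch on a periodic lattice (the lattice size and density below are arbitrary choices):

```python
# Minimal BML cellular automaton sketch: 0 = empty, 1 = east-bound, 2 = north-bound.
# Each species moves synchronously on its own half-step.
import numpy as np

def half_step(grid, species, axis):
    ahead = np.roll(grid, -1, axis=axis)           # cell in front of each site
    movers = (grid == species) & (ahead == 0)      # cars whose target cell is empty
    grid = grid.copy()
    grid[movers] = 0                               # vacate origins...
    grid[np.roll(movers, 1, axis=axis)] = species  # ...and occupy targets
    return grid

def bml_step(grid):
    grid = half_step(grid, species=1, axis=1)      # east-bound cars move
    return half_step(grid, species=2, axis=0)      # then north-bound cars move

# Random initial condition at density rho on a torus (values are arbitrary).
rng = np.random.default_rng(1)
Lx, Ly, rho = 64, 64, 0.3
grid = rng.choice([0, 1, 2], size=(Ly, Lx), p=[1 - rho, rho / 2, rho / 2])
for _ in range(1000):
    grid = bml_step(grid)
```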
arxiv:1502.04587
Recent neutron scattering experiments in the superconducting state of YBCO have been interpreted in terms of a magnetic collective mode whose dispersion relative to the commensurate wavevector has a curvature opposite in sign to a conventional magnon dispersion. The purpose of this article is to demonstrate that simple linear response calculations are in support of a collective mode interpretation, and to explain why the dispersion has the curvature it does.
arxiv:cond-mat/0010298
This research focuses on predicting the demand for air taxi urban air mobility (UAM) services during different times of the day in various geographic regions of New York City using machine learning algorithms (MLAs). Several ride-related factors (such as month of the year, day of the week, and time of the day) and weather-related variables (such as temperature, weather conditions, and visibility) are used as predictors for four popular MLAs, namely logistic regression, artificial neural networks, random forests, and gradient boosting. Experimental results suggest that gradient boosting consistently provides higher prediction performance. Specific locations, certain time periods, and weekdays consistently emerged as critical predictors.
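A minimal sketch of the kind of pipeline described above, using scikit-learn's gradient boosting on hypothetical ride- and weather-related columns (the file name, feature names, and demand target are illustrative assumptions, not the paper's dataset):

```python
# Illustrative demand-prediction pipeline with gradient boosting (scikit-learn).
# Column names and data are assumptions; the paper's actual dataset is not shown.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

df = pd.read_csv("rides.csv")  # hypothetical file, one row per region and hour
features = ["month", "day_of_week", "hour", "temperature", "visibility"]
X, y = df[features], df["demand"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))
# Feature importances hint at which predictors matter (location, time, weekday, ...).
print(dict(zip(features, model.feature_importances_)))
```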
arxiv:2103.14604
We study the quantum nature of non-Bunch-Davies states in de Sitter space by evaluating the CHSH inequality on a localized two-atom system. We show that quantum nonlocality can be generated through the Markovian evolution of the two atoms, witnessed by a violation of the CHSH inequality on their final equilibrium state. We find that the upper bound of the inequality violation is determined by different choices of de Sitter-invariant vacuum sectors. In particular, with growing Gibbons-Hawking temperature, the CHSH bound degrades monotonically for the Bunch-Davies vacuum sector. Due to the intrinsic correlation of non-Bunch-Davies vacua, we find that the related violation of the inequality can however drastically increase after a certain turning point, and may persist for arbitrarily large environment decoherence. This implies that the CHSH inequality is useful for classifying the initial quantum state of the universe. Finally, we clarify that the witnessed intrinsic correlation of non-Bunch-Davies vacua can be utilized for quantum information applications, e.g., surpassing the Heisenberg uncertainty bound of quantum measurement in de Sitter space.
arxiv:1806.08923
Recently, along with the development of quantum information, quantum entanglement has become a topic of great interest. Quantum entanglement is one of the most striking phenomena in quantum mechanics, with no counterpart in classical physics. However, in practical quantum information processing a system inevitably interacts with its environment, and the entanglement can be broken. In this paper, we study the disentanglement evolution of three spin qubits in an XY spin-chain environment. The dynamical process of the disentanglement is investigated, and we find an exact expression for the coherence factor. By examining the dependence of the coherence factor on the parameters, we illustrate that the disentanglement of the central spins is most strongly enhanced by the quantum critical behavior of the environmental spin chain. Furthermore, a scaling rule is obtained.
arxiv:1603.08555
It is known that any normed vector space which satisfies the parallelogram law is actually an inner product space. For finite-dimensional normed vector spaces over $\mathbb{R}$, we formulate an approximate version of this theorem: if a space approximately satisfies the parallelogram law, then it has a near isometry with Euclidean space. In other words, a small von Neumann-Jordan constant $1 + \epsilon$ for $X$ yields a small Banach-Mazur distance with $\mathbb{R}^n$: $d(X, \mathbb{R}^n) < 1 + B_n \epsilon + O(\epsilon^2)$. Finally, we examine how this estimate worsens as the dimension $n$ of $X$ increases, with the conclusion that $B_n$ grows quadratically with $n$.
arxiv:1305.3546
We consider the problem of recovering the block structure of an Ising model given independent observations on the binary hypercube. This new model, called the Ising blockmodel, is a perturbation of the mean-field approximation of the Ising model known as the Curie-Weiss model: the sites are partitioned into two blocks of equal size, and the interaction between sites of the same block is stronger than across blocks, to account for more order within each block. We study probabilistic, statistical, and computational aspects of this model in the high-dimensional case when the number of sites may be much larger than the sample size.
arxiv:1612.03880
Analysis of the data shows that the hadron tags of the two standard DELPHI particle identification packages, RIBMEAN and HADSIGN, are weakly correlated. This led to the idea of constructing a neural network for both kaon and proton identification using as input the existing tags from RIBMEAN and HADSIGN, as well as preprocessed TPC and RICH detector measurements, together with additional dE/dx information from the DELPHI vertex detector. It will be shown in this note that the net output is much more efficient at the same purity than the HADSIGN or RIBMEAN tags alone. We present an easy-to-use routine performing the necessary calculations.
arxiv:hep-ex/0111081
In the economic literature, economic complexity is typically approximated on the basis of an economy's gross export structure. However, in times of ever more integrated global value chains, gross exports may convey an inaccurate image of a country's economic performance, since they also incorporate foreign value added and double-counted exports. Thus, I introduce a new empirical approach approximating economic complexity based on a country's value-added export structure. This approach leads to substantially different complexity rankings compared to established metrics. Moreover, its explanatory power for GDP per capita growth rates in a sample of 40 lower-middle- to high-income countries is considerably higher, even when controlling for typical growth regression covariates.
arxiv:2009.07599
The observation of electric dipole moments (EDMs) in atomic systems due to parity- and time-reversal-violating (P,T-odd) interactions can probe new physics beyond the Standard Model and also provide insights into the matter-antimatter asymmetry in the universe. The EDMs of open-shell atomic systems are sensitive to the electron EDM and the P,T-odd scalar-pseudoscalar (S-PS) semi-leptonic interaction, but the dominant contributions to the EDMs of diamagnetic atoms come from the hadronic and tensor-pseudotensor (T-PT) semi-leptonic interactions. Several diamagnetic atoms like $^{129}$Xe, $^{171}$Yb, $^{199}$Hg, $^{223}$Rn, and $^{225}$Ra are candidates for the experimental search for the possible existence of EDMs, and among these $^{199}$Hg has yielded the lowest limit to date. The T- or CP-violating coupling constants of the aforementioned interactions can be extracted from these measurements by combining them with atomic and nuclear calculations. In this work, we report calculations of the EDMs of the above atoms including both the electromagnetic and P,T-odd interactions simultaneously. These calculations are performed by employing relativistic many-body methods based on the random phase approximation (RPA) and the singles and doubles coupled-cluster (CCSD) method, starting with the Dirac-Hartree-Fock (DHF) wave function in both cases. The differences in the results from the two methods shed light on the importance of the non-core-polarization electron correlation effects that are accounted for by the CCSD method. We also determine the electric dipole polarizabilities of these atoms, which have computational similarities with EDMs, and compare them with the available experimental and other theoretical results to assess the accuracy of our calculations.
arxiv:1710.10946
Mid-level features based on visual dictionaries are today a cornerstone of systems for classification and retrieval of images. Those state-of-the-art representations depend crucially on the choice of a codebook (visual dictionary), which is usually derived from the dataset. In general-purpose, dynamic image collections (e.g., the web), one cannot have the entire collection in order to extract a representative dictionary. However, based on the hypothesis that the dictionary reflects only the diversity of low-level appearances and does not capture semantics, we argue that a dictionary based on a small subset of the data, or even on an entirely different dataset, is able to produce a good representation, provided that the chosen images span a diverse enough portion of the low-level feature space. Our experiments confirm that hypothesis, opening the opportunity to greatly alleviate the burden of generating the codebook, and confirming the feasibility of employing visual dictionaries in large-scale dynamic environments.
arxiv:1205.2663
Face swapping technology used to create "deepfakes" has advanced significantly over the past few years and now enables the creation of realistic facial manipulations. Current deep learning algorithms to detect deepfakes have shown promising results; however, they require large amounts of training data and, as we show, they are biased towards particular ethnicities. We propose a deepfake detection methodology that eliminates the need for any real data by making use of synthetically generated data produced with StyleGAN3. This not only performs on par with the traditional training methodology of using real data but shows better generalization capabilities when fine-tuned with a small amount of real data. Furthermore, this also reduces biases created by facial image datasets that might have sparse data from particular ethnicities.
arxiv:2212.02571
In many real-world applications, sequential rule mining (SRM) can provide prediction and recommendation functions for a variety of services. It is an important pattern mining technique for discovering all valuable rules among the high-frequency and high-confidence sequential rules. Although several SRM algorithms have been proposed to solve various practical problems, there are no studies on targeted sequential rules. Targeted sequential rule mining aims at mining the interesting sequential rules that users focus on, thus avoiding the generation of other invalid and unnecessary rules. This approach can further improve the efficiency of users in analyzing rules and reduce the consumption of data resources. In this paper, we provide the relevant definitions of targeted sequential rules and formulate the problem of targeted sequential rule mining. Furthermore, we propose an efficient algorithm, called targeted sequential rule mining (TaSRM). Several pruning strategies and an optimization are introduced to improve the efficiency of TaSRM. Finally, a large number of experiments are conducted on different benchmarks, and we analyze the results in terms of running time, memory consumption, and scalability, as well as query cases with different query rules. It is shown that the novel algorithm TaSRM and its variants achieve better experimental performance compared to the existing baseline algorithm.
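As a point of reference for the definitions, the support and confidence of a single sequential rule X => Y (all items of X occurring before all items of Y in a sequence) can be computed naively; this brute-force sketch illustrates the standard notions only, not TaSRM's pruned search:

```python
# Naive evaluation of a sequential rule X => Y over a sequence database: the
# rule occurs in a sequence if, for some cut point, every item of X appears
# before the cut and every item of Y after it. TaSRM's pruning strategies and
# optimizations are not shown here.
def rule_occurs(seq, X, Y):
    return any(set(X) <= set(seq[:k]) and set(Y) <= set(seq[k:])
               for k in range(1, len(seq)))

def support_confidence(db, X, Y):
    sup_rule = sum(rule_occurs(s, X, Y) for s in db) / len(db)
    sup_x = sum(set(X) <= set(s) for s in db) / len(db)
    return sup_rule, (sup_rule / sup_x if sup_x else 0.0)

db = [list("abcde"), list("acbed"), list("ebacd"), list("dcbae")]
print(support_confidence(db, X={"a", "b"}, Y={"d"}))  # (0.75, 0.75)
```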
arxiv:2206.04728
In shotgun sequencing, the input string (typically, a long DNA sequence composed of nucleotide bases) is sequenced as multiple overlapping fragments of much shorter lengths (called \textit{reads}). Modelling the shotgun sequencing pipeline as a communication channel for DNA data storage, the capacity of this channel was identified in a recent work, assuming that the reads themselves are noiseless substrings of the original sequence. Modern shotgun sequencers, however, also output quality scores for each base read, indicating the confidence in its identification. Bases with low quality scores can be considered to be erased. Motivated by this, we consider the \textit{shotgun sequencing channel with erasures}, where each symbol in any read can be independently erased with some probability $\delta$. We identify achievable rates for this channel, using a random code construction and a decoder that uses typicality-like arguments to merge the reads.
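Simulating this channel model is straightforward; the sketch below samples reads uniformly from a sequence and erases each base independently with probability $\delta$ (the read length, read count, and the "?" erasure symbol are arbitrary choices):

```python
# Sketch of the shotgun sequencing channel with erasures: uniformly positioned
# substring reads, each base independently erased with probability delta.
import random

def shotgun_with_erasures(seq, num_reads, read_len, delta, seed=0):
    rng = random.Random(seed)
    reads = []
    for _ in range(num_reads):
        start = rng.randrange(len(seq) - read_len + 1)
        read = [b if rng.random() > delta else "?"  # "?" marks an erasure
                for b in seq[start:start + read_len]]
        reads.append("".join(read))
    return reads

sequence = "".join(random.Random(1).choice("ACGT") for _ in range(60))
print(shotgun_with_erasures(sequence, num_reads=5, read_len=12, delta=0.1))
```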
arxiv:2401.16342
Named entities are ubiquitous in text that naturally accompanies images, especially in domains such as news or Wikipedia articles. In previous work, named entities have been identified as a likely reason for the low performance of image-text retrieval models pretrained on Wikipedia and evaluated on named-entity-free benchmark datasets. Because they are rarely mentioned, named entities can be challenging to model. They also represent missed learning opportunities for self-supervised models: the link between a named entity and an object in the image may be missed by the model, but it would not be if the object were mentioned using a more common term. In this work, we investigate hypernymization as a way to deal with named entities for pretraining grounding-based multi-modal models and for fine-tuning on open-vocabulary detection. We propose two ways to perform hypernymization: (1) a "manual" pipeline relying on a comprehensive ontology of concepts, and (2) a "learned" approach where we train a language model to perform hypernymization. We run experiments on data from Wikipedia and from The New York Times. We report improved pretraining performance on objects of interest following hypernymization, and we show the promise of hypernymization on open-vocabulary detection, specifically on classes not seen during training.
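To make the "manual" route concrete, here is a sketch of ontology-based hypernym lookup using WordNet via NLTK; WordNet stands in for the comprehensive ontology, entity linking is skipped, and the mention is simply assumed to resolve to its first noun synset, so this is not the paper's actual pipeline:

```python
# Sketch of ontology-based hypernymization using WordNet (a stand-in for the
# "manual" pipeline's ontology; entity linking is not modeled).
import nltk
nltk.download("wordnet", quiet=True)
from nltk.corpus import wordnet as wn

def hypernymize(term: str, levels: int = 1) -> str:
    synsets = wn.synsets(term, pos=wn.NOUN)
    if not synsets:
        return term                  # no ontology entry: keep the original term
    syn = synsets[0]                 # naive sense choice, for illustration only
    for _ in range(levels):
        parents = syn.hypernyms()
        if not parents:
            break
        syn = parents[0]
    return syn.lemmas()[0].name().replace("_", " ")

print(hypernymize("poodle"))  # -> "dog": the rare term replaced by a common one
```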
arxiv:2304.13130
This paper addresses several questions of Feng, Gruenhage, and Shen which arose from Michael's theory of continuous selections from countable spaces. We construct an example of a space which is $L$-selective but not $\mathbb{Q}$-selective from $\mathfrak{d} = \omega_1$, and an $L$-selective space which is not selective for a $P$-point ultrafilter from the assumption of $\mathsf{CH}$. We also produce $\mathsf{ZFC}$ examples of Fréchet spaces where countable subsets are first countable which are not $L$-selective.
arxiv:1910.10634
We study a family of orthogonal polynomials which generalizes a sequence of polynomials considered by L. Carlitz. We show that they are a special case of the Sheffer polynomials and point out some interesting connections with certain Sobolev orthogonal polynomials.
arxiv:math/0504476
Imaging fluorescent disease biomarkers in tissues and skin is a non-invasive method to screen for health conditions. We report an automated process that combines intraoral fluorescent porphyrin biomarker imaging, clinical examinations, and machine learning for correlation of systemic health conditions with periodontal disease. 1,215 intraoral fluorescent images, from 284 consenting adults aged 18-90, were analyzed using a machine learning classifier that can segment periodontal inflammation. The classifier achieved an AUC of 0.677 with precision and recall of 0.271 and 0.429, respectively, indicating a learned association between disease signatures in the collected images. Periodontal diseases were more prevalent among males (p = 0.0012) and older subjects (p = 0.0224) in the screened population. Physicians independently examined the collected images, assigning localized Modified Gingival Indices (MGIs). MGIs and periodontal disease were then cross-correlated with responses to a medical history questionnaire, blood pressure and body mass index measurements, and optic nerve, tympanic membrane, neurological, and cardiac rhythm imaging examinations. Gingivitis and early periodontal disease were associated with subjects diagnosed with optic nerve abnormalities (p < 0.0001) in their retinal scans. We also report significant co-occurrences of periodontal disease in subjects reporting swollen joints (p = 0.0422) and a family history of eye disease (p = 0.0337). These results indicate cross-correlation of poor periodontal health with systemic health outcomes and stress the importance of oral health screenings at the primary care level. Our screening process and analysis method, using images and machine learning, can be generalized for automated diagnoses and systemic health screenings for other diseases.
arxiv:1810.10664
A theory describing the forces governing the self-assembly of nanoparticles at the solid-liquid interface is developed. In the process, new theoretical results are derived to describe the effect that the field penetration of a point-like particle into an electrode has on the image potential energy and pair interaction energy profiles at the electrode-electrolyte interface. The application of the theory is demonstrated for gold and ITO electrode systems, promising materials for novel colour-tuneable electrovariable smart mirrors and mirror-window devices, respectively. Model estimates suggest that electrovariability is attainable in both systems and will act as a guide for future experiments. Lastly, the generalisability of the theory towards electrovariable nanoplasmonic systems suggests that it may contribute towards the design of intelligent metamaterials with programmable properties.
arxiv:1709.05494
The adoption of battery electric vehicles (BEVs) may significantly reduce greenhouse gas emissions caused by road transport. However, there is wide disagreement as to how soon battery electric vehicles will play a major role in overall transportation. Focusing on battery electric passenger cars, we here analyze BEV adoption across 17 individual countries, Europe, and the world, and consistently find exponential growth trends. Modeling-based estimates of future adoption given past trends suggest system-wide adoption substantially faster than typical economic analyses have proposed so far. For instance, we estimate the majority of passenger cars in Europe to be electric by about 2031. Within regions, the predicted times of mass adoption are largely insensitive to model details. Despite significant differences in current electric fleet sizes across regions, their growth rates consistently indicate fast doubling times of approximately 15 months, hinting at radical economic and infrastructural consequences in the near future.
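The arithmetic behind such extrapolations is simple; a sketch of a constant-doubling-time projection follows (the 2% initial share and the 15-month doubling time are illustrative inputs, and a real fit would also cap growth as the share saturates):

```python
# Sketch of a constant-doubling-time extrapolation: with a share s0 of the fleet
# electric today and doubling time T months, the share reaches 50% after
# t = T * log2(0.5 / s0) months (ignoring saturation effects near high shares).
import math

def months_to_majority(s0: float, doubling_months: float = 15.0) -> float:
    return doubling_months * math.log2(0.5 / s0)

# Illustrative: a 2% electric fleet share doubling every 15 months.
print(f"{months_to_majority(0.02):.0f} months")  # ~70 months, i.e. under 6 years
```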
arxiv:2306.16152
We propose a randomized algorithm to compute isomorphisms between finite fields using elliptic curves. To compute an isomorphism between two fields of cardinality $q^n$, our algorithm takes $$n^{1+o(1)} \log^{1+o(1)} q + \max_{\ell} \left( \ell^{n_\ell + 1 + o(1)} \log^{2+o(1)} q + O(\ell \log^5 q) \right)$$ time, where $\ell$ runs through primes dividing $n$ but not $q(q-1)$, and $n_\ell$ denotes the highest power of $\ell$ dividing $n$. Prior to this work, the best known run-time dependence on $n$ was quadratic. Our run-time dependence on $n$ is at worst quadratic but is subquadratic if $n$ has no large prime factor. In particular, the $n$ for which our run time is nearly linear in $n$ have natural density at least $3/10$. The crux of our approach is finding a point on an elliptic curve of a prescribed prime power order, or equivalently finding preimages under the Lang map on elliptic curves over finite fields. We formulate this as an open problem whose resolution would solve the finite field isomorphism problem with run time nearly linear in $n$.
arxiv:1604.03072
We introduce a notion of oriented dialgebra and develop a cohomology theory for oriented dialgebras, based on the possibility of mixing the standard chain complexes computing group cohomology and associative dialgebra cohomology. We also introduce a $1$-parameter formal deformation theory for oriented dialgebras and show that the cohomology of oriented dialgebras controls such deformations.
arxiv:2001.02386
Determinantal point processes (DPPs) are a class of repulsive point processes, popular for their relative simplicity. They are traditionally defined via their marginal distributions, but a subset of DPPs called "L-ensembles" have tractable likelihoods and are thus particularly easy to work with. Indeed, in many applications, DPPs are more naturally defined based on the L-ensemble formulation rather than through the marginal kernel. The fact that not all DPPs are L-ensembles is unfortunate, but there is a unifying description. We introduce here extended L-ensembles, and show that all DPPs are extended L-ensembles (and vice versa). Extended L-ensembles have very simple likelihood functions and contain L-ensembles and projection DPPs as special cases. From a theoretical standpoint, they fix some pathologies in the usual formalism of DPPs, for instance the fact that projection DPPs are not L-ensembles. From a practical standpoint, they extend the set of kernel functions that may be used to define DPPs: we show that conditionally positive definite kernels are good candidates for defining DPPs, including DPPs that need no spatial scale parameter. Finally, extended L-ensembles are based on so-called "saddle-point matrices", and we prove an extension of the Cauchy-Binet theorem for such matrices that may be of independent interest.
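For orientation, the tractable likelihood of a plain L-ensemble is the determinant ratio $P(A) = \det(L_A)/\det(L+I)$; a numpy sketch with an arbitrary PSD kernel matrix is below. The extended L-ensemble likelihood generalizes this form and is not reproduced here:

```python
# Sketch of the standard L-ensemble likelihood P(A) = det(L_A) / det(L + I),
# the tractable form that extended L-ensembles generalize. L is an arbitrary
# PSD matrix built from random features, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 2))  # 6 ground-set items with 2 features each
L = np.exp(-0.5 * np.sum((X[:, None] - X[None]) ** 2, axis=-1))  # RBF kernel

def l_ensemble_likelihood(L: np.ndarray, subset: list) -> float:
    num = np.linalg.det(L[np.ix_(subset, subset)]) if subset else 1.0
    den = np.linalg.det(L + np.eye(len(L)))
    return num / den

print(l_ensemble_likelihood(L, [0, 3]))  # probability of observing exactly {0, 3}
```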
arxiv:2107.06345
A new numerical version of the Wigner approach to quantum mechanics for the treatment of thermodynamic properties of strongly coupled systems of particles has been developed for extreme conditions, when analytical approximations obtained from different kinds of perturbation theories cannot be applied. An explicit analytical expression for the Wigner function has been obtained in linear and harmonic approximations. Fermi statistical effects are accounted for by an effective pair pseudopotential depending on the coordinates, momenta, and degeneracy parameter of the particles, taking into account Pauli blocking of fermions. A new quantum Monte Carlo method for calculating average values of arbitrary quantum operators has been proposed. Calculations of the momentum distribution function of the degenerate ideal Fermi gas have been carried out to test the developed approach. Comparison of the obtained momentum distribution function of strongly correlated Coulomb systems of particles with Maxwell-Boltzmann and Fermi distributions shows the significant influence of interparticle interaction both at small momenta and in the high-energy quantum 'tails'.
arxiv:1703.04448
With disks and networks providing gigabytes per second, parsing decimal numbers from strings becomes a bottleneck. We consider the problem of parsing decimal numbers to the nearest binary floating-point value. The general problem requires variable-precision arithmetic. However, we need at most 17 digits to represent 64-bit standard floating-point numbers (IEEE 754). Thus we can represent the decimal significand with a single 64-bit word. By combining the significand and precomputed tables, we can compute the nearest floating-point number using as few as one or two 64-bit multiplications. Our implementation can be several times faster than conventional functions present in standard C libraries on modern 64-bit systems (Intel, AMD, ARM and POWER9). Our work is available as open-source software used by major systems such as Apache Arrow and Yandex ClickHouse. The Go standard library has adopted a version of our approach.
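A simplified illustration of why a machine-word significand suffices in the common case: when the significand fits in 53 bits and the power of ten is exactly representable as a double, a single rounded multiply or divide already yields the correctly rounded result. This classic fast path is only a narrow slice of the paper's full table-driven algorithm:

```python
# Classic "fast path" for decimal-to-double conversion: if the significand fits
# in 53 bits and 10^|e| is an exact double, one rounded multiply/divide gives
# the exact nearest double. Not the paper's full table-driven algorithm.
POW10 = [float(10**i) for i in range(23)]    # 10^0 .. 10^22 are exact doubles

def fast_path(significand: int, exp10: int):
    if significand >= 1 << 53:               # significand no longer exact as a double
        return None                          # fall back to the general algorithm
    if 0 <= exp10 <= 22:
        return significand * POW10[exp10]    # exact * exact, rounded once
    if -22 <= exp10 < 0:
        return significand / POW10[-exp10]   # exact / exact, rounded once
    return None

print(fast_path(12345, -2))                  # 123.45 (nearest double)
```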
arxiv:2101.11408
We study the portfolio selection problem of a long-run investor who is maximising the asymptotic growth rate of her expected utility. We show that, somewhat surprisingly, it is essentially not affected by the introduction of a floor constraint which requires the wealth process to dominate a given benchmark at all times. We further study the notion of long-run optimality of wealth processes via convergence of finite-horizon value functions to the asymptotic optimal value. We characterise long-run optimality under floor and drawdown constraints.
arxiv:1305.6831
Context: The popularity of cloud computing as the primary platform for developing, deploying, and delivering software is largely driven by the promise of cost savings. Therefore, it is surprising that no empirical evidence has been collected to determine whether cost awareness permeates the development process and how it manifests in practice. Objective: This study aims to provide empirical evidence of cost awareness by mining open source repositories of cloud-based applications. The focus is on infrastructure-as-code artifacts that automate software (re)deployment on the cloud. Methods: A systematic search through 152,735 repositories resulted in the selection of 2,010 relevant ones. We then analyzed 538 relevant commits and 208 relevant issues using a combination of inductive and deductive coding. Results: The findings indicate that developers are not only concerned with the cost of their application deployments but also take actions to reduce these costs beyond selecting cheaper cloud services. We also identify research areas for future consideration. Conclusion: Although we focus on a particular infrastructure-as-code technology (Terraform), the findings can be applicable to cloud-based application development in general. The provided empirical grounding can serve developers seeking to reduce costs through service selection, resource allocation, deployment optimization, and other techniques.
arxiv:2304.07531
Analyses of event shapes and forward jet production in deep inelastic scattering at the HERA collider are described. The results are compared to QCD predictions.
arxiv:hep-ex/9911047
We present an effective model for spike-timing-dependent synaptic plasticity (STDP) in terms of two interacting traces, corresponding to the fraction of activated NMDA receptors and the Ca2+ concentration in the dendritic spine of the postsynaptic neuron. This model intends to bridge the worlds of existing simplistic phenomenological rules and highly detailed models, thus constituting a practical tool for the study of the interplay between neural activity and synaptic plasticity in extended spiking neural networks. For isolated pairs of pre- and postsynaptic spikes, the standard pairwise STDP rule is reproduced, with appropriate parameters determining the respective weights and time scales for the causal and the anti-causal contributions. The model otherwise contains only three free parameters, which can be adjusted to reproduce triplet nonlinearities in both hippocampal culture and cortical slices. We also investigate the transition from time-dependent to rate-dependent plasticity occurring for both correlated and uncorrelated spike patterns.
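The pairwise limit mentioned above is the standard exponential-window STDP rule; a minimal sketch of that reference rule follows (the amplitudes and time constants are generic textbook values, and the paper's two-trace NMDA/Ca2+ model itself is not reproduced):

```python
# Minimal pairwise STDP reference rule (the limit the two-trace model reproduces):
# a causal pair (pre before post) potentiates, an anti-causal pair depresses,
# both with exponential time windows. Parameter values are generic, not the paper's.
import math

A_PLUS, A_MINUS = 0.01, 0.012      # causal / anti-causal amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # time constants in ms

def dw(t_pre: float, t_post: float) -> float:
    dt = t_post - t_pre
    if dt > 0:                                   # pre fires before post: potentiation
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    return -A_MINUS * math.exp(dt / TAU_MINUS)   # otherwise: depression

print(dw(0.0, 10.0))   # causal pair -> positive weight change
print(dw(10.0, 0.0))   # anti-causal pair -> negative weight change
```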
arxiv:1410.0557
We propose a feasible scheme for teleporting an arbitrary polarization state or entanglement of photons, requiring only single-photon (SP) sources, simple linear optical elements, and SP quantum non-demolition measurements. An unknown SP polarization state can be faithfully teleported either to a duplicate polarization state or to an entangled state. Our proposal can be used to implement long-distance quantum communication in a simple way. The scheme is within the reach of current technology and significantly simplifies the realistic implementation of long-distance, high-fidelity quantum communication with photon qubits.
arxiv:quant-ph/0202040
We analyse a linear lattice Boltzmann (LB) formulation for the simulation of linear acoustic wave propagation in heterogeneous media. We employ the single-relaxation-time Bhatnagar-Gross-Krook (BGK) as well as the general multi-relaxation-time (MRT) collision operators. By calculating the dispersion relation for various 2D lattices, we show that the D2Q5 lattice is the most suitable model for the linear acoustic problem. We also implement a grid-refinement algorithm for the LB scheme to simulate waves propagating in a heterogeneous medium with velocity contrasts. Our results show that the LB scheme's performance is comparable to that of classical second-order finite-difference schemes. Given its efficiency for parallel computation, the LB method can be a cost-effective tool for the simulation of linear acoustic waves in complex geometries and multiphase media.
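A minimal sketch of a linear D2Q5 BGK update illustrates the collide-and-stream structure of such a scheme (uniform periodic grid, a single relaxation time, and a Gaussian pressure pulse are all illustrative choices; the paper's MRT operator and grid refinement are not shown):

```python
# Minimal linear D2Q5 BGK lattice Boltzmann sketch for acoustics on a periodic
# grid. Parameter values are illustrative; MRT and grid refinement are omitted.
import numpy as np

N, tau, steps = 128, 0.6, 200
c = np.array([[0, 0], [1, 0], [-1, 0], [0, 1], [0, -1]])  # D2Q5 velocities
w = np.array([1/3, 1/6, 1/6, 1/6, 1/6])                   # weights, cs^2 = 1/3
cs2 = 1/3

x = np.arange(N) - N / 2
rho0 = 1.0 + 0.01 * np.exp(-(x[None, :]**2 + x[:, None]**2) / 50.0)  # pressure pulse
f = w[:, None, None] * rho0                                # initialize at rest

for _ in range(steps):
    rho = f.sum(axis=0)                                    # density (pressure) field
    jx = (c[:, 0, None, None] * f).sum(axis=0)             # momentum components
    jy = (c[:, 1, None, None] * f).sum(axis=0)
    feq = w[:, None, None] * (rho + (c[:, 0, None, None] * jx
                                     + c[:, 1, None, None] * jy) / cs2)
    f += (feq - f) / tau                                   # BGK collision
    for i in range(5):                                     # streaming
        f[i] = np.roll(f[i], shift=(c[i, 1], c[i, 0]), axis=(0, 1))

print("total density conserved:", np.isclose(f.sum(), rho0.sum()))
```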
arxiv:1704.03172
Electric interactions have a strong impact on the structure and dynamics of biomolecules in their native water environment. Given the variety of water arrangements in hydration shells and the femto- to subnanosecond time range of structural fluctuations, there is a strong quest for sensitive noninvasive probes of local electric fields. The stretching vibrations of phosphate groups, in particular the asymmetric stretching vibration $\nu_{\rm as}$(PO$_2$)$^-$, allow for a quantitative mapping of dynamic electric fields in aqueous environments via a field-induced redshift of their transition frequencies and concomitant changes of vibrational line shapes. We present a systematic study of $\nu_{\rm as}$(PO$_2$)$^-$ excitations in molecular systems of increasing complexity, including dimethyl phosphate (DMP), short DNA and RNA duplex structures, and transfer RNA (tRNA) in water. A combination of linear infrared absorption, two-dimensional infrared (2D-IR) spectroscopy, and molecular dynamics (MD) simulations gives quantitative insight into electric-field tuning rates of vibrational frequencies, electric field and fluctuation amplitudes, and molecular interaction geometries. Beyond neat water environments, the formation of contact ion pairs between phosphate groups and Mg2+ ions is demonstrated via frequency upshifts of the $\nu_{\rm as}$(PO$_2$)$^-$ vibration, resulting in a distinct vibrational band. The frequency positions of contact geometries are determined by an interplay of attractive electric and repulsive exchange interactions.
arxiv:2108.11379
Deep metric learning is essential for visual recognition. The widely used pair-wise (or triplet) based loss objectives cannot make full use of the semantic information in training samples or give enough attention to hard samples during optimization. Thus, they often suffer from a slow convergence rate and inferior performance. In this paper, we show how to learn an importance-driven distance metric via optimal transport programming from batches of samples. It can automatically emphasize hard examples and lead to significant improvements in convergence. We propose a new batch-wise optimal transport loss and combine it in an end-to-end deep metric learning manner. We use it to learn the distance metric and deep feature representation jointly for recognition. Empirical results on visual retrieval and classification tasks with six benchmark datasets, i.e., MNIST, CIFAR10, SHREC13, SHREC14, ModelNet10, and ModelNet40, demonstrate the superiority of the proposed method. It can accelerate the convergence rate significantly while achieving state-of-the-art recognition performance. For example, in 3D shape recognition experiments, we show that our method can achieve better recognition performance within only 5 epochs than what mainstream 3D shape recognition approaches obtain after 200 epochs.
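Optimal transport between two batches is commonly computed with entropic regularization via Sinkhorn iterations; the generic sketch below conveys the mechanics of a batch-wise OT objective, but it is not the paper's specific importance-driven loss:

```python
# Generic entropic optimal transport between two feature batches via Sinkhorn
# iterations. Shows the mechanics of a batch-wise OT objective only; the paper's
# importance-driven loss and its coupling to training are not reproduced.
import numpy as np

def sinkhorn_cost(A, B, eps=0.1, iters=200):
    """Entropic OT cost between uniform measures on the rows of A and B."""
    C = ((A[:, None] - B[None]) ** 2).sum(-1)  # pairwise squared distances
    K = np.exp(-C / eps)
    a = np.full(len(A), 1.0 / len(A))
    b = np.full(len(B), 1.0 / len(B))
    u = np.ones_like(a)
    for _ in range(iters):                     # alternating scaling updates
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]            # transport plan
    return (P * C).sum()

rng = np.random.default_rng(0)
print(sinkhorn_cost(rng.normal(size=(8, 4)), rng.normal(size=(8, 4))))
```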
arxiv:1903.08923
Multicellular cable bacteria display an exceptional form of biological conduction, channeling electrical currents across centimeter distances through a regular network of protein fibers embedded in the cell envelope. The fiber conductivity is among the highest recorded for biomaterials, providing a promising outlook for new bio-electronic technologies, but the underlying mechanism of electron transport remains elusive. Here, we use detailed electrical characterization down to cryogenic temperatures, which reveals that long-range conduction in these bacterial protein wires is based on a unique type of quantum-assisted multistep hopping. The conductance near room temperature reveals thermally activated behavior, yet with a low activation energy, suggesting that substantial delocalization across charge carrier sites contributes to the high conductivity. At cryogenic temperatures, the conductance becomes virtually independent of temperature, thus indicating that quantum vibrations couple to the charge transport. Our results demonstrate that quantum effects can manifest themselves in biological systems over macroscopic length scales.
arxiv:2308.09560
Ordinal regression is a fundamental problem within the field of computer vision, with customised, well-trained models for specific tasks. While pre-trained vision-language models (VLMs) have exhibited impressive performance on various vision tasks, their potential for ordinal regression has received less exploration. In this study, we first investigate CLIP's potential for ordinal regression, from which we expect the model to generalise to different ordinal regression tasks and scenarios. Unfortunately, vanilla CLIP fails on this task, since current VLMs have a well-documented limitation in encapsulating compositional concepts such as number sense. We propose a simple yet effective method called NumCLIP to improve the quantitative understanding of VLMs. We disassemble the exact image-to-number text matching problem into coarse classification and fine prediction stages. We discretize and phrase each numerical bin with common language concepts to better leverage the available pre-trained alignment in CLIP. To account for the inherent continuous property of ordinal regression, we propose a novel fine-grained cross-modal ranking-based regularisation loss specifically designed to keep both semantic and ordinal alignment in CLIP's feature space. Experimental results on three general ordinal regression tasks demonstrate the effectiveness of NumCLIP, with 10% and 3.83% accuracy improvement on the historical image dating and image aesthetics assessment tasks, respectively. Code is publicly available at https://github.com/xmed-lab/NumCLIP.
arxiv:2408.03574
We study abelian dominance for confinement in terms of the local gluon properties in the maximally abelian (MA) gauge in a semi-analytical manner with the help of lattice QCD. The global Weyl symmetry persistently remains as the relic of SU($N_c$) in the MA gauge, and provides the ambiguity on the electric and magnetic charges. We derive the criterion on the SU($N_c$)-gauge invariance in terms of the residual symmetry in the abelian gauge. In lattice QCD, we find microscopic abelian dominance on the link variable for the whole region of $\beta$ in the MA gauge. The off-diagonal angle variable, which is not constrained by the MA gauge-fixing condition, tends to be random besides the residual gauge degrees of freedom. Within the random-variable approximation for the off-diagonal angle variable, we prove that the off-diagonal gluon contribution to the Wilson loop obeys the perimeter law in the MA gauge, and show exact abelian dominance for the string tension, although a small deviation is brought about by the finite-size effect of the Wilson loop in the actual lattice QCD simulation.
arxiv:hep-lat/9807025
Various supernovae (SNe), compact object coalescences, and tidal disruption events are widely believed to occur embedded in active galactic nucleus (AGN) accretion disks and to generate detectable electromagnetic (EM) signals. We collectively refer to them as \emph{AGN disk transients}. The inelastic hadronuclear ($pp$) interactions between shock-accelerated cosmic rays and AGN disk materials shortly after the ejecta shock breaks out of the disk can produce high-energy neutrinos. However, the expected efficiency of neutrino production would decay rapidly if one adopts a pure Gaussian density atmosphere profile, applicable to stable gas-dominated disks. On the other hand, AGN outflows and disk winds are commonly found around AGN accretion disks. In this paper, we show that the circum-disk medium would further consume the shock kinetic energy to produce high-energy neutrinos more efficiently, especially the $\sim$ TeV-PeV neutrinos that IceCube detects. Thanks to the existence of the circum-disk medium, we find that the neutrino production is enhanced significantly, making a much higher contribution to the diffuse neutrino background. Optimistically, $\sim 20\%$ of the diffuse neutrino background can be contributed by AGN disk transients.
arxiv:2211.13953
the redshifted 21 cm line is an emerging tool in cosmology, in principle permitting three - dimensional surveys of our universe that reach unprecedentedly large volumes, previously inaccessible length scales, and hitherto unexplored epochs of our cosmic timeline. large radio telescopes have been constructed for this purpose, and in recent years there has been considerable progress in transforming 21 cm cosmology from a field of considerable theoretical promise to one of observational reality. increasingly, practitioners in the field are coming to the realization that the success of observational 21cm cosmology will hinge on software algorithms and analysis pipelines just as much as it does on careful hardware design and telescope construction. this review provides a pedagogical introduction to state - of - the - art ideas in 21 cm data analysis, covering a wide variety of steps in a typical analysis pipeline, from calibration to foreground subtraction to mapmaking to power spectrum estimation to parameter estimation.
arxiv:1907.08211
we analyze the capability of prompt photon production in pp and $p\bar{p}$ collisions to constrain the gluon distribution of the proton, considering data from fixed - target experiments as well as collider measurements. combined fits are performed to these large - $p_t$ direct photon cross sections and lepton - proton deep - inelastic scattering data in the framework of next - to - leading order perturbative qcd. special attention is paid to theoretical uncertainties originating from the scale dependence of the results and from the fragmentation contribution to the prompt photon cross section.
arxiv:hep-ph/9505404
this note is aimed at presenting a new algebraic approach to momentum - space correlators in conformal field theory. as an illustration we present a new lie - algebraic method to compute frequency - space two - point functions for charged scalar operators of cft $ _ { 1 } $ dual to ads $ _ { 2 } $ black hole with constant background electric field. our method is based on the real - time prescription of ads / cft correspondence, euclideanization of ads $ _ { 2 } $ black hole and projective unitary representations of the lie algebra $ \ mathfrak { sl } ( 2, \ mathbb { r } ) \ oplus \ mathfrak { sl } ( 2, \ mathbb { r } ) $. we derive novel recurrence relations for euclidean cft $ _ { 1 } $ two - point functions, which are exactly solvable and completely determine the frequency - and charge - dependences of two - point functions. wick - rotating back to lorentzian signature, we obtain retarded and advanced cft $ _ { 1 } $ two - point functions that are consistent with the known results.
arxiv:1309.2939
the problem of makespan optimal solving of cooperative path finding ( cpf ) is addressed in this paper. the task in cpf is to relocate a group of agents in a non - colliding way so that each agent eventually reaches its goal location from the given initial location. the abstraction adopted in this work assumes that agents are discrete items moving in an undirected graph by traversing edges. makespan optimal solving of cpf means to generate solutions that are as short as possible in terms of the total number of time steps required for the execution of the solution. we show that reducing cpf to propositional satisfiability ( sat ) represents a viable option for obtaining makespan optimal solutions. several encodings of cpf into propositional formulae are suggested and experimentally evaluated. the evaluation indicates that sat - based cpf solving outperforms other makespan optimal methods significantly in highly constrained situations ( environments that are densely occupied by agents ).
arxiv:1610.05452
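a toy time - expanded encoding in the spirit of reducing cpf to sat, written against the python-sat toolkit; this is a generic illustration rather than one of the paper's optimized encodings ( edge - swap conflicts are omitted for brevity ). makespan - optimal solving calls it with increasing t until the formula becomes satisfiable:

```python
from itertools import combinations
from pysat.solvers import Glucose3   # pip install python-sat

def encode_cpf(agents, vertices, edges, starts, goals, T):
    """clauses: each agent occupies exactly one vertex per step, moves only
    along edges (or waits), respects start/goal, and never shares a vertex."""
    V = len(vertices)
    adj = {v: {v} for v in vertices}
    for u, w in edges:
        adj[u].add(w)
        adj[w].add(u)
    x = lambda a, v, t: 1 + a * V * (T + 1) + v * (T + 1) + t  # dimacs variable id
    cnf = []
    for a in range(agents):
        cnf += [[x(a, starts[a], 0)], [x(a, goals[a], T)]]
        for t in range(T + 1):
            cnf.append([x(a, v, t) for v in vertices])            # at least one vertex
            cnf += [[-x(a, v, t), -x(a, w, t)]
                    for v, w in combinations(vertices, 2)]        # at most one vertex
        for t in range(T):
            for v in vertices:                                    # edge moves or waits
                cnf.append([-x(a, v, t)] + [x(a, w, t + 1) for w in adj[v]])
    for t in range(T + 1):                                        # vertex conflicts
        for v in vertices:
            cnf += [[-x(a, v, t), -x(b, v, t)]
                    for a, b in combinations(range(agents), 2)]
    return cnf

# two agents swapping ends of a 4-cycle; T = 4 suffices here
cnf = encode_cpf(2, range(4), [(0, 1), (1, 2), (2, 3), (3, 0)], [0, 3], [3, 0], 4)
with Glucose3(bootstrap_with=cnf) as solver:
    print(solver.solve())   # True; solver.get_model() decodes to a plan
```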
oriented bounding box regression is crucial for oriented object detection. however, regression - based methods often suffer from boundary problems and the inconsistency between loss and evaluation metrics. in this paper, a modulated kalman iou loss of approximate skewiou is proposed, named mkiou. to avoid boundary problems, we convert the oriented bounding box to a gaussian distribution, then use the kalman filter to approximate the intersection area. however, there exists a significant difference between the calculated and actual intersection areas. thus, we propose a modulation factor to adjust the sensitivity of angle deviation and width - height offset to loss variation, making the loss more consistent with the evaluation metric. furthermore, the gaussian modeling method avoids the boundary problem but simultaneously causes angle confusion for square objects. thus, the gaussian angle loss ( ga loss ) is presented to solve this problem by adding a corrected loss for square targets. the proposed ga loss can be easily extended to other gaussian - based methods. experiments on three publicly available aerial image datasets, dota, ucas - aod, and hrsc2016, show the effectiveness of the proposed method.
arxiv:2206.15109
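the gaussian modeling step above follows a standard recipe in rotated - box losses; a small numpy sketch of the conversion ( the kalman - filter intersection approximation and the modulation factor are paper - specific and omitted here ):

```python
import numpy as np

def obb_to_gaussian(cx, cy, w, h, theta):
    """represent an oriented box as a 2-d gaussian: mean at the box centre,
    covariance built from the rotated half-extents."""
    r = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    s = np.diag([w / 2.0, h / 2.0])
    return np.array([cx, cy]), r @ s @ s @ r.T
```

the boundary problem disappears because a box and its angle - flipped, width - height - swapped equivalents map to the same gaussian; the flip side, as the abstract notes, is that square boxes become rotation - ambiguous, which the ga loss corrects.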
we discuss plausible mechanisms to produce bullet - like ejecta from the precessing disk in the ss 433 system. we show that non - steady shocks in the sub - keplerian accretion flow can provide the basic timescale of the ejection interval while the magnetic rubber - band effect of the toroidal flux tubes in this disk can yield flaring events.
arxiv:astro-ph/0208148
working with anticommuting weyl ( or majorana ) spinors in the framework of the van der waerden calculus is standard in supersymmetry. the natural framework for rigorous supersymmetric quantum field theory makes use of operator - valued superdistributions defined on supersymmetric test functions. in turn this makes necessary a van der waerden calculus in which the grassmann variables anticommute but the fermionic components are commutative instead of being anticommutative. we work out such a calculus in view of applications to the rigorous conceptual problems of the n = 1 supersymmetric quantum field theory.
arxiv:hep-th/0408195
we combine dark energy survey year 1 clustering and weak lensing data with baryon acoustic oscillations ( bao ) and big bang nucleosynthesis ( bbn ) experiments to constrain the hubble constant. assuming a flat $ \ lambda $ cdm model with minimal neutrino mass ( $ \ sum m _ \ nu = 0. 06 $ ev ) we find $ h _ 0 = 67. 2 ^ { + 1. 2 } _ { - 1. 0 } $ km / s / mpc ( 68 % cl ). this result is completely independent of hubble constant measurements based on the distance ladder, cosmic microwave background ( cmb ) anisotropies ( both temperature and polarization ), and strong lensing constraints. there are now five data sets that : a ) have no shared observational systematics ; and b ) each constrain the hubble constant with a few percent level precision. we compare these five independent measurements, and find that, as a set, the differences between them are significant at the $ 2. 1 \ sigma $ level ( $ \ chi ^ 2 / dof = 20. 1 / 11 $, probability to exceed = 4 % ). this difference is low enough that we consider the data sets statistically consistent with each other. the best fit hubble constant obtained by combining all five data sets is $ h _ 0 = 69. 1 ^ { + 0. 4 } _ { - 0. 6 } $ km / s / mpc.
arxiv:1711.00403
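the quoted probability to exceed can be checked in one line from the stated statistic:

```python
from scipy.stats import chi2

# survival function of chi^2 = 20.1 with 11 degrees of freedom
print(chi2.sf(20.1, df=11))   # ~0.04, matching the quoted 4%
```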
we study the higgs boson decay into dark matter ( dm ) in the framework of freeze - in at stronger coupling. even though the higgs - dm coupling is significant, up to order one, dm does not thermalize due to the boltzmann suppression of its production at low temperatures. we find that this mechanism leads to observable higgs decay into invisible final states with a branching fraction of 10 % and below, while producing the correct dm relic abundance. this applies to dm masses down to the mev scale, which requires a careful treatment of the hadronic production modes. for dm masses below the muon threshold, the boltzmann suppression is not operative and the freeze - in nature of the production mechanism is instead guaranteed by the smallness of the electron yukawa coupling. as a result, mev dm with a significant coupling to the higgs boson remains non - thermal as long as the reheating temperature does not exceed $\mathcal{O}(100)$ mev. our findings indicate that there are good prospects for observing light non - thermal dm via invisible higgs decay at the lhc and fcc.
arxiv:2410.21874
we present a model of inflation based on a racetrack model without flux stabilization. the initial conditions are set automatically through topological inflation. this ensures that the dilaton is not swept to weak coupling through either thermal effects or fast roll. including the effect of non - dilaton fields we find that moduli provide natural candidates for the inflaton. the resulting potential generates slow - roll inflation without the need to fine tune parameters. the energy scale of inflation must be near the gut scale and the scalar density perturbation generated has a spectrum consistent with wmap data.
arxiv:hep-th/0503178
we present a stochastic epidemic model to study the effect of various preventive measures, such as uniform reduction of contacts and transmission, vaccination, isolation, screening and contact tracing, on a disease outbreak in a homogeneously mixing community. the model is based on an infectivity process, which we define through stochastic contact and infectiousness processes, so that each individual has an independent infectivity profile. in particular, we monitor variations of the reproduction number and of the distribution of generation times. we show that some interventions, i. e. uniform reduction and vaccination, affect the former while leaving the latter unchanged, whereas other interventions, i. e. isolation, screening and contact tracing, affect both quantities. we provide a theoretical analysis of the variation of these quantities, and we show that, in practice, the variation of the generation time distribution can be significant and that it can cause biases in the estimation of reproduction numbers. the framework, because of its general nature, captures the properties of many infectious diseases, but particular emphasis is on covid - 19, for which numerical results are provided.
arxiv:2201.09761
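the reported estimation bias can be illustrated with the classical euler - lotka relation ( not the paper's model ): for an observed exponential growth rate r, the implied reproduction number is 1 / e[exp(-r t)] over generation times t, so interventions that reshape the generation - time distribution change the implied reproduction number at fixed r. a quick numerical check, assuming gamma - distributed generation times purely for illustration:

```python
import numpy as np

def implied_r0(r, gen_times):
    """euler-lotka: reproduction number implied by growth rate r under a
    sample of generation times; a mis-specified distribution biases it."""
    return 1.0 / np.mean(np.exp(-r * np.asarray(gen_times)))

rng = np.random.default_rng(0)
r = 0.1  # per day
print(implied_r0(r, rng.gamma(5.0, 1.0, 100_000)))  # mean 5 days -> about 1.61
print(implied_r0(r, rng.gamma(3.0, 1.0, 100_000)))  # mean 3 days -> about 1.33
```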
epidemiologists and social scientists have used the network scale - up method ( nsum ) for over thirty years to estimate the size of a hidden sub - population within a social network. this method involves querying a subset of network nodes about the number of their neighbors belonging to the hidden sub - population. in general, nsum assumes that the social network topology and the hidden sub - population distribution are well - behaved ; hence, the nsum estimate is close to the actual value. however, bounds on nsum estimation errors have not been analytically proven. this paper provides analytical bounds on the error incurred by the two most popular nsum estimators. these bounds assume that the queried nodes accurately provide their degree and the number of neighbors belonging to the hidden population. our key findings are twofold. first, we show that when an adversary designs the network and places the hidden sub - population, then the estimate can be a factor of $ \ omega ( \ sqrt { n } ) $ off from the real value ( in a network with $ n $ nodes ). second, we also prove error bounds when the underlying network is randomly generated, showing that a small constant factor can be achieved with high probability using samples of logarithmic size $ o ( \ log { n } ) $. we present improved analytical bounds for erdős - rényi and scale - free networks. our theoretical analysis is supported by an extensive set of numerical experiments designed to determine the effect of the sample size on the accuracy of the estimates in both synthetic and real networks.
arxiv:2407.10640
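for concreteness, the two estimators most often meant by "the two most popular nsum estimators" are the ratio - of - sums ( killworth - style ) and mean - of - ratios forms; a sketch under the abstract's assumption that reported degrees and counts are accurate:

```python
import numpy as np

def nsum_ratio_of_sums(y, d, N):
    """killworth-style form: N * sum(y) / sum(d), where y[i] is the number of
    hidden-population neighbors reported by respondent i and d[i] its degree."""
    return N * np.sum(y) / np.sum(d)

def nsum_mean_of_ratios(y, d, N):
    """average the per-respondent proportions instead of pooling the sums."""
    return N * np.mean(np.asarray(y, float) / np.asarray(d, float))
```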
we investigate the influence of the measure in the path integral for euclidean quantum gravity in four dimensions within the regge calculus. the action is bounded without additional terms by fixing the average lattice spacing. we set the length scale by a parameter $ \ beta $ and consider a scale invariant and a uniform measure. in the low $ \ beta $ region we observe a phase with negative curvature and a homogeneous distribution of the link lengths independent of the measure. the large $ \ beta $ region is characterized by inhomogeneous link lengths distributions with spikes and positive curvature depending on the measure.
arxiv:hep-lat/9204010
in this paper we will demonstrate the use of feynman diagrams for one dimensional scattering in quantum mechanics. we will evaluate the s - matrix explicitly for the dirac delta and finite wall potentials by summing the full series of feynman diagrams, illustrating the spirit of perturbation theory. this technique may be useful in introductory quantum mechanics courses, and provides the student with intuition about conservation laws in the context of scattering problems by connecting feynman diagrams, free propagation, and conservation of the corresponding observable. it also provides a toy model for calculating s - matrix elements in quantum field theory.
arxiv:2207.13851
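for the delta potential v(x) = λδ(x), each diagram with n insertions of the vertex contributes a factor (-imλ/ħ²k)ⁿ from the free propagator evaluated at the origin, and the full series is geometric. a standard worked example, consistent with the textbook transmission coefficient ( the paper's conventions may differ ):

```latex
t(k) \;=\; \sum_{n=0}^{\infty}\left(-\frac{i m \lambda}{\hbar^{2} k}\right)^{n}
      \;=\; \frac{1}{1 + i m \lambda/(\hbar^{2} k)},
\qquad
|t(k)|^{2} \;=\; \frac{1}{1 + \left(m \lambda/(\hbar^{2} k)\right)^{2}} .
```

the series only converges for |mλ/(ħ²k)| < 1, while the resummed expression holds for all k, which is itself a useful teaching point about resummation.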
we study existence and uniqueness of solutions to a nonlinear elliptic boundary value problem with a general, and possibly singular, lower order term, whose model is $$ \begin{cases} -\Delta_p u = h(u)\,\mu & \text{in } \Omega, \\ u > 0 & \text{in } \Omega, \\ u = 0 & \text{on } \partial\Omega. \end{cases} $$ here $\Omega$ is an open bounded subset of $\mathbb{R}^n$ ( $n \ge 2$ ), $\Delta_p u := \operatorname{div}(|\nabla u|^{p-2}\nabla u)$ ( $1 < p < n$ ) is the $p$ - laplacian operator, $\mu$ is a nonnegative bounded radon measure on $\Omega$ and $h(s)$ is a continuous, positive and finite function outside the origin which grows at most as $s^{-\gamma}$, with $\gamma \ge 0$, near zero.
arxiv:1709.06042
in this note we make explicit the notion of the hermite interpolant of a multivariate symmetric polynomial, generalizing the notion of the lagrange interpolant to the case where roots coalesce ; this extends the results on the symmetric hermite interpolation basis by m. - f. roy and a. szpirglas.
arxiv:2501.18507
a fundamental challenge in calcium imaging has been to infer the timing of action potentials from the measured noisy calcium fluorescence traces. we systematically evaluate a range of spike inference algorithms on a large benchmark dataset recorded from varying neural tissue ( v1 and retina ) using different calcium indicators ( ogb - 1 and gcamp6 ). we show that a new algorithm based on supervised learning in flexible probabilistic models outperforms all previously published techniques, setting a new standard for spike inference from calcium signals. importantly, it performs better than other algorithms even on datasets not seen during training. future data acquired in new experimental conditions can easily be used to further improve its spike prediction accuracy and generalization performance. finally, we show that comparing algorithms on artificial data is not informative about performance on real population imaging data, suggesting that a benchmark dataset may greatly facilitate future algorithmic developments.
arxiv:1503.00135
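a common generative assumption in this literature, and a reasonable mental model for the benchmark above, is auto - regressive calcium dynamics driven by spikes; a minimal simulation of the forward model that spike inference has to invert:

```python
import numpy as np

def simulate_calcium(spikes, gamma=0.95, sigma=0.2, seed=0):
    """ar(1) calcium model: c_t = gamma * c_{t-1} + s_t, observed with
    additive gaussian noise of scale sigma."""
    rng = np.random.default_rng(seed)
    c = np.zeros(len(spikes), dtype=float)
    for t in range(1, len(spikes)):
        c[t] = gamma * c[t - 1] + spikes[t]
    return c + sigma * rng.standard_normal(len(spikes))
```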
$u_i$ such that for each pair $i, j$ of indices, the restrictions of $f_i$ and $f_j$ to $u_i \cap u_j$ are equal. then this defines a unique function $f : x \to y$ such that $f|_{u_i} = f_i$ for all $i$. this is the way that functions on manifolds are defined. an extension of a function $f$ is a function $g$ such that $f$ is a restriction of $g$. a typical use of this concept is the process of analytic continuation, which allows extending functions whose domain is a small part of the complex plane to functions whose domain is almost the whole complex plane. here is another classical example of a function extension that is encountered when studying homographies of the real line. a homography is a function $h(x) = \frac{ax + b}{cx + d}$ such that $ad - bc \neq 0$. its domain is the set of all real numbers different from $-d/c$, and its image is the set of all real numbers different from $a/c$. if one extends the real line to the projectively extended real line by including $\infty$, one may extend $h$ to a bijection from the extended real line to itself by setting $h(\infty) = a/c$ and $h(-d/c) = \infty$. == in calculus == the idea of function, starting in the 17th century, was fundamental to the new infinitesimal calculus. at that time, only real - valued functions of a real variable were considered, and all functions were assumed to be smooth. but the definition was soon extended to functions of several variables and to functions of a complex variable. in the second half of the 19th century, the mathematically rigorous definition of a function was introduced, and functions
https://en.wikipedia.org/wiki/Function_(mathematics)
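a concrete instance of the homography extension described above, taking a = 2, b = 1, c = 1, d = -3 ( so ad - bc = -7 ≠ 0 ):

```latex
h(x) = \frac{2x + 1}{x - 3}, \qquad h(3) = \infty, \qquad h(\infty) = \frac{a}{c} = 2,
```

which turns h into a bijection of the projectively extended real line onto itself: the excluded domain point 3 = -d/c and the excluded image value 2 = a/c are both accounted for by ∞.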
we consider the load balancing problem in large - scale heterogeneous systems with multiple dispatchers. we introduce a general framework called local - estimation - driven ( led ). under this framework, each dispatcher keeps local ( possibly outdated ) estimates of queue lengths for all the servers, and the dispatching decision is made purely based on these local estimates. the local estimates are updated via infrequent communications between dispatchers and servers. we derive sufficient conditions for led policies to achieve throughput optimality and delay optimality in heavy - traffic, respectively. these conditions directly imply delay optimality for many previous local - memory based policies in heavy traffic. moreover, the results enable us to design new delay optimal policies for heterogeneous systems with multiple dispatchers. finally, the heavy - traffic delay optimality of the led framework directly resolves a recent open problem on how to design optimal load balancing schemes using delayed information.
arxiv:2002.08908
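an illustrative led - style dispatcher ( not the paper's exact policy ): routing decisions use only local, possibly stale queue estimates, which are refreshed by infrequent dispatcher - server communication:

```python
class LedDispatcher:
    """route each job to the server with the smallest local estimate, bump
    that estimate optimistically, and resync with true lengths only rarely."""

    def __init__(self, n_servers, sync_period):
        self.est = [0] * n_servers
        self.sync_period = sync_period
        self.ticks = 0

    def dispatch(self):
        s = min(range(len(self.est)), key=self.est.__getitem__)
        self.est[s] += 1          # local bookkeeping between syncs
        return s

    def tick(self, true_lengths):
        self.ticks += 1
        if self.ticks % self.sync_period == 0:
            self.est = list(true_lengths)   # infrequent server feedback
```

the paper's optimality conditions constrain how such estimates are updated and used; this sketch only captures the local - estimation structure.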
we introduce hk - legicost, a new three - way parallel corpus of cantonese - english translations, containing 600 + hours of cantonese audio, its standard traditional chinese transcript, and english translation, segmented and aligned at the sentence level. we describe the notable challenges in corpus preparation : segmentation, alignment of long audio recordings, and sentence - level alignment with non - verbatim transcripts. such transcripts make the corpus suitable for speech translation research when there are significant differences between the spoken and written forms of the source language. due to its large size, we are able to demonstrate competitive speech translation baselines on hk - legicost and extend them to promising cross - corpus results on the fleurs cantonese subset. these results deliver insights into speech recognition and translation research in languages for which non - verbatim or "noisy" transcription is common due to various factors, including vernacular and dialectal speech.
arxiv:2306.11252
this study systematically benchmarks several non - fault - tolerant quantum computing algorithms across four distinct optimization problems : max - cut, number partitioning, knapsack, and quantum spin glass. our benchmark includes noisy intermediate - scale quantum ( nisq ) algorithms, such as the variational quantum eigensolver, quantum approximate optimization algorithm, quantum imaginary time evolution, and imaginary time quantum annealing, with both ansatz - based and ansatz - free implementations, alongside tensor network methods and direct simulations of the imaginary - time schrödinger equation. for comparative analysis, we also utilize classical simulated annealing and quantum annealing on d - wave devices. employing default configurations, our findings reveal that no single non - ftqc algorithm performs optimally across all problem types, underscoring the need for tailored algorithmic strategies. this work provides an objective performance baseline and serves as a critical reference point for advancing nisq algorithms and quantum annealing platforms.
arxiv:2410.22810
unlike their hermitian counterparts, non - hermitian ( nh ) systems may display an exponential sensitivity to boundary conditions and an extensive number of edge - localized states in systems with open boundaries, a phenomenon dubbed the "non - hermitian skin effect." the nh skin effect is one of the primary challenges to defining a topological theory of nh hamiltonians, as the sensitivity to boundary conditions invalidates the traditional bulk - boundary correspondence. the nh skin effect has recently been connected to the winding number, a topological invariant unique to nh systems. in this paper, we extend the definition of the winding number to disordered nh systems by generalizing established results on disordered hermitian topological insulators. our real - space winding number is self - averaging, continuous as a function of the parameters in the problem, and remains quantized even in the presence of strong disorder. we verify that our real - space formula still predicts the nh skin effect, allowing for the possibility of predicting and observing the nh skin effect in strongly disordered nh systems. as an application we apply our results to predict a nh anderson skin effect where a skin effect is developed as disorder is added to a clean system, and to explain recent results in optical funnels.
arxiv:2007.03738
the use of machine learning or artificial intelligence ( ml / ai ) holds substantial potential toward improving many functions and needs of the public sector. in practice however, integrating ml / ai components into public sector applications is severely limited not only by the fragility of these components and their algorithms, but also because of mismatches between components of ml - enabled systems. for example, if an ml model is trained on data that is different from data in the operational environment, field performance of the ml component will be dramatically reduced. separate from software engineering considerations, the expertise needed to field an ml / ai component within a system frequently comes from outside software engineering. as a result, assumptions and even descriptive language used by practitioners from these different disciplines can exacerbate other challenges to integrating ml / ai components into larger systems. we are investigating classes of mismatches in ml / ai systems integration, to identify the implicit assumptions made by practitioners in different fields ( data scientists, software engineers, operations staff ) and find ways to communicate the appropriate information explicitly. we will discuss a few categories of mismatch, and provide examples from each class. to enable ml / ai components to be fielded in a meaningful way, we will need to understand the mismatches that exist and develop practices to mitigate the impacts of these mismatches.
arxiv:1910.06136
in this study, we make non - invasive, remote, passive measurements of the heart beat frequency and determine the map of blood pulsation intensity in a region of interest ( roi ) of skin. the roi used was the forearm of a volunteer. the method employs a regular video camera and visible light, and the video acquisition takes less than 1 minute. the mean cardiac frequency found in our volunteer was within 1 bpm of the ground - truth value simultaneously obtained via earlobe plethysmography. using the signals extracted from the video images, we have determined an intensity map for the blood pulsation at the surface of the skin. in this paper we present the experimental and data processing details of the work as well as limitations of the technique.
arxiv:1611.03527
we study the higher - spin gauge theory in six - dimensional anti - de sitter space $ ads _ 6 $ that is based on the exceptional lie superalgebra $ f ( 4 ) $. the relevant higher - spin algebra was constructed in arxiv:1409.2185 [hep-th]. we determine the spectrum of the theory and show that it contains the physical fields of the romans $ f ( 4 ) $ gauged supergravity. the full spectrum consists of an infinite tower of unitary supermultiplets of $ f ( 4 ) $ which extend the romans multiplet to higher spins plus a single short supermultiplet. motivated by applications to this novel supersymmetric higher - spin theory as well as to other theories, we extend the known one - loop tests of $ ads / cft $ duality in various directions. the spectral zeta - function is derived for the most general case of fermionic and mixed - symmetry fields, which allows one to test the type - a and b theories and supersymmetric extensions thereof in any dimension. we also study higher - spin doubletons and partially - massless fields. while most of the tests are successfully passed, the type - b theory in all even dimensional anti - de sitter spacetimes presents an interesting puzzle : the free energy as computed from the bulk is not equal to that of the free fermion on the cft side, though there is some systematics to the discrepancy.
arxiv:1608.07582
we consider the multi - armed bandit ( mab ) problem, where an agent sequentially chooses actions and observes rewards for the actions it took. while the majority of algorithms try to minimize the regret, i. e., the cumulative difference between the reward of the best action and the agent ' s action, this criterion might lead to undesirable results. for example, in large problems, or when the interaction with the environment is brief, finding an optimal arm is infeasible, and regret - minimizing algorithms tend to over - explore. to overcome this issue, algorithms for such settings should instead focus on playing near - optimal arms. to this end, we suggest a new, more lenient, regret criterion that ignores suboptimality gaps smaller than some $ \ epsilon $. we then present a variant of the thompson sampling ( ts ) algorithm, called $ \ epsilon $ - ts, and prove its asymptotic optimality in terms of the lenient regret. importantly, we show that when the mean of the optimal arm is high enough, the lenient regret of $ \ epsilon $ - ts is bounded by a constant. finally, we show that $ \ epsilon $ - ts can be applied to improve the performance when the agent knows a lower bound of the suboptimality gaps.
arxiv:2008.03959
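the lenient regret criterion is easy to state in code. below, standard bernoulli thompson sampling with lenient - regret bookkeeping; note that the paper's ε - ts modifies the sampling rule itself, which this sketch does not reproduce:

```python
import numpy as np

def ts_with_lenient_regret(means, horizon, eps=0.05, seed=0):
    """bernoulli thompson sampling; gaps of at most eps incur no regret."""
    rng = np.random.default_rng(seed)
    k = len(means)
    a, b = np.ones(k), np.ones(k)   # beta(1, 1) posteriors
    best = max(means)
    lenient = 0.0
    for _ in range(horizon):
        arm = int(np.argmax(rng.beta(a, b)))
        reward = float(rng.random() < means[arm])
        a[arm] += reward
        b[arm] += 1.0 - reward
        gap = best - means[arm]
        if gap > eps:               # gaps below eps are forgiven
            lenient += gap
    return lenient

print(ts_with_lenient_regret([0.50, 0.48, 0.30], horizon=10_000))
```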
we consider the prompt photon production at modern high energy colliders in the framework of the $k_t$ - factorization approach. we compare our theoretical predictions with recent experimental data at hera and tevatron, emphasizing the distinction between our theoretical predictions and the results of nlo qcd calculations. finally, we extrapolate our predictions to lhc energies.
arxiv:hep-ph/0611384
we point out that supersymmetric warped geometry can provide a solution to the susy flavor problem, while generating hierarchical yukawa couplings. in supersymmetric theories in a slice of $ads_5$ with the kaluza - klein scale $m_{kk}$ much higher than the weak scale, if all visible fields originate from 5d bulk fields and supersymmetry breaking is mediated by the bulk radion superfield and / or some brane chiral superfields, potentially dangerous soft scalar masses and trilinear $a$ parameters at $m_{kk}$ can be naturally suppressed compared to the gaugino masses by a small warp factor. we present simple models yielding phenomenologically interesting patterns of soft parameters in this framework.
arxiv:hep-ph/0301131
reproducibility is a fundamental requirement of the scientific process since it enables outcomes to be replicated and verified. computational scientific experiments can benefit from improved reproducibility for many reasons, including validation of results and reuse by other scientists. however, designing reproducible experiments remains a challenge and hence the need for developing methodologies and tools that can support this process. here, we propose a conceptual model for reproducibility to specify its main attributes and properties, along with a framework that allows for computational experiments to be findable, accessible, interoperable, and reusable. we present a case study in ecological niche modeling to demonstrate and evaluate the implementation of this framework.
arxiv:1909.00271
we introduce algebraic structures known as psybrackets and use them to define invariants of pseudoknots and singular knots and links. psybrackets are niebrzydowski tribrackets with additional structure inspired by the reidemeister moves for pseudoknots and singular knots. examples and computations are provided.
arxiv:2006.02276
the compact muon solenoid ( cms ) detector at the cern large hadron collider ( lhc ) is undergoing an extensive phase ii upgrade program to prepare for the challenging conditions of the high - luminosity lhc ( hl - lhc ). a new timing layer is designed to measure minimum ionizing particles ( mips ) with a time resolution of 30 ps and a hermetic coverage up to a pseudo - rapidity of $ | \ eta | $ = 3. this mip timing detector ( mtd ) will consist of a central barrel region based on lyso : ce crystals read out with sipms and two end - caps instrumented with radiation - tolerant low gain avalanche diodes ( lgads ). the precision time information from the mtd will reduce the effects of the high levels of pile - up expected at the hl - lhc, and will bring new and unique capabilities to the cms detector. we present the current status and ongoing r & d of the mtd, including recent test beam results.
arxiv:1810.00350
we study an online contextual decision - making problem with resource constraints. at each time period, the decision - maker first predicts a reward vector and resource consumption matrix based on a given context vector and then solves a downstream optimization problem to make a decision. the final goal of the decision - maker is to maximize the summation of the reward and the utility from resource consumption, while satisfying the resource constraints. we propose an algorithm that mixes a prediction step based on the " smart predict - then - optimize ( spo ) " method with a dual update step based on mirror descent. we prove regret bounds and demonstrate that the overall convergence rate of our method depends on the $ \ mathcal { o } ( t ^ { - 1 / 2 } ) $ convergence of online mirror descent as well as risk bounds of the surrogate loss function used to learn the prediction model. our algorithm and regret bounds apply to a general convex feasible region for the resource constraints, including both hard and soft resource constraint cases, and they apply to a wide class of prediction models in contrast to the traditional settings of linear contextual models or finite policy spaces. we also conduct numerical experiments to empirically demonstrate the strength of our proposed spo - type methods, as compared to traditional prediction - error - only methods, on multi - dimensional knapsack and longest path instances.
arxiv:2206.07316
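the dual update has a simple euclidean special case ( projected subgradient; mirror descent with another distance - generating function follows the same pattern ). a hedged sketch, with the exact form assumed for illustration rather than taken from the paper:

```python
import numpy as np

def dual_step(lam, consumption, budget_rate, eta):
    """raise a resource's dual price when the period's consumption exceeds
    its per-period budget rate, then project onto the nonnegative orthant."""
    lam = np.asarray(lam, float) + eta * (np.asarray(consumption, float)
                                          - np.asarray(budget_rate, float))
    return np.maximum(lam, 0.0)
```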
we present deep stellar photometry of the boötes i dwarf spheroidal galaxy in g and i band filters, taken with the dark energy camera at cerro tololo in chile. our analysis reveals a large, extended region of stellar substructure surrounding the dwarf, as well as a distinct over - density encroaching on its tidal radius. a radial profile of the boötes i stellar distribution shows a break radius indicating the presence of extra - tidal stars. these observations strongly suggest that boötes i is experiencing tidal disruption, although not as extreme as that exhibited by the hercules dwarf spheroidal. combined with revised velocity dispersion measurements from the literature, we see evidence suggesting the need to review previous theoretical models of the boötes i dwarf spheroidal galaxy.
arxiv:1607.00447
recently jarvis has proved a correspondence between su ( n ) monopoles and rational maps of the riemann sphere into flag manifolds. furthermore, he has outlined a construction to obtain the monopole fields from the rational map. in this paper we examine this construction in some detail and provide explicit examples for spherically symmetric su ( n ) monopoles with various symmetry breakings. in particular we show how to obtain these monopoles from harmonic maps into complex projective spaces. the approach extends in a natural way to monopoles in hyperbolic space and we use it to construct new spherically symmetric su ( n ) hyperbolic monopoles.
arxiv:hep-th/9903183
estimating 3d hand pose from 2d images is a difficult, inverse problem due to the inherent scale and depth ambiguities. current state - of - the - art methods train fully supervised deep neural networks with 3d ground - truth data. however, acquiring 3d annotations is expensive, typically requiring calibrated multi - view setups or labor intensive manual annotations. while annotations of 2d keypoints are much easier to obtain, how to efficiently leverage such weakly - supervised data to improve the task of 3d hand pose prediction remains an important open question. the key difficulty stems from the fact that direct application of additional 2d supervision mostly benefits the 2d proxy objective but does little to alleviate the depth and scale ambiguities. embracing this challenge we propose a set of novel losses. we show by extensive experiments that our proposed constraints significantly reduce the depth ambiguity and allow the network to more effectively leverage additional 2d annotated images. for example, on the challenging freihand dataset using additional 2d annotation without our proposed biomechanical constraints reduces the depth error by only $ 15 \ % $, whereas the error is reduced significantly by $ 50 \ % $ when the proposed biomechanical constraints are used.
arxiv:2003.09282
this paper proposes several methods to transplant the compound chaotic image encryption scheme with permutation based on the 3d baker map into image formats such as the joint photographic experts group ( jpeg ) and graphics interchange format ( gif ). the new method avoids the lossy discrete cosine transform and quantization and can encrypt and decrypt jpeg images losslessly. our proposed method for gif successfully keeps the animation property. the security test results indicate the proposed methods have high security. since the jpeg and gif image formats are currently popular, this paper shows that the prospect of chaotic image encryption is promising.
arxiv:1208.0999
we present the broad - band 0. 6 - 150 kev suzaku and swift bat spectra of the low luminosity seyfert galaxy ngc 7213. the time - averaged continuum emission is well fitted by a single power law of photon index gamma = 1. 75, and from consideration of the fermi flux limit we constrain the high energy cutoff to be 350 kev < e < 25 mev. line emission from both near - neutral iron k - alpha at 6. 39 kev and highly ionised iron, from fe xxv and fe xxvi, is strongly detected in the suzaku spectrum, further confirming the results of previous observations with chandra and xmm - newton. we find the centroid energies of the fe xxv and fe xxvi emission to be 6. 60 kev and 6. 95 kev respectively, with the latter appearing to be resolved in the suzaku spectrum. we show that the fe xxv and fe xxvi emission can result from a highly photo - ionised plasma of column density $n_h \sim 3 \times 10^{23}$ cm$^{-2}$. a compton reflection component, e. g., originating from an optically - thick accretion disc or a compton - thick torus, appears either very weak or absent in this agn, subtending < 1 sr to the x - ray source, consistent with previous findings. indeed the absence of either neutral or ionised compton reflection, coupled with the lack of any relativistic fe k signatures in the spectrum, suggests that an inner, optically - thick accretion disc is absent in this source. instead, the accretion disc could be truncated, with the inner regions perhaps replaced by a compton - thin radiatively inefficient accretion flow. thus, the fe xxv and fe xxvi emission could both originate in ionised material, perhaps at the transition region between the hot, inner flow and the cold, truncated accretion disc, on the order of $10^3 - 10^4$ gravitational radii from the black hole. the origin of the unresolved neutral fe k - alpha emission is then likely to be further out, perhaps in the optical blr or a compton - thin pc - scale torus.
arxiv:1006.1318
the galactic source g2.4+1.4 is an optical and radio nebula containing an extreme wolf - rayet star. at one time this source was regarded as a supernova remnant, because of its apparent non - thermal radio spectrum, although this was based on limited observations. subsequent observations instead supported a flat, optically thin thermal radio spectrum for g2.4+1.4, and it was identified as a photoionized, mass - loss bubble, not a supernova remnant. recently, however, it has been claimed that this source has a non - thermal integrated radio spectrum. i discuss the integrated radio flux densities available for g2.4+1.4 from a variety of surveys, and show that it has a flat spectrum at gigahertz frequencies ( with a spectral index $\alpha$ of $0.02 \pm 0.08$, where flux density $s$ scales with frequency $\nu$ as $s \propto \nu^{-\alpha}$ ).
arxiv:2208.08694
large vision language models ( lvlms ) often suffer from object hallucination, producing objects not present in the given images. while current benchmarks for object hallucination primarily concentrate on the presence of a single object class rather than individual entities, this work systematically investigates multi - object hallucination, examining how models misperceive ( e. g., invent nonexistent objects or become distracted ) when tasked with focusing on multiple objects simultaneously. we introduce recognition - based object probing evaluation ( rope ), an automated evaluation protocol that considers the distribution of object classes within a single image during testing and uses visual referring prompts to eliminate ambiguity. with comprehensive empirical studies and analysis of potential factors leading to multi - object hallucination, we found that ( 1 ) lvlms suffer more hallucinations when focusing on multiple objects compared to a single object, ( 2 ) the tested object class distribution affects hallucination behaviors, indicating that lvlms may follow shortcuts and spurious correlations, and ( 3 ) hallucinatory behaviors are influenced by data - specific factors, salience and frequency, and model intrinsic behaviors. we hope to enable lvlms to recognize and reason about multiple objects that often occur in realistic visual scenes, provide insights, and quantify our progress towards mitigating the issues.
arxiv:2407.06192
the current - driven domain wall motion along two exchange - coupled ferromagnetic layers with perpendicular anisotropy is studied by means of micromagnetic simulations and compared to the conventional case of a single ferromagnetic layer. our results, where only the lower ferromagnetic layer is subjected to the interfacial dzyaloshinskii - moriya interaction and to the spin hall effect, indicate that the domain walls can be synchronously driven in the presence of a strong interlayer exchange coupling, and that the velocity is significantly enhanced due to the antiferromagnetic exchange coupling as compared with the single - layer case. on the contrary, when the coupling is of ferromagnetic nature, the velocity is reduced. we provide a full micromagnetic characterization of the current - driven motion in these multilayers, both in the absence and in the presence of longitudinal fields, and the results are explained based on a one - dimensional model. the interfacial dzyaloshinskii - moriya interaction, only necessary in this lower layer, gives the required chirality to the magnetization textures, while the interlayer exchange coupling favors the synchronous movement of the coupled walls by a dragging mechanism, without significant tilting of the domain wall plane. finally, the domain wall dynamics along curved strips is also evaluated. these results indicate that the antiferromagnetic coupling between the ferromagnetic layers mitigates the tilting of the walls, suggesting that these systems can achieve efficient and highly packed displacement of trains of walls in spintronic devices. a study taking into account defects and thermal fluctuations allows us to assess the range of validity of these claims.
arxiv:1801.07432
we revisit watermarking techniques based on pre - trained deep networks, in the light of self - supervised approaches. we present a way to embed both marks and binary messages into their latent spaces, leveraging data augmentation at marking time. our method can operate at any resolution and creates watermarks robust to a broad range of transformations ( rotations, crops, jpeg, contrast, etc ). it significantly outperforms the previous zero - bit methods, and its performance on multi - bit watermarking is on par with state - of - the - art encoder - decoder architectures trained end - to - end for watermarking. the code is available at github.com/facebookresearch/ssl_watermarking
arxiv:2112.09581
we present an expression for the covariance matrix for the set of state vectors describing a track fitted with a kalman filter. we demonstrate that this expression facilitates the use of a kalman filter track model in a minimum $ \ chi ^ 2 $ algorithm for the alignment of tracking detectors. we also show that it allows one to incorporate vertex constraints in such a procedure without refitting the tracks.
arxiv:0810.2241
starting from a general equation for organism ( or cell system ) growth and attributing an additional cell death rate ( besides the natural rate ) to therapy, we derive an equation for cell response to $\alpha$ radiation. different from previous models that are based on statistical theory, the present model connects the consequence of radiation with the growth process of a biosystem, and each variable or parameter has meaning regarding the cell evolving process. we apply this equation to model the dose response for $\alpha$ - particle radiation. it interprets the results of both high and low linear energy transfer ( let ) radiations. when let is high, the additional death rate is a constant, which implies that the localized cells are damaged immediately and the additional death rate is proportional to the number of cells present. at low let, the additional death rate includes a constant term and a term linear in radiation dose, implying that the damage to some cell nuclei has a time accumulating effect. this model indicates that the oxygen - enhancement ratio ( oer ) decreases while let increases, consistently.
arxiv:1207.1001
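a schematic reading of the model in our own notation, offered as an assumption - laden sketch rather than the paper's exact equations: growth with a therapy - induced death rate added on top of the natural one,

```latex
\frac{dN}{dt} = G(N) - \varepsilon(D)\,N,
\qquad
\varepsilon(D) =
\begin{cases}
\varepsilon_{0} & \text{high LET: prompt, dose-independent killing},\\
\varepsilon_{0} + \varepsilon_{1} D & \text{low LET: time-accumulating damage},
\end{cases}
```

where the low - let branch carries the constant - plus - linear structure stated in the abstract.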
the formalism of the linear response for the skyrme energy density functional including tensor terms derived in articles [ 1, 2 ] for nuclear matter is applied here to the case of pure neutron matter. as in article [ 2 ] we present analytical results for the response function in all channels, the landau parameters and the odd - power sum rules. special emphasis is given to the inverse energy weighted sum rule because it can be used to detect non physical instabilities. typical examples are discussed and numerical results shown. moreover, as a direct application, neutrino propagation in neutron matter is investigated through its neutrino mean free path at zero temperature. this quantity turns out to be very sensitive to the tensor terms of the skyrme energy density functional.
arxiv:1207.4006
a mirrored bilayer structure incorporating layers of graphene and mos2 is proposed here for surface plasmon resonance ( spr ) biosensing and its performance is evaluated numerically. starting from the basic configuration, the structure with graphene and mos2 layers is gradually developed for enhanced performance. reflectance is the main considered parameter for performance analysis. a theoretical framework based on fresnel ' s equations is presented and by measuring reflectance versus angle of incidence, sensitivity is calculated from the displacement of spr angle using finite - difference time - domain ( fdtd ) technique. our numerical analysis shows that using the proposed approach, about 4. 2 times enhanced sensitivity can be achieved compared to the basic kretschmann configuration. notably, the structure provides an enhancement for both angular and wavelength interrogations. furthermore, simulations have been repeated for various ligate - ligand pairs and consistent enhancement has been observed which proves the robustness of the structure. so, the proposed sensor architecture clearly provides pronounced improvement in sensitivity which can be significant in various practical applications.
arxiv:2002.02752
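a generic transfer - matrix sketch of the fresnel - equation reflectance calculation the abstract describes; the stack and optical constants below are indicative assumptions, and the paper's fdtd setup is not reproduced:

```python
import numpy as np

def spr_reflectance_p(n_list, d_list, lam, theta):
    """p-polarized reflectance of a layer stack via characteristic matrices.
    n_list: complex indices from prism to analyte; d_list: inner-layer
    thicknesses in the same length unit as lam; theta in radians."""
    k0 = 2.0 * np.pi / lam
    kx = k0 * n_list[0].real * np.sin(theta)                   # conserved component
    kz = [np.sqrt((k0 * n) ** 2 - kx ** 2 + 0j) for n in n_list]
    q = [kz_j / (k0 * n ** 2) for kz_j, n in zip(kz, n_list)]  # p-pol admittances
    m = np.eye(2, dtype=complex)
    for j in range(1, len(n_list) - 1):                        # inner layers only
        beta = kz[j] * d_list[j - 1]
        m = m @ np.array([[np.cos(beta), -1j * np.sin(beta) / q[j]],
                          [-1j * q[j] * np.sin(beta), np.cos(beta)]])
    num = (m[0, 0] + m[0, 1] * q[-1]) * q[0] - (m[1, 0] + m[1, 1] * q[-1])
    den = (m[0, 0] + m[0, 1] * q[-1]) * q[0] + (m[1, 0] + m[1, 1] * q[-1])
    return abs(num / den) ** 2

# kretschmann-style stack: bk7 prism / 50 nm gold / water (indicative, 633 nm)
n = [1.515 + 0j, 0.18 + 3.4j, 1.33 + 0j]
print(spr_reflectance_p(n, [50.0], 633.0, np.deg2rad(72.0)))
```

sweeping theta and locating the reflectance minimum gives the spr angle whose shift under analyte changes defines the sensitivity discussed above.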
blockchains have shown great promise as peer - to - peer digital currency systems over the past 10 years. however, with increased popularity, the demand for processing transactions has also grown leading to increased costs, confirmation times, and limited blockchain utility. there have been a number of proposals on how to scale blockchains, such as plasma, polkadot, elastico, rapidchain, bitcoin - ng, and omniledger. these solutions all propose the segmentation of every function of a blockchain, namely consensus, permanent data storage, transaction processing, and consistency, which significantly increases the complexity and difficulty of implementation. blockreduce is a new blockchain structure which only segments consistency, allowing it to scale to handle tens of thousands of transactions per second without impacting fault tolerance or decentralization. moreover, blockreduce will significantly decrease node bandwidth requirements and network latency through incentives while simultaneously minimizing other resource demands in order to prevent centralization of nodes.
arxiv:1811.00125