Notice that the term {{formula:2af4c627-3b31-44e1-90de-da8918258e87}} is the extremal length of the annulus (REF ), see e.g. {{cite:e073eeed68b83f543315b664691d75e4b7949767}}.
It is worth pointing out that a characteristic difference between the expansions (REF ) and (REF ) is the appearance of the logarithmic potential {{formula:e5cb1fb8-b258-4dd4-9721-088e45a2be70}} in the {{formula:9f21c8af-100b-440f-8752-a5b587f1fba8}} -term of (REF ).
This additional term can be interpreted as
{{formula:181347b6-090f-471d-a00f-efb75cb0347f}}
More recently, Prompt Tuning {{cite:b318bee75a2a7ebc0bbe58a51e154ecc562761f7}} has received much attention. With large frozen language models (say, {{formula:385b7f15-801a-4055-b1dc-85b4d6f20534}} 10 billion parameters), Prompt Tuning simply adds a tunable soft prompt to the input of the encoder, achieving results that are comparable to full-model tuning. Yet, our empirical results, in Section , demonstrate that Prompt Tuning for abstractive summarization yields very poor performance.
Prefix-Tuning {{cite:d2d87d5863e9b6be55011afe98c45df4f0387cb1}} extends prompt learning to the natural language generation area. With this technique, continuous prompts are applied to every layer of the pre-trained model, and it even shows gains over fine-tuning on few-shot generation tasks.
Yet the training process is not stable, and the required updates add to the memory and training costs. See more related work in Section F of the supplementary file.
{{figure:becea8e6-13f5-4b4b-ac63-29e9bc8585dc}}
The earliest known spatiotemporal gait recognition techniques date back to the late 80s. {{cite:eefc19939b219717d90edf4c8ba18b08bf850870}} proposed recognizing gait at a sagittal angle with the subject walking frontoparallel. The method modelled human gait as a set of spatiotemporal snakes {{cite:1231893ffcf6dd59f030f3693db2c08c118ba48d}} extracted from slices of the moving parts of the human contour along the time domain. They obtained a recognition accuracy of up to 83% on 26 image sequences across five human subjects. {{cite:0f5495c3b4aeba2e3c7190b16818fe6de4194222}} introduced the concept of optical flow to gait recognition. In principle, the points in the image sequence that vary with time oscillate periodically during the subject's gait. By observing the optical flow of these points, {{formula:423215fa-c138-4c0a-853a-42760c6cfbe6}} time-varying scalars can be produced. The phases of the oscillations of these scalars are used to represent the gait instance. They reached a higher recognition rate of up to 95% by testing their technique on seven instances for each of six human subjects.
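As an illustration of the phase representation described above, the phase of a periodic time-varying scalar at the gait frequency can be estimated by projecting it onto a complex exponential. This is only a hedged sketch: the frame rate, stride frequency, and synthetic signals below are hypothetical stand-ins, not data or code from the cited works.

```python
import numpy as np

fs = 30.0                           # hypothetical frame rate, Hz
t = np.arange(0, 4, 1 / fs)         # 4 s of a walking sequence
f_gait = 1.0                        # assumed stride frequency, Hz

# two synthetic time-varying scalars from the flow field, with a phase lag
s1 = np.sin(2 * np.pi * f_gait * t)
s2 = np.sin(2 * np.pi * f_gait * t - 0.8)

def phase_at(signal, t, freq):
    """Phase of `signal` at frequency `freq` via a single-bin Fourier projection."""
    return np.angle(np.sum(signal * np.exp(-2j * np.pi * freq * t)))

lag = phase_at(s1, t, f_gait) - phase_at(s2, t, f_gait)   # recovers ~0.8 rad
```

A gait signature could then be formed from the vector of such phases across all tracked points.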
Monte Carlo (MC) simulations are performed with the Metropolis algorithm for the
Heisenberg model {{cite:cdd5a03c2b847fffd1eaafa7aa518917a8c45b9f}}, {{cite:58bb2eb5ce327bee71e6d21ce88efd94fe6d571f}}, {{cite:64d0d011fcb68ca5b80a6c2f8d77c687ae51d490}}. The simulation cell
is 16{{formula:aec085c4-dd56-4698-b2d3-65a37787b19e}} 16{{formula:71dda109-e5ca-412b-9e25-276524684c42}} 16 unit cells with periodic
boundary conditions. At each temperature we carry out 400,000 sweeps to
prepare the system, and sample averages are accumulated over 800,000 sweeps.
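A minimal single-spin-flip Metropolis sweep for the classical Heisenberg model can be sketched as follows. This is a hedged illustration rather than the authors' code: the lattice size, coupling `J`, temperature `T`, and sweep counts are toy values (the simulations above use a 16×16×16 cell and hundreds of thousands of sweeps).

```python
import numpy as np

rng = np.random.default_rng(0)

def random_spins(L):
    """Random unit vectors on the sphere for an L x L x L lattice."""
    v = rng.normal(size=(L, L, L, 3))
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def local_field(S, i, j, k):
    """Sum of nearest-neighbour spins with periodic boundary conditions."""
    L = S.shape[0]
    return (S[(i + 1) % L, j, k] + S[(i - 1) % L, j, k] +
            S[i, (j + 1) % L, k] + S[i, (j - 1) % L, k] +
            S[i, j, (k + 1) % L] + S[i, j, (k - 1) % L])

def sweep(S, J, T):
    """One Metropolis sweep: propose a fresh random direction for every spin."""
    L = S.shape[0]
    for i in range(L):
        for j in range(L):
            for k in range(L):
                new = rng.normal(size=3)
                new /= np.linalg.norm(new)
                h = local_field(S, i, j, k)
                dE = -J * np.dot(new - S[i, j, k], h)   # E = -J sum_<ij> S_i . S_j
                if dE <= 0 or rng.random() < np.exp(-dE / T):
                    S[i, j, k] = new
    return S

L, J, T = 4, 1.0, 0.5
S = random_spins(L)
for _ in range(50):          # thermalization (far fewer sweeps than the paper)
    sweep(S, J, T)
m = np.linalg.norm(S.reshape(-1, 3).mean(axis=0))  # magnetization per spin
```

In practice one would also accumulate energy and magnetization samples at each temperature to form the thermodynamic averages.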
In simulations, we consider the geometric channel model with {{formula:649ab144-488b-4ae1-bbc0-73917b3e8548}} propagation paths between the transmitter and each receiver {{cite:ec7bb4f949f89eb7894892899539dfdc575e6951}}, {{cite:33d512bc6b156d5573f7a56f3ec693e15301c8fa}}, considering the high density of reflecting surfaces in an industrial environment. The receivers are divided equally among all the multicast groups. The numerical results show the average performance over 100 channel realizations. Table REF summarizes the parameter settings.
{{figure:c25df596-e239-482c-bdaf-935c5e691ca7}}{{table:7dfd792d-ae1b-4841-8869-f52844fa3712}}
In this section, we visualize additional visual results of our method in Fig. REF , Fig. REF , Fig. REF , Fig. REF , Fig. REF , Fig. REF , Fig. REF on Animal Faces-10, Food-10, CelebA-HQ, MetFace, Anime, LSUN-car and LSUN-church datasets {{cite:fb9b3cb3198e5aa453340358b1563f8a54d07b09}}, {{cite:3f252982d8cecb98a71f302a94e4a28c5def0fce}}, {{cite:917befde216b310242f8132279862ed946e10310}}, {{cite:6cf70f86965cdab771aa5282f33045ae77397619}}, {{cite:6af16a74cacd03aa7c2acf3b613598e71ea4fc7e}}, {{cite:ccf40693c273b71fa65e5b1f03653e8fb6815979}}, including
latent-guided diverse image synthesis and reference-guided image translation results.
Thanks to our multi-hot labeling setting with proposed prompt learning technique and {{formula:6d32b1bb-9277-4b53-8a57-bef97846294f}} domain, our mapping network and style encoder can produce the style vectors faithfully representing target multiple domain styles.
{{figure:e7ea1151-b174-447f-becf-31160032748f}}{{figure:9ac02618-329c-4669-bd00-47901169e78c}}{{figure:cb3f7a97-5667-4b94-8508-a2b0dd015c86}}{{figure:92511312-aa96-428b-afc0-b9754c4921f6}}{{figure:da36b289-4cb4-46ba-87f8-abe57720365b}}{{figure:9851f24c-9947-4f5e-b8dc-8d804963beea}}{{figure:5302f1aa-fa89-4cc6-aeb6-3ca95491c119}}{{figure:77b7f14d-eb47-47ae-9c3e-3eaf01008ec3}}{{figure:729b8896-54f5-4518-a1ad-d3bcd63b80e2}}{{figure:14ccd6ec-a24e-4857-a0e2-385ecceb8313}}
In the framework of theoretical works, we start with the iconic papers of Brodsky-Farrar and Matveev-Muradian-Tavkhelidze, Refs. {{cite:30049bae6638b2ab7b720adab8f40fd894d69596}}, {{cite:e768d46d986bcb3a420d291184d111e62f87ded4}} respectively, which predicted a scaling law at large transverse momentum: in the large {{formula:74861628-6d4d-49c9-9a5c-155c5ed11d54}} regime, the pion form factor should behave as {{formula:9c6f7756-d279-4f8b-9e50-989362a975d3}}. Another important prediction for the pion form factor at moderately large {{formula:02d0d479-c40c-4ed1-b9ed-dc3888bb865c}} can be seen in Ref. {{cite:c4bcc1d7257c643e6ac438a27b047a5b8c68d8d0}}, where Efremov and Radyushkin, taking into account the quark counting rule (QCR), argue that the pion form factor behaves as {{formula:0c1b50a4-166c-4a4b-96c1-2133ae943a00}}, where {{formula:c73c576e-ce05-49be-8c61-b748a668445f}} corresponds to the number of valence quarks in the composite system. Also in Ref. {{cite:c4bcc1d7257c643e6ac438a27b047a5b8c68d8d0}} the authors mention that in this moderately large {{formula:a4854ce6-5830-48d8-96c4-f511b379df9e}} region the contribution coming from
the Feynman mechanism is damped by the Sudakov form factor, implying an abrupt decrease of the pion form factor as {{formula:bccbaf3c-cc0c-4298-86c9-bea7fd832feb}} increases. In the large {{formula:2da0d009-400c-4938-bd4b-2a61692601de}} regime the process recovers the {{formula:a39e92fa-d99f-49ef-94e5-ad592d58847a}} behaviour. Other important theoretical works can be seen in Refs. {{cite:d1a8a95489918c4f0f90242c859cfe9336c8de1f}}, {{cite:c50d995a3d9f97e474a42b4122d8d9d6b07b44b2}}, {{cite:c4e4024e602c345c98ad03f562651e1ee700f9a7}}, {{cite:64d742b2efc341838ac83716c88c07c995d3df10}}, {{cite:7b91ce428e02aac7c716c8a79d0bade16e900c5e}}, {{cite:697f03bb6c6c1f99d32583dbdbf0d2e1bd482bd5}},
where the authors have used different approaches, such as QCD sum rules, dispersion relations with QCD constraints, the Dyson-Schwinger equation, perturbative QCD (pQCD), the light-front quark
model (LFQM), and the extended vector meson dominance model (extVMD), respectively.
As shown before, for least-squares regression the primal distributed ADMM uses {{formula:891ced09-4ee4-401d-930e-c820745e594a}} for the linear update. Since we directly obtain linear convergence, we define the convergence rate of primal distributed ADMM as the spectral radius of the linear mapping matrix {{formula:f7941c83-2279-452c-bb13-4803953341cb}}, {{formula:e2928603-8d17-4a90-a3d2-06960688c3d2}}. From the previous literature, primal distributed ADMM converges for any constant step size and {{formula:449a9d13-b242-4c5a-a2a2-53037be5d879}} ({{cite:2e09e0815051fe413d8614314cc8efb42a5c0898}}). The data structure that leads to the worst convergence rate depends on the choice of step size {{formula:58f6d3f0-d43a-4879-a8de-b72d1171cfde}}. In the following theorem, we provide the results when the step size is relatively large.
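Since the convergence rate is defined as the spectral radius of the linear mapping matrix, it can be checked numerically on any concrete instance. The 2×2 matrix below is a hypothetical stand-in for the update matrix, not one derived from this paper's setup.

```python
import numpy as np

def convergence_rate(M):
    """Spectral radius of a linear update matrix: largest eigenvalue modulus."""
    return max(abs(np.linalg.eigvals(M)))

# hypothetical linear iteration x_{t+1} = M x_t
M = np.array([[0.5, 0.1],
              [0.0, 0.8]])
rho = convergence_rate(M)   # 0.8 < 1, so the iteration converges linearly

# the iterate shrinks roughly like rho**t
x = np.array([1.0, 1.0])
for _ in range(100):
    x = M @ x
```

For a linear iteration, the error contracts roughly like the spectral radius raised to the iteration count, which the loop above confirms.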
All criteria are computed based on the implementation in {{cite:01b5ae16fc8954e892757eea4f6a0548eaaf922e}}, available at the publisher website.
https://www.crcpress.com/downloads/K14513/K14513_CD_Files.zip
We quantify the performance of the enhancement front-end when trained independently, and when trained jointly with and without GANs, in the case of the Transformer scheme. As exhibited in Fig. REF , the joint training degrades the enhancement module's performance on SSNR, CBAK, COVL, and PESQ, except for CSIG. It is safe to draw two conclusions from this result. First, these results suggest that these objective criteria cannot indicate the suitability of the enhanced data for the ASR task, which verifies the assertion that independent training easily leads the enhancement module into a sub-optimum. Second, the opposite trend on CSIG supports the assumption that the joint training strategy can mitigate the unseen distortion introduced by the handcrafted loss function. Another phenomenon worth noting is that the discrepancies on CBAK and SSNR reveal the conflict between erasing the noise contamination and averting speech distortion. Therefore, the equilibrium between these two goals is critical. The experimental results in Section validate the efficacy of the adversarial joint training with a global discriminant guide for reaching the equilibrium point.
Concerning {{formula:e26c52e0-c2c9-44bc-bc92-e38cec87051e}} , defined in {{cite:ff29039797f9e89a6c9b79885e2d34f84497c019}} and here at equation (REF ), another simple identity for (convex) tangential polygons is
{{formula:0a177b64-dfe5-4894-8aa7-5105939cef3c}}
As shown in fig:pipeline, the proposed system consists of two stages, i.e., the initial estimation stage, and the refinement stage. In the initial estimation stage, similar to {{cite:1763fe6f515e1f50eca3f5bb7c2b110d9b6f8ec7}}, {{cite:33e378ca6ce348a4c63a443e5809bf4cca0b61d6}}, a 2D detector is first adopted to extract the 2D bounding box from the input image, followed by an Object Detection Network (ODN) to recover the object poses as 3D bounding boxes and a new Local Implicit Embedding Network (LIEN) to extract the implicit local shape information from the image directly, which can further be decoded to infer 3D geometry.
The input image is also fed into a Layout Estimation Network (LEN) to produce a 3D layout bounding box and relative camera pose.
In the refinement stage, a novel Scene Graph Convolutional Network (SGCN) is designed to refine the initial predictions via the scene context information.
Since the 2D detector, LEN, and ODN have standard architectures similar to prior works {{cite:1763fe6f515e1f50eca3f5bb7c2b110d9b6f8ec7}}, {{cite:33e378ca6ce348a4c63a443e5809bf4cca0b61d6}}, in this section we describe the novel SGCN and LIEN in detail. Please refer to our supplementary materials for the details of our 2D detector, LEN, and ODN.
Given the above three different possible modifications of standard physics, we will organize this paper as follows. In a recent paper {{cite:5f1f7a6e449d514152cb6ee860e96663adbec931}}, we showed that there is an upper bound on the cutoff scale of {{formula:ecc07eb4-650c-45a1-ab95-41eb5d7d4a11}} in order to satisfy the out-of-thermal-equilibrium condition of matter-antimatter asymmetry. In this paper, after reviewing the methodology of calculating the sphaleron energy in Section , we will revise this upper bound in the scenario where the Friedmann equation is modified by including a non-interacting scalar field (by a non-interacting scalar field, we mean a field that has no non-gravitational interactions with ordinary matter) in Section . In {{cite:9e1a24ff85a3678e9448efacc5ef66f5fae64ace}}, the authors also investigated this scenario, but they used a different form of {{formula:d313d9f9-befe-49df-91e4-737e90770fde}} and a different form of the sphaleron profile functions. Also, we will use a more up-to-date and stringent observational constraint on the Hubble rate in the well-tested Big Bang Nucleosynthesis era. Then, we will turn to the Randall-Sundrum type II scenario. The lower bound of {{formula:f086e7cf-3c51-4acc-8c34-31d1f416e90c}} mentioned above turns out to make this model invalid for electroweak baryogenesis. Nevertheless, a modified expansion history, again caused by a non-interacting scalar field in the electroweak era, can make this model viable. This scenario is discussed in Section . Natural units are used throughout the paper.
High-order discrete methods for hyperbolic conservative equations comprise an
important research area in computational fluid dynamics (CFD). The rapid growth in development of
high-order methods has been to a great extent driven by a radical change in the balance between
computation and memory resources in modern high-performance computing (HPC) architectures.
To efficiently use the computing resources of modern HPC machines, CFD algorithms need to
adapt to hardware designs in which memory per compute core has become progressively more
limited {{cite:239a118565203dd4c5d1840e963487dfd999760b}}, {{cite:2384fd1800ba86e151d5e6b8296c946632a61b32}}, {{cite:10b40f6fa1dc27051700febe443a8ae18462adf6}}.
A computationally efficient numerical algorithm should
exploit greater availability of processing
power, while keeping memory use low.
This goal can be achieved by
discretizing the CFD governing equations to high order, thus providing
the desired solution accuracy at a higher cost in processor power, but with a smaller memory requirement {{cite:7ae7a57600ac9e504a4884e74c8e1c61c41f82c8}}, {{cite:281839537a67a4d95deabae33f9d0db88016a5b3}}, {{cite:f79eaf45276f4b6b1e5130ddbefc53b1ce902ef5}}.
Of course, a practical consideration in designing such high-order numerical algorithms is that
time-to-solution at a given grid resolution should not increase due to
the additional floating point operations.
The next lemma is the analogue of Lemma 5.7.23 of {{cite:b27d91d5b59172726f0365ee10b0cb5f597c5017}}.
At the long wavelength limit {{formula:4ebd8d84-4f72-4f69-86da-1723fbaf93d7}} , we can find an analytical expression for both optical and acoustic collective modes of the system. In this region and for {{formula:88661dc8-6e84-4c39-af98-9967e273c233}} (since for all acceptable density values {{formula:bb6ec972-062d-4a96-aefe-7006e3f14ed3}} is larger than the largest Fermi velocity of {{formula:d9e368ec-2499-4bad-828f-386cf7a18ed4}} bands), we can again make use of Eq. (REF ) for the density-density response functions of the 2DEG at the interface and also the limiting form of the density-density response function of graphene {{cite:a689e1a2855a7d238bf8193351648d22a035fd82}}
{{formula:c75540d3-2971-4f0b-9730-5229cefc9fb7}}
We do not present comparisons of our approach to more traditional OMT algorithms such as the Sinkhorn algorithm {{cite:233d0eaabdf058b5246e86451706bf2e4eda0479}} or the linear programming approaches {{cite:edde745b8d1239d3cee8319b7831ac0d7deada99}}, as these frameworks, although they approximate the OMT distances, do not compute the continuous optimal transport map, which is essential for the proposed density estimation. While finite element or finite difference based Monge-Ampere solvers, see e.g. {{cite:5fd76320f371940f9ebeacdb78b039c7847bc6b6}}, {{cite:1855d6451aa95e9fc4c1d58374653544f9c54a89}}, {{cite:4ce3d33a764f69ca4695b862e8a02472c5bcffa8}}, calculate the continuous OMT map, they are not suitable in dimensions greater than two or three.
until a solution to the original problem (REF ) is found {{cite:597fc15d5239153727911ca5eb95ab415938ad6f}}. This typically requires ten or fewer outer updates, and a simple update, {{formula:267e0e52-cd45-4e34-812a-bcffa63392e5}}, which scales the penalty by a constant value, works well in practice.
Throughout, subscripts are used to denote derivatives and we drop the variable dependence of the functions for clarity. The KKT system for this method is:
{{formula:21bc2a17-e818-424f-818c-af8ce4496611}}
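The outer loop described above — solve the penalized subproblem, then scale the penalty by a constant until the original constraints are satisfied — can be sketched on a toy equality-constrained problem. The objective, constraint, inner solver, step sizes, and scaling factor below are illustrative assumptions, not this paper's actual method.

```python
def penalty_solve(f_grad, c, c_grad, x0, mu0=1.0, scale=10.0, tol=1e-6):
    """Outer loop: scale the penalty mu by a constant factor until c(x) ~ 0."""
    x, mu = x0, mu0
    for _ in range(20):                      # typically ten or fewer suffice
        # inner solve: gradient descent on f(x) + (mu/2) c(x)^2
        for _ in range(10000):
            g = f_grad(x) + mu * c(x) * c_grad(x)
            x -= 0.1 / (1.0 + mu) * g        # crude step size for the sketch
        if abs(c(x)) < tol:
            break
        mu *= scale                          # the constant-factor penalty update
    return x, mu

# toy problem: min x^2  subject to  x - 1 = 0  (solution x = 1)
x, mu = penalty_solve(lambda x: 2 * x, lambda x: x - 1, lambda x: 1.0, x0=0.0)
```

Each outer update pushes the penalized minimizer `mu / (2 + mu)` closer to the constrained solution, so the constraint violation shrinks geometrically with the penalty.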
Another way ahead is to develop a robust uncertainty quantification framework via Bayesian inference for Bayesian Matrix NTMs. We note that the copy tasks by NTMs are computationally expensive, and Bayesian inference via Markov Chain Monte Carlo (MCMC) methods requires thousands of samples or model realisations for sampling the posterior distribution of neural weights and biases. We can incorporate recent frameworks that used MCMC with gradient methods and parallel computing {{cite:10019a84d4cbe7eb8af1f8b7d3f78b45b4627227}} to overcome computational challenges. Moreover, surrogate-based MCMC methods for computationally expensive models can also be used {{cite:79df413bd2c5e8cb3b238c4936100927c5c3c8ea}}, along with variational inference methods {{cite:4233c9d9d77f79937783c6822afa36688313902c}}.
Can we avoid inverse correlations with a larger training set?
Scaling alone without data curation seems unlikely to prevent inverse correlations.
{{cite:83ab4edfddadd635f1b9a4c541e1fb14a2a97d92}} examined a more general question and determined that the impressive robustness of the large vision-and-language model CLIP is determined by the distribution of its training data rather than its quantity.
Similarly, inverse correlations stem from biases in the training distribution
(e.g. a class {{formula:ac3e0eb9-b8d6-4f50-95cb-600d1d1deab5}} appearing more frequently with image background {{formula:0dc12aaf-5218-4ec2-bdea-644750472967}} than any other).
And biases in a distribution do not vanish with more i.i.d. samples.
Indeed, more data can cover more of the support of the distribution,
but this coverage will remain uneven, i.e. biased.
The problem can become one of “subpopulation shift” {{cite:80ee4dfcdef41e8b6c741157bdda863e50e2be56}}
rather than distribution shift, but it remains similarly challenging.
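The point that more i.i.d. samples sharpen rather than remove a bias can be seen with a toy simulation; the class/background names and the 90/10 co-occurrence rate below are invented for illustration.

```python
import random
random.seed(0)

# Hypothetical biased training distribution: class "camel" appears with
# background "desert" 90% of the time and "grass" 10% of the time.
def sample_background():
    return "desert" if random.random() < 0.9 else "grass"

def desert_fraction(n):
    """Empirical co-occurrence rate estimated from n i.i.d. samples."""
    draws = [sample_background() for _ in range(n)]
    return draws.count("desert") / n

small = desert_fraction(1_000)
large = desert_fraction(1_000_000)
# Both estimates hover near 0.9: more i.i.d. samples sharpen the estimate
# of the bias; they do not remove it.
```

The estimator's variance shrinks with the sample size, but its mean stays pinned at the biased value of 0.9.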
which would give coordinates much like the momentum and winding mode coordinates. After showing that this preserves the {{formula:20e5a298-dd18-4ec8-a7c9-107ff0761314}}-structures, the usual transition to Generalised Geometry {{cite:e7957e9a275ec604b5ef0012f286b3e49c9880cb}} could follow. This would lead to a Generalised Geometry description of the {{formula:6fa40bd7-ba30-4da1-b591-257b4e49f46a}}-structures.
We study notions of entanglement for such non-spatially organized degrees of freedom in the case of thermal states dual to the BTZ black hole and thermal AdS{{formula:e22ed0ee-b48e-4007-afcf-81c574029236}} .
Concretely, we work in the setting of the D1/D5 system (see {{cite:b95ce2a2cf7d55a52ba31932961770ca67e94451}} for a review).
We start our investigation at the orbifold point in the moduli space where the boundary CFT is given by the weakly coupled {{formula:f65871e3-5569-4bf6-b4b2-d3ce3e61ccd2}} orbifold theory.
We use the following new ingredients to define our generalized entanglement measure, compared to the ordinary entanglement entropy of spatial subregions as used in the RT formula:
The thermodynamics of black holes was established in Refs. {{cite:c6b1725f347e4bcebf5e2c23c4f483e2731abb02}}, {{cite:3cc0deb08ef03fd7e2beeabc1a2a397d11a8b319}}, {{cite:a3113fb02d91873ed305b32fd0884485664415d4}}, and in this section we present our numerical results for the thermodynamic observables of a BTZ black hole. We take into account the contribution of the AdS/BCFT correspondence within Horndeski gravity. All of these thermodynamic observables will be derived from the renormalized free energy.
Notice that terms equivalent to {{formula:e89146e7-0620-47cc-b80f-02adb999d3db}} and {{formula:1a57ce47-20c5-4171-8b8c-2806bf53f342}} do not appear in {{cite:d3e701db0a73650f8c0c2afd5ad1863602688ef2}} because they are assumed there to be {{formula:7bd91733-cbba-45ff-8578-4c6843310158}}. The {{formula:af0b499b-1f7c-4f71-96ab-1e68238aede4}} term comes from a modification necessary in {{cite:d3e701db0a73650f8c0c2afd5ad1863602688ef2}}, while {{formula:805ac07b-e9d9-4796-91f4-bf44ae624720}} is required by the initial confidence bound proved by {{cite:19b26856895ed55b550781681deeaaf30372a1de}}. The term {{formula:27919039-9224-4d7b-bde2-a2524dba84d6}} plays the role of the constant {{formula:18633185-803b-4184-a619-e4a801a340c2}} of {{cite:d3e701db0a73650f8c0c2afd5ad1863602688ef2}}.
Observe that the processing techniques described in the previous sections can also be used to implement an SCL decoder for polar codes with the considered kernels, using a straightforward generalization of the algorithm and data structures presented in {{cite:d3970af4d93fecd06f5e202baa30b7b090d99369}}.
The SCL algorithm was implemented using the randomized order statistic algorithm for the selection of the paths to be killed at each phase, which has expected complexity {{formula:0256ee39-c3f2-4b70-952f-c8bfcdc46704}}, where {{formula:7180a1bb-faa3-4972-bd90-bad68832360f}} is the list size.
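The randomized order-statistic selection can be sketched with a textbook quickselect, which finds the metric threshold separating surviving from killed paths in expected linear time. The path metrics and list size below are hypothetical, and a real decoder would partition in place rather than via list comprehensions.

```python
import random
random.seed(1)

def quickselect(a, k):
    """Return the k-th smallest element (0-indexed) in expected O(len(a)) time."""
    a = list(a)
    lo, hi = 0, len(a) - 1
    while True:
        pivot = a[random.randint(lo, hi)]
        # three-way partition of the active range around the pivot
        lt = [x for x in a[lo:hi + 1] if x < pivot]
        eq = [x for x in a[lo:hi + 1] if x == pivot]
        gt = [x for x in a[lo:hi + 1] if x > pivot]
        a[lo:hi + 1] = lt + eq + gt
        if k < lo + len(lt):
            hi = lo + len(lt) - 1
        elif k < lo + len(lt) + len(eq):
            return pivot
        else:
            lo = lo + len(lt) + len(eq)

# keep the L paths with the smallest metrics among 2L candidates
metrics = [5.2, 1.1, 3.3, 0.7, 4.4, 2.9, 6.0, 2.2]   # hypothetical path metrics
L_size = 4
threshold = quickselect(metrics, L_size - 1)          # 4th smallest metric
survivors = [m for m in metrics if m <= threshold][:L_size]
```

Paths with metrics above the threshold are the ones killed at that phase.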
{{figure:7ac3edf6-d208-4902-a910-278b553e1c76}}
Firstly, it must be taken into account that the virial theorem of
Section REF does not include the effects of
turbulence on the dynamics of the core, namely, that many gas
collisions take place simultaneously across the core in the initial
stage of the simulation, so that the gas is simultaneously
compressed in many places, favouring the local collapse of the core.
In fact, early theoretical attempts to take into account this
particular nature of turbulence in the task of predicting the fate
of a turbulent gas system were made by {{cite:fc36849380f2b7d354ce499af31789952b664f94}} and
{{cite:e591098c2ec178fdeaa0d4bcebdc63d8ffc27439}}, who suggested considering a wavenumber-dependent
effective speed of sound, so that {{formula:4eaf71d4-d4ae-4b39-a877-f640096b8ca7}} . Because of this
issue, it is also possible that the resolution analysis presented in
Section REF is incomplete, in the sense that the Jeans
mass {{formula:a7a08331-53a1-4ed4-bbe6-c233bb4315ed}} must be replaced as well by an effective Jeans mass {{formula:77ff05db-a4b3-4ba6-b7c5-d1fd2a3c381c}}.
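To make the effective-Jeans-mass remark concrete, one common prescription (an assumption here; the cited works instead propose a wavenumber-dependent effective sound speed) adds the turbulent velocity dispersion in quadrature to the thermal sound speed. All numbers below are illustrative.

```python
import math

G = 6.674e-8                      # gravitational constant, cgs units

def jeans_mass(c_sound, rho):
    """Jeans mass M_J = (pi^(5/2)/6) c^3 / (G^(3/2) rho^(1/2)), one common convention."""
    return (math.pi ** 2.5 / 6.0) * c_sound ** 3 / (G ** 1.5 * math.sqrt(rho))

c_s   = 2.0e4                     # thermal sound speed, cm/s (illustrative)
sigma = 4.0e4                     # turbulent velocity dispersion, cm/s (illustrative)
rho   = 1.0e-19                   # core density, g/cm^3 (illustrative)

M_thermal   = jeans_mass(c_s, rho)
# effective sound speed with turbulent support added in quadrature
c_eff       = math.sqrt(c_s ** 2 + sigma ** 2)
M_effective = jeans_mass(c_eff, rho)
# the ratio scales as (c_eff / c_s)^3, roughly an order of magnitude here
```

Even a modest turbulent dispersion therefore raises the effective Jeans mass substantially, which is why an analysis resolved against the thermal Jeans mass alone may be incomplete.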
Also, we have provided a new example of Type II (Example 2).
Interestingly, this example is different from the domino states{{cite:9658ed380b9509638d8a26ea1b0e1165a5dd0a92}},
which can be unambiguously discriminated using LOCC.
Moreover, this example is also different from the indistinguishable product basis{{cite:87ae12843668cba1f707e1c5dd8a4f82688de2e7}},
which cannot be unambiguously discriminated using LOCC.
In Example 2, one state can be unambiguously discriminated using LOCC, but all other states cannot.
This suggests that Type II can be subdivided into more inequivalent classes.
To state our results we need to introduce some concepts. Let {{formula:f17b8368-6973-4ef1-8a0b-3289dacb4c20}} be a standard graded algebra over an Artin local ring {{formula:019687d1-e28f-42be-9ebb-5bcbdc088cc1}}.
Let {{formula:248743c1-126d-4daf-aaca-f16a590301f7}} be a finitely generated graded {{formula:5e6d6ce0-0e45-47c3-aac9-2404ec4758fd}}-module. Let {{formula:93cf39f5-7f41-4ccf-8fe8-5b469cc90c38}} denote the {{formula:b83686ea-6881-452f-898c-8335c38f558c}}-local cohomology module of {{formula:1bd96d20-0de1-4fba-b3f8-0714cfcd38b5}} with respect to {{formula:d2f55759-7482-4331-bc77-24db0d7f8dfc}}. It is well known that
{{formula:b6dfe08f-7e3e-4f85-a8d9-4126beaf13b9}} are {{formula:56b7b923-1415-4638-9e25-0e0d83e4e24c}}-Artinian {{formula:5c9dcc5d-aec6-46db-8757-5034c1f80dc2}}-modules (i.e., every descending chain of graded submodules stabilizes). It follows that its Matlis dual {{formula:778426f4-e329-4b68-b7e4-66e05c97507b}} is a finitely generated
{{formula:33a60981-f748-49cd-a2ab-679ab602d677}}-module. If {{formula:47b24cc0-df2a-48ca-98f7-e5db80a98612}} is non-zero and of dimension {{formula:f9df1301-e039-4240-8cd1-cfc19ad84e1d}}, then it is known that {{formula:05ed4043-a419-4bb5-8b3b-1125ca4e5337}} for {{formula:1489f82d-13d1-49c4-b522-40ffd8ad780f}} and {{formula:c2e08b8e-fc93-4e7b-9bca-90f6ed0431fa}}; see
{{cite:0759ee425a2735eb1c05a8ad0d83883503431f24}}.
We set the dimension of the zero module to be {{formula:c4d31575-4524-4bbe-be33-71db3acd0f13}}.
In this paper we prove
According to these fragmentation criteria, the values for {{formula:bc4de676-1d5d-4ab6-bfb3-e6205394ec05}} and {{formula:c1a8b620-b553-4322-95e3-39d919d1dd8f}}
that we use in this paper favor the
collapse of the core and the formation of the embryonic binary system
according to {{cite:2612844806a0f810b8301c097413e8c194e5c91c}}, but it is still not clear what the subsequent
events are, as they depend on the assembled mass, as we mentioned in a
previous paragraph. We therefore performed numerical simulations in order
to determine the main simulation outcome beyond the formation
of the mass condensations.
We use the Hapke models to predict grain sizes down to 1 µm radius even though the approximation model assumes that the particle size should be much larger than the wavelength {{cite:2a2b430c59d2b0ba7294b4675ccd6ff94a795109}}. This study also uses wavelengths close to or even larger than a grain size of 1 µm. Indeed, we test a wide range of grain sizes, of which the 1 µm grain size is just one instance, and particles with sizes much larger than the wavelengths were also considered. The rationale for using 1 µm radii is that the presence of H{{formula:75ef1981-b841-4944-be89-24720542cb51}}O ice with similar grain sizes has been reported on the surfaces of outer solar system bodies such as KBOs {{cite:97496e1376b69ac8d534dc83d262576037c29ed8}}. Moreover, using grain sizes close to 1 µm emphasizes the Rayleigh effect that occurs for grains with sizes close to the wavelength for outer solar system surface ices {{cite:546a06fc6c8315cf6d6fb925a008143b47ec05e9}}, {{cite:a50eb3d895608d8f08d768982bfd630c16ddcb65}}. By taking smaller particle sizes, we evaluate the grain-size prediction discrepancies at wavelengths close to or even larger than the 1 µm grain size. On top of that, our results show how deviant, as expected, the grain-size predictions of the approximation models can be (Fig. REF and REF ) when the wavelengths are close to or even larger than the grain size.
The proof of thm: localization convex will consist of three main steps: i) we bound the empirical error of the noisy clipped subgradient subroutine (Lemma REF ); ii) we prove that if an algorithm is on-average model stable (see Definition REF ), then it generalizes (Proposition REF ), extending results from {{cite:f5ffc2499db2af6d5341fd3cd77db436638ae04d}}, {{cite:2bf586ca0536b005436d48bf992ee8ee869c1feb}} to non-smooth/non-Lipschitz losses; and iii) we bound the excess population loss of alg: clipped GD run on the regularized empirical objective (c.f. line 7 of alg: localization): see Proposition REF . By using iii) with the proof technique of {{cite:c6f45420bc340b9d23a1aa6dcfe1855a9647b85b}}, {{cite:77035758f51258087e7289cd32f67c617d25e88b}}, we can obtain thm: localization convex.
Measurement of the vertical distribution of optical turbulence in the
terrestrial atmosphere (OTP – optical turbulence profile) serves to
support operation of modern astronomical observatories equipped with
adaptive optics (AO) instruments and to characterize new astronomical
sites. A classical instrument to measure the OTP using scintillation
of double stars, SCIDAR, needs aperture sizes on the order of 1 m and
therefore it is not suitable for testing remote sites
{{cite:2e1199fb62fd67149770b021929ea5f9810d42f8}}. A Multi-Aperture Scintillation Sensor
(MASS) delivers low-resolution OTPs using scintillation of bright
single stars and a small aperture of {{formula:1ef5a845-1d4f-413d-86ee-6e02dceb2144}} 0.1 m {{cite:cf762db38c1e1ccd7fc8ee742eaf576d07820575}}.
Combination of MASS with the Differential Image Motion Monitor
{{cite:82d304ce3da85fa1f840c66403f93957b7d46fcc}} in one instrument attached to a small telescope
has become a standard tool for site testing and monitoring
{{cite:76aa83400d4ee9132ef12920a632a55e75e9141e}}. About 35 such instruments have been fabricated
and used in the site characterization projects {{cite:c587c0a6ad000980a2e1424120ae0c0c63aadd96}} and
for turbulence monitoring at existing observatories.
However, the proposed method is limited in several ways. First, the neighborhood is generated based on the reconstructed data point. We lack a quantitative measure of the fidelity of the generated neighborhood to the original samples. Though the generated samples are derived from the VAE that was directly trained on the original dataset, some details are lost. Second, we adopt a standard VAE to encode the data point into latent space. Moving in such a latent space typically affects several factors of variation at once, and different directions interfere with each other {{cite:ace69b165bce09565bd82af1949c60755452aa0d}}. This entanglement effect poses challenges for interpreting these directions' semantic meaning and, therefore, hinders human users from understanding the machine learning models.
Although several methods of this type have been developed in the literature, most of the focus has been on the polyp class, with many datasets publicly released and deep learning methods applied {{cite:c5f1c0eff0488cacd8bea8189978b0efbe0c0c7b}}, {{cite:2c2141e5801711a7b5ac118914bcb2f6c74125cd}}. However, in reality, these methods cannot be used to find other inconspicuous lesions either at the same site or during a different endoscopic procedure. Positing this argument, the “Endoscopic disease detection challenge 2020 (EDD2020)” {{cite:655ba80a51d1b47e011dfb17a17da5c29038bdb5}}, {{cite:b08f52c1f89457cb25e438603fc865549649470d}} released a dataset comprising both upper-GI precancerous abnormalities, such as Barrett's oesophagus, dysplasia, and cancer, and lower-GI anomalies, including polyp and cancer. Motivated by this work, we aim to explore the opportunity this dataset offers to develop a unified deep learning framework for the entire GI tract. We also leverage other public datasets that have abundant polyp instances from lower-GI surveillance (in our case, Kvasir-SEG {{cite:f906796ad26f08ed4c36056217c363a59ba06adc}}).
The requirement for very large datasets of manually labeled instances may seem counter-intuitive since this is not how humans learn to recognize new objects.
Humans are constantly fed with images through their eyes, and are able to learn an object's appearance and to distinguish it from other objects without knowing what the object exactly is.
Moreover, collecting large-scale datasets is time-consuming and expensive, and the supervised approach of learning features from labeled data has nearly reached saturation due to the intense labor required to manually annotate millions of data instances.
This is because most modern (supervised) CV systems try to learn image representations by finding patterns that link data points to their respective annotations in large datasets {{cite:2d5fd3f99a0e4ad24f54f768b53dc48a4b1a8ccd}}. Annotation effort also varies from task to task: it is estimated that image segmentation and object detection (i.e., carefully drawing boundaries) take four times longer to annotate than image classification {{cite:ec76d77727dadc3771c7e66f1d178c56e96a3fb2}}. The annotation effort becomes significantly higher in highly regulated and specialized domains such as medicine and finance, where the expertise of the human annotator matters more than in any other domain. Finally, supervised learning not only depends on expensive annotations but also suffers from other drawbacks, such as generalization errors, spurious correlations, and vulnerability to adversarial attacks {{cite:519e13ecb5f113e3262b66ade98b8644ee2598f9}}.
| i | 4f122af8cf92c81300c2716aff472186 |
Confidence calibration with DocTTA. Confidence calibration is important for reliable VDU deployment: we want to identify when the trained model can be trusted, so that a human can be consulted when it is not confident.
In this section, we focus on model calibration and analyze how DocTTA affects it. Fig. REF compares model calibration before and after DocTTA for the `starting position index of an extracted answer' in documents. We illustrate calibration with reliability diagrams {{cite:d5b96bfed0178530f4e47de6f7e2807fbdbad0a3}}, {{cite:3e545eb7672fea102d4ce949275bf0f672e5fdb8}}, confidence histograms {{cite:1bfe5787881f738a0ce877cae6f614f0447bda16}}, and the ECE metric. The reliability diagram shows the expected accuracy as a function of confidence. We first group the predictions on the target domain into a set of bins (we use 10). For each bin, we then compute the average confidence and accuracy and visualize them (top red plots in Fig. REF ). The closer the bars are to the diagonal line, the better calibrated the model is. It is observed that calibration improves with DocTTA. From this plot, we can also measure ECE as a summary metric (the lower, the better the calibration). DocTTA on {{formula:b2966ce3-e40f-4989-aace-4a1654418efe}} yields significantly lower ECE, from 30.45 to 2.44. Although the reliability diagram explains the model's calibration well, it does not show the portion of samples in a given bin. We therefore also use confidence histograms (see bottom of Fig. REF ), where the gap between accuracy and average confidence indicates miscalibration. Before adaptation, the model tends to be overconfident, whereas after adaptation with DocTTA the gap becomes drastically smaller and the two nearly overlap. Results for the remaining domains are shown in the Appendix.
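The binning procedure described above is straightforward to implement. The sketch below is an illustrative implementation of the standard ECE computation (equal-width confidence bins, 10 by default), not the authors' code:

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: weighted average gap between average confidence and accuracy per bin."""
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # assign each prediction to one half-open bin (lo, hi]; the first bin
        # also catches confidence exactly 0.0
        idx = [i for i, c in enumerate(confidences)
               if lo < c <= hi or (b == 0 and c == 0.0)]
        if not idx:
            continue
        avg_conf = sum(confidences[i] for i in idx) / len(idx)
        acc = sum(correct[i] for i in idx) / len(idx)
        ece += len(idx) / n * abs(avg_conf - acc)
    return ece
```

An overconfident model (high confidence, lower accuracy) yields a large ECE; a well-calibrated one yields a value near zero.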
{{table:435440d6-9a4e-43d5-a90e-a334caeefdc0}}{{table:3acc52af-c380-4719-8690-b934f247fbc2}} | r | 49a1cfa4a688f603acca5e25817237fc |
There are many interesting areas for future work. One is the development of flexible machine learning methods for estimating the confounding bridge functions. An advantage of doubly/multiply robust methods combined with cross-fitting is that data-adaptive methods can be used to estimate nuisance parameters, yet their potentially slow rates of convergence are not necessarily inherited by the estimator of the target parameter {{cite:ceca74cfb556d9050acbaadc259fc9c7e6153526}}, {{cite:b24510210a57e8aa5cc68ef30ba5d90a75075548}}. A complication in the proximal learning framework is that the nuisances are defined as solutions to integral equations. Progress in this direction (in the context of reproducing kernel Hilbert spaces) is described in {{cite:c32e8a13f05159495f3f394f9b868038580e6a4e}} and {{cite:91bce9f3a0117cb9bad0d1089de508586da2e096}}. It would thus be useful to extend the work of {{cite:c32e8a13f05159495f3f394f9b868038580e6a4e}} to nonparametric estimation of bridge functions in proximal mediation analysis. Another avenue for future work would be to extend the identification and estimation results to more general path-specific effects {{cite:10c47b02ffb17909d384bcea31c816c4ebadf293}}, {{cite:a3a032fb7cdec1b82e203416f3ba089d89bbb73a}}, which are particularly relevant in settings where confounders of the mediator-outcome relationship are affected by the exposure.
| d | 6f6f223ba1c987fb58cb9984565ca345 |
Other researchers {{cite:0641f11a79e89281d9b0f5a4d7cd1bffea463865}}, {{cite:3edfdfced32121f6bc917954a8f1d81248a73328}} have also proposed density-based regression methods to take full advantage of the annotations. However, variations in crowd scale and cluttered backgrounds remain the main obstacles.
{{figure:5fd34be4-f67f-4227-981f-6c6c7f0ac290}} | m | 3faa5205b73e5c80e23437cc51bf3e2a |
Our work also has limitations. First, we evaluated only a single diagnostic use case of ICH detection on CT scans of the head, albeit with multiple datasets from different clinical populations. However, our approach is applicable to any other medical imaging use case that utilizes cross-sectional imaging, including diagnosis of disease on CT of other body parts, as well as on other imaging modalities, such as MRI. Future studies will apply our approach to other use cases to validate its generalizability in other diagnostic scenarios and imaging modalities. Second, while this study demonstrated that indeed the examination-level annotations suffice for ICH detection in CT once enough training data is available, some image-level annotations were needed to validate our methodology. In future extensions to other diagnostic tasks or imaging modalities, this minimal amount of locally annotated data will also be necessary for validation purposes. This number of local annotations is very small, however: in this work we employed 1,000 examinations of the validation split of the RSNA dataset to this end, requiring about {{formula:9625d3a0-7a0e-4042-81e2-05289a39ab4d}} image-level labels. This represents less than 6% of the number of image-level labels needed to train an alternative strongly supervised model. Third, the weakly supervised method currently only evaluates medical imaging data; given the potential improvements in imaging diagnoses using multimodal AI models {{cite:fc73944718322a841b93e39142e99a1ea0cd2151}} incorporating multiple types of medical data (e.g., imaging, clinical symptoms, laboratory values), developing weakly supervised DL models that can incorporate multiple data types is an important topic for future study. 
Finally, although Convolutional Neural Networks (CNNs), such as the models used in this work, remain the most popular deep learning architecture in medical imaging, it is important to investigate whether these results extend to other parametrizations of the predictors and architecture choices, for example Vision Transformers (ViTs) {{cite:c4aa9d99cec6d36da45b8ba2ef5bd5df67fa24ed}}, {{cite:086b0f56570333bee9bd25d105691cbfd7dd0569}}, which are rapidly gaining popularity in the field.
| d | 71539b7b95ce78ada08490b2e967ce3f |
In this section, we show that our network outperforms state-of-the-art deep shape matching architectures on standard datasets such as FAUST (F_r) {{cite:b49e1a7b9080cb2698a8fbf8c5378071b05969f6}} and SCAPE (S_r) {{cite:6576a37f92d974c21f5635cb20ded39c0db4e120}}, as well as on non-isometric datasets such as SHREC'19 {{cite:db4eea091e7d5972911d012496d633a917e41112}} and SMAL {{cite:789914210b0ceabb09ccefd230919843b7b714e9}}. Following {{cite:8caef9be0380a56eb0b9af88319ccad91e34a967}}, all shapes are remeshed so that they do not share the same connectivity. Moreover, we introduce anisotropic remeshings of FAUST (denoted F_a) and SCAPE (denoted S_a), generated with Mmg {{cite:bf7272f042380325aa6e689407f703155ebf9130}}, {{cite:2d747c0788a670cca26660f0ccbc8d7e4507b222}}, to demonstrate how some methods overfit to mesh connectivity to disambiguate between intrinsic symmetries.
We show our anisotropic remeshings in the supplementary materials.
| r | 5b8989c43689062ca0016e3506bf9e66 |
The domain {{formula:74dada1c-ac92-40a6-a356-fb024bb763ef}} has, however, been studied in a different, non-logical setting: it appears in the graph homomorphism literature {{cite:f28b674cd001608daa0de160a579d738fbf40f67}}. The object ({{formula:c05551d5-2682-43ca-afd8-f84cbbc413e5}} ), where {{formula:373af687-1480-463d-b121-dfa925a86982}} if and only if there exists a homomorphism from {{formula:27e5acf9-0868-48e0-bcfe-c26196655b00}} to {{formula:e6cdf2a4-ce8c-43e4-841c-46216b2b6726}} , is a preorder and not a partial order. Appropriate quotienting gives us the poset ({{formula:4d54135a-ba03-4976-84a4-20f58ea1ef41}} ), where {{formula:e1a32c74-0d1c-43a7-9b7b-0c05fb9cab19}} is the set of cores, the minimum elements under the {{formula:8e6e4b1b-e153-4734-8112-d9ab62baf424}} order within each equivalence class. Much of the literature on graph homomorphisms, however, concentrates on homomorphism densities {{cite:2dd555870ffb7ec5e6e223607740d8eeb2982140}}, {{cite:64078b10e9480b6332dc0643319574b4e9c72af1}}, {{cite:a5fcfd49918a1cb4ec5e29a8f478884ab06970ed}}. Hatami and Norine {{cite:3e302608218618acd1462f1471698249c21609a9}} open their paper by saying “Many fundamental theorems in extremal graph theory can be expressed as algebraic inequalities between subgraph densities...for dense graphs it is possible to replace subgraph densities with homomorphism densities”, thus motivating this line of research. That paper proves the undecidability of linear inequalities over graph homomorphism densities, showing the difficulty of general problems even in this restricted language.
| i | f8e54a5b2b49ce82a946bfe0c5dac823 |
Leverage the {{formula:3f41c559-0a49-4a76-8284-06e2a9b0faad}} symmetry in {{formula:aa966927-d43a-49b8-9f11-b3b776141a3b}} and {{formula:460a7603-b0b5-409a-835c-8329dfc2ec70}} , which halves the cost. This can be done by initially preparing
{{formula:dfcf2b65-c527-4a8e-b029-1d09e394590a}}
Then, one can use the second register, in state {{formula:23435f79-7267-43d3-82e3-cb6d8fd07d8e}} to swap {{formula:4926ffae-5d80-48a9-8659-2902f7f33929}} and {{formula:be2e83fd-adcd-49a7-9032-e10172119ab5}} when {{formula:206d7d57-70a5-4198-8ef6-7042752864f5}} or to apply either {{formula:2c8554dc-50b9-42b7-948a-a68cbd1216f1}} or {{formula:bd21b27c-a047-4048-8a34-03a06c9a9c5f}} when {{formula:64bd9de1-91f4-4ec7-a26b-29841256d4a8}} . This means that in step 3 we will have to prepare {{formula:149466d3-06a1-4281-b6ca-b0e2a87df77f}} entries, and in step 4, {{formula:407b3c25-bae8-4ea2-b82d-7ae96df7732c}} .
We can also reduce the preparation cost in the QROM by performing the comparison between the probability {{formula:1e1fd833-2a44-4c21-84a3-768039535a9d}} and an ancilla in uniform superposition simultaneously for all {{formula:49638c7b-e254-4192-9a7f-e4d7c088ca19}} . The controlled swap between the registers {{formula:912ce474-1685-44a6-8fe8-df2160e5dc91}} and {{formula:cfa1c822-c372-4d4a-884f-6a082343eab6}} can likewise be performed for all values of {{formula:613c964f-fe2b-446b-ac63-b49921a2fb26}} simultaneously.
The dominant cost is outputting {{formula:e28e7a68-3acf-42fe-a579-1ca72732aa87}} qubits using the QROM {{cite:b8670a873d1a4ddfe034b70b6c459837b58e476a}}. The outputs have size {{formula:99cd6b4a-57f6-47f2-8986-15046c51e911}} , where {{formula:def228e5-32ba-4113-ab41-733ea55aefda}} is the size of {{formula:235c6247-a6d4-4df3-8703-3f4415745ee2}} and {{formula:0ccc3904-cf98-495d-899d-271e0e04eadc}} {{formula:73c7d87e-fee5-43da-bdbb-de7e2f2a9505}} , the size of the probability register. The key aspect of this third point is substituting the QROM of {{cite:b8670a873d1a4ddfe034b70b6c459837b58e476a}} with another from {{cite:9a5cebc1ea7b5804d2fdea678ab3d74c72a15aa6}}, which allows trading gate complexity for space complexity; we will call it QROAM. Let {{formula:b369fe3c-0541-4c24-88d9-c17fe9ee593d}} also denote the number of entries we must look up in the QROAM (including steps 3, 4 and 5), and {{formula:ce4047f1-27c9-4b97-aa82-676c1c0b60e9}} an arbitrarily chosen power of 2. Then the complexity of computing the QROAM is {{formula:a6270275-4e0e-43e8-8335-14ab5c45ef94}} and of uncomputing it in Prepare{{formula:242b6d13-cea3-4d65-b272-c7c0d24b568b}} is {{formula:381fc837-e932-4b78-a28c-a42aa5a31ad7}} , where the {{formula:51ff09df-828b-4b16-9370-2cf7968b8d41}} and {{formula:43869524-647e-43ff-b0e4-c9da48586a52}} in compute and uncompute, respectively, can be different.
As an aside, if we were to use dirty ancillae (ancillae already in use for other purposes), the cost would be {{formula:7fb0892a-ee54-44e7-a6aa-7457c463dc84}} and {{formula:6128df32-0dc5-4c46-a9f1-69b52e673dce}} for compute and uncompute, respectively.
Since the largest bottleneck is the number of Toffolis required, we focus on minimizing that variable. This means taking {{formula:68fddd61-b7c6-4d71-9d24-5349db0caa21}} for the compute step and {{formula:923d5744-4223-4790-ad77-12e7eb2c8a00}} for the uncompute step, which gives costs of {{formula:f1501241-1ace-4a64-b449-8dd6f4c0df1e}} and {{formula:1feb42e2-c88d-4917-a9eb-5c71abc0d719}} respectively, for a total cost of {{formula:a161fe0f-4c74-4d2b-8644-dcec64f72d74}} . Since we have chosen {{formula:d6fa2195-81c2-4d6d-a7d3-56deee9e5e60}} and {{formula:9ddea5f9-a03b-45b5-b216-059cb61fce85}} , this means an overall cost of
{{formula:711eb042-d701-4b95-97bc-4e3c64eb75b8}} and half as many ancillae. Since {{formula:6d246bfe-27d0-40d9-bcb9-fe160ee9f79f}} , the number of Toffolis is {{formula:2c201225-5834-4c4b-8f3f-51c2d6c0f61a}} .
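As an illustration of the compute/uncompute trade-off described above, the following sketch minimizes the standard QROAM Toffoli-cost expression ceil(N/k) + m(k-1) over powers of two k. The cost model and the optimum near k ~ sqrt(N/m) come from the QROAM literature; the symbols N (number of entries) and m (output size in bits) are our own labels and need not match the text's notation.

```python
def qroam_toffoli_cost(n_entries, out_bits, k):
    """Toffoli count of a QROAM lookup with trade-off parameter k (a power of 2)."""
    return -(-n_entries // k) + out_bits * (k - 1)  # ceil(N/k) + m(k-1)

def best_power_of_two(n_entries, out_bits):
    """Scan powers of two up to N and return (k, cost) minimizing the Toffoli count."""
    best = None
    k = 1
    while k <= n_entries:
        cost = qroam_toffoli_cost(n_entries, out_bits, k)
        if best is None or cost < best[1]:
            best = (k, cost)
        k *= 2
    return best
```

For example, with N = 1024 one-bit entries the optimal k = 32 gives 63 Toffolis rather than 1024 for a plain QROM, matching the square-root scaling noted above.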
| m | 9916b26f8043abf671f26087b11ada39 |
We use the Kohn-Sham DFT-based electronic structure analysis
implemented in the SIESTA {{cite:1925386d9c80cacee884a48935629abb653eee16}} software package to
study properties of the GNFs discussed above. When performing DFT
calculations for these GNFs, we include 20 Å vacuum space in
each of the {{formula:d75d9890-2a64-46b5-9dd7-0865cc9c2627}} , {{formula:e5675132-2d28-4636-9308-6f9933b35ad6}} and {{formula:4132e2f2-84b0-448a-8557-7223b52f799a}} directions, which is sufficiently large
for separating the interactions between neighboring slabs. We choose
the PBE exchange correlation functional {{cite:5c8ed644faec96e49f2bb5e6da169ca67ed590d1}},
which generally gives a good description of electronic structures of
GNRs {{cite:a08c3e3a5d2f029c433e75fd18c968439f776cd0}}, {{cite:dd3c2ad45d6bfa282dc53d55c0ebfaec94d6a8e1}} and
GNFs {{cite:2989810d822c33935926566354a48014a7e47d9e}}, {{cite:3a73fdb7d3aca132d6b91ff5f00fb8b0eb71baf4}}. We use the
double zeta plus polarization orbital basis set (DZP) to describe
the valence electrons within the framework of a linear combination
of numerical atomic orbitals (LCAO) {{cite:2ba61f6968c86489a806687526ed09391f5eb4b2}}. All atomic
coordinates are fully relaxed using the conjugate gradient (CG)
algorithm until the energy and force convergence criteria of
10{{formula:bbb24ca3-cc3d-4146-b2df-446d98cefd91}} {{formula:2a2a755a-ea99-44c2-870b-389dfb790505}} and 0.04 {{formula:e7739ebf-c726-4b60-a417-0e122afed703}} /Å respectively are reached. All
calculations are performed on the Edison system available at the
National Energy Research Scientific Computing (NERSC) center.
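For readers unfamiliar with SIESTA, an input file implementing the settings above might look like the following sketch. The keyword values shown (PBE functional, DZP basis, CG relaxation, 0.04 eV/Å force tolerance) come from the text; the system name, lattice vectors, and the geometry block are placeholders for a concrete flake, and the energy convergence criterion is omitted since its value is not fully specified above.

```
# Illustrative SIESTA input (.fdf) -- settings from the text; geometry is a placeholder
SystemName        graphene-nanoflake     # hypothetical label
SystemLabel       gnf

XC.functional     GGA                    # PBE exchange-correlation
XC.authors        PBE
PAO.BasisSize     DZP                    # double zeta plus polarization (LCAO)

MD.TypeOfRun      CG                     # conjugate-gradient relaxation
MD.MaxForceTol    0.04 eV/Ang            # force convergence criterion

LatticeConstant   1.0 Ang
%block LatticeVectors                    # flake extent + ~20 Ang vacuum along x, y, z
  40.0   0.0   0.0
   0.0  40.0   0.0
   0.0   0.0  20.0
%endblock LatticeVectors
# ... AtomicCoordinatesAndAtomicSpecies block for the flake goes here ...
```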
| m | 557ec98dc4f34a9198c5433078ae10a7 |
In an LPAI, sequences of laser pulses are used to split, deflect and recombine matter waves to create atom interference. In inertial sensors, these light-pulse sequences commonly use counterpropagating two-photon Raman transitions with large one-photon detuning {{cite:0e489501e42c84c5f0845d3908a8bf663faaa680}} between hyperfine ground states of alkali atoms (e.g., {{formula:c2caf07a-8a2b-4b88-b3a1-198bca4b9068}} and {{formula:5ab12952-9d3a-4702-99ed-a1b0cdb99354}} for {{formula:5aab2094-2ebd-42ce-9f66-5b36ea8719e1}} Rb). They form the basic atom-optics elements by finely controlling the external degrees of freedom of the atoms through the generation of coherent superpositions of momentum states. In a counterpropagating configuration, the transfer between the two internal ground states is always accompanied by a change of {{formula:acbe095c-74e9-4815-9318-8bd2923904e7}} in the momentum state, where {{formula:4d9ad0dd-2c1a-4895-a8b4-f9147a9875bc}} is the effective wave vector.
| i | 771de387150e600b8ce3ae6246075c17 |
Another takeaway is that the design of multi-modal detectors should take sensor degradation into account, in order to avoid the entire detector becoming overly reliant on a specific input modality. As our results with AVOD {{cite:c50e0505e3b8bd55a23ffc3c5599b2bf444bb4d0}} show, such a biased design can lead to poor results in an adverse scenario where the LiDAR modality is failing. For safety-critical systems it is necessary that such detectors be fault tolerant. As part of future work, we will design realistic data augmentation pipelines to augment different weather conditions on the LiDAR point clouds, as well as a fault-tolerant object detection architecture that can perform in the case of a complete sensor failure, whether of the camera or of the LiDAR.
| d | 87df11b9e04a067bbf2cd680fdca282e |
Above the percolation threshold, {{formula:e3a601dd-3511-43c1-ac0d-cafa186162f5}} , the giant cluster contains a finite fraction of sites, {{formula:38f83c14-d3cc-42cf-97a6-e0799a5cb79b}} , which behaves in the vicinity of the transition point as {{formula:9173332e-2602-40a6-99e6-5a4ee42743ab}} , with a critical exponent {{formula:185fb5c0-7c7f-48df-9b09-ce29571015e2}} , the value of which follows from scaling theory {{cite:598122aa52f259578b4e2d028c725ac3fd688889}} as {{formula:80f01c3d-a90e-452d-b374-dcd37fb645c0}} . In {{formula:6333f9ea-e3ac-4c09-8f4a-9d4df903d5dd}} traditional percolation the critical exponents are {{cite:598122aa52f259578b4e2d028c725ac3fd688889}}:
{{formula:654374e6-4b94-497a-bb60-85ba967d69f7}}
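For context, the scaling relation invoked above is usually written via the fractal dimension of the critical cluster. The following is standard percolation background in our own notation ($d$ the spatial dimension, $d_f$ the fractal dimension, $\nu$ the correlation-length exponent), not a formula reconstructed from the text:

```latex
P_\infty(p) \;\sim\; (p - p_c)^{\beta}, \qquad p \to p_c^{+},
\qquad\text{with}\qquad \frac{\beta}{\nu} \;=\; d - d_f .
```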
| r | e2717afdfc7dc2c5ced8cde2ba2b75c2 |
Chaotic scattering is an important area of research in nonlinear dynamics
due to its fundamental applications to a wide variety of fields such as physics {{cite:f8c2c8d7a4cc4042252ec55abdc8fbf3396d9bdc}}, chemistry {{cite:f3c18161af2676d248b8737e34acb73b02391c36}}, {{cite:782217419b930b55f59c4ec99cc8b2a706b6d544}}, medicine {{cite:60c5a1706e20ffeaa4d9ae7e6a9b1b84efbada83}}, {{cite:a4b85f7274e13704309b6a794ee2eb2aa55d590a}} and biology {{cite:4eb924e2c52b28ff3b87fae34ac409a54769fba0}}, {{cite:db52744f2d7086e46672bf6b0aed9dd688da1654}}. In the context of physics,
many chaotic scattering processes can be modeled by using open Hamiltonian systems. These systems are characterized by a potential that exhibits one or more exits through which the particles can escape to infinity after describing a transient chaotic motion. Some paradigmatic examples are the Hénon-Heiles Hamiltonian {{cite:e041a8568cb47c012417f46449d5ff542fdff4ac}}, with applications to astronomy and celestial mechanics {{cite:08d3bb329be7f4774e7d26dbe5e41d5c351cdec5}}, and the Barbanis potential, used in quantum dynamics {{cite:cf12106c427cc7c85b198c3abb8ff567881501c1}} and chemical physics {{cite:5acc0bddf0c70eec1a30aff1d4b7c990db5c6da9}}. Certainly, most of the research concerning chaotic scattering in open Hamiltonian systems has been done in purely conservative cases {{cite:c66dd04bbfbffebf51a9d9c3254ca590578a3a44}}, {{cite:01041406c6e231ea5bef1c901ce2120acd14aa90}}, {{cite:fbd64593c82140a6e675ba6b3770ed70149be256}}.
However, many dynamical systems are not isolated from their environment, or they have internal irregularities whose effect on the dynamics can be modeled by introducing external perturbations. For this reason, different lines of research have recently emerged seeking to elucidate the effects of weak perturbations on the escape dynamics. In particular, the periodically forced {{cite:6e1ec764163501617bcb08def5fe59f8dea498db}}, {{cite:474ffee20b3299b751bb41ab02adb692dd642e8d}}, dissipative {{cite:06130e50f4a3f30c72591e93304f0e3cc3d767d0}}, {{cite:71345adab85b4d5c7c725122aeb000d90e97a7d9}} and noisy {{cite:b4d084abc13ad83f0ec22278287ea2f1d6de526a}}, {{cite:18893c7e7d2a861b4674de486a7184933360f70b}} cases have been studied.
| i | d2a0ca49e7a635cc679fda2408863af4 |
Before we focus on how the treewidth behaves under the bridge operation, let us recall the following fact, whose proof may be found in Diestel {{cite:959e202238be113cbc80a3b4357dfdb16dd75da8}}.
| r | efdd4c5c43f5ae7ef20730affc1d1228 |
where {{formula:d407841e-07d8-49f6-bf13-f7537c33a004}} , {{formula:9dd58e5c-9f2f-4f97-827f-dcf7ac350c85}} , {{formula:02db61a1-3ad3-48a2-93d8-c4501d0e518e}} are: the estimated speech signals, the clean speech targets, the estimated noise signals, the noise targets and the parameters of the model, respectively. In this work, we force the estimated sources {{formula:b224f896-459e-471a-8474-2781d0f598cc}} to add up to the initial input mixtures {{formula:ac91d6c1-39c9-4f11-8d36-a4b17b255ed3}} using a mixture consistency layer {{cite:ad81f3aa77cdeb8814a405981b88f74e665b47b6}}.
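The mixture consistency projection referenced above is simple to state: distribute the residual between the input mixture and the sum of the estimated sources across the sources. The sketch below is our illustration of the cited idea with uniform weighting assumed, not the authors' code:

```python
def mixture_consistency(estimates, mixture):
    """Project per-source estimates so they sum exactly to the mixture.

    estimates: list of equal-length lists (one per source); mixture: list.
    Uniform weighting: each source absorbs an equal share of the residual.
    """
    n_src = len(estimates)
    sums = [sum(col) for col in zip(*estimates)]          # current sum per sample
    residual = [m - s for m, s in zip(mixture, sums)]     # what is missing
    return [[e + r / n_src for e, r in zip(src, residual)]
            for src in estimates]
```

After the projection, summing the corrected estimates reproduces the input mixture exactly, which is the constraint the consistency layer enforces.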
| m | 65fa899540e97b3b6af6a430c614e18f |
This is the Euclidean analogue of MuRP {{cite:852b4389e290c941097f80ac9e290efd9fef13ca}}.
As shown above, SemE models achieve state-of-the-art performance on nearly all evaluation metrics on the two standard benchmark datasets.
Remarkably, on WN18RR, the low-dimensional model SemE-{{formula:9f9dcf99-5e9b-4f0b-aba5-a504aaf0eaba}} s already provides promising results with only a small number of parameters in the relation embedding, which further demonstrates the advantage of the proposed approach.
| r | b6cbae913b3fb2592b0a0ba715b3e200 |
By the convexity of {{formula:40ee4520-9ef2-4113-bceb-c0856381604a}} ,
{{formula:7bf09f23-f710-49de-b7bf-2c50c6d13020}} is a convex function whose gradient, namely {{formula:2136e711-c78c-4cfb-b0b7-7d995bbd436a}} , is 1-Lipschitz
continuous (see {{cite:6982da160a3ee7bedf65ca0e527919da57c0490e}}). On the other hand, we also have
{{formula:1b93a122-3760-49a5-aad7-0c69d35e7b40}}
| m | 27d473d3271c925e8652eac9cf2f9e84 |
Employing VLBI data sets at 15, 43 and 86 GHz, a consistent description for the temporal evolution of OJ 287 radio jet was provided in {{cite:f62e0a909a685856e783a3bcaeeb2ef09f71565c}} making use of a helicity parameter that allows for outward jet motion that is not exactly in a straight line, as noted in {{cite:7d07916c13fc21528069bcf3178888825ee3dde4}}. In addition, one may use the information of the time evolution of optical polarization and determine the jet orientation close to the jet launch site {{cite:fb44f00c8d170ea2b0193f693ac2bdac707b61e6}}, {{cite:0b8bf33a2015e3b705787e7da80b4dcaf7b5d892}}.
| d | dc0ecabc94d8b548bf410b986915bc1a |
The spectacular variation observed in the DLCs of SDSS J163401.94{{formula:389bb28a-c926-4089-953c-1ce1c324aa9d}} 480940.2 is remarkable, as such variation is genuinely unexpected for non-jetted RLNLSy1s. The exceptional variation and the deduced minute-like flux-doubling time support a jet-based origin, approaching the extremely fast minute-scale variability (with a flux-doubling time of {{formula:eea1fcb6-a924-4934-9a6b-4f49c6a3fefe}} 10 minutes) observed from the flat-spectrum radio quasar (FSRQ) PKS 1222{{formula:1310ffb3-9c84-4b2e-9a64-4d6a85a25c35}} 21 at 400 GeV {{cite:d546baffd5de78870af4ccfdaa8c0ed2f2a0a62b}}. On the other hand, the detection of only a compact core component in the 5 GHz VLBA observation of this source places it in the non-jetted-RLNLSy1 category {{cite:8bf0f146b9ee9a0e044d58b4dae757fe2547c531}}, contrary to the present findings. This might be due either to a quiescent state of the source during its VLBA observation, performed in 2013 {{cite:8bf0f146b9ee9a0e044d58b4dae757fe2547c531}}, or to the limited sensitivity of the VLBA. Another reason for the non-detection of a jet component in the VLBA observation by {{cite:8bf0f146b9ee9a0e044d58b4dae757fe2547c531}} could be its lower black hole mass of {{formula:80432fe6-6c39-46ce-a732-f31e6ff80924}} M{{formula:3ec3a646-b5f7-455e-8050-77712b6cb263}} (see Sect. ) and lower radio luminosity of {{formula:ad1b0bdc-c1f8-4f44-956b-b770735135c4}} erg s{{formula:3a357329-8289-4765-b2ae-1727762a6244}} at 1.4 GHz {{cite:8bf0f146b9ee9a0e044d58b4dae757fe2547c531}}, which indirectly imply a less powerful jet {{cite:243d3b6aa5ed545bd63679e70985388bc39721a1}}, {{cite:1089fc98bf1581ab886fe3fe14a3d63d08002467}}, incapable of escaping the confines of its host galaxy {{cite:f622a6fd960c69e6687175180d2e30e2e337488c}}. 
Finally, one last possibility for the non-detection of a radio jet in the VLBA observation of SDSS J163401.94{{formula:bad3f12b-695f-4d65-9463-02b60c786bb8}} 480940.2 is absorption: either synchrotron self-absorption, as occurs in gigahertz-peaked sources, or, more probably, free-free absorption, which can be understood as follows. NLSy1s are typically characterized by high Eddington ratios {{cite:bda539be04c35e1b56ee3d25c1127827ee04eb70}}, {{cite:49c2bf83f78b247d21fb06399808c3c65965c80b}}, {{cite:716dc0995116a0df28c27c4be14173204b6c14fa}}, and they are generally associated with a dense circumnuclear environment {{cite:0eea8bdeadf05a860804eb2257015f356187b818}} and higher star-formation activity than regular Seyfert galaxies {{cite:b718c998cb7d495fec5c1a4944007a8bc94e8e81}}, {{cite:0bbb500ac981b1952373427b59c5a1d3860415dd}}. The high star formation, together with the nuclear activity, can ionize the circumnuclear gas, and the large quantities of ionized gas produced in this way could screen the jet emission at low frequency, resulting in the non-detection of the jet component in low-frequency observations. A cocoon of ionized gas {{cite:0d3d3d83f7ede8d6d830ebcbdbd543162c682dba}}, {{cite:1de272e10bdeaa183a2e472d54124a8205858038}} may also form as the jet passes through the interstellar medium, which could likewise be responsible for the free-free absorption {{cite:19d298d4c68830b93a3dc86f668b23dba9a6747f}}.
| r | b5d651d785a1486d6e342cb2cbc4739a |
First let us compare with {{cite:09713f46bfdc8da67226658474d840992accd7ad}}. For simplicity let us assume there are no additional controls {{formula:832cd7eb-efe9-4a16-ba32-5b3d4b24afd6}} . {{cite:09713f46bfdc8da67226658474d840992accd7ad}} assume the existence of a function called a `confounding bridge' which then plays a key role in their analysis. A confounding bridge is a function {{formula:04724c2a-e572-4cd5-85d1-1461dbd8237a}} with the property that for each {{formula:c11ed845-e9c7-4fd8-9118-d21813018c59}} in the support of {{formula:dcdf5c0a-182d-4fd8-a9e2-fb60d0a29e96}} , with probability 1:
{{formula:e62a066f-797f-4c0f-ae21-65f45f41e1a7}}
| d | cd3918f60e825ffae8ad20322d483a0c |
Local alignment score based on the Smith-Waterman algorithm {{cite:0e313ddaf7e6792a14bc01464de425b5b629a890}}.
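For concreteness, a minimal Smith-Waterman local-alignment score can be computed as below. The scoring parameters (match +2, mismatch -1, linear gap -1) are illustrative defaults, not values taken from the text:

```python
def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-1):
    """Best local alignment score via the Smith-Waterman recurrence."""
    # H[i][j]: best score of a local alignment ending at a[i-1], b[j-1]; floored at 0
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,
                          H[i - 1][j - 1] + sub,   # substitution / match
                          H[i - 1][j] + gap,       # gap in b
                          H[i][j - 1] + gap)       # gap in a
            best = max(best, H[i][j])
    return best
```

Traceback (recovering the aligned substrings) is omitted here; only the score is needed for a similarity feature.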
| m | 5828d3fbf8e4d0808eabcfa2ce09eeff |
The coupon collector's problem is an old problem of probability theory which in its simplest form dates back to de Moivre, Laplace and Euler, see {{cite:4b6ede7cd9d5a2d69f35436fa27f679f4784bc6a}}, {{cite:d16207c01682738d9eadaf90498af07d518f3649}} and {{cite:cf40acf2e9a16cb1ff7685f5d646f3716126a9ed}}. Whereas de Moivre used a die with {{formula:7339721c-9c7a-4bf8-b9b3-bfadc680b1c1}} faces to pose the problem, Euler and Laplace used a lottery interpretation as motivation. However, a more recent example for a situation in which the coupon collector's problem occurs is the collection of pictures of the participating players of all teams before and during every World Cup. Typically, fans can buy the pictures in packages of five or six. Two natural questions which arise are: How many packages does one need to buy to get the full or a specific portion of the full set of players? How many stickers are missing after one has bought {{formula:85ad5963-c12b-42b2-b74d-deb9b190d048}} packages? The first question was for example studied in {{cite:145d6edd419a6b42c84cf4a193184c7d1c84a0b4}}, {{cite:572f3a9a50c7605a31fedb8adb27ca77b67ffa30}}, {{cite:9930a3af8a6f255060a8a46a4a5d665d2477fc4f}} or {{cite:51d4c999c064bb3bb5dbc37956310ea833e2e873}}. In the work at hand we will deal with the latter of the two problems.
The version of the coupon collector's problem we consider can be described as follows. Assume there are {{formula:0f01237d-2e3d-4013-aa00-6bc8254e9bfe}} different coupons. At each time we draw {{formula:40dcb71a-0f9a-4f66-9212-fccca6df42f0}} of these {{formula:46818889-95bf-4e4d-a4bc-4b6bb7e6248f}} coupons, where we assume that each of the {{formula:65c85d16-628a-43c8-a99e-d0ed0acbf2b3}} choices occurs with the same probability.
We are then interested in the distribution of the number {{formula:a69e285c-7180-4775-bcc2-cbf2d2825f19}} of coupons that have not been drawn in the first {{formula:fd3d1b51-0705-4053-8d5a-95843458ea06}} drawings. In a conceptually equivalent interpretation the {{formula:589a25f2-cd23-4fef-a191-1fb67d0da2ca}} coupons are represented by {{formula:f4576df4-3a89-4510-ad84-b7c7a7b52842}} different cells numbered {{formula:c69e24b8-1231-4aec-8258-991b522204d9}} , and in each drawing we place {{formula:7402ff79-d74b-4725-9f53-5f0e3e38d945}} particles into {{formula:b91d7bce-fc40-45d5-ac9b-e1b81d20e722}} distinct cells, see Figure REF . We are then interested in the distribution of the number {{formula:1ffb7f4e-f0fa-40a3-b790-b199b6492a89}} of empty cells after {{formula:cc62c9f9-4dda-464d-a08c-b43925e4e35c}} drawings.
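The cell picture above is easy to simulate, and the expected number of empty cells has a closed form: a fixed cell is missed by one drawing with probability 1 - s/N, so the expectation after n independent drawings is N(1 - s/N)^n. The sketch below (our illustration; the names N, s, n follow the roles described above, not necessarily the text's symbols) compares a Monte Carlo estimate against that formula:

```python
import random

def expected_empty(N, s, n):
    """Expected number of empty cells after n drawings of s distinct cells each."""
    return N * (1 - s / N) ** n

def simulate_empty(N, s, n, rng):
    """One realization: draw s distinct cells uniformly at random, n times."""
    seen = set()
    for _ in range(n):
        seen.update(rng.sample(range(N), s))  # one drawing of s distinct cells
    return N - len(seen)
```

Averaging `simulate_empty` over many independent trials converges to `expected_empty`, which is a useful sanity check on the distributional results discussed below.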
{{figure:8ab496d5-1aad-473e-813e-2c986f37b0af}} | i | 224e7b6108306b3174618bb01141e4c7 |
Performance on some detection datasets (e.g., rfcx, gibbons) was lower than on others. These datasets are challenging due to the sparsity of the vocalizations in the training data. AVES was the only model that performed competitively across all datasets. We expect modern regularization and data augmentation techniques, such as mixup {{cite:cce62a982495e90671710f867eff99bed1a3b95f}} and SpecAugment {{cite:5890b4ce1b73697a4a3161d538efb9ded546f436}}, to help improve the results further.
| r | 5e9cc98528f7b4fd0ec624b17bd596dc |
Our results provide a basis for further research in many directions. First, all of our analysis is in the limit of large sample sizes. With a small number of simulations, the higher-order terms ignored by the delta method, as well as the difference between a {{formula:3c565334-f26e-497c-82ce-9e0a99b1853f}} -divergence and the expectation MSE, may be significant.
Future work may also consider the effect of the supremum over a class of functions in an integral probability metric.
MMD {{cite:60ec1caf535223b2c30c7aecedd878a4c8b8ef46}} in particular is a promising candidate for further analysis, as unlike TV and Wasserstein it converges with rate {{formula:a6062341-290b-47a1-84ad-26726b607e57}} for empirical samples.
| d | 57f058843616f88481f3d27dc2909753 |
From Andrzejak et al. {{cite:679219f58a4afe9b829b1e90079bfe77e3a88236}}, 10 participants (5 healthy subjects and 5 epileptic patients)
| d | 2738538329bb83c72ba3845b34fc442f |
The forward selection process does not necessarily evaluate the same models when fit multiple times: because the selection process explores different combinations of change points, the chosen model can differ between repetitions. This highlights the need for a change point selection methodology that fully explores the space of change point combinations. Other methods that potentially do this, such as reversible jump MCMC {{cite:96dd01acee2aa336c2efbebe065d45611be18531}}, are often difficult to implement and face similar difficulties exploring the parameter space. As demonstrated on the synthetic data set, the methodology we propose allows the selection process to explore as many change point combinations as possible through a simpler implementation.
| d | 095c5cd16f4ee2d4f1b43c35f2e77e66 |
The intrinsic magnitudes of nearby Type Ia supernovae (SNe Ia) to which distances are known independently are characterised by a large scatter. However, by exploiting the empirical (wavelength-dependent) correlation between the intrinsic supernova magnitude and the timescale of the luminosity decline {{cite:85668e904a8b62549820f88d21ef5ecb62da76a8}}, this scatter can be reduced to {{formula:6d156048-5ea9-4ac5-8334-9647b27b6b8a}} 0.1-0.2 mag, making them `standardisable candles'. This too has, however, been questioned by the recent evidence for luminosity evolution of a sub-class of SNe Ia with redshift {{cite:6b79491c171c1ecb04161b0cc5eb333a6648608b}}.
Recently, the magnitude-redshift relation of SNe Ia in the nearby ({{formula:b56c09b3-c3a8-4b9a-a9f3-3a11a75317ef}} ) Universe has been leveraged with the local Cepheid-calibrated distance ladder to measure the Hubble constant to increasingly high precision {{cite:deafc4b38a5df16f3616bfe0d6779a65fae54481}} in what is claimed to be a model-independent manner. The single largest source of uncertainty in determining {{formula:6559ae8f-6144-4a78-a907-f53f58699f4c}} is now said to be the mean luminosity of the SNe Ia calibrators {{cite:40a5f7ac27e866d7355829b1fe8ccec67ed48561}}.
| i | 273daa5c899644fa3cbde70d48d2faab |
To demonstrate the effectiveness of the training, the AuthNet model is benchmarked against a two-level system: several state-of-the-art models in face recognition combined with lip reading. Since FaceNet {{cite:360e915a07bd6cb9d31084a957960acbc3bbcb05}} is the current best-performing face recognition model, reporting an accuracy of 99.76%, it was used for comparison. Garg et al. {{cite:55a9a45eef3d870668509343b0d231263b524dbf}} proposed a lip reading model that achieved an accuracy of 61.2%. The models of Chung & Zisserman {{cite:22875a1d91a72bca066cb881ab83224ccb826233}} and Gergen et al. {{cite:ca6c271d3b31faf03c765ce036414d09fc5177b3}} were the previous state-of-the-art models in word-level lip reading, with reported accuracies of 65.4% and 86.6% respectively. Lipnet {{cite:a718955bce4b97f5bff57dfbfd6ddf8808b71702}} has been included to benchmark the performance of the proposed AuthNet model. The scores and comparisons are reported in Table REF , which shows that the performance of the trained model matches the combined performance of state-of-the-art models while overcoming errors and imposter attacks present in these models.
{{table:d057a3bf-bbd4-4b44-b9b4-e44433b8ed8f}} | r | 9c230002ae589e8d10b0c54f2fe813e8 |
We have presented a theoretical study of diffusion mediated reactions in an evanescent CTRW on a one dimensional lattice. A finite trapping rate is assumed when a walker reaches the trap position (imperfect trap model in Refs. {{cite:3cc870db936fdbfbe6db8772f47587511ebe2248}}, {{cite:bde6684f7ac61230abf7c3c833776d7b497dcad2}}, {{cite:284ccf837c4c07ddaaa65974b6bfdbbce56ac338}}). Therefore, in each encounter the reaction will not always occur and the walker may escape from the trap. Exact analytical expressions for the reaction rate and the survival probability (probability of no reaction) are obtained in the Laplace representation.
| d | c6884e7b591218c0cad5d3cfba2f476d |
We have studied the properties of a collection of polar self-propelled particles moving on a two-dimensional rectangular channel along an order-disorder-order interface with periodic boundary conditions in both directions. The interaction among the particles is of Vicsek type, viz., particles move with constant speed and interact through a short-range alignment interaction. Inside the junction or disorder region, particles experience a high-noise disordered state, and outside
they are in the ordered state. The width of the junction is set by the parameter {{formula:e19baca2-8605-453b-bc42-967a39ba89b9}} . The model is motivated by the Josephson junction, an analogous equilibrium
system in solid-state physics {{cite:bb21c0b4c08411c8b07c1c6ecebc705b1ee45bf7}}. We studied the system for two cases: (i) system WOP, where we do not impose any easy direction for the moving SPPs, and (ii) system WP, where a small external bias on the direction of motion along the long direction of the channel is introduced. Interestingly, the flock experiences more disturbance for wider junction widths in system WP than in system WOP. Further, at the junction, we have found current reversal for a range of intermediate widths of
the junction; similar results exist for the Josephson junction {{cite:287b8795b76448b55997a29b6c1bbb6ebc473c2d}}, {{cite:bb21c0b4c08411c8b07c1c6ecebc705b1ee45bf7}}, {{cite:2ee4ed0d6d2fc61b4a2479118aa7844de63269a2}}. For a range of intermediate widths, the boundaries of the interface act like reflecting walls, so particles avoid moving inside the interface in system WOP. Further, the particle current at the junction decreases with an increase in the junction width.
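To make the setup concrete, the following minimal sketch (not the authors' code; all parameter values, including the noise levels and junction strip, are illustrative) simulates Vicsek-type alignment dynamics with a high-noise strip playing the role of the junction:

```python
import numpy as np

rng = np.random.default_rng(1)
N, L, v0, R, steps = 200, 10.0, 0.1, 1.0, 200
eta_order, eta_junction = 0.2, 2.0   # illustrative noise levels
junction = (4.0, 6.0)                # x-range of the high-noise strip

pos = rng.uniform(0, L, size=(N, 2))
theta = rng.uniform(-np.pi, np.pi, size=N)

for _ in range(steps):
    # Mean heading of neighbours within radius R (periodic boundaries).
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)
    neigh = (d**2).sum(-1) < R**2
    mean_sin = (neigh * np.sin(theta)[None, :]).sum(1)
    mean_cos = (neigh * np.cos(theta)[None, :]).sum(1)
    # Particles inside the junction strip feel stronger angular noise.
    eta = np.where((pos[:, 0] > junction[0]) & (pos[:, 0] < junction[1]),
                   eta_junction, eta_order)
    theta = np.arctan2(mean_sin, mean_cos) + eta * rng.uniform(-0.5, 0.5, N)
    pos = (pos + v0 * np.column_stack([np.cos(theta), np.sin(theta)])) % L

# Global polar order parameter: 1 for perfect alignment, ~0 for disorder.
phi = np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
```

Measuring the particle flux through the strip as a function of its width would be the analogue of the junction-current measurements discussed above.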
| d | bc5c8dc067139e3db432fc5e56290cc5 |
In this paper, we propose a lightweight CNN for HAR using Lego filters. To the best of our knowledge, building resource-constrained deep networks suitable for HAR has never been explored, and this paper is the first attempt to develop a lightweight CNN for HAR in the ubiquitous and wearable computing area. Compared with standard convolution, convolution kernels constructed from lower-dimensional Lego filters can greatly reduce the number of parameters. The Lego filters can be combined with the state-of-the-art deep models widely used in HAR, which enables substantially improved efficiency for various HAR applications. A method named the straight-through estimator (STE) {{cite:4df96165fdabff7b82330a458338ef55beca6260}} is used to learn the optimal permutation of Lego filters for a filter module in an end-to-end manner. A classic split-transform-merge three-stage strategy {{cite:4208214a8543e330e639e6d499e0258513c82eca}}, {{cite:f35ead49164bacbedc3e108292301329fcbe2ff2}} is utilized to further accelerate Lego convolutions. In our previous work {{cite:4102175fd633a16e1db6cc1c4ff929968f8bd074}}, layer-wise loss functions were used to train a standard CNN. Without loss of generality, we train the Lego CNN with local loss, which can further improve performance without any extra cost.
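The straight-through estimator can be illustrated on a toy filter-selection task (this is a generic STE sketch, not the paper's actual permutation learning): the forward pass makes a hard argmax selection, while the backward pass routes the loss gradient through the selection as if it were the identity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Three candidate "filters"; the task is to learn selection logits that
# pick the filter best matching a target, even though the forward pass
# uses a non-differentiable hard argmax.
filters = np.eye(3, 4)
target = filters[2] + 0.01 * rng.normal(size=4)
logits = np.zeros(3)
lr = 0.5

for _ in range(50):
    soft = np.exp(logits) / np.exp(logits).sum()  # relaxed selection
    hard = np.eye(3)[soft.argmax()]               # discrete forward pass
    out = hard @ filters
    # STE backward pass: the gradient of 0.5*||out - target||^2 w.r.t. the
    # hard selection is passed straight through the argmax to the softmax.
    g_select = filters @ (out - target)
    logits -= lr * (soft * (g_select - soft @ g_select))
```

After training, the logits favour the correct filter even though no gradient ever flowed through the argmax itself; this is the same trick that makes discrete filter permutations learnable end-to-end.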
| i | 0cb84748870c5487597cda2f6a85a086 |
where {{formula:1f2eccf3-c2bf-45aa-953b-b1f48476702d}} GeV is the Planck mass.
The density perturbations from which PBHs formed would have arisen only after the
end of inflation {{cite:067f5f0f6f94bc9d40eda3873f57002d1fb3c04e}}, {{cite:2cd310bfb1cf9ba434b3b9ccd32a1efce6ea7edd}}, {{cite:f87290c44df75c99025fa262299f648315c108c7}}, {{cite:34d74ae056e008e815f2ab2c258819b9e7e89750}}, {{cite:064ef52a711c70425571b637c35dc16f74ab3ec4}}, {{cite:61ca12d78d87e56cc5782365826f55a36ca1a983}}, {{cite:c6e15ab54fcb859d67f661f95716889108965465}}.
On the other hand,
for masses {{formula:9d1beb90-30f6-4a79-8de3-ac36647db057}}
PBHs would evaporate via Hawking radiation
before the start of Big Bang Nucleosynthesis (BBN) {{cite:40da9a1ddccbe669d6c1300c29e0bf9de6a3be30}},
being essentially unconstrained by current observations.
Nevertheless, in the mass range {{formula:66968f9f-27c1-4233-86bc-5cf7541efc97}} ,
PBHs might cause a matter dominated epoch in the early Universe.
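As a rough consistency check (our own order-of-magnitude estimate, not a figure from the text), the leading-order Schwarzschild evaporation time tau = 5120 pi G^2 M^3 / (hbar c^4), ignoring greybody factors and the growth in the number of emitted species, gives timescales of order a minute for M ~ 10^9 g, i.e. around the start of BBN:

```python
import numpy as np

G, hbar, c = 6.674e-11, 1.055e-34, 2.998e8   # SI units

def evaporation_time(mass_kg):
    """Leading-order Hawking evaporation time of a Schwarzschild black hole
    (photon emission only; greybody corrections neglected)."""
    return 5120 * np.pi * G**2 * mass_kg**3 / (hbar * c**4)

# A PBH of ~1e9 g (1e6 kg) evaporates on a timescale of order a minute.
tau = evaporation_time(1e6)
```

The cubic mass dependence is why the evaporation constraint picks out such a sharp mass threshold.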
| i | ae76b205aa1b9a2e410e9a4fc5e334d3 |
The same argument works for both case (i) and case (ii): we will assume we are in case (ii).
Clearly, {{formula:2cd26c8a-fb2f-4066-ba5e-fc95278f1124}} is a closed subgroup of
{{formula:a9ccabd5-4307-4b6f-b375-1f2fbd1dc1c8}} .
Therefore, it is sufficient to show that if {{formula:600760a6-164a-41bd-8ad2-d5ec0001c829}} is a character of
{{formula:0e09892b-681e-4ea8-9a39-506a5c34f090}} which is trivial on {{formula:b7869f12-f42d-460c-b225-ebd6c7ce4102}} then {{formula:29c6917d-2881-4e98-a236-0263c8492a73}} is trivial
on {{formula:124dd235-cdb6-44cf-91f0-299933fb86bd}} .
The characters of {{formula:ba8c673d-146c-47e0-ae5a-511dc34fe7f2}} take the form {{formula:70a80bc7-df6b-4643-b24c-2c7efaf31f0f}} , with {{formula:ec171194-da02-4c77-b3f0-7083e703264f}} .
If {{formula:b9250395-a0c0-4915-8de5-933e324fb106}} is trivial on {{formula:8147176d-3fa9-4551-9200-4c2da9b42f33}} then
{{formula:8cdfccc2-7d7a-4efa-8553-644f2576a13b}}
Passing to the shift map, this may be rewritten in the form
{{formula:60e700f5-ee2f-4634-b49b-f7f3316a38b6}}
whenever {{formula:240f1aab-15c5-4671-9344-8394a55fafa5}} and {{formula:3cf8a175-6e52-4a5e-bb3d-abbc40a70b3b}} .
By Proposition 5.2 of {{cite:073d4d409f127c98490c80ee8cf92a73f9b0a84a}}, equation (REF ) may itself be rewritten as
{{formula:292c1a9a-20c5-4d0e-8774-d756e46261cf}}
where {{formula:e209007c-3094-4e88-948c-90f837bf95e5}} and {{formula:eae65816-b899-4b2d-ab18-dd0ca2494fdc}} .
In particular, if (REF ) holds then {{formula:9afd8f6c-dc38-437b-9a87-57629830ce89}} is cohomologous to a locally constant function.
Applying the not locally constant condition, this forces {{formula:7e1a8648-6dd7-4e33-a38b-3d219298d1f0}} .
Substituting {{formula:2feba060-cac3-4106-b547-a19e3c3400fe}} into (REF ), we have
{{formula:4d71babe-e723-4c81-b80c-83b91973251f}}
Since {{formula:92c45fda-f297-4abe-bd61-544a06b84ad9}} generates the torsion free
part of {{formula:35986c54-da56-45b5-a15b-544e5d2c5514}} , we have that
{{formula:11b89482-5153-4383-8305-318e66649ddd}} .
| r | 74a0bd54020e5d8adda19ec0e64f985d |
The existence of a minimal length {{cite:02b5b7a13d1361f41ca1c41b73b32bb626f5be82}} has been predicted by various theoretical models such as string theory {{cite:7456e16e862d28412936ffd2076469e942827a8e}} and Doubly Special Relativity (DSR) {{cite:a99ab481cc3679d05e814571e4241c851c83e1bc}}. This minimal length may arise from incorporating quantum gravity effects into quantum mechanics through a modification of the Heisenberg uncertainty principle, well known as the generalized uncertainty principle (GUP) {{cite:9f625064f98e80b9e9a9508709cdb49d8baa1b05}}, {{cite:a337ad162abe96ea2b076ba72b50122e6b3fc4b5}}. In perturbative string theory, this length reflects the fact that strings cannot probe distances smaller than their own size. An interesting consequence of the existence of the minimal length is the modification of the standard commutation relation between canonical position and momentum in quantum mechanics, and of the Poisson bracket algebra in classical mechanics.
| i | 4629c645d412d6cf28a2a735517ae288 |
Previous studies using gamma-ray data of our Galaxy have performed similar analyses. Some studies claim that the gamma-ray flux profile traces the dark matter distribution, while other studies claim that it traces the stellar distribution; the issue therefore remains controversial. However, the resolution of current gamma-ray detectors is not high enough to obtain the central gamma-ray flux profiles of other nearby galaxies. Fortunately, the resolution of current radio telescopes is able to fulfill this task. If we can observe the radio flux density profiles of some nearby galaxies at high resolution and different frequencies, better analyses or clearer signals could be obtained to verify our results. In fact, many previous radio studies mainly focus on constraining dark matter by using the radio intensity of a region (e.g. the radio intensity of the M33 galaxy or the LMC, see {{cite:a8a86564df34001f7063e23ac4362a74d3bb1876}}, {{cite:87ac1951fbb5533d1a1799063bc0aa7b50c4af7b}}) or the radio frequency spectrum (multi-frequency approach) (e.g. {{cite:856de087bd09993b5d909701a8b834302b582f50}}, {{cite:c98d1ee283d67de3b49d64b2eb0f4fcaa900e0ee}}) of galaxies or galaxy clusters. They could identify some ideal regions with the lowest signal-to-noise ratio for dark matter detection in the radio wavebands. Some other studies have obtained the radio sky map of our Galaxy and used the sky map data to constrain dark matter {{cite:cc4d0ed6a4c757d918584918e6b7117163651c5c}}. However, these studies have not explicitly examined the likelihoods between the observed radial emission profile and the predicted radial emission profile contributed by dark matter annihilation. In our study, we show that using the high-resolution radio flux density profile at the central region of a galaxy is effective for constraining dark matter. Therefore, observing and analysing the radio flux density profile (i.e.
the radial emission profile) would be another important way to detect the signal of dark matter annihilation and constrain dark matter properties, complementary to the traditional radio analyses (using total integrated radio flux or frequency spectrum) {{cite:6b2d466907d1f055ddc69424c0f8f69cac9c4320}}, {{cite:c55f1e15466c3193ac9f67958463a7c3b7671367}}, {{cite:856de087bd09993b5d909701a8b834302b582f50}}, {{cite:cc4d0ed6a4c757d918584918e6b7117163651c5c}}, {{cite:c98d1ee283d67de3b49d64b2eb0f4fcaa900e0ee}}, cosmic-ray (including gamma-ray) analyses {{cite:e849b46e366bbc74effd8c031f10eb6bdb411141}}, {{cite:49be313666cd9cb21dd82c1a1e874108dc047072}}, {{cite:de5a52b25c6698182259a17b02a73363dc71814a}}, {{cite:3d1425169dfee993895616025462fde97b71a282}} and neutrino analyses {{cite:b2c7a1c93026c6df7692e2b65c9d7181e64a5365}} of dark matter annihilation.
| d | e0a63a473ec449c48ccafae64affa403 |
The Bound in Context.
Figure REF plots the competitive ratio bounds, for a fixed online cache size {{formula:135ec236-a9d3-4cbe-8ffa-8b52860e0062}} and block size {{formula:0b0216d4-a900-4f05-8770-1fca3e1e2b7d}} . Our resulting lower bound is much greater than
the Sleator-Tarjan {{cite:1fd0bdbaa0fd4e80857a1bb13c929eb56cbe6df0}} bound,
meaning that the gap between online and offline policies is larger in GC Caching
than in traditional caching.
The gap starts at a multiplicative factor of nearly
{{formula:7c702c4f-05a5-4aca-929a-52cf6fb64fe9}}{{formula:6fbd84b0-aa2b-41fb-bdeb-d629038152b5}} when {{formula:67c7d584-32b5-4263-b48a-a8096de80d08}}
(since the {{formula:8fc11b93-7027-4066-8636-4f1b616d257f}} term dominates),
and tapers off, hitting {{formula:053c8a09-53c7-4fe5-afa9-8fd6fb98c045}} when {{formula:787c56b1-b2d5-4e70-8313-7f21abe9f334}} .
Table REF gives three salient points of comparison
for the Sleator-Tarjan bound, our lower bound, and our upper bound (discussed in Section ):
constant factor augmentation,
the point where the augmentation meets the competitive ratio,
and constant competitive ratio.
These results show that, compared to traditional caching,
the introduction of spatial locality increases the gap
between online and offline policies by {{formula:09253091-7fe6-437e-9059-9829122b3ccc}} , which can be
spread between the competitive ratio and the augmentation factor.
In prior models, the augmentation factor ({{formula:4b8740f3-a052-4280-b67e-7889555a31e2}} ) equals the competitive ratio
when both are 2.
By contrast, in the GC Caching Problem, {{formula:7e4bd57e-295f-44be-bf88-17a41a717194}} has a competitive ratio of
{{formula:d0ac30f0-c3ec-4012-8cef-cede6c84eadc}} and a competitive ratio of 2 requires
{{formula:27e31667-9c03-458a-889a-163a2db45c06}} .
The meeting point of the augmentation factor and the competitive ratio occurs
when {{formula:d674925f-7081-4501-9d11-cf35a91e5247}} .
| d | dc735778f30352374c86c4b7d02243cf |
In IE and sequence labelling tasks, the model receives a text sequence and assigns each token to one of several classes. A sequence model can simply be added on top of BERT by connecting BERT's hidden-states output with a token classifier (Figure REF ).
It is trainable by feeding the output representation of each token into the classification layer to predict the corresponding label.
We add a multi-class classification layer followed by sequential cross entropy loss as the optimization target.
Fine-tuning all the parameters end-to-end to model the IE downstream task with BERT is straightforward. Most hyperparameters for the fine-tuning process are the same as in pre-training; only the learning rate, batch size, and number of training epochs are chosen according to the validation set {{cite:e92b9a397982da7a1b7e4a38e2d6378e51018325}}.
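The classification head described above can be sketched as follows. For self-containment, random vectors stand in for BERT's per-token hidden states, and all dimensions are illustrative; in practice the hidden states come from the encoder and the gradients flow back through it:

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_size, num_labels, seq_len = 16, 5, 8

# Stand-ins for BERT's per-token hidden states (last encoder layer).
hidden = rng.normal(size=(seq_len, hidden_size))
labels = rng.integers(0, num_labels, size=seq_len)

# Token classification head: one linear layer mapping each token's
# hidden state to per-label logits.
W = rng.normal(scale=0.1, size=(hidden_size, num_labels))
b = np.zeros(num_labels)

logits = hidden @ W + b                       # (seq_len, num_labels)
logits -= logits.max(axis=1, keepdims=True)   # numerical stability
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Sequential cross-entropy loss averaged over tokens.
loss = -np.log(probs[np.arange(seq_len), labels]).mean()
```

Every token contributes one term to the loss, which is what the text means by a sequential cross-entropy objective.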
| m | ff9e7f866f7acb20c2435b4215d1cfd3 |
Note that there exists a corresponding classical protocol in the SMP model with shared randomness, with a similar complexity. One way to see this is that the quantum protocol is ultimately based on the use of the swap test to approximately compute the inner product between unit vectors, for which there is an efficient classical protocol in this model {{cite:26b3fe006d05fae798839e7c53e1fca3c6852833}}.
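The classical shared-randomness idea can be sketched with the standard random-hyperplane trick: both players send one-bit sign messages along shared random directions, and the referee recovers the inner product from the agreement rate. This is an illustrative construction in the same spirit, not necessarily the exact protocol of the cited work; dimensions and sample counts are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 8, 20000

# Unit vectors whose inner product the referee wants to estimate.
u = rng.normal(size=d); u /= np.linalg.norm(u)
v = rng.normal(size=d); v /= np.linalg.norm(v)

# Shared randomness: n random Gaussian directions known to both players.
g = rng.normal(size=(n, d))
alice_bits = np.sign(g @ u)   # Alice's one-bit messages
bob_bits = np.sign(g @ v)     # Bob's one-bit messages

# For random hyperplanes, Pr[bits agree] = 1 - angle(u, v) / pi.
p_agree = np.mean(alice_bits == bob_bits)
est = np.cos(np.pi * (1 - p_agree))
```

The estimate concentrates around the true inner product at a rate governed by the number of shared directions, mirroring the accuracy-versus-message-length trade-off of the swap test.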
| r | 8a546c03a5c0497a5c82fd98fbcc1800 |
Experimental details. For our experiments, we compare the icsn to several baselines, including the unsupervisedly trained {{formula:86350b08-642f-4ab7-a4f0-89f6031a6c1c}} -VAE {{cite:d9976589d95be1506e939bf357267ed7e258d8f9}} and the Ada-VAE by Locatello et al. {{cite:71830a330ec2f9f2d1245595b135882647acd78b}}, using the arithmetic mean of the encoder distributions as in {{cite:daf12085fa99c574b042a578ed1e871ebd2ea607}}. For a fair comparison with icsns, which are trained via the shared match pairing of {{cite:bc5f55b17d8b3c6b972f19f10f321bd0deb0f746}}, and the Ada-VAE, which was originally introduced as a weaker form of supervision, we also trained the Ada-VAE with known shared factor IDs. This baseline essentially resembles a {{formula:ded351bb-629b-44eb-b828-3cc12467a12f}} -VAE with an averaging of encoder distributions between pairs of images at the known shared factor IDs. It is denoted as VAE in the results below. Lastly, we compare to a discretizing VAE approach which uses a categorical distribution via the Gumbel-softmax trick {{cite:c378f65c1a4575be94688a47764a8ddebbd78552}}, {{cite:3d7d0089b3b0f4121ee946fef93d7290cb336ab4}} (Cat-VAE). Cat-VAE is trained the same way as the VAE, i.e., via shared pairing and averaging over encoder distributions.
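For reference, a minimal sketch of Gumbel-softmax sampling as used by Cat-VAE-style models; the temperature values are illustrative, and real implementations operate on batched logits inside an autograd framework:

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(logits, tau=1.0):
    """Sample a relaxed one-hot vector from a categorical distribution.
    As tau -> 0 the samples approach discrete one-hot vectors."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0, 1) noise
    y = (logits + g) / tau
    y -= y.max()                                          # numerical stability
    return np.exp(y) / np.exp(y).sum()

logits = np.log(np.array([0.2, 0.5, 0.3]))
sample_soft = gumbel_softmax(logits, tau=5.0)   # smooth, near-uniform
sample_hard = gumbel_softmax(logits, tau=0.1)   # typically close to one-hot
```

The relaxation is differentiable in the logits, which is what lets a categorical latent be trained with ordinary backpropagation.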
| r | 6156522cfca3873a59247fb77bd93742 |
Under the transfer learning setting, FixMatch {{cite:2ca536c63aca4a3c41e22a045ea7f8647c7e9292}} tends to be the dominant technique with in-distribution data. However, it relies on pseudo-labeling as its key component and therefore suffers performance degradation when faced with out-of-distribution unlabeled data. In comparison, Self-Training {{cite:fab6fad795e14335e2e019fe7de12869b288eeea}} is much more robust to novel categories, despite a slight degradation. On the contrary, the proposed RelMatch with a shared label space easily obtains state-of-the-art results on its own with out-of-distribution data, and it also significantly outperforms all comparison methods when equipped with FixMatch. In general, the merits of RelMatch are three-fold: (i) by building a shared label space and enabling inter-sample consistency, RelMatch can leverage out-of-distribution data to mine useful knowledge; (ii) through the label transfer strategy, RelMatch can generate more reliable pseudo-labels; and (iii) as a higher-level constraint, RelMatch can offer complementary supervising signals and boost previous SSL techniques.
| m | 95474c167d03399f6ba678173d1587c5 |
The proposed TCL is evaluated on five image datasets and two text datasets. For image clustering, we take 21 representative state-of-the-art approaches for comparisons, including k-means {{cite:d2cf61f6ca1269eff9f18ee70f05a9b5bc5627e7}}, SC {{cite:fe4262a91141da7e037d8126800c81f9bb9cecbf}}, AC {{cite:be9bbddb7c66d17cde0716fe688decc131b74352}}, NMF {{cite:e07a64dc85501989fde32787c9cb2a56e9936e7a}}, AE {{cite:171208eba772bd51644a09ca8bd121ee123b314e}}, DAE {{cite:8b78eff7d6c93c3c9890d6abc13cb679d226b2c7}}, DCGAN {{cite:076b94451f13e590811eb69a74caf635c9d3ad29}}, DeCNN {{cite:35db5507a08ed6cafa512059b2cd69ab48da42bd}}, VAE {{cite:917656307dfb11602fd63f05572abb0c329b6624}}, JULE {{cite:5b4b3a1fd46cfc1dd2eba8ecfe981edb0b89e785}}, DEC {{cite:9ade5ad7a1cb789e577e115129b99308b3120e52}}, DAC {{cite:ec2c07deaaa8909f533e2b90fcc6c6a2ffa5992e}}, ADC {{cite:03248ce9c7cb1bb391f7c4988ffda4b3bc824b65}}, DDC {{cite:4f22b3693128bd4fac75ce4c5d7318f264f55cc9}}, DCCM {{cite:f6df5caeb763472baaa8e315dfcfeece4825a9da}}, IIC {{cite:360961a1bdef59866ee8ef9b146fa7d0524852e3}}, PICA {{cite:f3a8c6e5d47fa63faae3830fb7e86568515ad2e4}}, CC {{cite:4c6ea710876c075f391c5433f1d8a58549cc60e1}}, SPICE {{cite:a7df32c2cb84ba4b3fcf3a92550604d4265f6872}}, SCAN {{cite:e95a02610fed3ac8517d98d780f60e237aaf0b48}}, and PCL {{cite:ddef2eac561fea4c0d19b2a0210a17b319ba73ac}}. For those representation-based methods, namely SC, NMF, AE, DAE, DCGAN, DeCNN, and VAE, clustering is achieved by applying the vanilla k-means on the learned features. To ensure the backbone is the same across all recent deep clustering methods, we reproduce SCAN with ResNet34 using its official released code. We would like to point out that SPICE further boosts the clustering performance under a semi-supervised framework, and it uses a deeper and wider ResNet backbone (e.g., WRN37-2) which enjoys a much better feature extraction ability {{cite:0810b5be5678ac87c53d3fc4c1f123f2e4eb4ecb}}. 
Thus, for a fair comparison, here we compare it with its self-trained results which are achieved on ResNet34. Besides, SPICE uses the model pre-trained on ImageNet for ImageNet-10/Dogs (denoted by “()” in Table REF ), while all other methods including ours train the model from scratch.
| m | 9c70cb09611bf2bedfa380888a4ee39d |
Reinforcement Learning (RL) {{cite:fd27b0ccfba521475da21d8cadd93b0b885f1b92}} has demonstrated great potential in solving complex decision-making tasks {{cite:3085bca6c725341e1bbff798479a79f5e9f55886}}, including but not limited to video games {{cite:1be1faa1b0f6de29513c6c86ebf8cc3ebe72d39f}}, chess {{cite:72a9586e0a2c2f6fd58024e741035dd7b1246c32}}, and robotic manipulation {{cite:ee5cf328c942d46258f3a03817780a613407c939}}. Among them, various prior works highlight daunting challenges resulting from sparse rewards.
For example, in the maze navigation task, since the agent needs to navigate from the initial position to the goal to receive a positive reward, the task requires a large amount of randomized exploration.
One solution to address this issue is Curriculum Reinforcement Learning (CRL) {{cite:b5c112ddc0893a122df705c6ef6bc150a515f726}}, {{cite:d4950cfe4f88138b9bdf791cb40dcfc96e6459df}}, whose objective is to create a sequence of environments that facilitate the learning of difficult tasks.
| i | cb1d29091fb091a432f3b87d759ee5ce |
Given the SNE {{formula:6352cef2-a0fe-4d63-a80b-e2b4e5566d3b}} , select an initial point {{formula:536773c8-0313-436e-b67c-8dc594c0e828}} and a maximum number of iterations {{formula:06fb6b15-2ef8-4389-aa51-44dec1a47fc2}} .
For {{formula:9fd430cc-272b-44dd-b996-7f755039264b}} , do:
Select a forcing term tolerance {{formula:932de189-5401-434d-becc-4a90e1e762b6}} .
If {{formula:cd4ba2c3-c90f-450f-b7ee-13a6a9a2ba20}} :
Calculate the Newton-GMRES {{cite:ee2e9e9b9811d56fbca3a306343c146934c7748a}} step {{formula:5e703d74-6c18-47f3-a790-19e8c358f7f7}} based on the tolerance {{formula:c1c521f6-9ecd-43a5-b54f-8f1d4f5a9573}} .
Proceed to step 2e.
Form the local tensor model (REF ).
Calculate the approximate tensor step {{formula:ab17cc9d-8c50-4c03-86df-cc5992095268}} according to {{formula:50137400-93c3-4607-a873-2aee53871ed5}} using one of the three methods presented for selecting {{formula:e991c249-03d1-4f66-8610-9b218e145b6d}} .
Set {{formula:72f3b0aa-5584-4be2-bdde-8e8853854026}} where a linesearch strategy using the directions {{formula:efd5545a-2031-4a55-baca-8808fbc78724}} and/or {{formula:5c4d8afa-b2f0-42b5-b534-e1bb3d7db682}} is used to select {{formula:a1198707-dfd6-4c56-a8b0-77f1e6d7e9e2}} and {{formula:902eee1d-09ee-484a-b80d-d86e045c6aed}} .
If {{formula:519bd60a-8a57-4605-b410-ddb7e9957236}} is an acceptable approximate root of {{formula:9ee4d035-4a48-487e-84f5-415d8e63a3a7}} :
Stop.
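The inexact Newton iteration above can be sketched as follows. For self-containment, this sketch replaces the GMRES inner solve with a dense finite-difference Jacobian and a direct solve, and uses a simple backtracking line search; the example system, starting point, and tolerances are all illustrative:

```python
import numpy as np

def F(x):
    # Example nonlinear system with a root at (1, 1).
    return np.array([x[0]**2 + x[1] - 2.0,
                     x[0] + x[1]**2 - 2.0])

def newton_linesearch(x, tol=1e-10, kmax=50):
    for _ in range(kmax):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        # Finite-difference Jacobian; Newton-GMRES would instead apply
        # J*v products inside a Krylov solver with forcing tolerance eta_k.
        h = 1e-7
        J = np.column_stack([(F(x + h * e) - f) / h for e in np.eye(len(x))])
        d = np.linalg.solve(J, -f)
        # Backtracking line search enforcing sufficient decrease in ||F||.
        lam = 1.0
        while np.linalg.norm(F(x + lam * d)) > (1 - 1e-4 * lam) * np.linalg.norm(f):
            lam /= 2
            if lam < 1e-8:
                break
        x = x + lam * d
    return x

root = newton_linesearch(np.array([2.0, 0.5]))
```

In the algorithm above, the tensor step would replace or augment the Newton direction inside the same line-search framework.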
| m | e27958750ae3accf0efdc872bba35f7e |
For the circumstellar disk mass we considered an average value of 0.01 {{formula:01b4ef6c-2f94-4e30-a052-f3a5f943664d}} , and the radial extent was between 20 and 120 AU, similar to a transitional disk with an inner cavity. While the circumplanetary disk mass scales linearly with the circumstellar disk mass {{cite:4b7e2c35525ce890d9d47da7be512da6f5ffa587}}, changes in mass will also result in a different optical depth, which can affect the results described here. The large, optically thin inner 20 AU can also affect the results.
| d | 6fefd2ac4db7ad52b0179bc0f4974daf |
Computation Time Scaling: Fig. REF shows how the computation time of our MPC scales with the number of obstacles. The linear trend observed validates the remarks made in Section REF . To recall, with an increase in the number of obstacles, only the computational complexity of constructing the cost function in (REF ) changes. The number of variables of the QP ({{formula:8756ea54-859e-4cc2-b62e-5445cc0968d5}} ) remains the same, as it depends only on the trajectory parametrization. Furthermore, even within the cost term, only the matrix-vector product {{formula:9aa5ffea-7ea8-43ea-b3ba-d15fb82a5ac4}} needs to be computed at each MPC iteration. The matrix-matrix product {{formula:0703661b-a2b9-40e6-977c-6ea0d63dff6b}} does not change between the MPC iterations and thus can be pre-computed. Although a naive matrix-vector product scales quadratically, with appropriate parallelization this scaling can be made linear. Most linear algebra libraries like Eigen {{cite:a49ffeff514322df9e5b08ad2aa50c24c87c2c62}} automatically implement such parallelization at the back-end through multi-threading over CPUs.
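The precomputation argument can be sketched as follows; the generic symbols P and b stand in for the paper's matrices, and the dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_vars, n_terms = 50, 2000   # trajectory coefficients vs. obstacle terms

P = rng.normal(size=(n_terms, n_vars))   # fixed across MPC iterations
Q = P.T @ P                              # quadratic cost matrix: precomputed once

def mpc_iteration_cost_terms(b):
    """Per-iteration work: only the matrix-vector product P^T b, which
    scales linearly in the number of obstacle terms."""
    return Q, P.T @ b

b = rng.normal(size=n_terms)
Q_out, q = mpc_iteration_cost_terms(b)
```

Since the QP size is fixed by n_vars, growing the number of obstacles only lengthens the per-iteration matrix-vector product, matching the linear trend in the figure.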
| r | bf1d1c4e41b61c62b28903093dd52366 |
Figure REF illustrates the workflow of our OntoGCN neural model. Each gene contributes a node to a knowledge graph
where edges represent the similarity between the genes.
We create edges that connect each gene to its K nearest neighbours in the ontology embedding space according to cosine distance. We use DL2vec ontology embeddings, a graph-based method that learns gene representations over three biomedical ontologies (GO {{cite:6030c390df12636e45109aa33b6538264c9a9c6d}}, UBERON {{cite:2ddc924fc9e10dc644e8f1a45850b5e970715f1a}}, and MP {{cite:a8c6ee52b9da8ff5d0c1b4ca648865071bb317a0}}). The topology of the graph is the same for all patient samples in the dataset, but each patient spans a new instance of the graph with different expression values at the nodes. At each graph convolution step, the neighbouring nodes are aggregated together based on their connectivity (Figure REF shows the update step for the ESR1 node, but similar message passing happens for every node). After convolution, the gene nodes are dropped at random and also pooled using topology-based aggregation clustering. Finally, a prediction is made from the remaining nodes via a fully connected layer.
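The graph construction step can be sketched as follows, with random vectors standing in for the DL2vec embeddings and an illustrative choice of K:

```python
import numpy as np

rng = np.random.default_rng(0)
n_genes, dim, K = 100, 32, 5
emb = rng.normal(size=(n_genes, dim))   # stand-in for DL2vec gene embeddings

# Cosine similarity matrix between gene embeddings.
unit = emb / np.linalg.norm(emb, axis=1, keepdims=True)
sim = unit @ unit.T
np.fill_diagonal(sim, -np.inf)          # exclude self-loops

# Connect each gene to its K most similar genes.
neighbours = np.argsort(-sim, axis=1)[:, :K]
edges = [(i, j) for i in range(n_genes) for j in neighbours[i]]
```

The resulting edge list defines the fixed topology shared by all patients, while per-patient expression values populate the node features.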
| m | d3035afd361728cd01d8e5d21ae74282 |
In particular, the formula (see {{cite:0ec15f58139c225faeb54a1d3d96d7fa4c3fb552}}):
{{formula:257f24c5-874f-49bc-8214-23b878f5861a}} can be written in the following form
{{formula:d7002a33-74f2-4e61-b649-ea210fc1fa0c}}
| r | cb97fb6faeb76493c082055fce5b8806 |
The statistical analysis of large, complex, and high-dimensional data has become a significant challenge. Due to the rapid development of complex, performant technologies, data can now be collected on a large scale, resulting in high-dimensional and high-frequency data, sometimes necessitating high-performance computing, which is often a limitation for practitioners; see {{cite:b7402f2428c48e5499237f0ca313eed2fb18e366}} for a general view of data science and big data. Among various approaches, functional data analysis (FDA) provides statistical methods to handle large-scale and complex data {{cite:c47a5e5bf8c7316c51eb5b7be1bad00d792ed805}}, {{cite:ac4ef34bf658e96c10b75a8fd56c1be62a51288a}}. For a general introduction to FDA, the reader is referred to {{cite:fd009688b6a4ad6f8cb29525e215f936b54e5c83}}, {{cite:9b748cecc8774fced3086b35d7892bfd912e3deb}}, {{cite:7fffcbce6e4240043150df7d558e772040cc0291}}, and {{cite:f963f3f5dab12607d4924e37821f84d29bf63227}}. FDA assumes that observations (called functional data) have characteristics that vary along a continuum, e.g., curves or surfaces. Thus, FDA deals with data defined on a space that is intrinsically infinite-dimensional.
| i | 735ae22fb85dc853de87afda055d5961 |
Now we must show that {{formula:82b60912-c4ee-440d-ae3a-9f43ae4fb16e}} . By {{cite:f8d843e14c51201f8c2ea14e6c59ccf7aa829e1d}}, since {{formula:7e4accee-39fc-4690-951d-ed62c87a5d4b}} is a topologically graded {{formula:4ea50e6c-4394-445b-83d9-0bea335925d6}} -algebra, with grading {{formula:14d640f7-0aed-4cc7-b2db-16a58ad30c7e}} there is a commutative diagram of surjective {{formula:81a8bd8c-3ee2-4012-a756-9ab40e4aa05b}} -homomorphisms
| r | 58030674772ce7354709cf572d4daf40 |
Dynamic mode decomposition (DMD) is a more recent ROM approach {{cite:3dbc6cab2553f7fbeef2bc19df2e205c50c4ab45}}, {{cite:81528aec73d46b16de143c90bd61f4eb8d170193}}. DMD is non-intrusive (equation-free) and provides a simple linear dynamical system model ({{formula:5f786c14-ed3b-4bce-8109-590c8e444aa2}} ), which can easily be integrated forward in time. DMD has been used extensively in different applications, and its mathematical simplicity has facilitated several extensions {{cite:c7b967165beaeecd5f0efdbea543f7590006a70d}}, {{cite:da391f4cfc54733924879703257b7f155f84b388}}, {{cite:2509e5e74bd0552b0b92dfb4385a39830aa5ea72}}. In the context of cardiovascular flows, DMD has been applied to blood flow problems {{cite:9b93a8392480dfa3cd38d8cc60fe5def549db7b1}}, {{cite:1f92e52e50152ad4318c99e1ae541f500a8eaeeb}}, and multi-stage DMD has been proposed for studying blood flow physics during different phases of the cardiac cycle {{cite:9b8a9a50b59374449583429532319e40a5e5bffd}}. The Kalman filter can easily be applied to any linear dynamical system. DMD provides this linear system in a reduced-order space and therefore seems to be a promising approach for developing reduced-order DA models. There are extensions of DMD that incorporate Kalman-filter-based ideas for parameter estimation {{cite:6042325bb41507e71a90c94d62e38e1d0d4655c6}} and denoising {{cite:77d8a35d84f5ce417da89be09da9a659603f5783}}. Recently, DMD has been used in combination with Kalman filtering and smoothing for denoising time-resolved hemodynamics data {{cite:6bdc227d09622b06a764780a33ef094818781abc}}. Additionally, DMD has been used to compute the mapping matrix between low-resolution and superresolved 4D flow MRI data {{cite:abff20000701b696fbc9fcbaec57a2793f7a4a02}}. However, none of these studies has performed a DA study where low-resolution experimental and uncertain CFD data are combined to address not only low resolution and noise in experimental data but also uncertainty in computational modeling.
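A minimal sketch of exact DMD on noise-free snapshots of a known linear system (the matrix and snapshot count are illustrative):

```python
import numpy as np

# Snapshots of a known linear system x_{k+1} = A x_k.
A = np.array([[0.9, -0.2], [0.2, 0.9]])
X = np.empty((2, 51))
X[:, 0] = [1.0, 0.0]
for k in range(50):
    X[:, k + 1] = A @ X[:, k]

X1, X2 = X[:, :-1], X[:, 1:]

# Exact DMD: project the one-step map onto the POD basis of X1.
U, s, Vh = np.linalg.svd(X1, full_matrices=False)
A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
eigvals = np.linalg.eigvals(A_tilde)
# For noise-free linear data, the DMD eigenvalues recover eig(A).
```

This reduced linear model A_tilde is exactly the kind of object a Kalman filter can then be wrapped around for data assimilation.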
| i | f2eefa7dfe7c814a3ec30ad0646df732 |
Quantum process tomography
We fully characterize the process of W.-Z. phase acquisition using quantum process tomography {{cite:d87ce72e2a70514bfb80b6fafd6fce42b4037a0e}}, {{cite:7b9aebf7ea722fb64a2c0cd5a84f52daf9de10ff}} within the ground DS.
An arbitrary transformation (operation) on a quantum system with initial density operator
{{formula:0416525e-4d29-4bc1-99ae-a19a45e2404c}} can be described by the action of Kraus operators {{formula:672b931f-a456-486b-b664-a184f067cbd8}} :
{{formula:dba6f7c5-d4ed-4612-9a37-d94235076aa1}} .
The Kraus operators {{formula:ad1c7ab9-ccde-4e61-b06e-c2b43de69fff}} completely describe the whole process, and can be expanded in an operator basis {{formula:1dd6b51f-b1e5-4d90-a44a-2c88f0b7a6e5}} as {{formula:dc1db184-48df-4f75-b085-55677657ef51}} , where {{formula:ca90ca8a-6811-4327-8331-718dd117108a}} is the expansion coefficient.
Thus, the density operator encoding the state within the DS transforms as
{{formula:686237df-96ad-487b-8585-a2971c736a77}} ,
with weights given by the process matrix {{formula:01809561-87de-4613-8049-d723b21f9f97}} .
The process matrix {{formula:405b4c1e-1af3-432f-a4f5-23d0c7329232}} completely and uniquely represents arbitrary transformations.
In our experiment, the path-dependent process matrix {{formula:64912241-945b-4df0-b6d9-6651265221f8}} describes the transformation from the initial quantum state at {{formula:39e5a4a8-0f4e-449a-911b-5a7d74fee9d4}} to the final state at {{formula:13532400-23e3-4f0c-b401-1b090dbe7e0c}} , characterizing the W.-Z. phase acquisition process including any potential experimental imperfection.
Under ideal unitary evolution, each element {{formula:910c0df5-97dc-43e6-aca8-bb8b8f4a1a6a}} is derived from the non-Abelian W.-Z. phase.
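For concreteness, the step from Kraus operators to a process matrix can be sketched as follows for a single qubit; the Pauli operator basis and the helper name `chi_from_kraus` are illustrative choices, not the analysis code of the experiment:

```python
import numpy as np

# Single-qubit operator basis {E_m} = {I, X, Y, Z} (an illustrative choice).
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
BASIS = [I2, X, Y, Z]

def chi_from_kraus(kraus_ops):
    """Process matrix chi with E(rho) = sum_{mn} chi_{mn} E_m rho E_n^dag.

    Each Kraus operator is expanded as A_k = sum_m c_{km} E_m with
    c_{km} = Tr(E_m^dag A_k) / 2, so that chi_{mn} = sum_k c_{km} conj(c_{kn}).
    """
    C = np.array([[np.trace(Em.conj().T @ Ak) / 2 for Em in BASIS]
                  for Ak in kraus_ops])
    return C.T @ C.conj()

# An ideal identity process puts all weight on the (I, I) element,
chi_identity = chi_from_kraus([I2])
# while an ideal X gate puts all weight on the (X, X) element.
chi_x = chi_from_kraus([X])
```

In an experiment, the chi matrix is instead reconstructed from measured input-output state pairs, and deviations from the ideal pattern quantify imperfections in the phase-acquisition process.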
| r | 050abe0704d210d22dbcbabe8ddfc365 |
Previous works that employ autoencoders for style transfer tasks {{cite:1a37744e3b3eb6f9eacb8feeb1298e0c9dd67df0}}, {{cite:c7f4452cd9a6069caada31742f90c39862472c7c}}, {{cite:6a99a1839fc1b8c3be2c38ab094230fdebc0b6eb}} often suggested adding adversarial losses {{cite:f63010298665d5ba9ffe42196274ff99445696bd}} on the latent space, so as to discourage it from storing style-related or attribute-related information.
However, a potential downside of this practice is that it introduces additional complications to the training of our Transformer-based network, which is already very complex itself.
We instead demonstrate that by using suitable {{formula:612d5763-db78-4dd2-b35b-d1592a631cd5}} and {{formula:7a4136a4-a8c3-4db3-8434-a68a74da1a62}} to control the size of the latent information bottleneck, both strong style transfer and good content preservation of the input {{formula:3d8aeec8-148b-4dae-9161-aaef6101d49a}} can be accomplished without auxiliary training losses.
| m | 84570bbc6f8c2a84bbeb2e3611e2a005 |
In COVID-19 detection, the networks' performance decreased similarly for both weight decay and spectral decoupling (Figure REF ) when training the networks on the combined BIMCV{{formula:169ff7f6-97a6-419b-bedb-288aca7cdabc}} and PadChest dataset. Radiographs contain systematic differences between data repositories and medical centres, such as laterality tokens and differences in the radiopacity of the image borders, which could arise from variations in patient position, radiographic projection or image processing {{cite:b4cc009508a99f8565170ad7f2bfd161676a7fc4}}. These differences can easily be leveraged by neural networks to detect where a single radiograph originates. We speculate that spectral decoupling was unable to prevent shortcut learning because shortcuts are so easy to learn in the combined PadChest and BIMCV{{formula:d4f2e7ee-15f4-49d8-ab44-7be6242d2c65}} dataset. In addition, our results showing the ability to prevent shortcut learning (Table REF ) were obtained after considerable hyper-parameter optimization, and no significant differences could be seen in the class activation maps between networks trained with either weight decay or spectral decoupling. Thus, removing any obvious superficial correlations from the training dataset is crucial, as there seems to be a limit to how much spectral decoupling can help with dominating features and spurious correlations.
| d | 887e453b8387827f7af9bf68e7099aae |
Implementation Details. We build our method on top of the GFLV1 {{cite:f377b1f33464fa3445b40c87d55ffcbc7438d9b9}} detector using its official implementation. The teacher and student detectors in our experiments are standard GFLV1 architectures, with ResNet-50 as the backbone and FPN {{cite:9693961c8264d66728a3069aaf6b3bcc39fa4dd6}} as the neck. We train our detector with the same parameters described in their paper. All experiments are performed on 8 NVIDIA Tesla V100 GPUs with a batch size of 8.
| d | 0c50ed8903bfb2edd3a111ff37763c5d |
Since few existing methods have the same capability and the same experimental setting as ours, we modify the existing competing methods to enable a fair comparison.
We choose state-of-the-art deep learning models for 3D point clouds as benchmark methods, including PointNet {{cite:11105f11876338fd3ce3bcbd88c7947b2cef3668}}, PointNet++ {{cite:d90b53bf39b557b451df494ed9599a5e90c12f76}}, PointCNN {{cite:09b9c4e15b9e3a0b624003e559c550665ffad8bd}}, and PointConv {{cite:38219e1fb2979ccd478de55c111fdab44f3b3368}}.
We carefully adjusted the size of each individual model so that its number of parameters is roughly the same as, or at least on the same order of magnitude as, ours.
Among them, we use the labels at the top level as the supervised signal for training; for the unsupervised middle level, we select the intermediate output from a middle layer, e.g., the output before the feature transformation layer {{cite:11105f11876338fd3ce3bcbd88c7947b2cef3668}} of PointNet,
and then perform the mini-batch K-Means algorithm {{cite:0a6604ba6d25b5ef22527877acf6a048ced521a4}} to cluster these features.
Finally, we perform the same matching process as before to ground the clustering labels to the latent part labels.
We evaluate their performance using the aforementioned metrics in Section REF .
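The clustering-and-matching step described above can be sketched as follows; this numpy-only code stands in for the mini-batch K-Means of {{cite:0a6604ba6d25b5ef22527877acf6a048ced521a4}} and the label-grounding step, and all function names (and the farthest-point initialization) are illustrative simplifications rather than our exact pipeline:

```python
import itertools
import numpy as np

def farthest_point_init(X, k, rng):
    # Pick one seed, then repeatedly take the point farthest from all centers.
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(-1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    return np.array(centers, dtype=float)

def minibatch_kmeans(X, k, batch=32, iters=200, seed=0):
    """Tiny mini-batch k-means with running-mean center updates."""
    rng = np.random.default_rng(seed)
    centers = farthest_point_init(X, k, rng)
    counts = np.zeros(k)
    for _ in range(iters):
        B = X[rng.choice(len(X), batch, replace=False)]
        assign = ((B[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        for xb, c in zip(B, assign):
            counts[c] += 1
            centers[c] += (xb - centers[c]) / counts[c]
    return ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)

def ground_labels(pred, gt, k):
    """Ground cluster ids to latent part labels via the best permutation."""
    best = max(itertools.permutations(range(k)),
               key=lambda p: np.mean(np.array(p)[pred] == gt))
    return np.array(best)[pred]

# Usage on synthetic, well-separated "middle-layer features".
rng = np.random.default_rng(1)
feats = np.concatenate([rng.normal(c, 0.1, size=(50, 4)) for c in (0.0, 5.0, 10.0)])
gt = np.repeat([0, 1, 2], 50)
acc = np.mean(ground_labels(minibatch_kmeans(feats, 3), gt, 3) == gt)
```

The brute-force permutation matching is only practical for a small number of part labels; for larger label sets, Hungarian-style assignment would replace it.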
| m | ea487bc8bdf02dd8b891a094dbfddf91 |
We also tested the impact of our network augmentations against adversarial attacks; here again, we showed that PC helps to improve the robustness of the networks. So far, the most promising strategy for achieving robustness has been adversarial training, whereby adversarial datapoints are added to the training dataset. While efficient, this strategy has also been shown to have strong limitations {{cite:fa87e8bd28df6a7d3899db7ee8a1847be1185780}}, {{cite:1f5595e944233d038dc9c002991d0af28e650641}}. Beyond factors such as the choice of norms used for training or the high computational requirements, adversarial training is ultimately performed with a supervised loss function that can alter the decision boundaries in undesirable ways {{cite:1f5595e944233d038dc9c002991d0af28e650641}}, {{cite:0a10582f2b1468ddf045ff44b4784138648b0340}}.
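To make the adversarial-training mechanism concrete, here is a minimal numpy sketch of the classic FGSM perturbation on a logistic-regression model; it illustrates the generic idea only and is not the attack or architecture used in our experiments:

```python
import numpy as np

def logistic_loss_and_grad_x(w, X, y):
    """Mean logistic loss and its gradient w.r.t. the inputs X."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
    grad_x = (p - y)[:, None] * w[None, :] / len(y)
    return loss, grad_x

def fgsm(w, X, y, eps):
    """Fast gradient sign method: one-step worst-case input perturbation."""
    _, gx = logistic_loss_and_grad_x(w, X, y)
    return X + eps * np.sign(gx)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
w = rng.standard_normal(5)
y = (X @ w > 0).astype(float)
# Adversarial training would now add (X_adv, y) back into the training set.
X_adv = fgsm(w, X, y, eps=0.3)
```

Each perturbed input moves along the sign of its own loss gradient, so the supervised loss strictly increases on the perturbed batch, which is exactly what the augmented training set is meant to counteract.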
| d | ce2b1cd5bc45cf2d8c4d8462f489d350 |
We refer the reader to Refs. {{cite:b0f0846037c3ea52da39a001ded77d9ad99b463e}}, {{cite:56e6517f063cc5b11f0c4d5a14f4a61ffa505497}} for details on the interpolating operators,
the parity and spin projections, the measurements, and exploratory studies of the spectrum.
One observation we immediately made from our preliminary studies, using the dynamical ensemble with {{formula:f818c4d9-5dc8-457f-929b-2d226b863b5e}} ,
was that {{formula:83157f07-37db-4cbe-8a3f-43dccb37d6c1}} , despite being a natural choice of top partner, e.g. Ref. {{cite:d120ae5bb5168c64c46ba90b164b9281c9844c85}} (but see also Ref. {{cite:5be110344878ce03e1e82112587c452b21450a32}} for other possibilities),
is not the lightest state, as shown in the left panel of Fig. REF .
To further investigate the mass hierarchy of chimera baryons, we performed additional measurements in a partially quenched setup using the same ensemble, in which the
valence fermion mass differs from the sea fermion mass, {{formula:659c39a5-5cd4-4915-9706-ee129e943ae3}} .
With the choices of {{formula:e01b0617-bafb-48cb-9310-3fa57e49880a}} ,
we find that the mass ratio between pseudoscalar mesons composed of antisymmetric and fundamental constituents {{formula:d93327a2-aa88-4c6d-88ea-35d05aa4bdb1}}
is about {{formula:151a8a5d-2d07-4b90-8531-88a10602b0b9}} , and {{formula:57c15477-0884-4d56-9bf2-f6079dd0a15d}} is almost degenerate with {{formula:8a5dff7f-be47-47c8-8034-6b299c736ade}} and thus the lightest state in the baryon spectrum.
A similar behaviour has been observed in the quenched approximation {{cite:56e6517f063cc5b11f0c4d5a14f4a61ffa505497}}.
| r | 2e41b644947f376d2aaffe75e0107e68 |
Figure 3 shows the space-time evolution of the atomic density integrated along the transverse direction at relatively small
repulsive potential strengths; the other parameters are the same as in the experiment.
Fig. 3(a) shows the case of a very small perturbation {{formula:3e804e11-13ab-43e1-a7af-e899044ac8f3}} .
However, in the left panel the propagation of the wave cannot be directly observed through the density profile. In order to
visualize the trajectory of the wave, we present the normalized density profile in the right panel, i.e., the actual density
minus the ground-state density. It is clear that the initially induced negative perturbation splits into two density dips,
which propagate outward at an almost constant speed and slow down when approaching the superfluid boundary due to the density
dependence of the speed. For such a weak perturbation {{formula:46c0d53e-4bf1-437e-9b87-854d2bce727d}} , one might expect to excite a linear wave. One can extract the
propagation speed from the center of the normalized density profile {{cite:3369d538fa0ac852913004b2498b90da9af13d57}}, {{cite:66e594860d80e86a7d1c335cf58ee9455aed2b29}}. The obtained
speed of {{formula:8b58df86-89d7-4def-b36a-44d45efc9bfb}} is consistent with the analytical prediction {{formula:4cde843f-a248-43a0-99ce-a91d901b2df4}} . With increasing {{formula:8503d73a-ecd3-41a9-820f-d53690835a0d}}
in Fig. 3(b), in the left panel we observe directly the formation of the shock wave from the density profile,
in which the shock front appears as a sharp discontinuity of the two density lines, forming a “V” shape.
In addition, one can see that after long-time dynamics, the steepening of the shock-wave front
is accompanied by oscillatory behavior,
which are more obvious from the normalized density profile in the right panel.
{{figure:0348738d-8768-45da-b41c-13e3ab15265e}} | r | c1d62726b9b391ac15b3c11e6e685ebb |
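The speed-extraction step described above can be sketched as follows; the dip-center definition (position of the minimum of the normalized profile) is a simplified illustrative choice, not necessarily the centroid definition of the cited works:

```python
import numpy as np

def dip_speed(x, t, delta_n):
    """Linear-fit speed of a moving density dip from normalized profiles.

    delta_n[i] = n(x, t_i) - n_ground(x); the dip center at each time is
    taken as the position of the profile minimum.
    """
    centers = x[np.argmin(delta_n, axis=1)]
    slope, _ = np.polyfit(t, centers, 1)
    return slope

# Synthetic check: a Gaussian dip moving outward at speed c = 2.0.
x = np.linspace(-50.0, 50.0, 2001)
t = np.linspace(0.0, 10.0, 50)
delta_n = np.array([-np.exp(-(x - 2.0 * ti) ** 2) for ti in t])
```

Fitting a straight line to the dip trajectory gives the constant propagation speed in the bulk; in the trapped gas the fit would only be applied away from the superfluid boundary, where the speed is no longer constant.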
The availability of vast amounts of data and the advent of deep neural network models have accelerated the adoption of AI systems in the real world, owing to their significant success in natural language processing, computer vision, and other data-intensive tasks. However, despite the advances in performance across these tasks, deep learning models remain a black box, i.e., it is extremely hard to understand how the inputs map to the outputs. Recent research in XAI has attempted to address several aspects of "opening this black box" to help humans, both the system users and domain experts, understand such models' functioning and decision-making process {{cite:f7ce16e6e8ea3b154cfa1d9795d2c09d7005ab08}}.
{{figure:78c21b8d-27ae-4b72-90c6-b80aed8de2cf}} | m | 4960c62c0f76d672a41b4f79b47fd55d |
Scene-specific APRs are not designed to solve domain invariance, so they fail to generalize well to different domains of the same scene. Despite being trained on multi-scene data, MSPN and MS-Transformer also do not encourage domain invariance. Unlike these approaches, our method aims to address the domain invariance of APRs. In this paper, we hypothesize that taking image pairs of a given pose under different conditions as input, together with an objective for domain invariance, should improve the accuracy of a given APR in unseen domains compared to the same APR trained only on the real distribution. To test this hypothesis, we introduce a robust domain-adaptive framework for absolute pose regression. In our work, scene images are augmented for different domains using generative methods, and parallel branches are trained with a contrastive Barlow Twins objective inspired by {{cite:c920a3b236c76eb6a8303487ab2cb4f139692146}}. This objective reduces the difference between embeddings of images taken from the same pose under different domains. The proposed training framework is general and can be applied to different CNN-based APRs to improve domain invariance; moreover, the number of parallel branches used for training can be adjusted for different tasks. The framework is shown in Fig. REF (a) for three parallel branches with shared weights: one for the original image and two for domain augmentations. Domain augmentations are performed by processing the original image with generative adversarial network (GAN) methods such as ManiFest {{cite:14f8c7936078e6a63fa0b6c052b287801a96a805}} and CoMoGAN {{cite:d4621f25b78eb3678c223637968098f6f5f610bb}}. While the parallel branches ensure the robustness of the model during training, for inference, as shown in Fig. REF (b), the model is loaded as a single branch; since the branches share weights, this adds no computational complexity compared to other methods.
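The Barlow Twins-style objective used across the parallel branches can be sketched in a few lines of numpy; the function name and the λ value are illustrative, and in the actual framework this is applied to network embeddings rather than raw arrays:

```python
import numpy as np

def barlow_twins_loss(z1, z2, lam=5e-3):
    """Barlow Twins objective on two batches of embeddings (one per domain).

    Embeddings of the same poses under different domains should yield an
    identity cross-correlation matrix: on-diagonal terms are pulled to 1
    (invariance), off-diagonal terms are pushed to 0 (redundancy reduction).
    """
    # Standardize each feature dimension over the batch.
    z1 = (z1 - z1.mean(0)) / z1.std(0)
    z2 = (z2 - z2.mean(0)) / z2.std(0)
    n, d = z1.shape
    c = z1.T @ z2 / n  # cross-correlation matrix, shape (d, d)
    on_diag = ((np.diag(c) - 1.0) ** 2).sum()
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()
    return on_diag + lam * off_diag

# Perfectly invariant, decorrelated embeddings give zero loss.
z = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])
```

Because the loss depends only on the cross-correlation of the two branches' outputs, it needs no negative pairs, which is one reason this objective suits the shared-weight parallel-branch design.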
| i | c5b7a685829ae6db681712e24da1c7e0 |
The run-times of our techniques are broadly similar to those described by Anders and Briegel {{cite:bb284b5cc3c86436dc38121ecf8959940dd12442}}, who describe stabiliser states as the image of graph states {{formula:a7ab34f2-59f6-450c-9e0b-f49e0ec5e5f4}} {{cite:f8922fcad2cf25375ebd4c56376c7e96d432ceeb}}, {{cite:2c7af1e9dafc7d563954ee9d277899fc6ee04710}} under a tensor product of local Clifford operations (essentially products of {{formula:02e897c7-5149-439a-b864-e1443a270e32}} and {{formula:68beb852-bbea-49f8-91a5-df7c02df8f19}} ).
Note that our techniques represent the state {{formula:59cdbd66-5e24-43a9-b95a-339e37b363e0}} using a rank {{formula:08bc8010-e54e-43d2-837e-1bf2ea29873d}} , an expansion matrix {{formula:e36c30c7-8340-4185-a3df-1926028f6e1b}} , and a Gram matrix {{formula:bc2986f8-2a57-49a8-b346-ce2687234923}} .
From the way that the Gram matrix {{formula:8fc66bb1-14b0-4634-bd93-e8af5a42ad83}} updates under {{formula:56d04607-2169-4e50-b57d-ad5323db353a}} operations with our techniques, it follows that a quadratic form expansion for a graph state {{formula:9304762c-c64b-4f2d-844b-98b222db4f22}} is essentially to set {{formula:aaaaa704-c62f-46ae-ab7d-f1c26f2fea03}} , {{formula:09ca605a-4e1e-444b-a46e-fc0f54d494ff}} , and to set {{formula:bdf0a422-e70f-4950-9672-f17c330a6366}} to the adjacency matrix of {{formula:37f19e39-884e-4b09-ac73-c5d93f9f5d90}} .
The degree parameters of Ref. {{cite:bb284b5cc3c86436dc38121ecf8959940dd12442}} then coincide with the sparsity parameters {{formula:a4ac46cc-265f-4f22-811a-3d775b0ef599}} for the Gram matrix {{formula:11cade65-f3f4-4dfc-ab76-b7ee14f52ee8}} .
Quadratic form expansions of the sort of Eqn. (REF ) may then be considered the image of a graph state under additional {{formula:a55d6e99-ff56-4f72-9dfb-999a0e5b5c02}} operations, followed by a `CNOT circuit' — which is to say, a linear isometry of the form {{formula:eb570a9c-3621-415a-95b3-9e43cdef0840}} .
The principal advantage of our techniques over those of Ref. {{cite:bb284b5cc3c86436dc38121ecf8959940dd12442}} is the use of that isometry, represented by the expansion matrix {{formula:ac7023cf-093b-43f4-9c95-2abac68d5a2d}} , to represent correlations between {{formula:1e2e0ab2-f520-478d-8994-23053952a7d2}} -basis measurement outcomes.
This provides us with improved simulation of parallel {{formula:88e0a830-bc1a-4c3b-a582-2413cf0df521}} -basis measurements as in Section REF , and a way to try to systematically reduce the complexity of operations in structured circuits as suggested by the results of Section REF .
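As a concrete check of the graph-state special case mentioned above, one can build the state-vector amplitudes directly from the adjacency matrix; the code below is a brute-force illustration (exponential in the number of qubits), not the polynomial-time simulation that the techniques of this section provide:

```python
import numpy as np

def graph_state(adj):
    """State vector of the graph state |G> from its adjacency matrix.

    The amplitude of bitstring x is 2^{-n/2} (-1)^{x^T A x / 2}, i.e. a
    quadratic form expansion with trivial expansion matrix E = I and Gram
    matrix Q = A, matching the special case described in the text.
    """
    n = adj.shape[0]
    psi = np.empty(2 ** n)
    for idx in range(2 ** n):
        x = np.array([(idx >> (n - 1 - j)) & 1 for j in range(n)])
        # x^T A x counts each edge inside the support of x twice.
        psi[idx] = (-1) ** ((x @ adj @ x) // 2) / 2 ** (n / 2)
    return psi

# Two vertices joined by an edge: CZ|++> = (|00> + |01> + |10> - |11>)/2.
A = np.array([[0, 1], [1, 0]])
psi = graph_state(A)
```

The sign pattern reproduces the action of the CZ gates defining the graph state, with the Gram matrix alone carrying all phase information.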
| d | 9394ce8f1e8673f96d64b00251c7e748 |
We show that estimating the state-action value ({{formula:e9822df8-347d-4809-b4d5-d6bbabc29a59}} ) by minimizing the mean squared Bellman error leads to a regression problem with confounding, in which the inputs and the output noise are correlated. We provide a re-interpretation of the popular strategy of fixing the target Q-network in Deep Q-Networks (DQN) {{cite:3b6ad639e82b4c0c7862e1d27b1e8466a5da59d3}} and Fitted Q Evaluation (FQE) {{cite:a1b208be93cfea7cb652c5aaf341797cba3fa80e}} as a way to overcome confounding.
We extend the IV interpretation of the on-policy state-value ({{formula:8de136bb-17ae-487b-93bb-35f95f918db2}} ) linear estimation problem to off-policy state-action value ({{formula:a3951885-ff61-4a6e-9e47-d8a07d9830eb}} ) linear estimation. As shown recently by {{cite:22b1ee7ad5d61d9370e3db7aee0f491d1bc2bbba}}, we can further recast the problem of non-linear {{formula:38e46e4f-21ec-4199-9a29-a357bbe8e633}} -function evaluation and OPE as a non-linear IV regression problem, bringing together the literature on IV and RL.
We review recent IV methods developed in machine learning, including Deep IV {{cite:8ee64031d053e287dbef36b4583d54c81760c1d1}}, Kernel IV {{cite:c46a3851431e7037e87940671223d0b9188a6768}}, Deep Generalized Method of Moments (Deep GMM) {{cite:f12ebf5c4936f9490d24f704823ceb4ae1d22f84}}, adversarial GMM (AGMM) {{cite:ac993bd036ba16fbb5bee093fc8a45a94b5027dc}} and Deep Feature IV (DFIV) {{cite:22b1ee7ad5d61d9370e3db7aee0f491d1bc2bbba}} and specialize them to the OPE problem. By doing so, not only do we recover some OPE techniques already available, but also obtain novel methods and insights.
We evaluate the performance of these techniques empirically on a variety of tasks and environments, including Behaviour Suite (BSuite) {{cite:2b88f1ace5cb845f13f7b7c75d99f6c72cdd75de}} and DeepMind Control Suite (DM Control) {{cite:96c950a34036efbd803b83a63b8edc6cf4c06911}}. We found experimentally that some of the recent IV techniques such as AGMM display performance on par with state-of-the-art FQE methods. We open-source the implementation of all methods and datasets at https://github.com/liyuan9988/IVOPEwithACME.
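The core IV mechanism that these methods share can be seen in a linear toy problem; the sketch below is generic two-stage least squares on simulated confounded data, not an implementation of any of the cited estimators:

```python
import numpy as np

# Confounded linear model: y = b*x + e, with x correlated with the noise e
# (as in Bellman-error regression), and an instrument z that is correlated
# with x but independent of e.
rng = np.random.default_rng(0)
n, b_true = 100_000, 1.5
z = rng.standard_normal(n)   # instrument
u = rng.standard_normal(n)   # confounder
x = z + u                    # regressor, confounded through u
y = b_true * x + u           # output noise shares u with x

b_ols = (x @ y) / (x @ x)            # biased: absorbs the confounder
x_hat = z * ((z @ x) / (z @ z))      # stage 1: project x onto the instrument
b_2sls = (x_hat @ y) / (x_hat @ x_hat)  # stage 2: regress y on fitted x
```

Ordinary least squares converges to a biased coefficient, while the two-stage estimate recovers the true one; in the RL reinterpretation, the fixed target network plays a role analogous to the instrument stage that breaks the input-noise correlation.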
| i | 8203e23c1397fdc064ada98a28452a80 |
Motivated by this, we extend the theory of noncommutative differential forms of Connes {{cite:4547a729cb4b2e76ed4956b85a7ec5091a901554}}, Cuntz-Quillen {{cite:6e427be1dba0def520f7f9a17002199b33f65dd3}}, and Ginzburg {{cite:0cba4930ae8b1e6d706b321f6d31a7ecfebc2d9c}} to the context of near-rings. The main idea is that every element of the near-ring {{formula:f912d086-1129-4dfe-bece-e5a59c4e640c}} , interpreted as a program written in the language of {{formula:15e152c7-5ac2-4a11-907b-e8cdbb6a6c98}} , produces a family of maps on the framing space {{formula:fbfe6043-ba3c-462c-b5fb-6f88cfbd833f}} over the moduli of machines {{formula:6522965a-ab55-4299-a17a-742a645f51a8}} ; that is, each machine in {{formula:2e032d3e-5f76-45a4-8aad-0aaa52c376bf}} performs a computation {{formula:1d433b17-a6e2-4431-8354-91fead161e19}} specified by the program. This statement naturally extends to differential forms.
| i | ed8c68bb2886c95e04a6def66af41ba4 |
The straightforward way to reduce GPU memory is to build fewer 3D cost volumes. Fast-MVSNet {{cite:d4ed859d69d50e6b0204c68aae664b579f44ff8e}} only calculates the 3D cost volume on a sparse depth map and then propagates the sparse depth map into a dense one. CasMVSNet {{cite:d9dee1c5be67a850c925990d3ce7bffcefecfcfe}} and CVP-MVSNet {{cite:89d612bf2c898b70880080c1a09ab5b237cf01d9}} use a coarse-to-fine strategy to build the 3D cost volume, reducing the computation cost at high resolutions. GBiNet {{cite:3088a9505ee0272f5cb48adf5d3dda4d058177d4}} treats MVS as a binary search problem and only builds cost volumes on the half of the depth range with a high probability of containing the true depth.
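The binary-search view of depth estimation can be illustrated with a scalar toy; the `cost` callable stands in for a learned cost-volume score, and the interface below is hypothetical rather than GBiNet's actual subdivision scheme:

```python
def binary_depth_search(cost, d_min, d_max, steps=16):
    """Coarse-to-fine depth estimation as a binary search.

    At each step the current depth range is split into two hypotheses
    (the centers of its halves), and the half whose hypothesis has the
    lower matching cost is kept for the next refinement step.
    """
    lo, hi = d_min, d_max
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        left = 0.5 * (lo + mid)    # hypothesis in the lower half
        right = 0.5 * (mid + hi)   # hypothesis in the upper half
        if cost(left) <= cost(right):
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Toy cost with a unique minimum at the "true" depth of 3.7.
est = binary_depth_search(lambda d: (d - 3.7) ** 2, 1.0, 8.0)
```

Each refinement halves the depth range, so only two hypotheses per step need cost evaluation, instead of a dense sweep over all depth planes.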
| m | 5c8cdc132b35bc7fdffaac531f933b90 |
Sun et al. {{cite:f4508fe0877aaf9eb80b2a050c2621344eadd6a5}} propose an end-to-end integral regression model to extract 3D poses from 2D heat maps. Madadi et al. {{cite:8450a641d00bd9b3e23ad464cf316de7e9220678}} use CNN-based 3D joint predictions as an intermediate representation to regress SMPL pose and shape parameters, and then reconstruct 3D joints in the SMPL output.
Dushyant et al. {{cite:5a5e9f052900f04eeb82f46ec5dd631ba746cd4d}} propose a method utilizing a fully CNN, which regresses 2D and 3D joint positions and motion skeleton to produce a real-time stable 3D reconstruction of motion.
Different from the end-to-end methods, Martinez et al. {{cite:dc93df77bbea9e30f4fc650b9ac1b7f0f03f882c}} use a simple but effective regression network to learn the correspondences from 2D poses to 3D poses without using any image information.
Moreno-Noguer {{cite:abffcddd364bbbe63ebe473f5fc8f1bebc719492}} implements an approach to learn the correspondence between the 2D distance matrix and the 3D distance matrix with a regression model.
Wang et al. {{cite:a5025dcbdde0610a08d3d346364a16196cb36e90}} use 3D data to train an intermediate ranking network and estimate 3D poses from 2D poses by predicting the depth rankings of human joints.
| m | 9eab3aa568dc548e5cf0398d5498bf14 |
We summarize some basic facts about {{formula:e06666d0-b41b-4bd5-a1a6-636cd1cda3bb}} -adic analysis that will be used in
this paper. For a complete exposition, we refer the reader to {{cite:6f855327629fd3a1e19bbb2e3bf6882e07a66d56}},
{{cite:172b4c9febc1fd47beafb9adb77bcb3e05ae6c1b}}.
| r | 81e7127b09021215a04aa1205af951c0 |
where {{formula:68d36313-a14a-444e-a0c0-a233c85293e5}} denotes the Euler number of {{formula:3ad2f5ef-4131-429c-9c7f-f115d81ec0d8}} and {{formula:11167e5d-368a-4bf2-9073-c038ea981666}} the degree of the divisor {{formula:957e531d-9503-439e-b050-898aa9fdf08e}} . If {{formula:5d5e6282-e252-4d21-93a9-ce39cbeaff18}} , then the unique metric exists if and only if {{formula:1c14bea3-8c0f-49dc-a984-a1ba0c3c0cd4}} (see {{cite:66b6c137cd662479b270e1ec708fe330a98b5cf3}}, {{cite:34a31b5a621f0f0dd9ed023a79389f33063da41c}}). If {{formula:7f6a793d-6f2f-40e9-ac8b-fa3df62317f6}} , the problem is still open, although some partial results are known. Troyanov {{cite:3ca184d13135d709e325a1e0365148ef9bbafc82}} proved that there is a CSC-1 metric on {{formula:5bed1e07-89b0-40c2-99e2-44c5c0e023ab}} if and only if {{formula:628366d7-00dc-4566-85cd-8b0f2aa6a17a}} . Troyanov {{cite:34a31b5a621f0f0dd9ed023a79389f33063da41c}} also found a sufficient condition under which there exists a CSC-1 metric on a compact Riemann surface {{formula:29512d82-2c4b-4514-8775-ea86d53264c0}} with finite conical singularities. Provided that {{formula:24fa89a0-c24e-4db6-9c7c-5e7594110252}} and all angles lie in {{formula:49fd6823-f63b-443c-88cd-f3f5d00091b1}} , Luo and Tian {{cite:e99455cc7c04b7b4c210213e346713428276140e}} proved that Troyanov's sufficient condition is also necessary and that the metric is unique. Under some restrictive conditions, Chen and Li {{cite:deea88904f6723b0d76efd5476679512cbbeadb5}} found necessary conditions for the existence of CSC-1 metrics on compact Riemann surfaces with finite conical singularities. The authors of {{cite:62f2ee3d8a87af108a8cda6fc3c3f147350bf1f7}} studied the developing maps of a CSC-1 metric {{formula:575193f5-5340-4665-a1a8-4e1ed17bc60f}} on a compact Riemann surface {{formula:d41f56fb-cad6-4921-8d98-7f95e39a5a53}} with finite conical singularities.
In general, a developing map of {{formula:62c7763f-117f-41d6-8490-613851d8ab20}} is a multivalued meromorphic function on {{formula:3beb6907-2a2d-4c5b-ab94-a296d66f1624}} . In particular, they constructed CSC-1 metrics with finite conical singularities by using a kind of meromorphic 1-form on a compact Riemann surface. For more results about CSC metrics with singularities, we refer the reader to {{cite:27b429dcb57e8ca01d722cd1b86b5fe6dc131f4c}}, {{cite:9152b063a0d4482a75a351eecf9d11467ee816f1}}, {{cite:857fbad078110f1742f9d81e815bb20c40402106}}, {{cite:281219f525d71a2fffea9f7180fd6039c3e10590}}, {{cite:d31bc5c2d3532f55d1fe477b7edd17341a647c26}}, {{cite:9ec86c9798217dc28f48e322aaed444b6fdaf696}}, {{cite:781d3aa005648bfabedd5a4f5766e692432b9826}}, {{cite:1622721586793768773b92e9ddcb263c3d4303b8}} and the references therein.
| i | 9f376b515b8968759f5915ec7d2470fb |