| text (string, length 54–548k) | label (string, 4 classes) | id_ (string, length 32) |
|---|---|---|
Discussion.
Our evaluation shows that learning on monomer datasets, especially when the dataset size is small, is much more challenging than learning on the larger datasets used in related work {{cite:c182e142a1fed55cd36b1ddacd81aa09311229a0}}, {{cite:c503b5768cb6c279abea6ef073feda049f3a75d1}}.
GraphNVP shows poor performance since it uses the molecular graph adjacency matrix as model input, which is extremely sparse for our relatively large monomer structures.
The vocabulary-based JT-VAE and HierVAE perform reasonably well. However, when the dataset is small, JT-VAE is not able to mine a vocabulary that would allow it to generate many unique molecules. The more diverse vocabulary of HierVAE clearly solves this shortcoming, but the low Membership score shows that it does not capture monomer class specifics.
In general, grammar-based methods can better capture class-specific molecule characteristics than DL-based methods and have higher Membership scores.
However, they perform poorly on RS.
Specifically, MHG has fine-grained rules that simply attach atoms. This leads to high diversity, but the rules hardly capture domain-specific characteristics.
The STONED method iteratively replaces atoms based on the SELFIES grammar and
only performs interpolation and exploration based on training data, making it hard to embed domain-specific knowledge in the generative model.
The overall results show that:
1) Our learned, substructure-based grammar successfully captures class specifics
– a critical evaluation criterion which has been ignored so far.
2) Other critical, domain-specific metrics such as RS can successfully be optimized during grammar learning. Our score is {{formula:9148038d-0d16-4efa-9bab-1287455acfa7}} higher than the others. More importantly, the optimization is done in situ during grammar construction, and hence avoids post-processing.
3) Our method is the only one that consistently achieves stable performance.
Altogether, these results clearly differentiate us from the others.
| r | 32336f042511a9acd5341ff595290c25 |
Finally, we conducted a user study to evaluate the quality of generated explanations.
We measured user performance in correctly answering questions based on the explanations, to test user understanding of agent behavior.
We also collected user subjective ratings on explanation goodness metrics {{cite:e6062f326ebf003cab4fedc4768bbacff702ad40}}.
The results show that the generated explanations significantly improve user performance and increase subjective ratings on various metrics including user satisfaction.
| i | 4923f686cd16e8da4b07d45fba2299e2 |
{{cite:3db310bc334b7bb7221d25d2bc90e6190e559a36}} propose to train trading systems and portfolios by optimising objective functions that directly measure trading and investment performance. Rather than basing a trading system on forecasts or training via a supervised learning algorithm using labelled trading data, they train their systems using a direct, recurrent reinforcement learning algorithm, an example of the policy gradient method. The direct part refers to the fact that the model tries to target a position directly, and the model's weights are adapted such that the performance measure is maximised. The performance function that they primarily consider is the differential Sharpe ratio. Denote the annualised Sharpe ratio {{cite:7511758d760548af01d447fb9bbdf27ae83a325e}} as
{{formula:cccbfaf1-aa6a-40f4-b3da-97b0dd59e0a1}}
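The differential Sharpe ratio they primarily consider can be tracked incrementally; the following is a minimal sketch of that update, where the variable names and the adaptation rate `eta` are our choices rather than taken from the cited work.

```python
import numpy as np

def differential_sharpe(returns, eta=0.01):
    """Incrementally track the differential Sharpe ratio of a return
    stream: A and B are exponential moving estimates of the first and
    second moments of the returns, updated with adaptation rate eta."""
    A, B, out = 0.0, 0.0, []
    for R in returns:
        dA, dB = R - A, R ** 2 - B
        denom = (B - A ** 2) ** 1.5
        # Guard the first steps, where the variance estimate is zero.
        out.append((B * dA - 0.5 * A * dB) / denom if denom > 1e-12 else 0.0)
        A += eta * dA
        B += eta * dB
    return np.array(out)
```

In the direct reinforcement setting described above, this per-step quantity can serve as the reward signal whose gradient adapts the trading position.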
| m | 01e551eac6e0827431a062a6847df8bf |
One direction for future work is to generalize our analysis to universal dynamic regret minimization (i.e. with arbitrary comparator sequence), which would make the online bilevel optimization more widely applicable. Further, there exist some other regularities in the literature defined in terms of the functional variation {{cite:5cd431bd372434e879d36355619313c404ed93c9}} and the gradient variation {{cite:baf2220fc8bfd7a860adf0c437b5cf60368fec16}}, {{cite:c6248211d69f559e7c5f988f6e3e46ee52c4149a}}. For online bilevel optimization, providing regret bounds in terms of these variations, and studying whether multiple accesses of gradient/Hessian can improve the dynamic regret as measured by these regularities, are also important future works.
| d | 3f605d228e2e4d0b381bec041cd9488e |
In this section, we follow a linear evaluation protocol {{cite:1e9a08e422b764a2b0022e595bd2a468d2f4cf49}}, {{cite:6bb8060ef40b649c363b6449698ca16eabc05ba3}}, {{cite:e1a638642e77164e902f52cffa1863d2dc475ff2}}, meaning that the encoder weights are kept frozen and we only train a linear classifier / regressor on top. Linear evaluation provides a good indication of the quality of the semantic representations stored in the model encoder. As shown in tab:downstreamlinear, models trained with our method “ContIG” consistently outperform the baselines.
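A minimal sketch of the frozen-encoder probe in PyTorch follows; the stand-in backbone, loader, and dimensions are placeholders for the actual setup.

```python
import torch
import torch.nn as nn

# Stand-ins for the actual setup: a frozen pretrained backbone and a loader.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))
loader = [(torch.randn(16, 3, 32, 32), torch.randint(0, 10, (16,))) for _ in range(4)]
feat_dim, num_classes = 128, 10

for p in encoder.parameters():
    p.requires_grad = False          # encoder weights stay frozen
encoder.eval()

probe = nn.Linear(feat_dim, num_classes)  # the only trainable module
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)

for images, labels in loader:
    with torch.no_grad():
        feats = encoder(images)      # fixed representations
    loss = nn.functional.cross_entropy(probe(feats), labels)
    opt.zero_grad(); loss.backward(); opt.step()
```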
{{table:9d473a4d-8bff-49fb-96ca-09502462c6be}} | r | e40f9c2e17425109b8067bb2c2d62379 |
From the experiments we observe that noise can have an unexpected effect on the accuracy of the model: increasing noise does not always decrease accuracy. We believe this reflects the generalization {{cite:772345846bdda8a61ea16c1b2de5c8fc588c564e}} potential of HDFS, whereby it relies on the structure of the code rather than blindly following the labels in the training set – a positive sign, as it conversely shows the model relies less on memorizing {{cite:772345846bdda8a61ea16c1b2de5c8fc588c564e}} the data. However, there is a limit to how much noise the model can tolerate. For example, we observed that the accuracy of the model drops significantly when trained at a 100% noise level. Thus, further investigation of the memorization and generalization behaviour of the model is necessary. Lastly, the test set in our experiments was relatively small (134 samples); a larger variety of test samples could be tried to obtain more general results. It also remains to be seen whether the trends revealed in these experiments extend to other datasets.
| d | d92b1c23decf554bd6edfbb919b4bea3 |
A VQ-VAE by itself is not a generative model. However, its quantized index matrix
({{formula:e168d452-cbbd-4ec8-9c67-8a6af8ca7051}} ) makes it possible to sample new data with the help of a PixelCNN
{{cite:5fba3cae89842dddbfc587a1825afc68dc4ac47c}}. A PixelCNN is a well-known autoregressive model, mainly
used to generate new images that follow the training data distribution. The basic principle of
this model is that each pixel in an image has a probability distribution that depends on
all the pixels that came before it:
{{formula:5b9fcc06-ab2f-4db9-bfa4-c201702f5f13}}
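This factorization is typically enforced with masked convolutions. The sketch below shows one possible PyTorch implementation; the class name and the `'A'`/`'B'` mask convention follow the common PixelCNN recipe rather than any specific code from this work.

```python
import torch
import torch.nn as nn

class MaskedConv2d(nn.Conv2d):
    """Convolution masked so each output position only sees pixels above
    and to the left of it, enforcing the raster-scan factorization
    p(x) = prod_i p(x_i | x_1, ..., x_{i-1})."""
    def __init__(self, mask_type, *args, **kwargs):
        super().__init__(*args, **kwargs)
        kH, kW = self.kernel_size
        mask = torch.ones_like(self.weight)
        # Type 'A' also masks the centre pixel; type 'B' keeps it.
        mask[:, :, kH // 2, kW // 2 + (mask_type == 'B'):] = 0
        mask[:, :, kH // 2 + 1:] = 0
        self.register_buffer('mask', mask)

    def forward(self, x):
        self.weight.data *= self.mask   # re-apply causal mask each call
        return super().forward(x)

# Example: a first layer of type 'A', later layers would use type 'B'.
layer = MaskedConv2d('A', 1, 32, kernel_size=7, padding=3)
out = layer(torch.zeros(1, 1, 28, 28))
```

Stacking one type-'A' layer followed by type-'B' layers guarantees that the prediction for each index never sees itself or any pixel after it in raster order.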
| m | 7ab4e9f3e88497e6e28786ec2566ff8d |
The amplitudes-based construction of the radiated field (REF ), provided in this work, has implicitly used the on-shell condition for the outgoing massive particles {{formula:d3c0cfb3-e750-4d6e-9ef9-3d0ae5f8aa26}} , which discards terms linear in the velocities and higher in the scattering amplitude. For bounded orbits, however, such a condition is not present for the components and in principle should not be imposed in the computation of the radiated field. For scalar sources, these sub-leading-in-velocity contributions have no additional effect on the radiated field (we showed agreement with the classical computation for quasi-circular orbits). However, a deeper understanding of this matching needs to be provided, in addition to exploring how to fix a gauge for which the general orbit result (REF ) matches the classical result (REF ), as well as the inclusion of spin effects at sub-leading order in velocity. In addition, in the context of scattering amplitudes, higher orders in velocities are naturally included. For closed orbits, however, these corrections are consistent only – by virtue of the virial theorem – when higher orders in the gravitational constant {{formula:bfb1c41c-da93-4eaf-96a8-a6521c4c4eb8}} are also considered. For instance, at quadratic order in the BHs' velocities, the radiated field could contain contributions from both the tree-level and the one-loop 5-pt scattering amplitudes. One might wonder whether the amplitudes approach could reproduce the higher-order corrections to the energy flux for non-spinning binary black holes {{cite:6f7bf196ac2838a378a2c65471aa41a8fd012881}}.
| d | 3f5831b4d5feaf91c4b1da6d49b40385 |
see, for example, {{cite:887b40cf9bb318f66ceb2778cdcd5f61ce2b078e}} p.368. Based on this observation and
taking the limit in {{formula:a26c624a-8c24-4654-9acb-79ed0218e876}} , one can cover the case {{formula:474ee5f5-2d88-4349-8604-459ac367bc70}} and show
the global-in-time solvability of (REF ). This provides another proof of the
first part of Theorem REF , though the present article relies on
the method of energy inequality.
| m | ce5b69b76cdfefe75fe7caccd4e0cf37 |
We implement GCN and GAT using the PyG library {{cite:14a139254c731d3d28347dc99f9a720b951e6602}}
and LP based on its description. For other benchmarking methods, we
adopt their released code. All experimental settings are the same as
those specified in their original papers. We run each method on the
five datasets as described in Sec. REF .
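For reference, a standard two-layer GCN in PyG looks like the sketch below; the hidden size and dropout are illustrative choices, not necessarily those used in our runs.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class GCN(torch.nn.Module):
    """Two-layer GCN for node classification, in the style of the
    PyG examples."""
    def __init__(self, in_dim, hid_dim, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv2 = GCNConv(hid_dim, num_classes)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        x = F.dropout(x, p=0.5, training=self.training)
        return self.conv2(x, edge_index)  # per-node class logits
```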
| m | 22ea0d944c7971a1f2d48478f0d498b9 |
Finally, we address the impact of this effect on the Zemach radius extraction by the CREMA Collaboration {{cite:0cda31ddff2c270ba490d675b85de1e4201d0f74}}, {{cite:0c29bb2c75c2811dbd4ac944a77b71b853f2e60c}}, that measured the HFS of the {{formula:50712e41-ae3a-4d6b-b526-a099ea0f3470}} state, obtaining {{formula:a0d84552-ba64-4753-a7c8-cbd1a36161ca}} . Comparing to the theoretical results for the HFS, {{formula:0f858966-92b8-4c1d-ae67-b7ff8fec246d}} , see {{cite:12435bfcf51f56da61849bbdf3fe33517e5955bc}}, {{cite:cf93e438c530ef0a440cadb0a413e65a0876e940}}, {{cite:309131169c5dd4e636afb8877ebcc35f823ef2d9}}, {{cite:1d9a531edd6b1af6fe867acfa44de51de43553ee}} and Table 3 from Ref. {{cite:e16ec7087574d4d370468bfbed833c22cf94597b}}, they obtained {{formula:060d2b4d-3e08-4c22-8ebf-46c2f29de382}} {{cite:0c29bb2c75c2811dbd4ac944a77b71b853f2e60c}}. Incorporating the missing contributions from the axial vector mesons to the theoretical estimate in eq:HFSfinal together with the pseudoscalar contribution {{cite:2220fb5827db0488b2e72f732924b8931f0efbf9}}, {{formula:6a783dc8-8e12-48ad-9ca2-36a6233ce670}} , we obtain
{{formula:9b72d04e-667a-48dd-b7d0-22cb634172a4}}
| r | 72300d53835dd87de635a87b22d6f107 |
The combination of clustering techniques and CNNs has recently been successful on small data sets, going largely beyond the performance of traditional frameworks that perform clustering without updating features {{cite:943bc6d28a2a7bd4a9f1213b29b7a2b0036e2db9}}, {{cite:40acd93dac9baa197a63ca56567d8f085f97ad58}}, {{cite:e764ef732bab70dd7fb9aae42fa4835ed8590639}}, {{cite:dbe1f8efb560dd49e96a5b071486877fd61377a4}}. For large-scale data sets such as ImageNet {{cite:f5a429859ef9d70e7f899b2f365aeb799f7c5f93}}, DeepCluster {{cite:0a10fd55ee078ae7b0b4c24e253bdcb6802c0b1e}} provides an easily implemented solution, using {{formula:02a9c84e-4768-4b4e-ad6b-b72b8670c1ab}} -means to cluster the deep features and reassign new pseudo-labels to each image. This makes unsupervised learning of features on large-scale datasets possible. {{cite:bf737344b18a72cd1795782e7aa7a122490ac597}} solves the instability problem of DeepCluster by breaking the global clustering down into batch-wise deep feature clustering and label updates. These developments motivate our own work in unsupervised image segmentation, wherein we seek to integrate the domain insight of unsupervised superpixel-based sonar/SAS segmentation methods with the modeling power of deep learning.
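A schematic of one DeepCluster-style training step, as described above, might look as follows; the function and variable names are ours, and a faithful implementation would also re-initialize the classifier after each reassignment.

```python
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

def deepcluster_step(encoder, classifier, images, k, opt):
    """Cluster current deep features with k-means, treat the cluster
    assignments as pseudo-labels, and take one supervised step."""
    with torch.no_grad():
        feats = encoder(images).cpu().numpy()
    pseudo = KMeans(n_clusters=k, n_init=10).fit_predict(feats)
    labels = torch.as_tensor(pseudo, dtype=torch.long)
    loss = F.cross_entropy(classifier(encoder(images)), labels)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```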
| m | 53d7ec4e03ac209e86575cbbb5bd86e8 |
In Fig. REF , we show the effect of the load {{formula:617635a3-43ab-4992-9839-7a0a910de72b}} on the center cell's throughput {{formula:87e090b9-103e-4075-8516-35e05dddaa9b}} .
All the curves increase linearly up to a peak, which is the desired region of operation, and then drop quickly to zero as the system becomes interference limited.
All users' packets are successfully decoded in the linear region of increase, while at high {{formula:ee314f51-b209-4575-99f1-b728630a7d88}} , beyond the peak, the throughputs drop to zero.
For {{formula:2f2c8e89-9abc-4fd2-a3e6-d15679a2ca6e}} we see a 70% drop in the peak throughput from {{formula:6c0ab564-a95f-47b1-a01a-f2964cef919d}} at {{formula:30acd807-98c6-45e3-8ea5-f47ca21d5791}} for SC to {{formula:a07293b9-2bd6-424b-ade2-8fde94ee9f8a}} at {{formula:eb437ef9-16ae-42c6-a935-fd7697a33c0e}} for MC.
This is because users face a high degree of inter-cell interference in the MC setup, unlike the SC setup, especially at high {{formula:64aec69e-b47a-4c1c-9d64-25a3d830b900}} .
In the SC setup, the peak throughput reduces from {{formula:1107a8a0-25ca-46ea-b189-c842986926c9}} for {{formula:f73f0c39-075c-4c09-aa56-63c01974bf31}} to {{formula:3672df5c-4ad7-470d-ba11-dd004c082e01}} at {{formula:6b2c436a-c767-497b-a2b3-a7355efc1bdd}} for {{formula:5bba9baf-4671-4037-932d-57a403bf1017}} .
This trend is similar to the MC setup for which the peak throughputs are {{formula:12649256-3f9b-4ea1-bb4f-adf70c82a4c2}} for {{formula:72b7bb44-2552-45e7-bca6-be24a373f246}} , respectively.
This is because the system's interference suppression ability with MMSE combining reduces as we decrease {{formula:ffe2bc2f-2c39-4cc5-aa1b-3794658ed52b}} {{cite:dd390cc2b6e463b3119236898f2341ff7e3c5f34}}.
This holds true with {{formula:596745fe-8030-4c80-8079-19590f7a2b47}} also, which corresponds to a lower SINR threshold, and consequently higher {{formula:98e5f256-49c8-467b-bb56-3db552d108c1}} .
To summarize, at high {{formula:1d00807d-4457-49d9-9855-416e48295e54}} , there is a high degree of inter-cell interference which SC processing does not account for, resulting in a substantial drop in performance.
| r | 79b5dbe3c3e9b2b2518076123d71528f |
Note that the {{formula:5924b51f-3773-4772-840b-4e6df31b8114}} above is the AdS radius constant. The existence of pressure indicates the presence of its conjugate quantity, the thermodynamic volume {{formula:020033da-0ca1-4646-b67b-f22acaaed6d5}} {{cite:78d6feee5645ec3b30ab8ba9d9fdcb1ecbbbc627}}, {{cite:7a56b69ff5b3028d795931a98daa014282b633e0}}. The introduction of {{formula:3cb5ef52-2268-4e1a-adfa-077ab460fd60}} and {{formula:ccf69416-e32b-4b5e-9e2f-0180cafd421f}} in the first law of black hole mechanics leads to a more comprehensive study of the thermodynamic quantities of black holes; one of its most enterprising branches of study is now widely dubbed black hole chemistry {{cite:080591d01cdcc5d1a20d11f6a259468e283b9033}}, {{cite:8ba1fb164dff57356819d6d773338a0ca3300f7a}}.
| i | 43cc5388d85a9f04101cdc70437d8a55 |
Targeted maximum likelihood estimation (TMLE) has emerged as a flexible framework for estimating a variety of causal estimands {{cite:c6174dba53a52ed1f9180cfe93c5ae1165051424}}. Specifically, {{cite:36dc7a0a531518f7e3bf462de817b68505cfb7d3}} apply this framework to estimate {{formula:f4b8a7a7-63b2-45ff-bddb-a84160b2a149}} within the transportability setting described in Section REF . TMLE proceeds in an iterative manner, initially finding {{formula:31f49420-eb73-4b55-ace8-0443c1a5a86b}} and {{formula:c1899be7-e897-47a9-9a64-531d08ae1001}} using the independent sampling units {{formula:d76ec0a0-00e9-426d-8e18-c26a88510c26}} , which estimate {{formula:a04fdf67-565e-4f2b-a963-0c7700cd5d89}} and {{formula:85514411-0d80-4c14-890b-a11d7d2b4da0}} , respectively. If we were to stop here, we could formulate the G-computation approach for estimating {{formula:1e963cf2-7434-4bd3-b383-6ef648537286}} by solving for
{{formula:a055db43-a9fe-45a4-a7d7-c52b260b8ffc}}
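To make the G-computation step concrete, here is a hedged sketch for a transported average treatment effect; the variable names, the linear outcome model, and the binary-treatment setup are our illustrative assumptions, not the estimator of the cited work.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def g_computation_ate(X, A, Y, S):
    """Fit outcome regressions in the source population (S == 1) and
    average counterfactual predictions over the target (S == 0)."""
    src1, src0 = (S == 1) & (A == 1), (S == 1) & (A == 0)
    m1 = LinearRegression().fit(X[src1], Y[src1])
    m0 = LinearRegression().fit(X[src0], Y[src0])
    Xt = X[S == 0]                     # covariates of the target sample
    return float(np.mean(m1.predict(Xt) - m0.predict(Xt)))
```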
| m | 4f3c49ef3b2df6dff92948f2ab57c10e |
The solution to chemically reacting flows may contain shocks. Moreover, numerical schemes may yield non-physical numerical approximations, e.g. negative density and pressure, or mass fractions outside the interval {{formula:47b76b30-2c98-4681-b8cf-60b9c659c6c3}} . Such non-physical approximations may further lead to ill-posed problems and eventually to blow-up of the numerical simulations. Hence, constructing a bound-preserving scheme is essential. In this paper, we apply the discontinuous Galerkin (DG) method, which is high-order accurate and geometrically flexible. The method was first introduced in 1973 {{cite:080b4e226810484783dd5e33d9fd3deffaea8bf0}} for the neutron transport equation, a time-independent hyperbolic equation. Later, Cockburn and Shu extended the DG method to time-dependent problems such as nonlinear convection problems and the Euler equations. The framework was given in a series of papers {{cite:463ed7610fa59c328f450c89bad51fbedaa681b2}}, {{cite:4f8fd3abf18f3fbb32916c20710cd6c3cc0920a0}}, {{cite:3889e6302405199ec77f26ff8d1ad5e9e8994c23}}, {{cite:498e2dc266bb0d736cad0590bf76418930fad05f}}, {{cite:441d1d1a31666bb6c9cc238681e9f47aea255620}}, where the DG method was coupled with Runge-Kutta (RK) time integration along with TVB nonlinear limiters to achieve non-oscillatory properties for strong shocks. However, the TVB limiter is not sufficient to maintain the positivity of the numerical approximations. In {{cite:8e269c3ebfb7dea044bb3ae8de6c8b15c2267ae6}}, high order DG methods for two-dimensional gaseous detonations were constructed to preserve the positivity of density, pressure, and all the mass fractions. The idea is to apply first-order Euler forward time discretization and find a sufficient condition for the cell averages of the DG numerical approximations to be positive. Then a slope limiter is applied to construct new physically relevant numerical approximations, keeping the original cell averages. The time discretizations can be extended to high order strong-stability-preserving (SSP) RK or multi-step (MS) methods {{cite:cc4a00d9554d8cc8c54cfb3f3e787b14de685658}}, {{cite:d52abbb596ab488885f2a02028ce16741f0568b9}}, {{cite:fd9619d2ad0d8272633fb94268fbdec56582a46c}} since these are convex combinations of the Euler forward method. For our problem, we need to preserve not only the lower bound 0 but also the upper bound 1 for the mass fractions. The first work preserving the physical bounds of the mass fractions was given in {{cite:d656ed12061023c8f942e12c5b1385ad48912954}}, {{cite:275222b95e8b14670d8affe76e4e50727f0c2603}}, where compressible miscible displacements in porous media were discussed. Later, the idea was adapted to multi-species and multi-reaction detonations to construct high order DG schemes in {{cite:f00398a15a3e22c85552e732ada42b6e2b8f9bd7}}, {{cite:04914950f9030e325479aa88b25aebe7d7b5a4b8}}. The basic strategy is to apply the positivity-preserving techniques to each mass fraction {{formula:33110aeb-d367-473a-b502-e336788290f8}} and enforce {{formula:f6254052-5d42-4bcf-9a86-3985e3f6a15c}} to obtain physically relevant numerical approximations by using conservative time integrations. The extension to finite difference methods was also given in {{cite:ace31b2769363fc621ec14160fd808641d16ea0f}}.
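The scaling step described above (squeezing the polynomial toward the cell average while keeping that average) can be sketched in a few lines; the uniform-weight mean below is a placeholder for the proper quadrature-weighted cell average, which is assumed to already satisfy the bounds.

```python
import numpy as np

def scaling_limiter(u_nodes, lower=0.0, upper=1.0):
    """Squeeze the DG polynomial (sampled at quadrature nodes) toward
    the cell average until all node values lie in [lower, upper]."""
    u_bar = float(np.mean(u_nodes))   # stand-in for the true cell average
    m, M = u_nodes.min(), u_nodes.max()
    theta = 1.0
    if m < lower:
        theta = min(theta, (u_bar - lower) / (u_bar - m))
    if M > upper:
        theta = min(theta, (upper - u_bar) / (M - u_bar))
    return u_bar + theta * (u_nodes - u_bar)  # same average, bounded values
```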
| i | 51207d1f8c42e20eb25c0c9fd2f202c8 |
The strict convexity and differentiability of the regularized optimal transportation problem makes it possible to prove significantly more general results.
A central limit theorem for entropy regularized transportation costs, centered at the expectation of the empirical cost, was first obtained by {{cite:dd30379c6bc999809ca9c68a4a4bfb286522973f}} (see (REF ) below).
Generalizations and extensions for discrete measures have been proved by {{cite:99f6ae71d15968bee520668806cdec42b8e383e3}}, {{cite:7a49bceec7f33769e457b6ea07a3be938ba1f027}}.
A growing body of work investigates the properties of the entropy regularized optimal transport problem from the perspective of probability and analysis, including its asymptotic properties as {{formula:f7c8e0b7-ff22-4fff-9219-3899d2017636}} {{cite:7b6852193c4a22c02907246e8e057dcbe7be6e7b}}, {{cite:5bd6a4b02c36031a10797641907a74f460578759}}, {{cite:a82f4f3c98336924a3d96f0e96f8179c457623ef}}, {{cite:9693bc44bfb4d9fafa784ce4495446244a3c1854}}, {{cite:daa743e2ffd254fb214b1d5cab45320b4e733160}}, {{cite:e9218814ded0d685848f7a76ceefb8c426812388}}, {{cite:90444af5aefde8cd5da05e12015971638716f570}}, opening the door to further statistical applications of entropy regularized transport.
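For readers less familiar with the object under study, the entropy regularized transport cost between two discrete distributions is typically computed with Sinkhorn iterations; the following is a standard textbook sketch, with the regularization strength `eps` chosen arbitrarily.

```python
import numpy as np

def sinkhorn_cost(a, b, C, eps=0.1, n_iter=500):
    """Entropy regularized OT cost between histograms a and b with
    cost matrix C, via the standard alternating dual scalings."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]   # regularized transport plan
    return float(np.sum(P * C))
```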
| i | 82451fba26d5415bd8b52a9e9bf91951 |
We showed that using linear self-attention mechanisms such as Performer and Nyströmformer in place of a vanilla attention mechanism with quadratic complexity in vision transformers can overcome a significant computational bottleneck that limits the application of transformers to long sequences formed by image pixels. More importantly, linear attention mechanisms in ViX can achieve image classification accuracy comparable to ViT while using far fewer computational resources, GPU RAM, and storage memory. This result was expected, as linear attention mechanisms such as Nyströmformer and Performer were able to reach the performance of the vanilla transformer using half the GPU resources in LRA benchmark tasks, which included image classification {{cite:36c0f44527efb915e988bd8b002f89b86580112f}}, {{cite:4e631ff04acb8dc846e472fb479fa72f34618108}}. Our experiments confirm that replacing quadratic attention with these linear attention mechanisms in ViT architectures also leads to a significant reduction in GPU RAM and computational costs without deteriorating performance in image classification. This will enable practitioners with resource constraints in terms of GPU size (i.e., RAM and cores) to use transformer architectures for vision applications.
| d | 06c70064da918389ac6d731bb9656e91 |
Deep learning has shown unprecedented success in numerous domains {{cite:d85a268cf2441903d09dd703c1ac5575951e81ef}}, {{cite:7f49e174f723b11103afb9028f38a519ecbece5e}}, {{cite:f937739c1bf6e5ff1343524f608eeb8e5de5a6a5}}, {{cite:24b11c3d3949c0a1984b842cec21d5761bdaf1e3}}, {{cite:c5ff3519964d89d5138629df399dc73e12a1ad28}}, {{cite:c04e54abf8db4339150ec13d6c23fe6ea3873521}}, and robustness plays a key role in the success of neural networks. When we seek robustness as a general property of a model, we would like the model prediction not to change for small perturbations of the inputs. However, such invariance to small perturbations can be detrimental in some cases. As an extreme example, a small perturbation to the input can change the human-perceived class label, yet the model is insensitive to this change {{cite:8f19c1ec179f3616edf046c119db139ffea2205f}}. In this paper, we focus on balancing this trade-off between general robustness and sensitivity by developing a contrastive learning method. Contrastive learning is commonly used to learn visual representations {{cite:9250f6b0f0438bafa5dc46a026acb88dd9476bb8}}, {{cite:4fad356d50b8f23cdd68511d9406dabfe0026c9e}}, {{cite:3372b40464067239e65cbbfbb103963a63728a98}}, {{cite:bf1871a278dc8e924ce84bf304820012753434ee}}, {{cite:50776b2062f6b7ea22a8e5f397dacb7a3f0bbcd8}}. Our goal is to promote change in model prediction for certain perturbations and inhibit the change for others. In this work we address only robustness to natural (non-adversarial) perturbations; we do not attempt to improve robustness to carefully designed adversarial perturbations {{cite:c70c57aeed8b929e4d9c94cad985e8fb2456fc3a}}.
| i | 79505329d3e8bc8eea0d58670361e670 |
Remark 2
The fact that the acim {{formula:8ea27fbe-201d-4cca-a193-92ab73b001d8}} appearing in Theorem REF is unique is a trivial consequence of the fact that any two acims satisfying the hypotheses of Theorem REF must coincide. Indeed, to see that this is the case, suppose there exist two such measures. Then by part (ii) of Proposition REF , both are equivalent to {{formula:4f85b3e5-1c81-484e-a559-1ad7a5fc5a12}} .
By assumption, both are mixing with respect to {{formula:1a0162eb-3e5c-4c08-a00c-69ea392bf0e5}} and hence ergodic. It thus follows (see {{cite:5866e953a7ac323f67960ef4b0929d77b6fd8a72}}) that the two measures are equal.
| r | 8ba7187e6475bb090e0bcfe6fba2430e |
In order to check our results, we have derived new semi-classical relations analogous to known semi-classical relations in the literature (sometimes also called consistency relations, soft theorems, etc.). Some well-known examples are the Maldacena consistency relations {{cite:29ddc7ecc33e67b96b7833a481b631dd06db2284}} for the three-point correlation functions, the SSV relations {{cite:1ebaf2c358b65c84588975ba6466330cbb687fa7}} for the four-point functions, and the GS relations for the infrared part of loop diagrams {{cite:f6b269a5e054f86ee4ca97e871dd21b1b8c3c457}}. These relations are generally valid for curvature perturbation modes and graviton modes, but they are not directly translatable to the present case. In the minimally coupled case, the gauge field sector is invariant under conformal transformations and therefore only feels the expansion of spacetime through a trivial conformal rescaling of the flat spacetime results.
Thus, it is evident that the non-trivial contribution to the correlation functions arises due to the time dependence of the non-minimal coupling that breaks the conformal invariance. Therefore all the correlation functions computed here are proportional to the time derivative of the non-minimal coupling function. However, a suitable redefinition of the pump field in terms of the non-minimal coupling function and a gauge field redefinition allows us to write the action for the polarization modes of the gauge fields in the identical form to a scalar field in a fictitious expanding universe, with an expansion given by the time dependence of the non-minimal coupling.
For a non-minimal coupling corresponding to a fictitious inflationary universe, we show that our new semi-classical relations correctly recover the squeezed limit of the full in-in calculations. We checked the agreement with our new consistency relations both for the vector field itself, and also in terms of the corresponding electric and magnetic fields.
| d | 62c016a968275bb468f992a9cde37987 |
{{formula:4ce5704b-cf5c-4d3a-b501-e9f595c7228f}}
OPENet-fast achieves accuracy comparable to the optimization-based methods, while OPENet surpasses them significantly. For areas with poor texture or severe occlusion, our method performs better.
{{formula:9f2dd68d-18cb-493f-afaf-7cf7bbf208c3}}
Compared to the previous unsupervised methods {{cite:d18fd36429c4d96f597c31c5b4f3a1d3b32e8a58}}, {{cite:5ef3a6a86fb5904100c3c4fc0e578d33be327196}}, {{cite:0945ab2547021ddf02d63a05b9ed98e06223a7e4}}, ours handles most occlusion cases well and generates globally smooth disparity, thus greatly surpassing them.
{{formula:24a9d2a6-f864-4b41-950f-be248df31595}}
Supervised methods achieve better performance on the 4D Light Field Benchmark {{cite:8dc089844ea3d355b24437cfdb35801c20de946e}} than ours. However, significant performance degradation occurs when generalizing them to other datasets (right in synthcross and newdata). The performance of our method is much more robust across different datasets.
| m | 1fefa42a260293ee6a1549ac7b3c1df8 |
As the approximation ratio of {{formula:d7c4296a-08dd-4698-82af-0c22a6237349}} depends on the values of {{formula:e26ca354-5b2d-4e80-9a70-cf6ebe414d82}} and {{formula:f4c14c58-b4c7-4453-891c-5c470f9f5163}} , we next discuss some practical outer constraints under which {{formula:06494151-16d1-4a09-b59f-89243c20d00f}} and {{formula:90927f2c-fd51-4fab-95fc-3edaefbb3e3f}} are well defined. The authors of {{cite:9f19c2549362517ecd2607c80e3d37901dbd3a98}} present monotone {{formula:a9627696-414a-4e29-998c-717a86cb6baf}} -balanced CRSs for a wide range of practical constraints, including (multiple) matroid constraints, knapsack constraints, and their intersections. If {{formula:0d6f8840-e80c-42e0-b3b9-eff4a9b8b251}} is the intersection of a fixed number of knapsack constraints, there exists a {{formula:9fb89d27-4354-470e-b4b5-e15a6ddabe60}} -balanced CRS. If {{formula:2fdaedd9-35c4-4045-a174-422376f76390}} is induced by a matroid constraint, there exists a {{formula:c6fd3e8f-9d04-43a1-8282-54ea5b8dee63}} -balanced CRS for any {{formula:9e0b620b-70ba-44fd-98a8-5f85921034e8}} . We can use these results as subroutines in Algorithm REF to handle a variety of outer constraints.
| d | 2a2fd72fb8e470e37118c28562972bf3 |
Consider the “Sudakov” analog of the Gaussian-based parameter {{formula:095bdf7e-6895-42e7-aa75-1ab07febf82f}} : recall that by Sudakov's inequality (see, for example, {{cite:3dc4c7931b58c4eeb5b4e9b205ef5e33c2c43859}})
{{formula:e1b8a98b-5359-41d0-af4d-56cb466e8ece}}
| i | c780bf1a23b2fc6425deb7236dca94cf |
At the redshift of interest in this work, the escape fraction of Lyman
continuum photons is found to vary over a range of {{formula:2dfa7eee-b4ba-4242-b2f9-cb47cafd1762}} with an average value of {{formula:68db9513-fc7e-44ac-93a1-214c7778f4ba}} , as
inferred by the Keck Lyman Continuum Spectroscopic Survey of
star-forming galaxies at {{formula:76e09cff-e886-4f9c-b69e-be567a8cad31}}
{{cite:c197c1d6bb62a98f47c4f555e2c08d1ec6a6d740}}. We further assume {{formula:e506ba52-e7c1-4b4e-bfee-3719b20b374f}} , corresponding to the average metallicity, i.e. log{{formula:b5ff62ff-dca6-4191-bd89-b197fd9ac5fe}} , of high redshift DLA absorbers and a Salpeter initial mass
function with {{formula:69930256-e367-4c94-b379-d1331a7a6e69}} , as given in {{cite:bd60b0584554b9c2d53a5d4935be6b6788ed9eb7}}. The observed Ly{{formula:4c5709a8-d2af-4873-b321-929db61279ba}} luminosity
also depends on the escape fraction ({{formula:d4cb234d-4bf1-4dbf-bcf4-fc7d306f3eeb}} ) of Ly{{formula:51c1ff66-a7ef-492a-a357-27d4712d5847}} photons, and it is related to the emitted Ly{{formula:3de652e5-a0ee-4a38-8dea-e04235084931}} luminosity ({{formula:fcb9727d-e875-4f23-9a46-bd41ab30cf9a}} ) as {{formula:a202f14f-ef51-4245-bd15-7d0b18cf1724}} . The Ly{{formula:4a389563-9671-4847-bc25-f8bb3cdb1eb7}} escape fraction increases smoothly and
monotonically out to {{formula:789e43f3-5cdc-4b67-9910-a302b1143e35}} and strongly depends on the dust
content {{cite:cfac40927867586db8eec7194826f94f90905cc3}}. At the redshift of our
interest, {{formula:a1c5390d-8b7b-429d-9258-2ebb5ee9380f}} , as estimated
for high-redshift ({{formula:999b3b08-ceec-4438-8f46-a791878b996b}} ) star-forming galaxies by
{{cite:42e6b157081bac5a670fea03f09a67b7398d5dbf}}, {{cite:cfac40927867586db8eec7194826f94f90905cc3}}.
| d | 2c59c6287235708cf52dba40f7624e60 |
We select the best results from {{cite:4f3f0c09cf3d49059ffd0162f8286c4a14277099}} for comparison – for each testing subset, we pick the best accuracy over 7 methods and 3 different architectures including 4-layer ConvNet, Wide ResNet, and ResNet-18. As shown in Table REF , our simple baselines clearly outperform the best results from {{cite:4f3f0c09cf3d49059ffd0162f8286c4a14277099}} on 9 out of 10 testing datasets, often by a large margin. Our baseline method using LR outperforms previous best results by more than {{formula:2f2dc779-bff8-4021-83f7-1800338bb518}} on average. Also, self-distillation improves max(LR, SVM) in 7 out of the 10 testing subsets. Moreover, we notice empirically that logistic regression (LR) performs better than linear SVM.
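The two baselines compared above amount to fitting standard linear classifiers on frozen features; a sketch with synthetic stand-in features is shown below.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

# Synthetic stand-ins for pre-extracted embeddings of a testing subset.
rng = np.random.default_rng(0)
train_feats, train_labels = rng.normal(size=(100, 64)), rng.integers(0, 5, 100)
test_feats, test_labels = rng.normal(size=(30, 64)), rng.integers(0, 5, 30)

lr = LogisticRegression(max_iter=1000).fit(train_feats, train_labels)
svm = LinearSVC().fit(train_feats, train_labels)
# The max(LR, SVM) baseline keeps the better of the two accuracies.
print(max(lr.score(test_feats, test_labels), svm.score(test_feats, test_labels)))
```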
| r | 8e4059ef0ab94fadc3d4d961ff5131b1 |
In sum, the CFB-A yields two key implications for the experimental
site. First, as implied by the increase in the cHPR performance metric,
which focuses on purchases within the first 100 products, the CFB-A
recommends preferred products earlier in the ordered set of available
products relative to the other extant methods (i.e., the first 100
products contain more consumer matches). This result is also consistent
with the finding that the CFB-A outperforms other methods under the
NDCG metric ({{cite:92982164d19c4c4531160cf4e15a56c8430c83b7}}), which accounts for
product rank and favors recommendation engines that place chosen products
earlier in the list. Second, because more products are sold in the
experimental grocery site while the grocer's revenue remains constant,
the CFB-A tends to recommend less expensive products, which could
limit the revenue implications of the recommender system in retail
settings.
| r | 218edb01ad0e9f37aa783d0ad1727926 |
Table REF shows the comparison between our method and the baselines. It also compares against the methods submitted to this year's challenge. While our method surpasses all the baselines on nDCG, it falls short on mAP. The MI-MM approach projects both modalities onto a shared action space using linear layers via a max-margin loss, and is a simplified version of {{cite:e3adbc378f8039fe3e9e2f7032ced9722992d92e}}. The JPoSE approach {{cite:6da4f899775bdc1d8568c4ef62a026138c28f73a}} uses a triplet loss to separate captions into verb and noun spaces; JPoSE* refers to our implementation. DCRL {{cite:b4b0fa4fe49347aa0cc9b422d03cd277ea3a78fa}} considers both inter-modal and intra-modal constraints at the same time to retain both cross-modal semantic similarity and modality-specific consistency in the embedding space.
| r | 271af3608170fc4c55b4dfba08aa21fb |
We study the butterfly effect for rotating geometries in {{formula:6fc5f4f9-441f-4b31-b08a-cadbc7ceb869}} by systematically computing the rate of disruption of mutual information at very late times due to in-falling rotating shockwaves with angular momenta per unit energy {{formula:73b03916-4889-4ca1-a32b-045d902fcab4}} . Like the analysis in {{cite:5947cec5b2b7d52b77068c6b260b74aa1f8d1de5}}, we find that this rate is controlled by the blueshift suffered by the in-falling shockwave, which is given by {{formula:3c00c23e-6144-4a5a-bdf3-00b4d331030c}} (REF ) and which for angular momentum {{formula:173d14b5-938f-44e6-9200-3638d7f3c8f3}} can be greater than the temperature of the black hole {{formula:8f8f5195-8b34-4acb-ab41-5c8213c74ee7}} . We also find that the rate of growth of the wormhole {{formula:fede937e-c550-422e-a576-84978b674d23}} , the HRT surface connecting the two boundaries, in response to the shockwave can be {{formula:a64acb44-fbea-4d47-a69c-41c8d2032c59}} , which can survive the extremal limit for particular values of the angular momentum {{formula:0922dbc1-e7b0-46d3-92fe-c4a472449b7f}} Fig(REF ). This is in contrast with similar analyses in the static case or for the case with {{formula:51c6cb62-86d3-45e2-a6d5-5bd9f536f170}} , where this rate drops to zero and is always bounded by the temperature of the black hole. We also find that {{formula:01d47221-84ac-41cf-aa52-6abc5708b4af}} can easily exceed the temperature of the black hole for {{formula:92194297-0eea-404d-bc8b-b617e44fb8ba}} in sufficiently non-extremal geometries. This rate is at best bounded by the blueshift {{formula:cf5d7d6f-962c-4c2f-ad3b-8a69306c71f7}} even when it survives the extremal limit. The scrambling time also behaves as {{formula:e6680643-ddca-4020-a1c4-009eddf48b6c}} in such cases.
We also find terms that increase the scrambling time but these terms do not scale like the black hole entropy for large black holes. We find one such term which increases the scrambling time to be {{formula:c1c0113e-45ab-4ba6-bb67-e158e022f04b}} which expressly depends on the angular momentum {{formula:0b1e6fcc-db18-4d1c-8703-b375c05c9415}} of the shockwave. Such a delay has recently been observed for the case of charged shockwaves in RN-{{formula:436c5541-a08b-4775-abad-86c2af8893d5}} {{cite:b2228ac4328130f9c25e8146e7d70a481671a3ef}}.
Interestingly, for near-extremal black holes, as {{formula:52964eb8-3ed6-4617-ba19-99179b98d974}} survives this limit for values of {{formula:d3388044-2e5b-44c9-8e56-a9ceaf0cf758}} which tend to {{formula:7142d04d-1351-41a9-a6af-e78e74a15172}} , the scrambling time is still proportional to the extremal degrees of freedom of the Kerr geometry {{formula:bde7d261-fbbc-4d95-a682-f30690375ee3}} {{formula:d02c3391-81b6-4f2c-9043-83be58235466}} (REF ). This particular feature is very interesting given that string theory correctly predicts the microscopic degeneracy of extremal {{cite:1cadb13c1477ee79e511e24cee3f860bfdf9c7e3}} and near-extremal {{cite:d0790b8786814aa3b4eb9b3b474bfbc5e205999b}} supersymmetric black holes.
| d | e6ad97a395facf394cd1069393812ab9 |
In this paper, we have considered the classical double copy
(specifically the Weyl double copy of ref. {{cite:e78a197a7ba226ae7738f59980caa67dd3e786ed}}), and
how one may formulate this in twistor space. This was already
considered in refs. {{cite:6c8b5360e7ce201795ddfeca1a5f0606690a3c21}}, {{cite:6fe0b53c0c3314e1e455a17e9a9ee4c0c34ea1fb}}, which showed
that a certain product of twistor functions can be used to derive the
Weyl double copy in position space. However, this creates a puzzle, in
that one cannot ordinarily multiply twistor functions together in the
Penrose transform that turns twistor quantities into spacetime
fields. The twistorial quantities associated with any spacetime field
can be subjected to equivalence transformations that do not affect the
latter, such that they are representatives of cohomology classes. This
casts doubt on whether the double copy can be furnished with a
genuinely twistorial interpretation, or whether the twistor approach
acts merely as a useful book-keeping device that can be used to
efficiently generate instances of the classical double
copy. Furthermore, refs. {{cite:6c8b5360e7ce201795ddfeca1a5f0606690a3c21}}, {{cite:6fe0b53c0c3314e1e455a17e9a9ee4c0c34ea1fb}} used the
language of cohomology groups, and if the twistor picture
makes sense then it must also be possible to instead use the more
widely used framework of Dolbeault cohomology.
| d | 14ba6f10a54bc69cbbc2f8cc8575511b |
The most common theoretical approach to studying the correlated states is to start with a continuum effective Hamiltonian, often referred to as the Bistritzer-MacDonald (BM) model {{cite:de0c0d3a48cacdbc4921d1790d9e22fd58a157fd}}, which gives isolated narrow bands for a range of near-magic twist angles, and then to project the Coulomb interaction onto the wavefunctions of the narrow bands (sometimes including a few remote bands as well).
The BM model {{cite:de0c0d3a48cacdbc4921d1790d9e22fd58a157fd}} has achieved success in many respects. It correctly predicts the first magic angle where the bands around the charge neutrality point (CNP) become extremely narrow and captures their band topology.
For relaxed structures, however, the BM model – which was originally derived for a rigid twist – does not include terms which are nominally of the same order in the gradient expansion as the ones which are kept, such as the pseudo-magnetic fields induced by the {{formula:5b1b76e9-eb4f-4ede-9073-f014fb524c78}} symmetric strain from lattice relaxation. Moreover, next-order gradient terms are needed to accurately capture the narrow bands near the magic angle due to the anomalously small non-interacting bandwidth obtained without such terms {{cite:44331249397dbf1e6fe6a95a55fac1e7577667a5}}. In addition, the narrow band wavefunctions of the BM model are nearly particle-hole (p-h) symmetric {{cite:9df58264666f1d2856f0f12892813262cbd537c6}}. The presence of the p-h symmetry is known to play an important role in selecting the correlated ground states {{cite:8321bf233bfd70d90225100511c6dc4bf7d4297c}}, {{cite:7f5d8d347fe378ffc048f96e3a46b7cdb2882571}}, {{cite:9df58264666f1d2856f0f12892813262cbd537c6}}, {{cite:d5eddeacff905f690ed75bc7d82ed2dbc083ba5e}}. Experimentally, it is also seen to be broken at low temperature, in that various correlated states appear more stable on either the hole or the electron side of the CNP. This motivates the development of a more accurate low-energy effective continuum model for TBG.
| i | b2c0b6064f84081ab2f61fa6328c954e |
In our application, smoothing provided a relatively small loss in stated precision compared to complete pooling and direct estimation, smaller than what could be expected from dividing the data into non-overlapping aggregates, e.g. 5-year age groups. As such, the smooth estimates allowed for a visualisation that revealed patterns and the existence of heterogeneity of effects. This approach is semi-parametric and can be compared with recent developments in non-parametric methods such as causal forests {{cite:5692525b789778c89b4870e32b5d955454b4b10c}}, which incorporate recursive partitioning into `honest' trees {{cite:da30a3e068b2c83830514e5242d5c26b044b21a6}}. Honest trees and causal forests are modifications of classification and regression trees and of random forests, respectively. These algorithms (repeatedly) partition the data into nodes that are homogeneous with respect to the treatment, the outcome or the treatment effect. By design each node is a convex region and can be seen as a small domain with the boundaries automatically chosen by the algorithm. In order to obtain unbiased (`honest') estimates of the effect, the data must be split into a training set to grow the tree and a separate estimation dataset to estimate the treatment effect; this is different from using cross-validation to evaluate the accuracy of the procedure or to tune parameters to optimise the procedure.
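The honest-splitting idea can be illustrated with a simplified sketch; the names are ours, and the plain regression tree grown on the outcome is a simplification (a genuine causal tree instead splits on effect heterogeneity).

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def honest_tree_effects(X, T, Y, min_leaf=50, seed=0):
    """Grow the tree on one half of the data and estimate leaf-wise
    treatment effects on the other half, so no outcome is used for
    both splitting and estimation."""
    rng = np.random.default_rng(seed)
    grow = rng.random(len(Y)) < 0.5
    tree = DecisionTreeRegressor(min_samples_leaf=min_leaf).fit(X[grow], Y[grow])
    leaves = tree.apply(X[~grow])            # route estimation half to leaves
    Tb, Yb = T[~grow], Y[~grow]
    effects = {}
    for leaf in np.unique(leaves):
        sel = leaves == leaf
        treated, control = Yb[sel & (Tb == 1)], Yb[sel & (Tb == 0)]
        if len(treated) and len(control):    # skip leaves missing a group
            effects[leaf] = treated.mean() - control.mean()
    return tree, effects
```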
| d | 7f8d277c2011811cd6c41bde2a9b1fde |
Results
are shown in Table REF .
We compare to variants of USL with {{formula:cebbc4b6-8a50-4f1d-b304-3bcca8469784}} views and ablate RoIMap and the distance transform loss, {{formula:222665d6-4451-42cf-ad85-ffacc61778f8}} .
We report a random baseline, which predicts each object as a sphere with random depth {{formula:19ea6713-f09c-414e-8ed4-152659439b7f}} and depth extent {{formula:f53043d2-6972-4946-a645-e98cb2102a10}} ,
and a fixed-depth baseline, which predicts each object as a sphere with {{formula:7ef4561e-36d1-465a-8a2c-e6364ff73a93}} .
We also compare to Mesh R-CNN {{cite:006e88abe07c368090666f213c50ffbac1d7402e}} trained on Scene-Shapes with full 3D supervision.
| r | 8964ddcfffe2c55602cfa9865dbf6605 |
Fixed Regions (DAVIS): The literature considers removal of content within a fixed region of video in scenes with dynamic objects. Our formulation (REF ) can, in principle, be applied to this case (Figure REF ). However, high accuracy requires additional occlusion reasoning in optical flow, which will be the subject of future work; the focus of the current paper is to illustrate the benefits of the scene template. Even with our current methodology, our results are comparable to {{cite:cac2c0e4c0f7591e7e78ce65853f1ee1d562d9c7}}, the state of the art for fixed regions: PSNR 28.02 vs 28.20, SSIM 0.959 vs 0.957 on DAVIS, following the experimental setup of {{cite:cac2c0e4c0f7591e7e78ce65853f1ee1d562d9c7}}. We provide detailed discussion in the supplementary materials.
| r | cbe37e4cf497c1d06371e026b66e223e |
Quantum teleportation is a fundamental protocol in quantum information science that has no classical analogue {{cite:d29c96d184245d6017a36a263239e6d9a0680f08}} (see also {{cite:19455258d9d8be8e6881647085f5869c4eeddb6b}}).
It consists of transmitting an unknown quantum state from one place to another by using shared entanglement and local operations and classical communication (LOCC).
Quantum teleportation plays an important role in quantum technologies such as quantum information processing protocols {{cite:d1e74fd1ee0417667b69893f971f421c692b5346}}, quantum computing {{cite:e838a785e59fdfcd9fda186ce7f7533f09b5fe7a}}, {{cite:f5023625ebedab366874e77fe2add4f539109de6}}, and quantum networks {{cite:a0643239fd2a29bcf2059e51b9c29ee640949aee}}.
Since the invention of this protocol, various modifications have been proposed, such as probabilistic teleportation {{cite:7f9173565bd554e8905f79be3e74965b5b9dd699}}, {{cite:f5aa5347db9b58b91626721dcf691ccc0536741d}}, {{cite:fcf07fadf1ff1e2bb905c86d0ebbf1aec926f6e8}}, controlled teleportation {{cite:018fd1b45f9f1d4d87d3c71bd8fbf648d55af401}}, {{cite:7e9ca932fca15de661490fd2bf49eaf3a0098e40}}, {{cite:a8fb698dc15ece6c3bed5b708b0f6e2062e2fcb9}}, {{cite:18888ebea9c017f23c1a0a026cfb6759d3721bd4}}, and bidirectional teleportation {{cite:49d2b82ad1aaad25cdeb0ccd98e2a3e4b925f544}}, {{cite:068831ec97a95f6f50b2270497fb285c4b797f96}}, {{cite:ebdc7aa3c1a4e9160594597c6aaaaff9bffc7893}}, {{cite:5124d689b16089bc018e9694d2d400ee38ea9854}}, {{cite:1fb34ac60c5694780ca11fde10a12a6ed89fa3bc}}, {{cite:dcb26dabdab192eda6eb5858ee45ca8b4622ae6b}}, {{cite:fd23bb1de85a068faee4e1e72d226b92913c837a}}, {{cite:4fbfd54b0d0090f4bcc533279a8dee62669df771}}, {{cite:58d7c38a0899ab404976e044893870cae6278d97}}.
There has also been significant progress in implementing quantum teleportation in laboratories around the world in the last three decades {{cite:a9ee7a696838e271ccbddb4c95a30fd5487de19e}}.
Several experiments have implemented the teleportation protocol for simple quantum systems {{cite:9d9e96a6007c0c55182e36872c76465adbc4d5ea}}, {{cite:47a24b81e06a119889d7ac885570f5ab2e23219a}}, {{cite:51e72872d5b2b462fcafdbda76f3e982557af02a}}, {{cite:562471bf40dad860a667440cc936b7d2522e98a0}}, {{cite:3714fc2d8f26f6a306e193e8afdd532d2823510d}}, {{cite:2369631a64615295daa5113e049720ed8b84e4a9}}, and attempts are being made to extend them to more complex quantum systems {{cite:a208d34c20fd7aefb061f9eb8780701d71ff42d9}}, {{cite:5ee956526e53ddff471c10f008d06c5142a97cc4}}, {{cite:fe932285b8729551cdad1507c9137617ec1ad433}}, {{cite:a7512b8f1fcc1dab67d966cdee3e7c312f57ca07}}.
| i | 5545de76ee406b57887c1c4401b551e5 |
In this paper, we have investigated the holographic BCFT with CBC.
The CBC is an interesting BC of gravity, which is elliptic and leads
to a well-defined perturbation theory of `quantum' gravity
{{cite:71f6914d1a1214bbd2507ced20c2dec6506cc871}}.
For simplicity, we have focused on classical gravity.
We derived the massive gravitational modes on the end-of-the-world brane
{{formula:2a6c2263-1894-4e54-a8cb-180770b82c4c}} and do not discuss quantization in this paper.
Compared
with NBC and DBC, CBC is more subtle. For the
simplest perturbations with homogeneous extrinsic curvature
around the AdS background, we showed that
CBC does not impose any constraint on the
integration constants of the solution or on the central charges of
the Weyl anomaly.
Nevertheless, the central charges of the Weyl anomaly can be fixed
holographically if one considers more general metric perturbations around
an AdS or a black string background.
In this way, we fix the central charges of Weyl anomaly for
holographic BCFT with
CBC. Interestingly, we find that the central charges are the same for
CBC and DBC, although they are different BCs in general, which yield
different locations of the end-of-the-world brane. We also study the
gravitational dynamics on the end-of-the-world brane.
Remarkably, we find that there are non-trivial
gravitational dynamics arising from the extrinsic curvatures, which obey the EOM
of massive gravity at linear order in perturbations on a Dirichlet or
conformal brane.
Finally, we analyze the mass spectrum and find that there is no
massless mode on the Dirichlet and conformal branes.
| d | 313a563f6a71c74f4c1e8815f3bd244f |
The problem of finding adversarial scenarios (a.k.a. falsification) for CPS has attracted much attention in the past two decades.
Many approaches and tools have been developed using stochastic search and optimization techniques, e.g., random tree search {{cite:d963490c6cb46447a235875af8298915a72b6202}}, {{cite:86b544aef90f9d3c355e2e102051c5a46a0d1b34}}, {{cite:d2dbc2e3c933cfc7641cc14a3e79948eeddc75d2}}, {{cite:2ee20d5b716f5a419d141efe0ad97b7ce531fd6e}}, {{cite:e1c1d20d4b554761d7ba0041e002efdabcd0da26}}, specification-guided stochastic sampling {{cite:d563adc7bdf21f386ce9a808809738a3399e1d0e}}, Tabu search {{cite:b4e170ce2c6a95186964a6928a926341ea20788c}}, Bayesian optimization {{cite:635dc1ed300f3f726e93edd53d8a87587e42979e}}, {{cite:0ad7548e36a04e643493166e5ff9b4a5ea6990ea}}, and nonlinear simplex optimization {{cite:c3fd27ca6aced38f22fb5f49870fa884dea038ae}}.
These works all treat the system under test as a black-box to avoid its high complexity, and search for a specification violation at the system level.
Notably, some recent works {{cite:f934796547e2c5a24718aced36375a05c5a4fe51}}, {{cite:ddde519a910df7f9cbcc0752b5ddfa85abaaa538}}, {{cite:860975c286590569f700a588524aeb3111d2852c}} study systems with vision/learning-based controllers in the loop.
In particular, {{cite:860975c286590569f700a588524aeb3111d2852c}} searches for a physical environment configuration (encoded by {{formula:169c2faa-3788-4967-8af6-b4f727e6cb0c}} ) at the system level, whereas {{cite:f934796547e2c5a24718aced36375a05c5a4fe51}} uses a decomposition idea and focuses on the impact of imperfect information (encoded by {{formula:f20f4eaa-c991-4e64-aec0-a90de6e5bb4d}} ) on the learning component in the cyber layer, which is equally important.
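At its simplest, black-box falsification of the kind surveyed above is a search loop over environment or input configurations; the sketch below is generic, with `simulate`, `robustness`, and `sample_env` standing in for a concrete system under test and an STL robustness monitor.

```python
import numpy as np

def falsify(simulate, robustness, sample_env, budget=1000, seed=0):
    """Sample configurations, simulate the closed-loop system, and stop
    at the first one whose specification robustness is negative."""
    rng = np.random.default_rng(seed)
    best_rho, best_env = np.inf, None
    for _ in range(budget):
        env = sample_env(rng)
        rho = robustness(simulate(env))   # signed distance to violation
        if rho < best_rho:
            best_rho, best_env = rho, env
        if rho < 0:                       # counterexample found
            return env
    return best_env                       # closest-to-violating configuration
```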
| i | 8cc101be167181f473de4de0b9ebdf2a |
with {{formula:4b3a8a0f-a1dd-477a-9422-99f4f647d4aa}} , {{formula:7d68791d-c339-41de-a026-b596da421d17}} , {{formula:4727b268-0d43-4698-b452-46dcc9d40a04}} , {{formula:9ac0d234-3b18-46de-bae2-d715aa744617}} , {{formula:7b7cfaf7-51a1-45cc-a88d-f24f42788e0a}} .
Solutions for problem (REF ) are identical to the ones for the following problem (under some assumptions {{cite:cbd23fcf00f48cc435e814cab79cd6a97355f6e8}})
{{formula:1ee27bf7-5709-4882-93d4-06561c7778e6}}
| m | 2ae6c654f6544050a60ef93b6f1d208b |
Having the spacetime action of gravHS-SDYM, we also show that its twistor action can be formulated as a gravitational {{formula:49f9e988-7133-4900-b999-d2123a5fd9bb}} theory on the deformed projective twistor space {{formula:7eeb1a23-9a3b-4203-82fc-52d0c70f1ebf}} . Therefore, gravHS-SDYM is integrable. This strongly suggests
that the self-dual sector of the HS-IKKT model is also integrable, along the lines of {{cite:dbecba79852559ff4fca81a0f1afebc151bad0e9}}, {{cite:43fa2a5c3a9fc832d2e6fd6028f4e8ff8aea03da}}.
| d | bb011eeb28b7074d96c7d51b303fc468 |
For medical image segmentation tasks, data augmentation techniques are also used in UNet and its variants such as nnUNet {{cite:fddbbd37d501d13406b081fc6edc9cfb16358b66}} and R2U-Net {{cite:24f28f71df3c4b8abc86bbef16f534f22dbd158f}}. However, these techniques are simple and hand-crafted, and the resulting improvement in segmentation accuracy is limited. In {{cite:7aac99338e669f4ef4b5ba4b0f18414e41e69307}}, the authors proposed to utilize reinforcement learning to search for augmentation strategies; however, the search costs 768 GPU hours and only covers the probability of applying each augmentation strategy.
Moreover, the differences between natural and medical images, such as spatial contextual correlation, smaller dataset scale, and the unique patterns of specific organs or tumors, make augmentation strategies adopted for natural images difficult to transfer to medical domains.
| i | ee2a030f146a89d7ac5205c8ae89f5ed |
However, measures of entanglement are difficult to compute in CMT, especially in strongly correlated systems. As the number of degrees of freedom increases, the dimension of the Hilbert space grows exponentially. AdS/CFT provides a novel tool to overcome this problem in strongly correlated systems. According to AdS/CFT, a gauge theory is dual to a gravity system {{cite:c35c458f84db9d6738df51d868f12aaa8e08b73d}}, {{cite:7b63f013c5c844d0fac1a2f862837e66722037d5}}. This duality makes the study of strongly correlated systems practicable and builds a bridge between quantum systems and gravitational spacetimes. Moreover, AdS/CFT offers intriguing paths to understanding quantum information and the emergence of spacetime {{cite:db7cadfcec7f5f1611fa9422d21c0ea6fc5341a6}}, {{cite:d8816c1f182f81a3fba35ede5f90a03ace4f7a5f}}, {{cite:d12212d5fe6c30dcb5c329c91cf082796bd98525}}, {{cite:8575fff1f42842e4851bf4112a6134dbef0a9cf2}}, {{cite:619c1663924b8b3fa63d5018cb880c1d2ddbbdaa}}.
| i | 8e60b1344ed89ec1c5a3b16d7ea618e1 |
Metrics. We report the metrics that are standard in the field. For depth errors {{cite:1a0e20175982392e71e8c1d0f081726f2d74f967}}: Absolute Relative difference: {{formula:38ae5328-fcff-4d42-b818-a4740793a65a}} , Square Relative difference: {{formula:df5290d0-1e39-4599-a39b-94cf36f2a2b2}} , Root Mean Square Error: {{formula:fba07e86-3197-4b6d-b823-d9cbaefa9a85}} , RMSE log: {{formula:e51e5546-a7e6-490f-93db-a5017a4a3002}} and {{formula:3efc7701-fe51-4d00-b0e1-f3d4aedaf699}} with {{formula:a5a9a285-0c4d-478b-8df9-005ba9bedb54}} : {{formula:420b556c-3a42-4336-bca5-d5a4932d8b58}} . Regarding uncertainty, we report the Area Under the Sparsification Error curve (AUSE) as proposed in Gustafsson et al. {{cite:0ee253e9831d7cbb2e281d20bf778deded07a5ff}} and the Area Under the Calibration Error (AUCE) {{cite:451358de20e53a1e9cc0ed14320dc2f7950a3ef2}}. The reader is referred to the sources for details. The AUSE indicates the quality of the uncertainty estimation by comparing the model (specifically, the ordering of the estimated predictive uncertainty) against the oracle (the ordering of the true prediction error in terms of RMSE). This metric shows the extent to which the estimated uncertainty serves to sort the prediction errors, and is hence a relative metric. The AUCE is a measure of uncertainty calibration. To compute it, we define intervals of confidence level {{formula:3e3f328e-fc5c-4b67-95fb-199d5988637a}} using the cumulative density function of our predicted distribution. The calibration is perfect when the proportion of prediction intervals that cover the target {{formula:632ab8f7-b0d4-4ce7-a33e-8f17e7b94af3}} is identical to the confidence level {{formula:b5337492-a2e2-4cee-a1ba-fc560ffca9c9}} . The AUCE is specifically the area between the absolute error curve and a perfect calibration {{formula:1456a100-39fa-4b0c-a770-fcafbbe8d4d7}} for a predefined number of confidence intervals.
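As a rough illustration of how the AUSE is obtained, the following sketch computes normalized sparsification curves for the model and the oracle and integrates their difference; details such as the fraction grid and the normalization vary between implementations.

```python
import numpy as np

def ause(errors, uncertainty, n_steps=20):
    """Area between the model's and the oracle's normalized
    sparsification curves (RMSE of the pixels that remain after
    removing the most uncertain / most erroneous fraction)."""
    fracs = np.linspace(0.0, 0.99, n_steps)

    def curve(scores):
        order = np.argsort(scores)[::-1]          # most suspect first
        e = errors[order]
        return np.array([np.sqrt(np.mean(e[int(f * len(e)):] ** 2))
                         for f in fracs])

    model = curve(uncertainty)
    oracle = curve(np.abs(errors))
    model, oracle = model / model[0], oracle / oracle[0]
    return float(np.trapz(model - oracle, fracs))
```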
{{table:bbc43ccf-d8ce-4f0b-9374-5591e2ba59bd}}{{table:72d52225-513b-46a5-8ff3-073a956c9540}} | r | 141441a5b27e0426146c07eb1afa335a |
To seek weaker conditions, we turn to the application of Jang equations (a certain PMC equation) in Schoen-Yau's proof of the positive mass theorem {{cite:8394d2d51d7bcc0633d46f52b813b58b5b32b0aa}}. They constructed Jang graphs, graphs of a solution to the Jang equation, via a blow-up process, so that the general case of the positive mass theorem reduces to the setting of the Riemannian positive mass theorem. In {{cite:515f2f3b370250d0b583cbc06d2cb81d33b53d84}} and {{cite:8949d73c3c477579bf6046ae160605f3c195c94f}}, Eichmair used this blow-up process to establish the Plateau problem for PMC hypersurfaces under the barrier condition. For further related applications we refer to {{cite:221e4d086f330253691f156d0a79772d23f85b6e}}, {{cite:a1cbc5a84699da071e33277f546f772de9035f28}} and {{cite:379687769d3c197c9459dd612ca0ca32fb12f724}}.
| i | 6027bd5039c45a5e71ea510ab3617cae |
In addition to the previous discussion, there are some future directions that we would pursue.
First, it would be interesting to see whether one can have a {{formula:a0664611-9915-452a-9fa0-dce07b2f6a79}} -matrix description of Abelian topologically ordered phases on the book-page lattice. Note that the generalized K-matrix theory on the book-page lattice would be quite different from that on the plane, because of the inequivalence between electric and magnetic excitations. In the planar 2D toric code, we can introduce the K-matrix {{formula:7ad0a7c0-aff8-42e0-b447-e5be01a5d2c0}} for the effective field theory description. In contrast, on the book-page lattice, there could be more geometric structure encoded in the K-matrix. There is an attempt {{cite:6a915578aae09da0ea1ab23b25b8b78be8f42401}} to construct a discretized {{formula:6bc162b6-31ba-4f66-b1e3-3b88a5a5c77b}} -matrix description of the Chern-Simons theory on non-trivial discrete lattices, such as the tetrahedron, where the information about the geometry is given in the form of couplings between gauge fields and couplings between the gauge field and the flux. Investigating whether it is possible to establish a generalized {{formula:eaad4ad6-1334-4522-b184-3546db2b6007}} -matrix description of the topologically ordered phases on the book-page lattice in a similar vein could be a useful approach.
Second, it is intriguing to explore non-Abelian topologically ordered phases on the book-page lattice. A simple step is to put pairs of twist defects in the book-page lattice and study the fusion rules with the defects {{cite:03b265afa1705072ae9b13f69f7f095096effa24}}. It could also be an important issue to see how a non-Abelian anyon is delocalized on the book-page lattice. Third, it is known that there is an intimate holographic relation between the gauge symmetry of a theory in {{formula:e7425e7e-e3e6-4e85-8cf3-ddb794f4e04c}} dimensions and the global symmetry in {{formula:64ac0436-f5ae-425b-9cf6-2adda6ebbf37}} dimensions. For instance, the {{formula:e15cf3d2-662a-4a40-a3b1-fa729f1492c3}} -{{formula:f059ef74-e720-4d3f-978e-0e9b196ef0c7}} duality in the 2D toric code is closely related to the Kramers-Wannier duality in the transverse Ising model on its 1d boundary {{cite:f83e3e26aa3e8a462b35f7bf87080ead00f3dc22}}.
Studying the transverse Ising model on the 1D Cayley tree would help us gain a better understanding of the bulk-edge correspondence. We will report our results elsewhere.
| d | 6619be9c2065c79525de6b17c7f14965 |
The great successes of the convolutional neural networks (CNNs) {{cite:a934a13d4bf20412f94db3daba901b66aeeb07e9}}, {{cite:90c5a6cce8bc45eed2399623037f24384ebc8139}}, {{cite:a885b785be7c756026391ea79d67244b71b8938e}}, {{cite:6b90763b85e44f173c3461fe4895558a51eca5c0}} have liberated researchers from handcrafting visual features {{cite:0e37ecf78441e54ec2d79c79b1d4847cf431d6d2}}, {{cite:1913e96ffd294ed8be58a066f1b39a7eb72c091b}}.
By means of inductive biases {{cite:aeb4ad82384e91d86f83edb023ce03d09c5ffe2e}}, i.e., focusing on localized features and weight sharing, CNNs are potent tools for tackling visual recognition tasks {{cite:a885b785be7c756026391ea79d67244b71b8938e}}, {{cite:652c8d1726fc72eb84b452297522eb2cd11a9775}}, {{cite:1975fa14342a1e53d623530edb2f23a4f5981d95}}.
Nevertheless, such biases constrain these models as they are built deeper and larger, since they ignore long-range dependencies {{cite:fbc1cf45c4d27a6efe97150236e947f7fb27562a}}, {{cite:b737bf9efae21013378758a85956e48b41c6156d}}.
| i | d9d3fe2fa69062cb0d2ca8d69449f9b4 |
Inspired by prior works on gradient resilience to partial updates {{cite:58fd2fc08dc2e4b7cdee514c293cd86cb0ed8e79}}, {{cite:1020bd5abbe245603f8f12fe23ac86a7071b0b5d}}, {{cite:8a9f1cebdeb2d0bc8d2d3d57d0e5dca08024ff9a}}, we apply a more aggressive approximation of the gradient computations. This allows us to further reduce the amount of computation by {{formula:d2322592-74af-4bf7-9655-4597e938e1dd}} when training on MNIST without affecting the resulting accuracy.
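A crude illustration of one such partial-update scheme is sketched below; the cited works define the actual approximations, and the random masking and 50% keep ratio here are our assumptions, not the method used:

```python
import torch

def partial_backward_update(model, keep_ratio=0.5):
    """Hypothetical partial-update scheme: after loss.backward(), keep each
    gradient entry with probability `keep_ratio` and rescale the survivors
    so the update is unbiased in expectation."""
    for p in model.parameters():
        if p.grad is not None:
            mask = (torch.rand_like(p.grad) < keep_ratio).to(p.grad.dtype)
            p.grad.mul_(mask / keep_ratio)
```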
| i | 3964791d44d296216b4f29c6e0ca43ab |
To prove (REF ), let us view each pair {{formula:b337586f-ea51-4d82-af95-cdc49b5a2561}} as a random tuple with distribution {{formula:63797e0d-0fd6-4f57-a583-4b1d333f0c9d}} , with {{formula:700c59c9-49f9-4650-be88-29c46f78b892}} the Dirac measure at {{formula:e8eba221-95a0-42ce-a1a6-e46dafff7d4d}} and {{formula:e0a50611-8006-4bbe-9e30-d783dcf17e66}} the distribution of the error. Then {{formula:012ab53f-8c47-4e32-90a8-3e1dce052256}} , where, in accordance with A.1, {{formula:527bf7cd-8498-4448-b0a9-7bf3758bb9d1}} is the empirical distribution of the design points. Further, let {{formula:cbd2b95c-2585-4ed7-8849-0ebd34271a94}} denote the empirical measure placing mass {{formula:45d954fe-a019-49bd-9607-ab47122ae476}} on each {{formula:652cf957-c583-4364-bb97-14868e627555}} , i.e., {{formula:6bddabc1-ac62-4f0a-a6dd-278cf716dde8}} . Then, adopting the notation of {{cite:a6e58cd1726641a71c4ef5cd14ac0c43b6bba093}}, we have
{{formula:358cdc4b-93f4-42e1-ae5a-d4d3b04099f7}}
| r | 379f5ba00a917764f21663a93413a139 |
Since it is natural to use deep neural networks to learn embeddings, given their powerful expressive capability, C2AE {{cite:a440dec63de7fb87adf53e5cff1d8190a68f480c}} performs joint feature and label embedding by deriving a deep latent space, followed by the introduction of a label-correlation-sensitive loss function for recovering the predicted label outputs. Similarly, Rank-AE {{cite:e00745fbd5f4493d1191f882a338c2a492fc77c9}} proposes a ranking-based AutoEncoder {{cite:cd19bb3b7669a7405df034231240915908f5a1d6}} to simultaneously explore inter-label dependencies and feature-label dependencies by projecting labels and features onto a common embedding space. DeepXML {{cite:79dc9f657be673609debc01612f7dd02f6a88ddd}} explores the label space by building and modeling an explicit label graph and learns non-linear embeddings for both features and labels. An obvious difference between XML and other text classification problems is that labels have the same textual form as instances. SiameseXML {{cite:eedcbe9ae8ef4c170c833985c3a85418e731fc86}} takes full advantage of this: a siamese network is leveraged to encode both instance and label features, and the inner product of instance and label features can be used as a probability estimate that the instance belongs to the label. As part of the classifier, the label features alleviate the problem of insufficient numbers of tail-label samples and reduce the number of parameters and the training cost of the model.
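A minimal sketch of this siamese scoring, assuming a shared encoder has already produced instance and label embeddings (the sigmoid squashing is our illustrative choice, not necessarily the paper's):

```python
import torch
import torch.nn.functional as F

def label_probabilities(instance_emb, label_emb):
    """Score every label for every instance as the inner product of the
    siamese embeddings; sigmoid turns scores into probability estimates."""
    z_x = F.normalize(instance_emb, dim=-1)   # (batch, d)
    z_l = F.normalize(label_emb, dim=-1)      # (num_labels, d)
    return torch.sigmoid(z_x @ z_l.T)         # (batch, num_labels)
```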
| m | cc9220528df4a231be50a3a64a9d9683 |
In this work, we follow the training settings of DARTS {{cite:344b51800f5ca3f935ac076cfb10196d929a7a05}} for image classification tasks to train networks from randomly initialized weights, for fair comparisons between the searched architectures.
A large network of 20 cells (i.e. {{formula:a0ad4a51-7a36-4642-94ce-6a533f3ecf4f}} is set to 6) is built with the selected normal and reduction cells while the initial number of channels is set to 36.
Most hyper-parameters are the same as the ones used in the above search process, except the learning rate, path dropout, and batch size, which are set to {{formula:3a3d38fc-c901-47c3-8ba2-52bf06c4f44a}} , {{formula:b9047fe8-8b58-4564-8815-a0403e1ba021}} , and 128, respectively.
For further enhancement, an auxiliary head with weight {{formula:923a1c79-eede-40f9-95e0-9531e24547df}} is added into the network.
Instead of using half of the training images, this work trains the network from scratch on the whole training set for 600 epochs and evaluates it on the test set.
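For reference, the evaluation settings above can be collected into a single configuration (a sketch; key names are ours, and values marked as assumed follow common DARTS defaults, since the exact values appear only symbolically in the text):

```python
# Evaluation-phase settings collected from the text; values marked
# "assumed" follow common DARTS defaults and are not stated numerically.
eval_config = dict(
    num_cells=20,          # large network of 20 cells
    init_channels=36,      # initial number of channels
    batch_size=128,
    epochs=600,            # trained from scratch on the full training set
    learning_rate=0.025,   # assumed (DARTS default)
    path_dropout=0.2,      # assumed (DARTS default)
    auxiliary_weight=0.4,  # assumed (DARTS default)
)
```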
| r | 8102ae2aea0ba94609c2bf02040f5788 |
Neural networks powered by recurrent cells, such as the Long Short-Term Memory (LSTM), have been used to construct state-of-the-art document classification models. Their theoretical capability of ingesting variable-length inputs into a machine learning pipeline has made them attractive to the scientific community. Even though models based on the Transformer architecture presented in {{cite:5d69c78026e8a24efb4c407dbc2139b48eb524d3}} tend to offer improved performance on many natural language processing tasks, Recurrent Neural Network (RNN) cells still act as a solid baseline for many tasks and should be treated accordingly.
| i | 9665a6b0adbd081895171cf4c1d123e7 |
For problems on graphs whose degree grows with the number of bits, the previous arguments do not apply. For the Sherrington-Kirkpatrick model, the associated graph is the complete graph.
Therefore, each clause sees all the qubits at {{formula:4ae0b40c-e4eb-455a-906c-3e038ee1e08e}} , and sees all the qubits as well as all the edges at {{formula:6d8e8b25-51d1-4255-abe3-b7e31e62cc0c}} .
Hence, one may imagine that for large fixed {{formula:678b56bb-dd62-4b98-9b69-c02dfce32a81}} , in the large {{formula:25d3913e-6f80-42da-b7d3-2307bb44813a}} limit, the QAOA will perform well.
We have explored this prospect in this work by obtaining a formula for the expected value of the cost function for an arbitrary QAOA state in the limit as {{formula:8a253360-1a2f-415a-bcbe-364fa00754b0}} .
The complexity of our formula grows as {{formula:2a46a6d8-1e86-4a6a-b07c-a3015d3b8451}} , so without too much trouble we could optimize up to {{formula:60702af6-88f4-4baa-9e33-58649148a717}} and evaluate up to {{formula:23a2712c-79f6-46b5-841e-7cef1c370b00}} , and our results are shown in Figure REF .
It is currently not possible from the Figure or the data to know for certain if for large {{formula:4896078b-fff3-4984-851d-f84b5d0e9e2c}} the performance asymptotes to the Parisi value (REF ) or something above it.
At {{formula:b87b7135-e8bc-4328-9e4b-a64a24f3a05c}} , we have crossed the energy at the phase transition which is {{formula:e8c26418-964f-44db-9365-bafb52acb926}} .
At {{formula:34f2bc46-fd63-48ca-ab23-ddcf3cafa9e3}} , we have surpassed the performance of the spectral relaxation and the standard semi-definite programming algorithms which yield {{formula:af3f631e-d26a-4928-9155-2dd1e9774d2b}} .
Nevertheless, as we discussed in Section , the recent classical algorithm by Montanari {{cite:8248ce42bdb4e58ce74eaf798f2098113f84c75a}} can efficiently find a string whose energy is between {{formula:14f171ae-91d6-4582-a270-69fa8a719104}} and 1 times the lowest energy, assuming a widely believed conjecture that the SK model has “full replica symmetry breaking.”
Hence, we can only hope to match this with the QAOA.
We want to improve our techniques to determine the asymptotic behavior of the QAOA's performance.
Furthermore, we can apply our techniques to other problems where we average over instances.
For example, we can consider generalizations of the SK model where multi-spin couplings are allowed. Since some of these models are shown to have no “full replica symmetry breaking” {{cite:1b73befe57942b866663aa532626ccfbb147ee14}}, it is believed that classical algorithms like Montanari's will fail to find near-optimal solutions {{cite:17c29b20a07d3a01cd27e7c9f9988f22ba4108c0}}, {{cite:0da5d7fd01a441c7db5528618a8a7453627ecd7a}}.
It would be very interesting to see how the QAOA performs for those problems.
We also imagine the possibility of extending our techniques to {{formula:46494f94-1280-479e-b176-e3b16c552948}} growing with {{formula:0fc3f3dd-b69f-44fb-8490-e698dedb7efa}} .
| d | 6f8ce17c0e06f5f887b0647ad96d5253 |
where {{formula:6886f49f-fb34-418c-93f1-056e3b4dba3d}} is the VOF function that distinguishes the liquid and gaseous phases, taking the value of 1 and 0 in the former and latter respectively. For modelling of surface tension effects, {{formula:4e5863aa-295f-4e13-9ed5-e76aa209ae4b}} in Equation () is approximated as the gradient of the VOF function {{formula:9a01cf48-fe7d-44ee-824d-c388d1336e25}} using an adaptation of Brackbill's method {{cite:959987a5bba09cd2245d4c85226e2dfeb95cf030}}, {{cite:5a9a7682b64c9d0ffbbea1c38603deb63fa679d6}}, and the curvature {{formula:d3909329-f787-4998-9656-e649c0c362d2}} is calculated by taking the finite-difference discretisation of the derivatives of interface height functions {{cite:5a9a7682b64c9d0ffbbea1c38603deb63fa679d6}}. The quad/octree-based AMR scheme based on the estimation of local discretisation errors of the VOF function gradient {{formula:06861d64-8228-4d6d-b7a4-3614872920dd}} and flow velocity {{formula:b3c007e8-0f06-4242-bdbc-134b3be8bbee}} is adopted so as to reduce the computational cost at high resolution levels {{formula:1afd0529-7a7d-4785-8e0e-fee98558ced5}} , which is defined using the minimum grid size,
{{formula:d8e02f6b-7fe2-42ce-9a15-e0b9c1a868b1}}
| m | c124cc2a6513561092941122116f2f1f |
As the name suggests, massive multiple-input multiple-output (MIMO) or M–MIMO architecture relies on a large number of antennas at the base station (BS). This allows for serving a large number of users. M–MIMO has attracted much interest in the past decade because of its significant potential for spectral efficiency gain {{cite:1e12669c5633e7998c0c3a4948636aa4634c683e}}, {{cite:db9ca4be517f340d937b65f50b14ac54361d3a86}}. However, the technological promise of M–MIMO is limited by two fundamental challenges {{cite:0bd4bdef10f0a80830977a9a62062099fbc690f6}}, namely,
| i | 588343925110624b4a88e8c3a6ca211f |
The detection of gravitational waves (GWs) from compact object binaries {{cite:7bbd5941aa79f731256c0d855ff9c57bf9f9d6df}}, {{cite:79278cdb2c7965d4b3a153051a5e327edb9e3ab7}}, {{cite:ca1e8fe2b2e37cbc27881c7757d8b68c18b7c535}}, {{cite:0fbf5777373525a2f21e1d30a7e2daba89af3257}}, {{cite:f2413bce92d2f888ab24cda4220746bfb280b82a}} are providing us the opportunity to test fundamental physics in the strong regime of gravity. With an increasing catalog of detected black holes (BHs) and neutron stars (NSs) {{cite:8d30f0e1dcd1e8e999c48575463638848d6b4ec5}}, {{cite:b1c902ddc23d36a6f0e832658d0c52178cb680d5}}, and sensitivity improvements of current {{cite:7e42c810b240f81a0b054d5c7c8c705b09c807a3}} and future GWs detectors {{cite:e75f08d5a79aa731471c98435f90a7ff2c7dbe33}}, {{cite:f40fbe401b1bece3e16bc88d55aeb096569401d8}}, {{cite:b10f78bb3554c48d199bc05d539d6c90afe9fd43}}, there is a potential to probe the internal structure of the compact objects, which can be achieved by matching the coefficients of the effective theory of compact objects {{cite:e9460b42f16e79d82c30779ce708c2e7d8d1be13}} from GW observations {{cite:ff9e1e882eb65fddeff59018c51da2dbccbd0391}}, {{cite:546930b3f73362353037e83efadbb549248d6710}}.
| i | 587f5c94d56dc2ff11d6bea309707570 |
Impact of the number of symbols per word. The authors in {{cite:5c459768d22ba876fefdf3cfecb350e14a353e85}} consider a fixed number of symbols per word sent through the channel. However, depending on the length of the words (e.g., the number of characters) and/or the conveyed semantic information, different words may not require the same number of symbols. To show this effect, let {{formula:2f064b8b-e844-4912-ad3b-ea3d559fb940}} be a sequence of words to be transmitted and {{formula:52c74dee-2eb9-4ca6-a1b0-30935e9f4743}} the length of each word {{formula:40a6f6d6-9a64-4f90-b9ed-331663f2094d}} on a character basis. Let {{formula:445e9018-b812-4a97-b6e0-b6e07e14950c}} be the total number of characters in sequence {{formula:8a0f4d26-9381-4807-9810-36e7270fab00}} . We first construct the probability vector {{formula:c1694c79-09c8-4e90-a9d6-19b5b6a15227}} such that {{formula:64f6974e-97f2-432b-8e84-b3953e13429f}} . Hence, {{formula:0299d59e-491c-41ea-86fd-f94cd6a9bea8}} defines the weight of the word {{formula:4188a8fb-eb9d-4ddb-980d-834e3c518b65}} in the sequence in terms of the number of characters. Now, let {{formula:20e11867-55a5-4083-b3a3-fa00db1e8eea}} be the maximum number of symbols admissible for each word. Then, we encode each word {{formula:11f3312b-2bc7-414d-837d-32411998eabb}} in {{formula:6cf305d6-2dc5-4e49-a0e7-9921feb1fd82}} symbols (instead of the fixed {{formula:c370729f-1ac6-4ce8-ba7f-78d1e20bac4e}} considered in {{cite:5c459768d22ba876fefdf3cfecb350e14a353e85}}), where
{{formula:cdea2862-1daf-48e9-9fae-84966fdc8da4}}
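As an illustration, a hypothetical allocation rule of this kind, proportional to each word's character share and capped at the per-word maximum, could be sketched as follows (the exact rule is the equation above; the total budget, rounding, and cap behavior here are our assumptions):

```python
import math

def symbols_per_word(words, max_symbols):
    """Hypothetical allocation: give each word a number of symbols
    proportional to its share of characters, at least 1 and at most
    `max_symbols` (the paper's exact rule is the equation above)."""
    total_chars = sum(len(w) for w in words)
    budget = max_symbols * len(words)          # assumed total symbol budget
    return [min(max_symbols,
                max(1, math.ceil(len(w) / total_chars * budget)))
            for w in words]

print(symbols_per_word(["the", "transmission", "failed"], max_symbols=8))
# -> [4, 8, 7]
```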
| r | c2bc4fc7d0e8ddef5651366860b66cef |
The error bars shown in the table correspond to 1-{{formula:8e0600f1-b0f6-4e29-af41-46cb9cfdf6ca}} confidence intervals, whereas the upper/lower limits denote 2-{{formula:73d2b00e-aa31-4bb6-83ee-9c42fcb95c0b}} confidence limits. The table consists of four sections, corresponding to the three mass bins selected for stacking, and the AGN sample. The high, intermediate, and low mass bins are defined as {{formula:254b77a2-ace6-4d36-8731-2d96ccea5f7d}} [10.0, 10.4), [9.7, 10.0), and [9.0, 9.7), respectively (see Sect. REF ). The full table will be made publicly available alongside publication of this paper.
(a) The superscript indicates the sets of strong line calibrations (B18: {{cite:50d7682ee5feaa5cee6297d3d2ae0c9905e0ce54}}, C17: {{cite:9d6812a6cb23b57f5a8b627aa1c8e4c6cfd86ac6}}) used in the metallicity inference, with the coefficients presented in Table .
We consider the results based on the B18 calibrations as our default results, for the sake of a direct comparison with the field measurements in {{cite:6bbd2e718950fc824dfad9a7d55386d844931027}} derived using the same set of calibrations.
(b) For sources in this section, the {{formula:04ffd62d-2f75-4749-a269-478936cee6a6}} estimates are not trustworthy, since their nebular emission is dominated by AGN ionization and strong line calibrations are therefore no longer applicable.
| d | 4ea44ac6453ddf63505c728640b784f8 |
For the penalty parameter {{formula:c3245734-43a8-4f13-8116-d7a88424c9ad}} and denoising parameter {{formula:df085d7e-3f65-4e96-9ccc-c0557f7431f9}} , generally they should be tuned according to the noise level {{formula:da45a245-9825-4273-94c9-7edf2fe15df3}} . According to the Morozov's discrepancy principle {{cite:fd8777bc7fe5e6d66de808ac68f774ad3dfd65f0}}, the value of {{formula:b62d0c3c-1cc4-4395-84fd-71fb4ebf10cd}} in (REF ) is positively correlated with the noise level {{formula:8c816a2e-1d32-4fb5-b037-8fbf2f7fd043}} , and the parameters {{formula:5c3e084b-4d9f-4349-a9dd-a387d08dffd0}} in {{formula:4bfacff3-4535-466d-96da-4b24ce98f840}} and {{formula:99c2ba8c-1a25-488e-a475-ebe3a123992d}} in (REF ) play the same role of controlling the rate of denoising. Hence, {{formula:38279f84-e64b-4fed-acff-dbf282e40a03}} should be proportional to {{formula:a0ded294-0f41-47c5-b4de-f602dbcfd342}} and {{formula:9ede9a12-f2ff-4743-b1a9-4d4dfbabb93b}} should be positively correlated with the noise level {{formula:2ec13f98-67f7-4e99-bb11-6018763c495d}} . In our numerical experiments, we tune the parameters {{formula:739705d7-e564-4034-b51b-144a92527f09}} and {{formula:ef8937ca-8e0e-40cc-9253-d73326452416}} such that {{formula:6c055693-e754-4b8c-9367-52af68f70b5f}} is proportional to the noise level of the observation, i.e., {{formula:4d8f87e9-1300-4b56-8b62-a8b63df32aa1}} . In Table REF , we list the tuned values of {{formula:0b3d5038-ac50-45c2-bc1f-5f54130c8724}} and {{formula:fd02dae5-66a8-4299-bb34-90e7904213f2}} for the cases where the noise levels are {{formula:50435258-3738-4b61-b303-aaed6c6cd33c}} , {{formula:d6a643d3-3905-4174-9013-8328fda97296}} and {{formula:d013d3b9-e277-4dc5-9065-99b4d5710910}} , respectively. These parameters are kept as constants for different finite element meshes.
{{table:2b47d5a0-e0d4-4596-9ba3-0f56d2cc3c34}}{{table:2339addd-58d9-4304-89bb-f25d672b2495}}{{table:b948a7d2-c709-47f7-a805-5e24d501b026}} | r | bdaea2b954a89a2cdcedd886bc1d700d |
Data Preparation: The data used in this work is taken from tp53 {{cite:91812ab83e4213cf562f1cd695cf07e9b535274b}}, which has a repository of 1054 anonymized wsi of breast invasive carcinoma patients with their genomic, pathologic, and de-identified clinical information. Images with highlighters or other artifacts were excluded, resulting in 708 cancer and 100 normal slides. For patients with multiple slides, the last biopsy slide was used. The standard practice of patch extraction resulted in an average of 3,000 patches per wsi of size {{formula:1488688b-b6b1-46a5-abaa-2d166852930d}} pixels at {{formula:00a0c1d7-d56a-4aa0-b72b-4313ed386e94}} resolution. Color normalization using {{cite:d9a502a91f13b442cc518dad4b9ada6884f4b783}} accounted for variations due to staining reagents, scanners from different manufacturers, slide-preparation protocols, etc.
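Schematically, the patch-extraction step could look as follows (a sketch only: the 512-pixel tile size stands in for the size given symbolically above, and the brightness-based tissue mask with its 0.5 threshold is an assumption):

```python
import numpy as np

def extract_patches(slide_rgb, patch_size=512, min_tissue=0.5):
    """Tile a WSI (given as an RGB array) into non-overlapping patches and
    keep those with enough tissue. The tile size, tissue mask, and 0.5
    threshold are illustrative assumptions."""
    h, w, _ = slide_rgb.shape
    patches = []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            patch = slide_rgb[y:y + patch_size, x:x + patch_size]
            tissue_frac = (patch.mean(axis=-1) < 220).mean()  # non-white
            if tissue_frac >= min_tissue:
                patches.append(patch)
    return patches
```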
| m | 6ec816ed88482412d95b2ab1b6726067 |
In this paper, we present a set of controlled transfer studies for determining what makes cross-lingual transfer hard. We focus on three factors salient to crosslingual transfer: the embedding layer, the tokenizer, and syntactic shifts. We construct a set of systematically transformed versions of GLUE (t-GLUEs) targeting each of these factors, and observe how the finetuning performance of a pre-trained English RoBERTa {{cite:43c7bbfca71e47a8fb9085f46be42f0f6d7698f6}} model degrades as a result of each of these transformations.
Crucially, our method allows us to disentangle the effects of correlated factors: while all factors would change at once if we transfer between natural languages, our transformations allow us to pinpoint what causes difficulties for transfer.
| i | 180606897ae245cfffb89c37ad410fdb |
where {{formula:ee0ea277-22ce-45ed-a2b7-d26fa4a39470}} is the set of quantization levels, {{formula:552e2ddb-2e4c-4d0b-9eb0-723355410794}} is the number of parameters (network weights and biases), {{formula:a81e4833-4f78-4357-a623-72037fbd47eb}} is the training loss (e.g. the cross-entropy or square loss), {{formula:885d7c4c-43da-42bc-b571-353c1f87a19e}} is the DNN prediction function, {{formula:4c742e3c-1720-4b61-9327-efe41d001831}} is the training distribution. The quantization constraints in the above program make it an extremely difficult task: the underlying optimization problem is non-convex, non-differentiable, and combinatorial in nature. Optimization of smooth functions of integer valued variables (and even quadratic ones like the max-cut problem in graph theory) is known to be NP-hard {{cite:f11a1f145d2c536e13b61d0bc0d8454f1f16c90f}}. The challenge is to find algorithms that can produce a sensible approximate solution with a manageable computational effort.
Inspired by mixed-integer nonlinear programming (MINLP) problems, several approaches using geometric, analytic, and algebraic techniques have been proposed to transform the discrete problem into a continuous problem. Examples include the use of global or concave optimization formulations, semidefinite programming, and spectral theory (see e.g. {{cite:e3a68cf1eaa68faffca0bec2b80a47170e162191}}, {{cite:f1aa643eccfd365918b2793da379d93e18ee4de3}}, {{cite:aae288eb23f504905a6233d91c20125d7869fb0a}}, {{cite:3c295d584af1b43c45362160728fce3512706671}}, {{cite:ca54329b6839c658d7ede0e43aa081ab568797b4}}).
However, these types of approaches are doomed to fail in the NN context because the number of parameters is several orders of magnitude larger than for classical MINLP problems.
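To fix ideas, the feasibility constraint of the program above (each weight restricted to the set of quantization levels) amounts to a nearest-level projection, sketched below; this illustrates the constraint only and does not address the combinatorial optimization itself:

```python
import torch

def project_to_levels(w, levels):
    """Nearest-level projection onto the quantization set (the feasibility
    constraint above); `levels` is a 1-D tensor such as {-1, 0, 1}."""
    dist = (w.unsqueeze(-1) - levels).abs()   # |w_i - q_j| for all pairs
    return levels[dist.argmin(dim=-1)]

w = torch.randn(5)
print(project_to_levels(w, torch.tensor([-1.0, 0.0, 1.0])))
```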
| i | c29b4d47b4624c52a17e63958af58915 |
To the best of our knowledge, no monocular methods have reported results on Waymo. In order to provide a baseline, we extend the official implementation of M3D-RPN {{cite:16140966492a67568f3f3b36f0c5018b93fdce02}} to support the Waymo Open Dataset {{cite:219ecd4679ee7b3d05be7d642f386aa51d4865c0}}. Table REF shows the results of both the M3D-RPN {{cite:16140966492a67568f3f3b36f0c5018b93fdce02}} baseline and CaDDN on the Waymo validation set. Our method significantly outperforms M3D-RPN {{cite:16140966492a67568f3f3b36f0c5018b93fdce02}} with margins on AP/APH of +4.69%/+4.65% and +4.15%/+4.12% on the LEVEL_1 and LEVEL_2 difficulties respectively for an IoU criteria of 0.7.
| r | 3d248b172d200272d4eccdb97513b478 |
Since Valiant's seminal paper about the complexity of computing the permanent {{cite:8982d8b20451d827393b17885ddcf08948fdd0ab}}, counting complexity has advanced to a well-studied subfield of computational complexity theory. By proving that the problem of counting the number of perfect matchings in a bipartite graph is complete for the class {{formula:4210aae5-75dd-4302-bc68-7f23358c62f3}} (the counting equivalent of {{formula:1fb9ac29-54ab-40d2-9d7c-7d102bd3165e}} ), he gave evidence that there are problems whose counting versions are inherently more difficult than their decision versions. For many interesting counting problems, it was shown to be {{formula:0d6fcfb6-3f87-4275-af4a-cc735e051b4a}} -hard to compute exact solutions. Therefore, several relaxations such as restrictions of input classes (see e.g. {{cite:18d214f0ccd5d41c38f217e1d43d3c031084f5c9}}) or approximate counting (see e.g. {{cite:19cfbe00817070e35c7e9994c00e49bd3ddae98a}}, {{cite:a1fe27b1b05cce1dce1c0c3c3b2da0b04fa59221}}) were introduced. Another relaxation, the one this work deals with, is the analysis of the parameterized complexity of counting problems, introduced by Flum and Grohe in 2004 {{cite:48995a5432cbdf66a7e663913c6c0c89674ecb3c}}. They proved that, similarly to classical counting complexity, there are problems whose decision versions are easy in the parameterized sense, i.e., which are fixed-parameter tractable, but whose counting versions are most likely much harder. In recent years, much work has been done in the field of parameterized counting complexity. Important results include the proof of {{formula:d340059e-e471-4059-8bdb-2ea1b7622366}} -hardness for counting the number of {{formula:ef507877-5b9d-4a27-ab9f-1cb036e50f9f}} -matchings in a simple graph {{cite:30fff873d95c62509090830a5cfe4ab8ce86699f}}, and the dichotomies for counting graph homomorphisms {{cite:c0aba217f8a2d7c6b19bb573fc3a0a9db56b0883}}, {{cite:b3cb34ec2958f8224ea2c6e19caebab2890f7a25}} and embeddings {{cite:02ad3c2171e65fa8933e268f503182eea0a15fe8}}.
| i | 2d00bbedc738559697dc16153a127bd3 |
However, bridging the behavior gap between the simulated world and the real world remains an open challenge.
Manually specifying each actor's trajectory is not scalable and results in unrealistic simulations since the actors will not react to the SDV actions.
Heuristic-based models {{cite:ba3768f7a69c2358653aebb0135379f25d0515c8}}, {{cite:31d346b5350250934ea41612ee0cb7fd7d51c83b}}, {{cite:0b23fc71671ba66be2fb72d350060dd9d8693a31}} capture basic reactive behavior, but rely on directly encoded traffic rules, such as requiring that actors follow the road and do not collide.
While this approach generates plausible traffic flow,
the generated behaviors lack the diversity and nuance of human behaviors and interactions present in real-world urban traffic scenes. For instance, they cannot capture irregular maneuvers that do not follow the lane graph such as U-turns, or complex multi-agent interplays such as nudging past a vehicle stopped in a driving lane, or negotiations at an unprotected left turn.
In contrast, learning-based approaches {{cite:7766593bba10cb97cbabb5d1814d72e7044971af}}, {{cite:4ba385c59e579cc00c1b67edff64ac049b4e03fc}}, {{cite:5be708dfbe0201950a166bfcb58605d1ab3fa1d5}} are flexible and can capture a diverse set of behaviors.
However, they often lack common sense and are generally brittle to distributional shift.
Furthermore,
they can also be computationally expensive if not optimized for simulating large numbers of actors over long horizons.
| i | e5e93a0a77d45f2f6c901f8bc2eed562 |
While BART has been designed to leverage the advantages of unidirectionality in its decoder and multidirectionality in its encoder, it displayed the worst performance, in contrast to its reported high relative performance on various generation tasks. While the reasons require further investigation, we note that the two major differences in our setting are the requirement for relatively long generated text and the smaller amount of exposed text relative to the text to be predicted. T5 reports degradation in performance on several tasks with an architecture similar to BART when raising the corruption rate to 50%, which is still smaller than our setting of 84% on average. Analysing its outputs reveals exceptionally long sentences. The first fact usually appears in the generated text verbatim, while elements of the second fact may appear, oftentimes losing their original meaning. All other facts seem to have very little effect, if any, on the generated text, suggesting that the dependencies of the decoder are easier for the model to pick up and rely on than the more complex, forward-looking dependencies of the text supplied to the input. This phenomenon of sequence-to-sequence models focusing primarily on the dependencies of the decoder, in settings where the supplied input is substantially smaller than the expected output, has also been observed in previous works {{cite:754fa0e150f6bc82b75454f3da6af45739765273}}.
| d | 895574d92c01c48dcace1b2f58f44891 |
The soft graviton theorem {{cite:f2c1dc8c44f4ee3ac8044b6be3e3ac4365180136}}, {{cite:b636e0a9dce6e2ce7dbc99caf450862b0be1a700}} is a universal formula relating scattering amplitudes that differ only by the addition of a graviton whose energy {{formula:20aa04d3-dada-40f3-8985-be8d360b86b3}} is taken to zero
{{formula:ac7cbd94-9ce6-4e6d-8938-31ed569d5c61}}
| i | 6f4fd1c9ce656cdc74bc2543352449f9 |
Related work
For bounded {{formula:618ce48f-7a33-4f97-83d4-2abef1e0b259}} , both i.i.d. samples {{cite:8ed5392b2b3d7d0d6c00a1a98c7dfb731eff03c5}} and thinned geometrically ergodic Markov chains {{cite:51907fc26ad47441eb42e23a98e7021f7b14e9c5}} deliver {{formula:7322ec9e-f23c-454c-bc96-5c41fa3af51f}} points with {{formula:de3ea8bb-272e-42a6-bc97-1675714bcc48}} MMD with high probability.
The online Haar strategy of {{cite:aed378f80d3cdaebd471c32106cf04bcc1df4159}} and low discrepancy quasi-Monte Carlo methods {{cite:750e8c2cfd373210feae485eb7e697533c877b37}}, {{cite:8ae674e8a3049c5ed33d93f7e08c9c7f974e9f44}}, {{cite:d74e2340522fafd67d18bad2b11e9607f320afbd}} provide improved {{formula:114b598f-ea1a-4882-bc06-14af24141a7f}} MMD guarantees but are tailored specifically to the uniform distribution on {{formula:fc0e6fa6-7cdd-400b-9bb6-ddc25738a44d}} .
Alternative coreset constructions for more general {{formula:ea0e26aa-f284-4b0f-b844-d1c5e5bbf9d4}} include kernel herding {{cite:8fef5acc2a2dbd5f935f223bcae32233aaee8dbd}}, discrepancy herding {{cite:cbeddfa5453ab00b73fdbd2455f9f5bc0993b006}}, super-sampling with a reservoir {{cite:4fb0c87f77b5f42d0cdb7310a612a53e79bbbb14}}, support points convex-concave procedures {{cite:2e819ec53241b735553f029c21e8fa57830e5ef0}}, greedy sign selection {{cite:056e97399e733e57d0802e02a51c8444f1395749}}, Stein point MCMC {{cite:713530b6952d3fa8379648aa2a3f7296f69dd0c2}}, and Stein thinning {{cite:906e7174fd9d5a0345e5d594082cf4633bf0059d}}.
While some admit better-than-i.i.d. MMD guarantees for finite-dimensional kernels on {{formula:b491b693-c073-42fc-ab01-96c6b3d08db4}} {{cite:8fef5acc2a2dbd5f935f223bcae32233aaee8dbd}}, {{cite:cbeddfa5453ab00b73fdbd2455f9f5bc0993b006}}, none apart from KT are known to provide better-than-i.i.d. MMD or integration error for the infinite-dimensional kernels covered in this work.
The minimax lower bounds of {{cite:79a32ef474ef8b2dea45515cfe7878e3a7dd65ae}} and {{cite:8ed5392b2b3d7d0d6c00a1a98c7dfb731eff03c5}} respectively
establish that any procedure outputting {{formula:50a43c4f-0d9a-4e08-aa01-711997c5e9ec}} -sized coresets and any procedure estimating {{formula:719348be-f441-4bd7-92f4-8fe74f7992b7}} based only on {{formula:8ed47aa8-382d-4224-bab5-13ffe62cb33f}} i.i.d. sample points must incur {{formula:cda6dff1-dae7-469d-8eec-63debbda6167}} MMD in the worst case.
Our guarantees in sec:kernelthinning match these lower bounds up to logarithmic factors.
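For reference, the quantity these guarantees control can be estimated directly; below is a minimal sketch of a plug-in MMD estimate between the input points and a coreset, where the Gaussian kernel and its bandwidth are illustrative choices:

```python
import numpy as np

def gaussian_mmd(x, y, bandwidth=1.0):
    """Plug-in estimate of MMD_k between the empirical measures on x and y
    for a Gaussian kernel k (kernel and bandwidth are illustrative)."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * bandwidth ** 2))
    return float(np.sqrt(k(x, x).mean() + k(y, y).mean()
                         - 2.0 * k(x, y).mean()))

rng = np.random.default_rng(0)
points = rng.normal(size=(1024, 2))
coreset = points[:: len(points) // 32]   # naive standard-thinning baseline
print(gaussian_mmd(points, coreset))
```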
| i | 59614eb8da77c245f2b61c8423648b45 |
Most calculations of the particle creation rate from black holes assume a stationary geometry. However in an astrophysical context, matter and radiation will be falling into the black hole {{cite:9923a08155d753d993c4ba6faa3b99a04c08ca83}}, {{cite:bde3a31ad5eac72f5f83fa4572cced165e625d5b}}. This has various consequences. Firstly, the usual black hole uniqueness theorems, which depend on a vacuum condition, will no longer hold, as recently pointed out by Hawking et al. {{cite:0b97a5d977b9a6bac8129cf976089d7d8e072a62}}. Secondly, the usual calculations of black body radiation emission need to be amended to take the dynamic geometry into account (they almost all, explicitly or implicitly, rely on the Killing vectors in the vacuum, which are references for defining the mode functions). For the extended maximal geometry including a bifurcate Killing horizon {{cite:cd7be117517186eb7646c1d0b47a25d2f4ab720b}}, one can derive the Hawking radiation formula according to the Hawking-Hartle vacuum, which uses the Killing vector field to define positive and negative mode functions. These definitions implicitly rely on asymptotic flatness of spacetime, which does not hold in the case of a cosmological black hole {{cite:9923a08155d753d993c4ba6faa3b99a04c08ca83}}.
| i | b406af150896fcb61809ce26f2e77d36 |
Mini-val Results. Table REF reports the performance of single-scale testing on the COCO mini-val set. With HRNet-W32 as the backbone, our method achieves 69.8 AP when the input resolution is set to 512 pixels, outperforming previous bottom-up methods {{cite:0afbe0410846b4446c8e94393dd9e5334810c22a}}, {{cite:41bb302f97b595b2bc3ce99b9e75083eeace5ed8}} by a large margin. In particular, compared with the state-of-the-art HigherHRNet {{cite:89a1419a133aa48eea02451c8ed3a9bef0dc6d0f}} and DEKR {{cite:eaaa2a57ee3b5c038e391caa885b3bb0ac9174a2}}, our network achieves improvements of 2.7 AP and 1.8 AP, respectively, without either multi-scale heatmap aggregation or an additional pose scoring network during inference. We further obtain 72.4 AP with an input resolution of 640 pixels via HRNet-W48, a new state-of-the-art performance compared with all existing single-stage as well as bottom-up methods.
| m | 83621b88c52dcc5d9806a08e52df6f7d |
A book embedding of a graph is an outerplane drawing (i.e., a cyclic order on the vertices) and an edge-coloring such that edges of each color (called a page of the embedding) induce an outerplane subgraph.
A graph or vertex-order with a 2-page book embedding is called subhamiltonian {{cite:06926842efba92948e0afe80ca1ec90c1a7c1431}}, {{cite:2afedf5bc400cf47caf669ee49a4e2dfe9193228}}.
| i | bcef3d128bac32ff09891b0f7c11e5a4 |
We also evaluate on the raw deblurring task in Tab. REF . It is clear that the SOTA methods trained in the sRGB color space lag behind methods designed for raw deblurring; even DMPHN_raw {{cite:266707cdb6097b512e80a2fe7278f279ed88144c}} shows a minor advantage over DMPHN_rgb {{cite:266707cdb6097b512e80a2fe7278f279ed88144c}}. Among the three raw deblurring methods, ELMformer still exceeds the others on all metrics except SSIM in the sRGB color space, and outperforms RID by a large margin in PSNR.
| r | 54ca4fe850a918d65fa60bb15058c964 |
Temporal co-occurrence is explored between modalities to increase their correlation. {{cite:a8a06a322e62f0ff38e74f9f74a13074d22a1d30}} and {{cite:b4b94c7db84f23d4c5f7e1f1dadcffdea226d4d7}} leveraged positive samples from paired modality data and contrasted them with the others. This has proven superior to approaches that only learn alignments/correlations. In fact, representations extracted using NCE (and its extensions) reported top-1 accuracy in several benchmark datasets for audio-visual classification {{cite:87cf0258293519a963808a1763fc865b78bf90d4}}, {{cite:a8a06a322e62f0ff38e74f9f74a13074d22a1d30}}. However, NCE is mainly used for alignment between modalities, and semantic discrimination is left to a later phase (as a downstream task) {{cite:b4b94c7db84f23d4c5f7e1f1dadcffdea226d4d7}}.
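The contrastive objective referred to above is commonly instantiated as InfoNCE over temporally paired clips; a minimal sketch follows (the temperature value is a placeholder):

```python
import torch
import torch.nn.functional as F

def info_nce(audio_emb, video_emb, temperature=0.07):
    """Contrast temporally co-occurring audio/video pairs (the diagonal of
    the similarity matrix) against all other pairings in the batch."""
    a = F.normalize(audio_emb, dim=-1)
    v = F.normalize(video_emb, dim=-1)
    logits = a @ v.T / temperature            # (batch, batch)
    targets = torch.arange(len(a))            # positives on the diagonal
    return F.cross_entropy(logits, targets)
```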
| d | 8346b671644cbaf489927202eaa6bd14 |
Caceres et al. {{cite:3768859cab56bd44480056c91b2c08f4e9972258}} obtained an upper bound for the metric dimension of the Cartesian product of graphs {{formula:1f22f505-4216-4adb-929d-4dced78b6715}} and {{formula:817b8a59-b459-4220-9a5b-2ec16af93ee3}} in terms of {{formula:e604a03f-b981-4897-8616-bd4d3f94db29}} and {{formula:3ae065da-95f1-4a86-9915-e090b0bae750}} . They also obtained a lower bound for the metric dimension of the Cartesian product of a graph {{formula:d68a1383-86a1-45d9-979c-9ac29f49f1c3}} with itself in terms of {{formula:f19b478c-2f76-4a3d-92a9-1a0ae2133dcc}} . Hence, computing the doubly resolving number of graphs is useful for computing the metric dimension of Cartesian products of graphs. Moreover, studying doubly resolving sets is interesting in itself. It is clear that for each graph of order at least 2, we have {{formula:d1e434fc-1cc8-426d-9fc4-026bf904708f}} . Caceres et al. found the following upper bound for {{formula:67085baf-30ab-4c31-a8db-2b495f33d11c}} .
| i | 8bfc2b9a7509c059c70e3ec82e41a91c |
Earlier studies on balanced neural activity considered a priori the limit of infinitely many neurons in sparse networks {{cite:d9268fb175e3461e9d7161c3882f07bdf288d697}}, {{cite:f330a1719c2aefbeadd954fdf759c985977b51f5}}, {{cite:8b16a9fd19693651bf0d00100cfeb59845be2bc2}}, {{cite:5bc7f019a72553cd2696b3aa9b8481f517d0010e}}, {{cite:573d4b8a1d4f88b677039cfc20e69565aa6a4f44}}. In this mean field limit the collective dynamics is well understood. In particular, in infinitely large networks of binary neurons with balanced excitatory and inhibitory interactions the dynamics are chaotic {{cite:d9268fb175e3461e9d7161c3882f07bdf288d697}}, {{cite:8b16a9fd19693651bf0d00100cfeb59845be2bc2}}. Further studies of finite networks found stable dynamics in weakly diluted networks of inhibitory coupled neurons {{cite:ca16597f40cbe5e033a9c8f0b0f93eb6764fa383}}, as well as in globally coupled networks with dominating inhibition {{cite:1ef93f7f40e34cf68e0796b2eb3591d59b3cc4ee}}. Recent analytical evidence confirmed the existence of stable dynamics in inhibitory coupled networks of integrate-and-fire neurons with a more complex structure {{cite:71a3f286ab8e21d346fcabbcf19cd44383e664fb}}. As the inter-event times that underlie our analysis shrink in inverse proportion to the network size (at a given individual neuron spiking rate), the methods applied here, however, are not applicable in a straightforward way to the associated mean field models. Thus, one cannot make strict statements about stability in the limit of infinitely many neurons. Nevertheless, as shown above, generic transients and periodic trajectories in arbitrarily large inhibitory coupled networks are stable.
| d | 28ab05a2f8627930890db54b5cbbd91a |
Quantum computers are expected to provide substantial speedups for a number of practical problems of interest, such as factoring, quantum chemistry, and the simulation of physical models that describe nature {{cite:d0824b50a0a1254e67040a01ecbeef730240d8ca}}, {{cite:36b2554b0f5e9a3913203ef8bf3adedfee34594b}}, {{cite:3ce09d981271cffb2bb3f270907891a8902a1548}}. However, many of these applications require a large number of qubits and gates, which is a demanding requirement for present-day quantum computers.
| i | 4c2f634b90c34c5a57e1552faf7194d0 |
One way to insert sophisticated linguistic information may be through the use of contextualized word embeddings. While other works have explored the use of transformer language models to predict prosodic and stylistic features {{cite:252d07795fac4d673e5d8f1f9ee5aea56f273e73}}, {{cite:8db5a9f2deb493494db8b994d5fe15a678bd049b}}, {{cite:4be762ff62f055a02700b6a9365f0a1c19b7053b}}, {{cite:01a06cb9f45c015a90d2a04ecd084b9598cb5caf}}, {{cite:5122e618e459a3fcb6cb5a5870bf7a15803510ed}}, {{cite:9ffb4a9819b614a16a627901e7b8c608852c34d0}}, {{cite:40e83324211b2ceb4df596ce535d1b3dab9fc7fa}}, it has not been fully explored how much the encoded word representations are actually imbued with high-level knowledge. In other words, do they provide information about the content and context of the message, or do they only provide/reinforce low-level linguistic features such as the likelihood of lexical prominence, parts of speech and position in the sentence? For prominence/pitch accent prediction, a fairly high baseline can be achieved using word majority/accent-ratio alone (i.e., if a lexical item is usually prominent in the training set, it is likely to be prominent in the test set) {{cite:bb2bb63981aea0a1db7872276b9959bab4b61ccb}}. Moreover, {{cite:24b22b4aa6dcc886d628cc2baacba7c60b1b625b}} found that in a binary pitch accent prediction task, using broad word category distinctions (open/content or closed/function) could achieve 68% accuracy; more fine-grained division of the closed class category brought that number up to 77%. In this work, we probe a language model, in the present case BERT {{cite:bec2b1cf2c9cf9891d7f0f1fbb2c16f5fec41719}}, by choosing a testing ground that cannot rely on simple heuristics to achieve good results: the prediction of contrastive focus on personal pronouns. Personal pronouns, at least in the corpora typically used to train TTS systems, are majority non-prominent. (We conflate the notions of prominence and contrastive focus for personal pronouns; when they are prominent, they typically possess a contrastive meaning.)
| i | 7dcf8c67722f94980f2ef35ac29908aa |
which gives us the quadratic variation of {{formula:ed21f930-e6cf-4c1a-afd1-a2da7726210a}} . The conclusion follows from the martingale representation theorem (see {{cite:4bc2edcb3c8ad9657bf1f90a1ca070e29d69d0ea}}).
| r | 032a3f6af0496c9a31711efbdabbb460 |
where {{formula:fe1c43ca-7820-4361-a0f0-552b8e6296e2}} is the synchrotron spectral index, {{formula:95baf284-a98b-4077-8d9a-65b5addd4eff}} and {{formula:9a244913-255a-4a42-8700-3cc03c658d0b}} are the amplitudes of the synchrotron and free-free components at 5 GHz, respectively, and the weak frequency dependence of the free-free continuum is taken from {{cite:126be0d425bd509dfc5e5d8d279565a427463912}}.
For the AME component, we adopted the spectral template of {{cite:c4a2317f78721d60d0af1764070341f67bac9530}}, derived
by averaging theoretical predictions over a variety of ISM environments, using the spdust.2 code
{{cite:3f54fc8492f2cfe9bbce3349ed95e61b9dcc27ea}}, {{cite:819fb4af28a065a46f3f8f2403c3dc2396834bbf}} and neglecting the most extreme
cases presented in Fig. 2 of {{cite:53ff562afbf22ef6b58ac55c7c2b3f5fa3a65b88}}.
We scale the template on its amplitude at 24.6 GHz, {{formula:533b1eee-c4e8-4303-bbcd-9450513baa02}} and, as in
{{cite:c4a2317f78721d60d0af1764070341f67bac9530}}, we keep its peak fixed (it is located at {{formula:1d004ba4-6a65-44e5-aadb-1f1a0b59efa9}} GHz).
We do not include thermal emission from dust, which is negligible in the spectral range we considered (see, e.g., P11).
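Putting the pieces together, the flux-density model described above can be evaluated as in the sketch below (notation ours; the -0.12 free-free exponent is an illustrative stand-in for its weak frequency dependence, and `ame_template` stands for the spdust-based template normalized at 24.6 GHz):

```python
import numpy as np

def sky_model(nu_ghz, a_sync, alpha_sync, a_ff, a_ame, ame_template):
    """Sum of the three components described above. `ame_template` is a
    callable returning the AME template normalized to 1 at 24.6 GHz;
    the -0.12 exponent is an assumed stand-in for the weak frequency
    dependence of the free-free continuum."""
    synchrotron = a_sync * (nu_ghz / 5.0) ** (-alpha_sync)
    free_free = a_ff * (nu_ghz / 5.0) ** (-0.12)   # assumed exponent
    ame = a_ame * ame_template(nu_ghz)
    return synchrotron + free_free + ame
```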
| r | 07d5e34793aa0658f4a01eb4c25dc5c8 |
However, our experiments showed that convolutional architectures
are not limited in this way, and are capable of generalising
to test data that shares no features with the training data.
Moreover, the underlying principles of this approach arose
from models of visual cortex {{cite:0fcd5cdaaa0de26b810430b46d93dbf3e0d86042}},
and have become standard components in image processing
networks {{cite:fea43181dafb1ba2dc615f5d9fe107c72a4f8f8b}}, {{cite:bc6354ce0600b08db0fa8a298185cd90ce4f1aed}}, {{cite:9781b23d35d1bb34cccd35d6bc74b2c7674bff49}}.
In concrete terms, weight sharing allows what is learned about
features present in the training data to be transferred to
unseen features in the test data.
| d | 39fae4f2af8b78377e40e1bba68f110e |
Definition 2.10 {{cite:5961aed91a7abc81baa51985abc9274a6ad24274}}, {{cite:69724fbd7459e07fe519684f9073c7c3ec56a42a}}
Let {{formula:9ed760f4-4fe8-43d4-a60c-ee1e94643726}} be a Hom-associative algebra and {{formula:fd3281c7-d6ca-4ce1-9b65-91b1c1515a53}} be a Hom-module.
| r | 88f9841c0cc9d98c1a053c64bbe6f96d |
We extend the universal approximation theorem of {{cite:9b4dec58093331e11410f07c454cea484e34af42}} to Theorem REF , where we show that given any measurable operator {{formula:0aa13cbb-e2cb-4850-871b-ec9278a0fa04}} , for {{formula:fdd8e24f-e10f-40c5-b941-23f4f7e91044}} , {{formula:c3e653e1-1142-4a8a-ae98-6003a0e12623}} , with respect to an underlying measure {{formula:3b7d8aae-1254-4057-89a0-b24ef084d5ed}} , there exists a DeepOnet of the form (REF ), which can approximate it to arbitrary accuracy. In particular, we remove the continuity (of {{formula:d03c0041-e906-4d3d-869c-785680af3b7e}} ) and compactness (of subsets of {{formula:0b8423ef-68e4-47c4-91e2-7abbe4452108}} ) assumptions of {{cite:9b4dec58093331e11410f07c454cea484e34af42}} and pave the way for the application of DeepOnets to approximate operators that arise in applications of PDEs to fields such as hypersonics {{cite:cf055b97a1553465881a89d4c4261cb7761b1f89}}.
We provide an upper bound (REF ) on the DeepOnet error (REF ) by decomposing it into three parts, i.e., an encoding error (REF ) stemming from the encoder {{formula:2c3dd7d5-4ff5-46b5-a2d1-d44d317e59fc}} , an approximation error that arises from the approximator neural network {{formula:fa95897a-b50c-4115-b03a-2bdf586a9b1f}} that maps between finite-dimensional spaces and a reconstruction error (), corresponding to the trunk net induced affine reconstructor {{formula:3e243956-c9aa-4f18-a3fd-23819f2c7f6b}} (REF ).
In Theorem REF , we prove lower bounds on the reconstruction error () by utilizing optimal errors for projections on finite dimensional affine subspaces of separable Hilbert spaces (Theorem REF ). This allows us in Theorem REF to prove two-sided bounds on the DeepOnet error (REF ). In particular, the lower bound is explicitly given in terms of the decay of the eigenvalues of the covariance operator (REF ), associated with the push-forward measure {{formula:6f3f7c66-aff3-44cc-a39c-ba1c57e54ae6}} (REF ). Moreover, this construction also allows us to infer the number of trunk nets {{formula:1edf170e-6187-4b34-8cce-9196994e8c41}} and that these trunk nets should approximate the eigenfunctions of the covariance operator in-order to obtain optimal reconstruction errors. Furthermore, we also provide bounds (REF ) on the reconstruction error that leverage the Sobolev regularity of the image of the nonlinear operator {{formula:4b30814c-8554-4394-8674-681316219117}} .
To control the encoding error (REF ) corresponding to the encoder {{formula:07d446f4-9d07-407c-898c-ba202ea3346e}} , which is a pointwise evaluation of the input at {{formula:178e87fe-b56d-4535-8abd-08eb7023d71b}} sensor locations, we construct a decoder {{formula:245c8525-a630-410c-bd4e-2a142cd1e284}} (an approximate inverse of the encoder) (REF ). We show in Corollary REF that sensors chosen at random on the underlying domain {{formula:956f8c9d-e8f4-4ecf-814d-bc494288f0fc}} suffice to provide an almost optimal (optimal modulo a {{formula:cefc0300-83af-4f2c-ba6e-160f819c19a3}} ) bound on the encoding error. This further highlights the fact that DeepOnets allow for a general approximation framework, i.e., no explicit information is needed about the location of sensor points and they can be chosen randomly.
Finally, estimating the approximation error () reduces to deriving bounds on a neural network {{formula:1d4270fd-6f28-4886-8585-523e6e8b9bd8}} that maps one finite (but possibly very high) dimensional space to another. Hence, standard neural network approximation results such as those from {{cite:12ee14511bdbef98452a1ad8c72c1e7ae3834a50}} can be applied. In particular, approximation results for holomorphic maps, such as those derived in {{cite:48a5bcefba389a9c0dad3cf31e1c84205a4aee41}}, {{cite:10cc69ea3b728f7df447be8873cb535ebf59191b}}, {{cite:6f8937c39a1a26636dc9d5f65aae2d97ebf87b9b}}, are important in this context.
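Schematically, the resulting two-sided analysis rests on the three-part split summarized below (a sketch in our own shorthand, with constants suppressed):

```latex
% Schematic three-part split of the DeepOnet error (constants suppressed);
% \mathcal{E}: encoder, \mathcal{A}: approximator, \mathcal{R}: reconstructor.
\widehat{\mathcal{E}}_{\mathrm{DeepOnet}} \;\lesssim\;
\underbrace{\widehat{\mathcal{E}}_{\mathcal{E}}}_{\text{encoding (sensors)}}
\;+\;
\underbrace{\widehat{\mathcal{E}}_{\mathcal{A}}}_{\text{approximation (finite dim.)}}
\;+\;
\underbrace{\widehat{\mathcal{E}}_{\mathcal{R}}}_{\text{reconstruction (trunk nets)}}
```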
| d | 3c660a55bf72426286aee72817fce2ce |
The framework of learning-based volumetric aggregation of intermediate feature maps has been previously introduced by {{cite:265ee0c2f8247cbbcc68bed7e762effe3a814cb5}}, yielding state-of-the-art performance, but it requires a large number of voxels and high computational load since the final output of the network is a voxel-wise representation. We instead use a human kinematic embedding, as mentioned in section REF , aggregating relevant information with a much smaller number of parameters, which enables our method to make predictions in real-time.
| m | cf7556893e6c69aa1113fe175749ea5d |
First, recent astronomical observations {{cite:ce9ca19e6b818c577bc37f0c6611196392d46335}} found that our universe is expanding at an accelerating rate. This result cannot be obtained directly from Einstein's gravity together with his cosmological principle, since normal matter only provides attractive forces. The most widely adopted way to resolve this is to invoke the so-called dark energy, which provides the repulsive force.
| i | 21194b0d4b5e4c53bdaffc713380fb0f |
From property REF , we know that the continuous Newton flow (REF ) has the nice global convergence property. However, when the Jacobian matrix {{formula:3bd09154-470b-4f4e-a94f-ad491a242e77}} is singular or nearly singular, the ODE (REF ) becomes a system of differential-algebraic equations {{cite:0b9aadf0f703b53c7d3e33230b615fee60c4857f}}, {{cite:c7523c00ab2a942a2d700aecd63d3f15f55884a1}}, {{cite:930fef3be32509ac3c085c5fb96aa15b7ac5215e}}, and its trajectory cannot be efficiently followed by general-purpose ODE methods such as the backward differentiation formulas (the built-in subroutine ode15s.m of the MATLAB environment {{cite:5c5a11051fe52925d4be1adf27778231ce3f896a}}, {{cite:0cea4b9da48953d5cfcee2a7fa1532e9a174f591}}). Thus, we need to construct a special method for this problem. Furthermore, we expect the new method to have the global convergence of homotopy continuation methods {{cite:640fde1312b52a13f442309d93351071595bc32a}}, {{cite:30689fe736f918589070cd7feb64d2ae6fe29f52}} and the fast convergence rate of traditional optimization methods. In order to achieve these two aims, we consider the continuation Newton method with a trust-region updating strategy for problem (REF ).
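For orientation, one explicit Euler step along the continuous Newton flow is sketched below; for unit step size it reduces to the classical Newton step. This is an illustration of the object being followed, not the proposed continuation method with its trust-region updating strategy:

```python
import numpy as np

def newton_flow_step(F, J, x, dt):
    """One explicit Euler step along dx/dt = -J(x)^{-1} F(x); dt = 1
    recovers the classical Newton step. (Illustration only: the method in
    the text adapts dt via a trust-region updating strategy.)"""
    return x - dt * np.linalg.solve(J(x), F(x))

# Toy example: F(x) = x^2 - 2, damped steps converging to sqrt(2).
F = lambda x: np.array([x[0] ** 2 - 2.0])
J = lambda x: np.array([[2.0 * x[0]]])
x = np.array([5.0])
for _ in range(25):
    x = newton_flow_step(F, J, x, dt=0.5)
print(x)   # converges toward sqrt(2) ~= 1.41421
```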
| m | 2b9a72fff157160c187ec7dbeff14afa |
The systematic error budget in our lattice calculation is relatively complete.
The remaining systematic effects requiring further investigation are the neglected disconnected diagrams, the quenching of the strange quark, and the use of up and down quarks heavier than their physical values in our calculation. The first effect is Okubo-Zweig-Iizuka (OZI) suppressed and is believed to give only a small contribution in the charmonium system {{cite:ab5146ae15e2d180843aa6b7b0b3e7229cd74ff2}}, {{cite:820d66951981494ba3fec27932e2250327b57338}}, {{cite:89f187e6ca236a425d113e6ed372aeadbc3fefcd}}, {{cite:c3d6358b0ad41c9daf2f0691c24f2f929c259b37}}.
For the other two, previous lattice calculations {{cite:e15da03f510a06c5af7002a1a90d36224ee88923}} indicate that they will also result in only small effects. Thus, none of them
are expected to explain the 28% discrepancy between our lattice result and the PDG-fit value.
| d | 17eb104d8fb6c05cc6ce79c9ee5be7fc |
We also evaluated the augmentations on the CIFAR-10 and CIFAR-100 datasets, with symmetric noise {{formula:902fffcd-1978-4679-a1c8-fac2ea462df5}} and asymmetric noise of 0.4. For these datasets we included all 13 basic augmentations as well as the SOTA methods and their combinations. We also evaluated the SOTA training strategy DivideMix {{cite:4d21fa5de0791c3d1e1f6a145674c9559b5d0356}} with different augmentations. DivideMix's standard augmentations are Mixup, random cropping and horizontal flipping. We evaluate replacing Mixup with CutMix and adding AutoAug.
Results are shown in Table REF . We observe that adding basic augmentations to the baseline greatly improves the results across different noise rates. As noticed on the MNIST dataset, the combination of rc+translation+shear yielded better results than individual augmentations. Using all basic transformations together proved less effective. We also noticed that the random crop (rc) augmentation is essential for improving the results of SOTA augmentation strategies. From the observed results, CutMix+rc+AutoAug and Mixup+rc+AutoAug are the best combinations of augmentations when added to the baseline. The relative improvement over the baseline test accuracy is up to 61.39% under asymmetric noise and 177.84% under high symmetric noise when augmentations are added to the training process. For the standard DivideMix strategy, denoted in Table REF as DivideMix (w/ Mixup), replacing Mixup with CutMix and adding AutoAug improves the results for all the scenarios analysed. Changing the augmentation strategy alone increases the best test accuracy of DivideMix by up to 6% on CIFAR-100.
| r | 143ce1f3faf94f28e09b8b1f4e568c33 |
The transformer encoder {{cite:74252f0df5d6d7443487b03b3d1e8267bf7b2ca1}} alternates multi-head self-attention (MSA) and MLP (multilayer perceptron) blocks. These blocks are interleaved with layer normalization and residual connections (see Fig. REF ), as follows:
{{formula:226a0538-7a7a-4e33-b5e5-8efd4ab0df74}}
{{formula:46c17053-2e42-40b1-96a9-aad5f1da103e}}
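In code, one such encoder layer reads as follows (a minimal PyTorch sketch assuming the pre-norm arrangement of the equations above; the width, head count, and MLP ratio are placeholders):

```python
import torch.nn as nn

class EncoderBlock(nn.Module):
    """One pre-norm transformer encoder layer: MSA and MLP blocks, each
    wrapped in layer normalization with a residual connection."""
    def __init__(self, dim=768, heads=12, mlp_ratio=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, mlp_ratio * dim), nn.GELU(),
                                 nn.Linear(mlp_ratio * dim, dim))

    def forward(self, z):
        h = self.norm1(z)
        z = z + self.attn(h, h, h, need_weights=False)[0]  # z' = MSA(LN(z)) + z
        return z + self.mlp(self.norm2(z))                 # z  = MLP(LN(z')) + z'
```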
| m | 8722780667446c0651932d3eb2be9679 |
where {{formula:e2661ae5-4cf7-4307-8d70-ba148f207910}} is the number of layers, {{formula:7bc230bc-8c2b-47d0-bf6d-8301823bcdf6}} is the filter size of the {{formula:83454651-352e-4faf-90c0-9f0d131b674a}} th layer, and {{formula:3252c984-af59-41d6-87a3-4964aeecf563}} is the stride of the {{formula:2bb79fc2-3c71-47b8-a281-530d6d72b115}} th layer {{cite:4b17213f7c7652c8500acfa5fcbcae0317a4d8df}}.
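The standard receptive-field recursion these symbols describe can be implemented directly (a sketch; the example layer configuration is illustrative):

```python
def receptive_field(filter_sizes, strides):
    """r = 1 + sum_l (f_l - 1) * prod_{i<l} s_i, computed incrementally."""
    r, jump = 1, 1
    for f, s in zip(filter_sizes, strides):
        r += (f - 1) * jump   # each layer widens r by (f - 1) input strides
        jump *= s             # cumulative stride seen by the next layer
    return r

# e.g. three 3x3 convolutions with strides 1, 2, 2:
print(receptive_field([3, 3, 3], [1, 2, 2]))   # -> 9
```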
| i | b4edfac2c605680ae7cbeb9b9f16c3ae |
Despite its achievements, the theory of dynamical systems now requires new ideas to meet the needs of a scientific community that is increasingly dependent on data. In biology and the social sciences, the availability of huge amounts of data is in contrast with missing or poor classical mathematical models. Recently, combinatorial models have gained attention as a potential replacement for classical smooth models. The advantage of combinatorial models is that they facilitate direct algorithmic analysis, which makes them an ideal tool in the context of data. Forman's seminal work on combinatorial Morse theory {{cite:bc0b09ce6ba6ce34ef4363ce114f2149586f489d}}, {{cite:31d22d8997ec7e0944bbc71f7023270bc9e6fdf6}} introduced combinatorial models to dynamics via combinatorial vector fields. Later, these were generalized to multivector fields {{cite:131d6977903feb2775fea074147ba1b7d0586628}}, {{cite:b36d17c61cbb5582194136925179fb6f9251d5e6}}. In recent years, powerful constructions from topological dynamics, including Morse decompositions and the Conley index, have been adapted to this new combinatorial setting {{cite:c3e3cdca31e29043418e98798e9f7ca1baacafd3}}, {{cite:497ecbd1296800db9f7f4fc724b90caa56edeb00}}, {{cite:131d6977903feb2775fea074147ba1b7d0586628}}, {{cite:b36d17c61cbb5582194136925179fb6f9251d5e6}}.
| i | 41b109ea243e2d4219716b737c653ea4 |
Various methods have been proposed for cross-lingual text retrieval, which learn cross-lingual word or sentence representations shared across languages. Cross-lingual word representation methods typically train word embeddings on each language separately, and then learn an embedding mapping between the embedding spaces of different languages {{cite:21ee2bf9af6e810676c27d44122012a3914e2b23}}, {{cite:de4252598326b5e59fdeaf64a77795943198d7d1}}. Then, bilingual word pairs can be retrieved between vocabularies using nearest neighbor search, which is also known as bilingual lexicon induction {{cite:7d920b641fd48ec4e6ea5c0cba0e3dc3a9ca08c3}}, {{cite:46d79db0cfd5245e51ca1fe4c346df66cd837c96}}.
Cross-lingual sentence retrieval is typically achieved by learning a sentence encoder on multilingual text corpora with self-supervised pretraining tasks {{cite:c6801e3d2185f9fbd2dca43d6db85288af4200b4}}, {{cite:8325c4240b563f7a6a586437bc7c156b3a8fe99b}}, or large-scale parallel corpora {{cite:0b5afab63a724b89c4fc72e5ae3fba592d7e69b9}}, or both {{cite:95c255459895bc99a589a4a61ac5d2eaddcbaffe}}. The trained sentence encoders produce language-agnostic sentence representations, which enables sentences to be retrieved across languages.
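A minimal sketch of the retrieval step common to both settings, here for bilingual lexicon induction with a learned linear mapping (refinements such as CSLS are omitted):

```python
import numpy as np

def induce_lexicon(src_emb, tgt_emb, mapping):
    """Map source embeddings with the learned matrix `mapping`, then
    retrieve each word's nearest target neighbor by cosine similarity."""
    mapped = src_emb @ mapping.T
    mapped /= np.linalg.norm(mapped, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    return (mapped @ tgt.T).argmax(axis=1)    # index of the best translation
```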
| i | d5112f894692a7133e1b7acdc47f5972 |
Other authors have also considered the age distribution of the GCs in M31. For example,
{{cite:dbcc8c1df9ddea95bab5367b7e8f0f1f3600c180}} discovered that M31 contains GCs exhibiting strong Balmer lines and A-type spectra,
from which one infers that these objects must be very young. {{cite:b1b53f1989f279d2907b80cf318b03598830ccb6}} and {{cite:a5731a50d39dffbcb64fa40cf50ad5ae1a522bd0}}
confirmed this conclusion. {{cite:271f290a894a5d0ac3582106b27c337cfc740003}} and {{cite:6056cc140999bcef3ce210b408563def11492d8b}} carefully studied the sample of
young M31 GCs. Very recently, {{cite:91a15b9b52d1ba2d78958eff316deb7f84684508}} determined the ages and reddening values of 140
young clusters in M31 by comparing the observed spectra with models, and found that these clusters
are less than 2 Gyr old, while most clusters have ages between {{formula:0389a1a1-2cc7-4d6a-8546-383026513119}} and {{formula:1392dc4f-47ad-4442-9f74-c2c613775a79}} yr.
{{cite:3465f3cda2a208ba025e22ad3baecacb470b8f32}} estimated an age for VDB0-B195D of {{formula:b6ec748f-fbe7-4ddd-8eb0-1ebc832b6952}} Myr based on HST/WFPC2
color-magnitude diagrams (CMDs). The ages of the M31 clusters determined in this paper are in
general agreement with previous determinations, which we will show in more detail below on the
basis of comparisons between our determinations and previous age estimates for individual objects.
| r | 94d3bf920732b6503063284cbde27f03 |
Shallow clustering-based approaches necessitate the hand-crafting of discriminative features and often require the selection of an appropriate kernel. Recently, with the advances in DL, there have been attempts to extend these classical approaches. Most of these methods, such as Deep SVDD {{cite:c12df7ac09d98ba56c62a1fbd455743c6967c921}} variants, extend traditional methods by learning a kernel that maps data to a discriminative high-dimensional feature space. This is usually carried out by optimizing a neural network. These approaches have shown promising results when dealing with non-sequential data. Unfortunately, the temporal modeling of time-series is often disregarded, mainly relying on a simple sliding window. As a solution, Shen et al. {{cite:043cf69302602c150da6d76b5aa446b41117b8c2}} suggest fusing multi-scale temporal features and employing a Recurrent Neural Network (RNN) to model temporal dependencies.
| m | f2a6d6ce13bb0b5e29151f3504c3b3e2 |
In order to find the optimal central segmentation model, we evaluate several configurations of parameters typical for FL such as the number of local epochs performed by each client during every training round and the fraction of clients selected by the server during each round. The process of training each model consists of 15 rounds. The Jaccard score and loss obtained by each model are presented in Fig REF . For each configuration, we check the number of rounds required to achieve a Jaccard score of 0.92 twice. Results are presented in Table REF . We identify that for a fixed number of local epochs, a greater fraction of selected clients results in a smaller number of rounds needed to exceed the score of 0.92, similarly to the trend observed in {{cite:2349e91478146431d159d11bc7417a2949e9b123}}. The highest score (0.924) is achieved by the model trained with 3 local epochs and 3 selected clients in the 15th round of training. This model is later used to generate masks for classification.
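The training procedure being configured above follows the usual federated averaging pattern; a sketch is given below, where `local_train` stands in for a client's local segmentation training and uniform (unweighted) averaging is an assumption:

```python
import random

def fed_avg(global_weights, clients, local_train,
            rounds=15, clients_per_round=3, local_epochs=3):
    """Server loop for the setup above: each round, a subset of clients
    trains locally and the server averages the returned weights
    (uniform averaging is an assumption)."""
    for _ in range(rounds):
        selected = random.sample(clients, clients_per_round)
        updates = [local_train(global_weights, c, local_epochs)
                   for c in selected]
        global_weights = [sum(layer) / len(updates)
                          for layer in zip(*updates)]
    return global_weights
```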
{{figure:85000dac-8f5a-40fe-bebb-dd5ef971cab4}}{{table:da4af323-d14c-4eae-bfad-9e18d94508fb}} | r | 372f9291063c43d3fcdf4170b5e21f35 |
Using the properties of occupancy measures, the reformulated CMDP problem (REF) can be rewritten as an LP in which the optimization variables are the occupancy measures {{cite:11f621ff50f147ab9c37bd4ac8ba12b3423307c2}}, {{cite:1396e5b05dcd19b6cf47b6ff9b9ee328173b293e}}. More precisely, the CMDP problem (REF) and its equivalent (REF) can be written as
{{formula:f554a123-93d7-4117-aee5-31843aceecf6}}
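For context, a generic occupancy-measure LP for a discounted CMDP takes the following form (a sketch; the normalization and constraint notation in the referenced equations may differ):

```latex
% Generic occupancy-measure LP for a discounted CMDP (illustrative sketch).
\begin{align*}
\max_{\mu \ge 0} \quad & \sum_{s,a} \mu(s,a)\, r(s,a) \\
\text{s.t.} \quad & \sum_{a} \mu(s',a)
    = (1-\gamma)\,\rho(s') + \gamma \sum_{s,a} P(s'\mid s,a)\,\mu(s,a)
    && \forall s', \\
& \sum_{s,a} \mu(s,a)\, c_i(s,a) \le d_i && i = 1,\dots,m.
\end{align*}
```

Here $\mu$ is the occupancy measure, $\rho$ the initial-state distribution, and $c_i, d_i$ the constraint costs and budgets; the flow-conservation equalities make the feasible set a polytope, so the problem is a finite LP.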
| m | 9219e2572ae2b64558d7c91b3f2f4af7 |
To evaluate the performance of the proposed event-identification framework, we consider two datasets.
The first is obtained from dynamic simulations of line-trip and generation-loss events in the Texas 2000-bus synthetic grid {{cite:3e5b7da540d90d04b340f598211305bd7d6fd1ee}} using the Power System Simulator for Engineering (PSS{{formula:1b2059d0-7006-481d-96a0-f4cedf4bd4e6}}E).
The second is a proprietary dataset of labeled generation-loss and line-trip events obtained from a large utility in the USA, involving measurements from nearly 500 PMUs.
| r | 281cd15bc62d0208401abda0fed665cc |
We test our de-biasing technique on five datasets: Adult, Compas, German Credit, Medical Expense, and Bank {{cite:8e26640194a51ab6ac106cbe04a1712bec70f916}}. These datasets cover both general cases: where the privileged group is the majority and where it is the minority. For the baseline learning algorithms, we mainly use Logistic Regression (LR) and Random Forest (RF), and for de-biasing, we mainly compare our technique to Reweighing (pre-processing), Prejudice Remover (in-processing), and Reject Option (post-processing). Additional experiments on the Compas data investigate SVM and Neural Network (NN) baselines, as well as other mitigators such as Disparate Impact Remover (pre-processing), Exponentiated Gradient Reduction (in-processing), and Calibrated EqOdds (post-processing).
We exclude the adversarial de-biasing technique because it either fails to de-bias or suffers a significant accuracy drop. We measure fairness using a number of individual and group fairness metrics, including average odds difference, disparate impact, statistical parity difference, equal opportunity difference, and the Theil index. The de-biasing algorithms used for comparison are implemented in the IBM AI Fairness 360 library {{cite:1cc906826a039051f8677c6ae4c282e777e3ec7c}}. Synthetic (non-existing) data is generated using SMOTE {{cite:eadf34788133e70b7b66940c13383d20dc8cf145}} until neither group is disproportionately (dis)advantaged in the training set.
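The balancing step described above can be sketched as follows; the synthetic data, stratification scheme, and metric implementation are hypothetical stand-ins, not the paper's pipeline.

```python
# Sketch: oversample with SMOTE over (group, label) strata so favorable
# outcomes are balanced across groups, then check a group-fairness metric.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.linear_model import LogisticRegression

def statistical_parity_difference(y_pred, group):
    # P(y_hat = 1 | unprivileged) - P(y_hat = 1 | privileged)
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
group = rng.integers(0, 2, size=1000)        # protected attribute (0 = unprivileged)
y = (X[:, 0] + 0.8 * group + rng.normal(scale=0.5, size=1000) > 0.5).astype(int)

# Encode the four (group, label) strata and let SMOTE balance them all,
# so neither group is disproportionately (dis)advantaged after resampling.
strata = 2 * group + y
X_bal, strata_bal = SMOTE(random_state=0).fit_resample(X, strata)
y_bal = strata_bal % 2                       # recover labels from strata

clf = LogisticRegression(max_iter=1000).fit(X_bal, y_bal)
print("SPD:", statistical_parity_difference(clf.predict(X), group))
```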
| r | 4da70174b4cf8638818079c64649fc8e |
We use 64 parallel environments for the A2C algorithm's sampling. For all games, we set the rollout length to 5, the frame stack to 4, the learning rate to 7e-4 with a linear decay schedule down to a minimum of 7e-6, the optimizer to RMSprop with epsilon 1e-5 and alpha 0.99, the value-loss coefficient to 0.5, the maximum gradient norm to 0.5, the entropy coefficient to 0.01, and the discount factor gamma to 0.99. The policy and value functions use the same CNN architecture as {{cite:29a268a7b3aca2570298ab69bdd42c03fb755a11}} and the same RNN architecture as {{cite:a5475697edad4f48a53a470ff3d4c8a2f943778d}}.
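These settings could be approximated, for instance, with Stable-Baselines3's A2C as sketched below; this is an illustration under stated assumptions (including the placeholder environment id), not the authors' implementation.

```python
# Sketch of the hyperparameters above expressed via Stable-Baselines3 A2C.
# The environment id is a placeholder; timestep budget is illustrative.
from stable_baselines3 import A2C
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

def linear_schedule(initial=7e-4, minimum=7e-6):
    # progress_remaining runs from 1 (start) to 0 (end of training)
    return lambda progress_remaining: max(initial * progress_remaining, minimum)

env = VecFrameStack(make_atari_env("BreakoutNoFrameskip-v4", n_envs=64), n_stack=4)
model = A2C(
    "CnnPolicy", env,
    learning_rate=linear_schedule(),
    n_steps=5,             # rollout length
    gamma=0.99,            # discount factor
    vf_coef=0.5,           # value-loss coefficient
    ent_coef=0.01,         # entropy coefficient
    max_grad_norm=0.5,     # gradient clipping
    rms_prop_eps=1e-5,     # RMSprop epsilon (SB3's RMSprop alpha defaults to 0.99)
)
model.learn(total_timesteps=10_000_000)
```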
{{figure:23cda8a9-f2e2-4e28-acb1-641189246e85}}{{table:bcea34da-9847-493b-8854-5bef69044c84}} | m | 437d0e6e0df944d5207f7035d375a1f2 |
Extensions to robust PCA. While our work focuses on matrix
completion, a natural extension is to further consider partial observations
with outliers, i.e., robust PCA. As mentioned, Zhang et al. {{cite:b5a4e739beb03ea9cb65991e3a06c2a6e20dcf9e}}
have studied this problem (with full observations) and provide an error
guarantee of order {{formula:8e6ac887-3505-4b95-9158-f73859f1b087}}, which is sub-optimal in its
dependency on the problem dimension. By contrast, a vanilla least-squares
estimator with noise-size-dependent choice of {{formula:3b62ba38-73a3-41f3-a5de-daac14c97952}} has been
shown to be optimal {{cite:447ffc2a7d04d56583ed1d8f5a1c730104a333fe}}. It remains to be seen
whether one can devise an optimal tuning-free method for robust PCA
with noise and missing data.
Inference for square-root MC estimator. The current paper discusses
solely the estimation performance of the tuning-free estimator. As
statistical inference for matrix completion is equally important,
one would like to develop inferential procedures for the square-root MC estimator,
as has been done in {{cite:a2b3f1289c764c04515499ddd00509380597e97d}} for
the vanilla least-squares estimator.
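For concreteness, the square-root MC estimator referred to here can be sketched, by analogy with the square-root lasso, as

```latex
% Sketch of a square-root matrix completion estimator; normalization
% conventions in the cited works may differ.
\widehat{X} \in \arg\min_{X}\;
  \big\| \mathcal{P}_{\Omega}(X - Y) \big\|_{\mathrm{F}}
  \;+\; \lambda \, \| X \|_{*},
```

where $\mathcal{P}_{\Omega}$ retains the observed entries and $\|\cdot\|_{*}$ denotes the nuclear norm. Because the data-fit term is not squared, the optimal $\lambda$ can be chosen independently of the noise size, which is what makes the estimator tuning-free.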
Robustness to non-uniform design. In high-dimensional linear regression, optimal tuning-free methods have been developed that adapt to both the unknown noise size and the design matrix. In the matrix completion setting, the design is governed by the sampling pattern, which is assumed to be uniform in the current paper. It is of great interest to develop robust, tuning-free approaches for noisy matrix completion with non-uniform sampling that improve over the max-norm constrained estimator in {{cite:a45033a1ba9f0ab6250219c43b946931c097fc8b}}.
| d | 166d4efc8debf55e9439d374398218c0 |
We have demonstrated that image representations extracted from pre-trained deep residual networks can be effectively used for benthic marine image classification in general and kelps in particular. These powerful and generic features outperform traditional off-the-shelf CNN features, which have already shown superior performance over conventional hand-crafted features {{cite:64dc39eafe2ba4a5337fa6402fe70246f841aa37}}, {{cite:8e473ce45ebd332ec09be5f49fbef002714ebd9b}}. The sibling and inclusive hierarchical training methods further improve performance over flat multi-class classification. The two methods perform comparably, but the sibling method is preferable because it trains faster. Furthermore, estimates of kelp cover from automated DRF classification closely match those of manual expert classification, with the added advantage of faster processing times. This work provides evidence that automatic annotation can save resources and time while providing effective estimates of benthic cover.
| d | f22805893ba42d2b7f64e62e7deed9f3 |
Unlike disentanglement methods {{cite:0ebbec87d0a4032b8232e8186cedced51fc0c40f}}, {{cite:ec40d22583c12ec2897b8f5d32260042a7cc55fe}}, the proposed CS-CADA does not require specially designed modules to extract domain-invariant content and domain-specific style representations. Instead, CS-CADA captures anatomical representations through shared convolutional layers and normalizes each domain's style distribution to a common space via DSBN, all within a single unified network. In contrast, disentanglement methods typically separate content and style representations with different networks and additionally introduce generative adversarial networks to discriminate them, making them more complex and harder to train than CS-CADA.
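A minimal sketch of the DSBN idea is shown below: convolutional weights are shared across domains while each domain keeps its own batch-norm statistics and affine parameters. It is illustrative only; CS-CADA's exact architecture may differ.

```python
# Domain-specific batch normalization (DSBN): shared convolutions, per-domain BN.
import torch
import torch.nn as nn

class DSBN2d(nn.Module):
    def __init__(self, num_features, num_domains=2):
        super().__init__()
        # One BatchNorm2d per domain: separate running stats and affine params.
        self.bns = nn.ModuleList(nn.BatchNorm2d(num_features)
                                 for _ in range(num_domains))

    def forward(self, x, domain):
        return self.bns[domain](x)         # route through the domain's own BN

class SharedConvBlock(nn.Module):
    def __init__(self, in_ch, out_ch, num_domains=2):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)  # shared across domains
        self.bn = DSBN2d(out_ch, num_domains)

    def forward(self, x, domain):
        return torch.relu(self.bn(self.conv(x), domain))

block = SharedConvBlock(1, 16)
x_src, x_tgt = torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64)
f_src = block(x_src, domain=0)             # normalized with source statistics
f_tgt = block(x_tgt, domain=1)             # normalized with target statistics
```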
| d | 78e287923517942bd97b2f37b9f377b9 |
The primary modeling parameters for each network architecture are defined in Section REF . Most of these parameters are held constant across all datasets being modeled. We do not conduct a grid search over parameters when training the neural networks and their OctConv variants: the objective of these experiments is to compare the augmented convolution layers, not to obtain the optimal model. The only exception is the number of LSTM units in the LSTM-FCN and LSTM-OctFCN models, which were optimized in previous work and are reused in our study {{cite:7c992e0eae8f6209f9abdf31e5a52cdf125b5e54}}, {{cite:94dc9602fb410294e087e5fa590e88bab718a8b0}}.
| m | 89243c2614cfb5e6086cc67d9ff09bf3 |