Previous work on multisensory integration in robotics (e.g., solving audio-visual {{cite:60d630a4fab6cf8cf511da47a064951d0d834778}} or visuo-tactile correspondence {{cite:181e126a19d6a3e05f41f826f0d1710bb293b27f}}) required large-scale data collection on real robot platforms. However, there are advantages to a low-cost platform in which effective pilot data can be collected in simulation. We present an open-source framework for the quick and efficient simulation and recording of synchronised sensory data across multiple modalities.
We focus on medical image segmentation for validating the efficacy of the proposed VMP framework. We employ three different datasets and compare VMP with two state-of-the-art segmentation networks, a deterministic U-Net and a Bayesian U-Net {{cite:b7c2b058af5f4183ff8cbd6fe8069a2af999087c}}, {{cite:8c1380a35e9151820ca4806f0087d638a200655a}}.
In this work, we take a Shannon theoretic perspective on the canonical conditional disclosure of secrets (CDS) problem and seek capacity characterizations where the secret size is allowed to approach infinity, while most cryptography work {{cite:14ec65fa4722be94ed14b989633cd81e5c0b3a2f}}, {{cite:5bdca23c9bac932a09d159d2bc391d9dbb75f9c2}}, {{cite:72a447db933284e7e626cf2a112029591dfff56d}}, {{cite:fedcae52e9076b308720442ff7d35e4ea5488772}}, {{cite:e5bd39d3a06a7b87a81212c0efb570a8a3d47ede}}, {{cite:a7311b4600724f25247ad77e0a8c18fcbe0964f2}} focuses on the scaling of communication cost with the input size. One exception is recent work {{cite:fe7fb10ec92a046876686c8f19024095a5288c2c}}, where the amortization formulation essentially considers the same rate metric as our work; the difference is that we focus on each CDS instance and pursue exact linear capacity characterizations (impossibility claims included), while {{cite:fe7fb10ec92a046876686c8f19024095a5288c2c}} aims at worst-case rate approximation for all CDS instances.
This Shannon theoretic perspective follows the footsteps of recent attempts in the information theory community on other cryptographic primitives {{cite:25d76d7391c75b2e21c375ee77b6b359f8c2cafc}}, {{cite:d190f5215fa43bdf190c54db561e6c54787c908d}}, {{cite:24b588301726b350e8154455c9b1cfaafe65ad13}}, {{cite:0b47ef42ea3e124d55c1f975a928ee0ac7c59e93}}, {{cite:2a7c80605fc5404e13920a5c1f443465479c85a1}}, {{cite:ed3279518973b4b8e1c7a328a427fb05079149de}}, {{cite:84b2d360229518df1bbac460eca1274807daa348}}, {{cite:ac9d264175aac4a34143d6b97ee456dcdf93af7b}}, {{cite:ffff2d83c9d0dd84af6968c821ee7a99f84fa049}}, {{cite:41b80c0467a1f669e8eda7086dfa195b0523a875}}.
Towards this end, we further develop the noise and signal alignment approach, a variation of interference alignment originally studied in wireless communication networks {{cite:6942d52050fd4f532e67e9b67ed10680dcc2509b}}, {{cite:630f41c6bb6261eb259b74ffd07dd04666021bce}}, {{cite:764213cda77ef138a5025064eae8c226e9fe9f46}} and introduced in {{cite:4237dba35b94e0ad1cb57dfbfe6355e8c0a8e274}}, to characterize the linear capacity of a class of CDS instances that goes beyond the highest-capacity scenarios found in {{cite:4237dba35b94e0ad1cb57dfbfe6355e8c0a8e274}}. Along the way, we identify a general linear converse bound (see Theorem REF ) and a linear feasibility framework that facilitates the design of linear schemes once the target rate value is fixed (see Section REF ). However, these results are not sufficient to fully understand the linear capacity of CDS in general. We conclude by giving an intriguing CDS instance whose linear capacity is open (see Fig. REF ); note that this instance differs only slightly from the solved instance in Fig. REF .
{{figure:be82343b-0a0f-4786-b19d-b6b622b12081}}
We also experiment with other state-of-the-art methods, including Vector-NMN {{cite:4213d39d9e1597f179d7864315270573e8d1ae00}}, MDETR {{cite:9702485d8f177933ebf0e601147be072fa9daf00}}, LXMERT {{cite:05edc8fe0729ffcb77abb11a462f6c4e6886c97a}} and MMN {{cite:d4b123cdb6efede6a81f4e7e85027de272b34385}}. We follow the parameters described in the original papers.
Neural network pruning is a model compression technique that aims to reduce the number of a model's trainable parameters without excessive degradation in performance.
Pruning-at-initialization methods, especially those covered in this paper and listed below, work by assigning each trainable parameter, {{formula:25ac8f96-2bc0-45cc-b04c-5a18beef95d9}} , a score, {{formula:71946d62-4feb-46d6-9273-898550dcb583}} , before any training step is performed. A pruning decision on {{formula:069205fb-2598-414d-9532-2fa47eb0966f}} is then made according to the magnitude of {{formula:dea0bec2-51ad-49f2-975e-721cc4d56dbc}} . We follow the mathematical notation used in {{cite:ff7603e4b4972ee6573e27c41c05083f12b9c482}}, while omitting the layer index for simplicity.
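The score-then-threshold procedure described above can be sketched as follows (a minimal NumPy sketch with function names of our own choosing; the actual scoring criterion differs from method to method):

```python
import numpy as np

def prune_at_init(scores, sparsity):
    """Given per-parameter scores (dict: name -> ndarray), return binary
    masks keeping the top-(1 - sparsity) fraction of parameters by score
    magnitude; a 0 entry means the corresponding weight is pruned."""
    flat = np.concatenate([np.abs(s).ravel() for s in scores.values()])
    k = int(sparsity * flat.size)            # number of weights to remove
    threshold = np.sort(flat)[k] if k > 0 else -np.inf
    return {name: (np.abs(s) >= threshold).astype(np.float32)
            for name, s in scores.items()}

# Example: prune half of four weights before any training step.
masks = prune_at_init({"w": np.array([0.1, 0.5, 0.9, 0.2])}, sparsity=0.5)
```

The masks are then applied multiplicatively to the weights before (and throughout) training.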
We propose to use the popular L1 approximation {{cite:213b17a696a4f0bc439d399f2bfbc2c3875d788b}} to discretize
the local and history parts of the Caputo fractional derivative at {{formula:1520dbe3-ebd4-4e80-b598-931a67a301e6}} :
{{formula:69c1ef0c-3a25-484d-8451-bb0f8a96ba02}}
{{formula:ec9b9e84-c214-4b32-8364-8e337ba5a448}}
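For concreteness, the standard L1 formula on a uniform grid can be sketched as follows (a generic textbook implementation, not the authors' code; it evaluates the local and history contributions at the final time only, whereas a time-stepping solver would update the sum at every step):

```python
import math

def l1_caputo(u, dt, alpha):
    """L1 approximation of the Caputo derivative of order alpha in (0, 1)
    at the final time t_n, given samples u = [u(t_0), ..., u(t_n)] on a
    uniform grid with step dt."""
    n = len(u) - 1
    coef = dt ** (-alpha) / math.gamma(2.0 - alpha)
    # Standard L1 weights b_k = (k+1)^{1-alpha} - k^{1-alpha}.
    b = [(k + 1) ** (1 - alpha) - k ** (1 - alpha) for k in range(n)]
    return coef * sum(b[k] * (u[n - k] - u[n - k - 1]) for k in range(n))
```

For u(t) = t the scheme is exact: it returns t^(1-alpha)/Gamma(2-alpha), the analytical Caputo derivative.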
There is a sizable body of work proposing various attack and defense mechanisms for the adversarial setting. Among them, the current unbroken defenses are based on adversarial training (AT) {{cite:500b92e5dd7056065deae6ed246a8052d9bc2aae}}, {{cite:7f57693a5608ec1640303862947ded481c07e7eb}}, {{cite:8702f2389ec4534aafe45dad033df21f4ba27f0b}}, which uses adversarial examples as training data to protect DNNs against a range of adversarial attacks. For example, projected gradient descent (PGD) is one such strong defense, able to generate universal adversarial examples using a first-order approach {{cite:8702f2389ec4534aafe45dad033df21f4ba27f0b}}. A more recent work {{cite:7f57693a5608ec1640303862947ded481c07e7eb}} encourages the decision boundary to be smooth by adding a regularization term that reduces the difference between the predictions on natural and adversarial examples. Qin et al. {{cite:d1dcfebbf18a86a74cccf1e9e3ecc56a5cfa0228}} smoothed the loss landscape through local linearization by minimizing the prediction difference between clean and adversarial examples. While the various aforementioned approaches can improve AT, they require the generation of sufficient adversarial examples for training. This results in a prohibitive computational cost, proportional to the number of steps needed to generate the adversarial examples; in addition, each iteration requires a back-propagation pass. To strengthen DNNs under adversarial attacks, a biologically-inspired approach {{cite:af4b1dc3c149847648bb5cb9e5da0e7fc9c8cb27}} was introduced to learn flat, compressed representations that are sensitive to a minimal number of input dimensions. Unlike {{cite:af4b1dc3c149847648bb5cb9e5da0e7fc9c8cb27}}, this paper introduces a simpler yet effective approach to model regularization based on input gradient regularization.
A concurrent method is {{cite:1271bb8680c778c9d01f27d9e8d9556deac9c382}}, which improves robustness by imposing input gradient regularization. However, computing such a gradient with respect to a high-dimensional input via back-propagation is quite time-consuming. In contrast, the proposed method approximates the linearized robustness of neural networks by penalizing the classifier's Jacobian norm. Such a Jacobian norm yields salient gradient maps that selectively activate the most discriminative gradients.
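The quantity being penalized can be illustrated with a small finite-difference sketch (our own names; in practice the norm, or a stochastic estimate of it, is computed by automatic differentiation and added to the training loss as a regularizer):

```python
import numpy as np

def jacobian_frobenius_norm(f, x, eps=1e-5):
    """Finite-difference estimate of ||J_f(x)||_F, the Frobenius norm of
    the classifier Jacobian at input x; its square can serve as a
    regularization term in the training loss."""
    x = np.asarray(x, dtype=float)
    cols = []
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        # Central difference approximates the column d f / d x_i.
        cols.append((f(x + e) - f(x - e)) / (2 * eps))
    return np.linalg.norm(np.stack(cols, axis=-1))
```

For a linear map f(x) = A x the estimate recovers the Frobenius norm of A.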
The standard way of adjusting mutual information against chance is through random label permutations of one of the clusterings {{cite:5cb81182bbb3ce5b92c1dd8c878dc5566ebf96b1}}. Unfortunately, this adjustment makes the metric computationally expensive. Specifically, the time complexity of the metric is in {{formula:d354d507-72a6-4ab6-b80e-5598c1fb9301}} , where {{formula:cc4a92aa-22a2-4be6-8a98-c245cc706896}} are the numbers of clusters in each clustering and {{formula:60cb970e-47db-442f-b0a5-a73430db3c81}} is the number of samples {{cite:abbe45e5a556b6d2895f83f17ecd4df33bd2987b}}. By comparison, the time complexity of mutual information is equal to
{{formula:c2b90fcf-d30a-4e33-a1a6-6901f32933d7}} given the contingency matrix of the clusterings, i.e., the matrix counting the number of samples in each cluster pair.
The additional computational effort required by adjustment is significant as the number of samples {{formula:e95e0094-062f-4c9a-bd9a-a1832af8bf31}} is typically much larger than the numbers of clusters {{formula:6123b946-6338-4b7b-9af8-9d4213addff9}} .
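The contingency-matrix computation referred to above can be sketched as follows (a standard formula, with a function name of our own choosing):

```python
import numpy as np

def mutual_information(contingency):
    """Mutual information of two clusterings from their contingency matrix
    (entry [i, j] counts samples falling in cluster i of the first
    clustering and cluster j of the second). Cost depends only on the
    numbers of clusters, not on the number of samples n."""
    n = contingency.sum()
    pij = contingency / n                       # joint distribution
    pi = pij.sum(axis=1, keepdims=True)         # marginal of clustering U
    pj = pij.sum(axis=0, keepdims=True)         # marginal of clustering V
    nz = pij > 0                                # skip empty cells (0 log 0 = 0)
    return float((pij[nz] * np.log(pij[nz] / (pi @ pj)[nz])).sum())
```

Two identical clusterings with two equal-sized clusters give MI = log 2, the entropy of the partition.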
[{{cite:153d4e2f4b67983740412918473f61bdfb7b65e2}}, {{cite:e1307e83227bf8120f5500d604feaff13252cce4}}]
The following conditions are equivalent:
where {{formula:9240b8b3-9ddc-45e9-bdcd-edc7c38b42eb}} , {{formula:45851c95-366c-4faf-827b-94ebc4da3acf}} , {{formula:1c045fa2-bf86-4aba-88eb-635016151744}} and {{formula:29df7be7-f88c-441c-be24-edead5b65469}} is the corresponding pilot matrix for the user devices in group {{formula:e8331150-93ec-4112-85bd-7f9dea3e9d5c}} . We note that the free entropy function of (REF ), which reflects its statistical performance, corresponds to (REF ), so that separately using model (REF ) for each group is equivalent to jointly processing all the groups with (REF ), provided the mutually orthogonal subspaces condition holds. Such a per-group processing (PGP) idea has been successfully adopted in the case of downlink transmission {{cite:9c0c12b3dacc16090a6be3466ad586bbf501a403}}, and our theoretical results provide the theoretical foundations for the PGP strategy in the uplink joint AUD and CE problem.
Although multiple state analysis is not restricted to a specific clustering procedure, in this paper we have used Ward's hierarchical clustering method {{cite:02cef2ddfbab6821b65f11051c86bdc353f89e82}}. At the starting point of this procedure, each instance is considered a cluster of its own; clusters are then recursively merged such that the resulting cluster structure has the minimum cost in terms of within-cluster variance. This objective is reached by minimizing
{{formula:1a6b6e2c-8cf4-4c7e-b831-bb1bacef1b18}}
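The merge criterion can be sketched naively as follows (an O(n^3) illustration of Ward's standard merge cost, computed from cluster means; practical implementations use the Lance–Williams recurrence instead):

```python
import numpy as np

def ward_cluster(X, k):
    """Naive agglomerative clustering with Ward's criterion: repeatedly
    merge the pair of clusters whose union gives the smallest increase in
    within-cluster variance, until k clusters remain."""
    clusters = [[i] for i in range(len(X))]
    while len(clusters) > k:
        best, pair = np.inf, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                na, nb = len(clusters[a]), len(clusters[b])
                d = X[clusters[a]].mean(0) - X[clusters[b]].mean(0)
                # Ward merge cost: increase in total within-cluster SSE.
                cost = na * nb / (na + nb) * (d @ d)
                if cost < best:
                    best, pair = cost, (a, b)
        a, b = pair
        clusters[a] += clusters.pop(b)
    return clusters
```

On two well-separated point blobs the procedure recovers the blobs as the two final clusters.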
A pressing open question in the field is therefore to obtain reasonably tight bounds for the asymptotic secret key rate of CV QKD with arbitrary modulation schemes, that can be easily computed, without relying on intensive computational methods. Without this, it seems rather hopeless to try to address the next important challenge which will concern the non-asymptotic regime.
We solve this problem here: we give an explicit analytical formula for the asymptotic secret key rate of any CV QKD protocol. While we focus more on the case of heterodyne detection, our bounds work just as well for protocols with homodyne detection {{cite:d2cb4695a85e492e2bc9c3823be2b0368ccdb10b}}. Our formula matches the numerical bound from Ref. {{cite:7c6caaec84c4eb6c2db7dbc5c8b124417f9e791c}} in the case of {{formula:3d5192d6-064d-4b0a-b5b8-3e765c4aa157}} -PSK modulation of coherent states (except in the regime of very low loss combined with high noise, which is not relevant for experiments) and recovers the known values in the case of a Gaussian modulation.
Our results show that relatively small constellations of size 64, say, are essentially enough to get a performance close to the Gaussian modulation scheme. A major advantage of the quadrature amplitude modulation such as 64-QAM over QPSK (in addition to the much better secret key rate) is that it allows for implementations with large modulation variance, and therefore bypasses the need to work with an extremely low signal-to-noise ratio (SNR).
There are reasons to believe that the additional Gaussian in (REF ), introduced by the functions {{formula:e627fd09-6188-45d9-815c-e781ac2b950b}} , will lead to an improvement in the convergence of the series () obtained in the framework of the collective variables method {{cite:d5921515462cebbbcc5a9d06dee6fd7d44f93ce0}}, since a similar procedure was successful in constructing a description using an effective potential {{cite:5a313736e41becafb762e3f6966b399921560f97}}.
| d | 7106e8caad0ab7ee9b6c6b1583ac2ff5 |
We have proposed a deep learning approach for event-based human pose estimation from a single event-camera. Our method aggregates events into synchronous tensor representations to feed a multi-stage Convolutional Neural Network. Our architecture predicts three orthogonal heatmaps which are triangulated to obtain the final 3D pose.
We validated our approach on the event-based DHP19 dataset, where it showed satisfactory per-movement performance against the DHP19 stereo approach {{cite:10261d4af3fe26a3fab2855cb15f68a2741dcb30}}. Moreover, we proposed Event-Human3.6m, a new dataset of simulated events derived from the standard Human3.6m {{cite:e61dbcfa02277f7912ea06ee0975d80c7d4053a9}}. Event-Human3.6m extends DHP19 with more challenging movements and actions. We conducted experiments on the synthetic dataset and adopted a cross-subject protocol comparable to standard RGB testing. Although we recognize the differences between synthetic and RGB datasets, our proposal achieved an accuracy comparable to RGB approaches. These experiments demonstrate the effectiveness of our method.
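The triangulation of three orthogonal heatmaps into a 3D joint can be illustrated with a toy argmax-based sketch (our own simplification and plane assignment; the actual pipeline may use soft-argmax or a least-squares fit over all joints):

```python
import numpy as np

def triangulate_from_heatmaps(h_xy, h_xz, h_yz):
    """Recover a 3D joint location from three orthogonal 2D heatmaps by
    taking the argmax of each projection and averaging the two estimates
    available for every coordinate."""
    x1, y1 = np.unravel_index(np.argmax(h_xy), h_xy.shape)
    x2, z1 = np.unravel_index(np.argmax(h_xz), h_xz.shape)
    y2, z2 = np.unravel_index(np.argmax(h_yz), h_yz.shape)
    return ((x1 + x2) / 2, (y1 + y2) / 2, (z1 + z2) / 2)
```

With consistent peaks across the three planes, the three coordinates are read off directly.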
Here we provide more details on the effect of LICE, i.e., adding our loss function {{formula:54d0ea0a-8205-4548-aa0c-b37b92c5c277}} to the cross-entropy loss in MagNet, DGCN, DiGCN, DiGCN_app, and DiGCN_ib. For a fair comparison of the methods, hyperparameters are not tuned; hence the methods may not achieve their optimal performance on the data sets. For Cora-ML and CiteSeer {{cite:cdb9265facb67a7f3aba3f4d174d58004149c321}}, we discover that their “ground-truth" labels do not give strong cut flow imbalance between classes, as shown in Figures REF and REF . Hence, as an imbalance-driven method, DIGRAC could not achieve leading performance in classification accuracy, as indicated in the first row of Table REF and that of Table REF .
Note that we train DIGRAC in a semi-supervised manner, applying cross-entropy loss and triplet loss on all training nodes, i.e., treating all training nodes as seed nodes. Details of the semi-supervised settings are discussed in sec:semisupervised.
We use cross-entropy as the loss function for all the GNNs we compare against.
Overall framework
The overall framework of our proposed method is shown in Fig. REF , where twin ResNet-50 networks are used.
The feature encoder networks with ResNet-50 backbones are trained on the labelled and unlabelled data points using the Sliced Wasserstein Discrepancy (SWD) loss. The latent spaces of the two networks, {{formula:43ce1fdc-57b9-404b-9a6f-0480d6e534d3}} and {{formula:aa533e30-874e-4bfe-8408-ff028f15550f}} , are shared to create a domain-invariant space {{cite:308ebdeed5e0592a377f34844ed27be4355f5d8a}}, {{cite:6c450ef344b3f0e3eac12cac9e21ef61eeaf6213}}, {{cite:45b1f3d3ac30df22af3fa7e58478a395f5daa1f3}} and effectively improve knowledge transfer across the two networks.
The individual classifier parameters are learnt in a supervised manner using the combined focal loss {{cite:e5f95c958fa93735f64dcc7b6a95a88291265139}} of both networks. Finally, the outputs of the individual classifiers of the two networks are passed to a fusion block that predicts the classification outputs by taking the weighted average of the two networks, where the weights {{formula:3459eb4c-7749-46e5-98a1-7af2abf3dcd8}} and {{formula:1fb1b150-b81e-461f-8ce3-402328c850a5}} are learnable parameters learnt using a least-squares approach.
{{formula:2534f07f-4262-4a2e-b484-a7784d7ae20d}}
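The least-squares fit of the two fusion weights can be sketched as follows (a minimal formulation of our own, with hypothetical names; the paper's exact objective and targets may differ):

```python
import numpy as np

def fit_fusion_weights(p1, p2, y):
    """Least-squares fit of scalar fusion weights (w1, w2) such that
    w1 * p1 + w2 * p2 best matches the targets y.
    p1, p2: (n, c) class scores from the two networks; y: (n, c) targets."""
    # Flatten so each (sample, class) entry contributes one linear equation.
    A = np.stack([p1.ravel(), p2.ravel()], axis=1)
    w, *_ = np.linalg.lstsq(A, y.ravel(), rcond=None)
    return w  # (w1, w2)
```

If the targets are an exact mixture of the two score matrices, the fit recovers the mixture weights.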
The dynamics of active colloidal particles, such as natural microorganisms like bacteria or algae {{cite:1ce4a2b539d97afb9cd4776da2596cdf1f94b763}}, {{cite:0d134f9172118ddadc0b94a0b9d7bef3ab35542e}}, or synthetic swimmers, is described by active Brownian particle (ABP) models {{cite:f661e36d4dbfb525b996c0ea9651d1e8de184281}}, {{cite:a00b3ae4caeb63773094bc510018786422ea52aa}}, {{cite:1fdb865caf4fd75ec5fa320b9e57e1e86fd66a71}}, {{cite:1b38958747c32fb7150ba7b8251bcc9ccf02ff7e}}, {{cite:a375c1465c63328980e3cc90a1c80def538e45c7}} in which the active current depends non-trivially on the local curvature of the underlying density profile. This adds an activity term to the standard description of binary phase separation in equilibrium systems, also called passive model B {{cite:760e84eb468a5c0f15fbbf3e2ce5db7f46aa436e}}, {{cite:d27d34b76d5a283e83864ed4e4f9c37625b2ea76}}, {{cite:df8c967b334a4735031959a2dd0aaea6885c20bf}}. The corresponding active model is called active model B (AMB) {{cite:a6aeddc85b5d731115dee75830a5b9629156523d}}.
The study of passive model B {{cite:760e84eb468a5c0f15fbbf3e2ce5db7f46aa436e}}, {{cite:d27d34b76d5a283e83864ed4e4f9c37625b2ea76}}, {{cite:df8c967b334a4735031959a2dd0aaea6885c20bf}} is useful for understanding phase separation in equilibrium binary systems {{cite:760e84eb468a5c0f15fbbf3e2ce5db7f46aa436e}}, {{cite:d27d34b76d5a283e83864ed4e4f9c37625b2ea76}}, whereas active model B provides an understanding of phase separation in many natural and biological systems. Moreover, artificially designed active Janus particles are useful candidates for technological and pharmaceutical applications {{cite:2c6ca5967ef70055c55ce59202ca681c0ee2a8ab}}, {{cite:9ed0e102d66757de5dcf95592da27f77c9ccff45}}, {{cite:b5b89d9f6d257eb071dc401bcbdebf1feb352763}}.
Active Brownian particles predominantly exhibit short-range steric repulsion {{cite:df5e9423f396ba5aa30e22d5fb20351045bc78d1}}. The presence of activity leads to fascinating behavior such as coherent motion and phase separation without any external parameter or temperature quench {{cite:5c6253d6f757df5a215d9db95130f2d5d80ba048}}, {{cite:1ce4a2b539d97afb9cd4776da2596cdf1f94b763}}.
The phase separation of systems of self-propelled (active) particles has been studied numerically {{cite:df5e9423f396ba5aa30e22d5fb20351045bc78d1}}, {{cite:cda0d9614382f74f285d612ff2069e8a523c66c9}}, {{cite:51429e00566f513dc340a29e1858f82bc8976cb9}}, {{cite:a74b31ae47dd9fd52e5fbe7d37b0f01839e0c61a}} and, to some extent, experimentally {{cite:ae7a315dadccaa576ba1a626e113d853aea41f27}}, {{cite:b15d6b3877beb8dbf03328592183936e2db323f5}}.
Fortunately, modern e-commerce websites contain heterogeneous sources of information, e.g., numerical ratings, textual reviews, and images, which can be utilized to aid recommendation.
Through textual reviews, a user explicitly expresses her affinity towards an item. In both rating prediction and top-N recommendation, review data has been widely identified as useful for improving recommendation performance.
Ganu et al. {{cite:a4a8e2cd2ae2504e5aaf6ae33a345a29a6053e7d}} found that text information can be used to assist the recommendation procedure.
Later, researchers {{cite:e2a50ecef84cbaace7b8fbef3748728430f1c724}}, {{cite:6335d6818d9b36acc6d0d72560e21741612388cb}}, {{cite:3d3384fa3b5da3050725c9d9f1e5a27c84fadea0}}, {{cite:12fc22beb5aad8fa7bc7759f063b27c8da5125ff}}, {{cite:2cbbf83f8e7df899ccc71d9f71d944043b0dc1e7}}, {{cite:a0c706e8d1e26a7a79e54ea54fdca991cff1f671}}, {{cite:0d2abfb1b19b8fbe2af86dee5fdeea3d6f0825b6}}, {{cite:4c40d52a5d3d7e92136bc2695718fedeb0111493}}, {{cite:0e50a541d06c0b3dcd0b6f1d414be27dc27eb163}}, {{cite:3057427a140f089d7add13d56241e401cf6af6e0}}, {{cite:c99d91408f82155abaed7303aeadd00972d9cadc}}, {{cite:f3026cc6d1659a39c21b46be4fb27bbbbef8a603}}, {{cite:eb0cfadf46223a893c0aa3196b7382c71390190c}}, {{cite:b8a141dab5c0bdc4b6707d4f24e4fa4ba8760ad8}} kept making progress in effectively incorporating review information into the task of recommendation.
However, as pointed out by Peña et al. {{cite:a5086acffa0e12b0d6951e69a7eb74325810de0d}}, the exploration of textual reviews for recommendation is largely limited to rating prediction, and the effectiveness of review-based models for top-N recommendation is still underexplored. This gap motivates our work.
The {{formula:8519bace-4022-4d9b-8138-11b0f3b2b694}} values obtained with the different reference sources and their average are illustrated in Fig. REF . As already found in {{cite:8f64275bd69ccaff612c5047f4f6d29c2a00c43c}} for the reverberation mapping AGN sample, the reference trends of decaying outbursts yield systematically larger {{formula:c7989cd7-a43b-4242-94e5-ead285953afe}} values compared to those obtained from the rising trends. For each obscured AGN, several {{formula:93b97dd2-1674-405d-aec5-45fe8441abeb}} values obtained from different reference sources and their average appear to be broadly consistent with the value obtained from megamaser measurements (a quantitative comparison is carried out in the next subsection). The only noticeable exception is NGC 1194, for which the X-ray scaling method yields values significantly lower than the maser one. This discrepancy however is not surprising, since this source has a fairly low accretion rate and in that regime the X-ray scaling method cannot be safely applied.
{{figure:0e604407-2f52-4d56-a80c-9d6d725cedb7}}{{table:45c754ce-203b-435a-9afe-ad243abbf545}}{{figure:c8ff4be6-a9c2-4429-ba76-5dfe8c20c4eb}}
Figure REF A shows the measured invariant mass distribution for selected {{formula:19a4d5b4-2528-4d30-ab4a-d464fe7ae651}} pairs with {{formula:c19559cd-a5fb-42de-bdf3-fa6471771ae8}} MeV from Au{{formula:336480eb-d9f5-4a9d-a4d8-180351d039eb}} Au and U{{formula:cd7160b2-1f72-4931-a92f-147f6ecde317}} U collisions (note, we use capital {{formula:5d515dd2-bc98-4e65-b076-760d0a266bb8}} to denote the momentum of the {{formula:2789ef5f-e5ef-42b0-ab3c-85e2366a8078}} pair, i.e. the {{formula:3d6e3ac8-dcdc-4c69-a567-82684662937f}} ).
The data from Au{{formula:e6e8f7ba-6109-4d38-bb92-a0a2a55e76ab}} Au and U{{formula:0992d60f-e05b-4f0a-9d3a-5667890f4751}} U collisions were collected with the STAR detector at RHIC in 2010 and 2011, while the data from {{formula:405aad5d-b81d-4558-af98-a44eb648bb41}} Au collisions were taken in 2015 with additional tagging of the forward diffractive protons {{cite:f2878e8b184dfad945093f0f56245503f7f4b05e}}.
A broad prominent peak is observed in the invariant mass distribution around the {{formula:ab60e210-596d-468f-bac3-61cfdc344353}} mass of {{formula:061b6ad2-1298-4c30-85b6-884a28df2df8}} MeV indicating that the trigger system and track selection criteria are effective for selecting exclusive {{formula:a7fbfaaf-5643-44a4-94f7-36fa8703b984}} processes.
Candidate {{formula:2d74487b-6752-4d3f-b2bc-3cd7ba75885f}} pairs with an invariant mass ({{formula:ad861f66-610f-4975-80c9-70efdd2aa27f}} ) between 650 and 900 MeV are selected for further analysis. Selecting this mass range results in a sample of {{formula:b4614b5e-e13c-46a2-9652-c56c78e9cddf}} pairs predominantly from the decay of a {{formula:d83f75a3-15a9-4781-a8fe-4ea2bdca3200}} vector meson produced in the diffractive photo-nuclear interaction.
The measured {{formula:4939c186-9471-4cc0-8417-5b80ac8650ee}} final state also contains interfering contributions from other production channels with the same quantum numbers (e.g. continuum {{formula:ec9866e6-58a4-4e90-8da8-03a2603aefc5}} from the Drell-Söding process {{cite:41cbd045b1c3653d976e4079178d562129c482e4}} and {{formula:9f8324ab-71cd-48ac-b7e2-129fe164938e}} mixing {{cite:6261c7237486fed93313c0d55bfc5fdb57a785c4}}). The relative amplitude contribution from each channel has previously been studied in detail by the STAR collaboration {{cite:2dd858d2be48ff5066275fb9a9bcd99d1bda5e8d}} and others {{cite:35b579e049d98d27e2c82f557c02d82ca4f166f4}}, {{cite:a003349ec00f6d40b09b0f3236c5f1369b0358d6}}. In this analysis, the selected invariant mass range is used primarily to obtain a sample of {{formula:5e57136e-9b39-4fd0-9d62-fa1f31322cbc}} pairs with a large signal-to-background ratio and with roughly uniform acceptance. Throughout the article we follow the convention of referring collectively to the inseparable {{formula:889db125-cdb1-4225-aabf-797a5670a70a}} final states with the same quantum numbers as {{formula:854e55c6-b60c-4938-a3ee-02a85bdb09f6}} candidates.
On learning the importance and the direction of the word vectors. Our model, by learning how to generate and compose word vectors, has to learn both the direction of the word embeddings and their norm. Considering the norms of the word vectors, as used by our averaging over the sentence, we observe an interesting distribution of the “importance” of each word.
In Figure REF we show the profile of the {{formula:9f771f30-d10a-453a-8c0f-65bd199511f8}} -norm as a function of {{formula:f91904a5-aa77-455e-9c8b-1ef1516cbdb5}} for each {{formula:06c8799f-1b4d-4e1c-9b9e-a01ece56430e}} , and compare it to the static down-weighting mechanism of {{cite:10a1e7aa7e3a1a9dc0a20bf9db2702ce75bc00c8}}.
We can observe that our model is learning to down-weight frequent tokens by itself. It is also
down-weighting rare tokens and the {{formula:8b962e4e-5ff5-40a3-97e6-2cce8cb504bc}} profile seems to roughly follow Luhn's hypothesis {{cite:bfa16af04a32d46c038cf35b536a5fa704954a54}},
a well known information retrieval paradigm, stating that mid-rank terms are the most significant to discriminate content.
Long-tailed learning is an area heavily studied in classification settings focusing on class imbalance. We refer readers to Table 2 in {{cite:8984a6d42b7ac9c54eb6008366001036694d00cb}} and the survey paper by {{cite:a2be636726283185230ae3fa8725be71642963de}} for a complete review. The most common approaches to the long-tail problem include post-hoc normalization, data resampling, loss engineering, and learning class-agnostic representations. However, long-tail learning methods for classification do not directly translate to forecasting, as we do not have pre-defined classes. A recent work by {{cite:ba8fea8822114be88577b85a4dd942af00e1e674}} proposes to use a Kalman filter to gauge the difficulty of different forecasting examples, but such difficulties may not directly relate to the deep neural networks used for the actual forecasting task.
Fig. REF shows the minimum on-line training overhead versus {{formula:bc67a089-73d6-4c55-8d92-5d3f7a0405e6}} , with {{formula:3ad83127-a23c-49b7-a931-ca6abbda65ed}} and {{formula:e6537fb1-3aa7-4073-86bc-9d64f17a4471}} . First, it is observed that Scheme 1 outperforms all the other schemes. Second, as {{formula:9569c66e-214e-4590-8725-de4163740505}} increases, the training overhead of Scheme 2 increases more slowly than other schemes. This is because for the cascaded BS-IRS-user channel estimation in Scheme 2, only the anchor (i.e., A2) transmits pilot symbols and all users estimate the channels simultaneously, thus incurring a fixed pilot overhead as {{formula:cfcaefaa-743c-4a0a-8e62-65fe4a288086}} , regardless of {{formula:09c536bb-1f52-4138-b0c6-b1dfa93d45ee}} ; while only the training overhead for estimating the BS-user direct channels increases linearly with {{formula:da0b1691-98da-4623-8a8d-34ad99e0c3b9}} . As a result, Scheme 2 becomes more efficient as the number of users increases. In particular, when {{formula:2802f8c7-6c83-4207-ace9-386046104559}} according to Table I, Scheme 2 even outperforms the benchmark scheme in {{cite:529f41eb7b8a3b1902791b549a5876ebfc054be9}} and is on par with Scheme 1 when {{formula:85af7394-844d-4fa3-b098-a749590708c5}} .
ODIN. ODIN {{cite:2469f981b25719fc153822f0ac3cb692b5b689cf}} draws inspiration from adversarial attacks {{cite:7a2ca06dda907d9d598600b0b52a32d1d5026d4e}} and finds that including adversarially perturbed inputs improves the final OOD scoring.
Given an input image {{formula:e221f4eb-8a9c-4a6d-8f52-92f684262baf}} and the predicted softmax probability, ODIN first generates the perturbed image by
{{formula:1c950408-8877-4d87-9bf6-6e79c73a9262}}
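The perturbation step can be sketched with a toy finite-difference version (our own names; real ODIN implementations obtain the input gradient by back-propagation, and the temperature T and step size eps are tuned on validation data):

```python
import numpy as np

def softmax(z, T):
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def odin_perturb(x, logits_fn, T=1000.0, eps=0.0014, h=1e-4):
    """ODIN-style input pre-processing: move x a small step against the
    gradient of -log max softmax(logits(x)/T), which tends to raise the top
    score more for in-distribution inputs than for OOD inputs."""
    x = np.asarray(x, dtype=float)

    def neg_log_score(v):
        return -np.log(softmax(logits_fn(v), T).max())

    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e.flat[i] = h
        # Central-difference estimate of the input gradient.
        g.flat[i] = (neg_log_score(x + e) - neg_log_score(x - e)) / (2 * h)
    return x - eps * np.sign(g)
```

After this pre-processing, the (temperature-scaled) max softmax score of the perturbed input is used as the OOD score.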
It is well known that, for each {{formula:42cd74c9-fc1d-43ce-93d1-200402e17387}} , we have
{{formula:e130bf59-8d93-4765-a304-2462bb52920a}}
for all functions {{formula:7f9f8989-903d-470b-973a-ce7a180cfa0c}} as in the statement of the lemma, see
{{cite:be0c8eb27a7c3c229867df8a4454de439a2c0dda}}, {{cite:645e136e23e6698924bde610d68f2951a236d6c1}} and {{cite:c97511072474a110f03d309ab4bd5da68eb50617}} for details. Since, for each {{formula:bb6b0d49-f266-4594-8fb0-a73950b7b3a1}} , the Maclaurin coefficients of {{formula:fa93b256-f282-4322-b0ff-8c66e3e57b01}} have the same property, we may integrate (REF ) as in the proof of Proposition REF to deduce the assertion.
Clearly, any useful method should be able to cope with inconsistent or noisy data.
A popular probabilistic model of comparison outcomes is the Bradley–Terry (BT) model {{cite:074b715a3f68c3b3bd4dd13bc6b085eae9b94deb}}, which we describe briefly in Section REF .
The BT model posits a notion of distance, or similarity, between items;
comparisons are more noisy for similar items than for distant items.
As a consequence, the amount of information we can learn from a comparison depends on previous comparisons.
For example, if we already know that two items are very distant, comparing them might be almost pointless.
This suggests that active (or adaptive) strategies, which select items for comparison based on previous comparisons, might perform much better than agnostic strategies.
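These observations can be made concrete with the standard BT formulation, where the win probability is a logistic function of the strength difference and the outcome variance p(1-p) quantifies how informative a comparison is (a textbook sketch, not the authors' code):

```python
import math

def bt_win_probability(theta_i, theta_j):
    """Bradley-Terry model: probability that item i beats item j, given
    latent strengths; near 1/2 (noisiest) for similar items."""
    return 1.0 / (1.0 + math.exp(-(theta_i - theta_j)))

def comparison_information(theta_i, theta_j):
    """Outcome variance p(1-p): maximal for items of similar strength and
    vanishing for very distant ones, which is why comparing items already
    known to be far apart is almost pointless."""
    p = bt_win_probability(theta_i, theta_j)
    return p * (1.0 - p)
```

An active strategy can exploit this by preferentially comparing items whose current strength estimates are close.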
We make two remarks. First, the number {{formula:7eee40f0-606a-4570-bcda-5120f7354d7f}} of neighbors we use for our weights estimation is always {{formula:03122936-8eab-495f-a1d5-785de88c6a83}} . This is because choosing {{formula:ccfb19b6-6a0b-401c-8d33-55ee5bdb93bf}} , where {{formula:b4dbb50a-841f-4add-a4d5-cb73ebe41083}} is the size of the validation dataset and {{formula:42b2e43a-051e-4c78-b49a-24167ca8fbe2}} is the dimension of the underlying metric space ({{formula:68a7712c-a738-489a-a2cc-093be141f2bc}} in our case), is asymptotically optimal, and {{formula:09276824-eed5-4a3f-87a6-a88f05e86c5e}} is a popular choice for the hidden constant used in practice, see e.g. {{cite:1e27b9991d22e8e6a6863f81a902b2430325a3e5}}, {{cite:16c50098fb59a20a00d1d5cc6702dda6e621e0a7}}. Second, notice that (REF ) implies that the weight of an example could be larger than 1 if (and only if) the corresponding distortion value (REF ) at that example is less than 1. This could happen, for example, if both the student and teacher have the same (or very similar) inaccurate prediction for a certain example. In such a case, the value of the weight in (REF ) informs us that the loss at this example should be larger than the (low) value the unweighted loss function suggests. However, since we do not have the ground-truth label for a point during training, but only the inaccurate prediction of the teacher, having a weight larger than 1 in this case would most likely guide our student model to fit an inaccurate label. To avoid this phenomenon, we always project our weights onto the {{formula:60bf033d-b020-473e-be0b-9692d4c8521d}} interval (Line of Algorithm ). In Appendix REF we discuss an additional reason why it is beneficial to project the weights of examples of low distortion onto {{formula:173c3a6d-417c-4302-a753-78150d835ce7}} , based on an MSE analysis.
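The projection step can be sketched as follows (hypothetical helper names; we use w = 1/distortion purely for illustration, since it matches the stated property that a weight exceeds 1 exactly when the distortion is below 1 — the actual weight formula is the one referenced above):

```python
import numpy as np

def clipped_weights(distortions):
    """Turn per-example distortion values into loss weights and project
    them onto [0, 1]: examples where student and teacher agree on a
    possibly wrong label (distortion < 1) are capped at weight 1 rather
    than up-weighted toward an inaccurate teacher label."""
    w = 1.0 / np.maximum(np.asarray(distortions, dtype=float), 1e-12)
    return np.clip(w, 0.0, 1.0)
```

Distortions below 1 thus all map to weight exactly 1, while larger distortions down-weight the example.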
| m | b7d1c2f3276ba8ade4f2feebdb6d6dca |
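As an illustration of the projection step described above, here is a minimal sketch assuming the weight is the reciprocal of the distortion before projection; this reciprocal form is an assumption consistent with the stated if-and-only-if relation, not the paper's exact formula, and the function name is hypothetical:

```python
def example_weight(distortion, eps=1e-8):
    """Per-example loss weight from a distortion value.

    Assumption: weight = 1 / distortion, which exceeds 1 iff the
    distortion is below 1, matching the text.  The weight is then
    projected onto [0, 1] so that examples where teacher and student
    agree on an inaccurate prediction are not up-weighted.
    """
    w = 1.0 / max(distortion, eps)
    return min(max(w, 0.0), 1.0)
```

The projection only bites for low-distortion examples; weights derived from distortions above 1 pass through unchanged.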
Fig. REF shows the {{formula:e1164df6-b4bb-40ce-bf5c-88795f84fecc}} dependence of the dynamical mass at fixed coupling, {{formula:b81fa462-95ab-4820-bbb4-e353fc28d85c}}. Near the axis {{formula:9b48717b-fdbf-474c-9465-f716d93fd4b3}}, dynamical masses show no distinct {{formula:3d04314c-7a89-4355-8683-33477f9070df}} dependence, and rotational magnetic inhibition is excluded {{cite:8202a8196e0374cb117944626e9edf9b29686787}}. This is because the vorticity vanishes at {{formula:30ee96b6-7887-41a6-9441-998c239a4e42}} in the gap equations {{cite:28bc52a63161ad4dedb302533076f0f9cc1adf84}}, {{cite:ca820677b47f6235ca30815d2d18f9fecd996c1c}}. The condensate falls off near the boundary region, since the condition {{formula:477d5703-4246-492d-92ee-7894109551c5}} is violated there. In this region the chiral condensate grows with {{formula:fe53cdec-4fe1-4f55-a5ea-51a2cbbb1f40}} due to mode accumulation, in which the magnetic field is enhanced for larger angular momenta {{cite:3a5720f7b3d591741f6729df7db0063f809b3ae3}}. We thank Xu-Guang Huang for helpful discussions.
{{figure:59071fb6-5e0b-4eef-bf31-ad094e984b5c}} | r | e0ab637d9514cd31bff9c027bf6f9df5 |
Humans continually abstract concept classes from their input sensory data to build semantic descriptions, and then update and expand these concepts as more experiences are accumulated {{cite:d12771320c1417ef9bd81f471b6ca411fc5f7075}}, and use them to express their ideas and communicate with each other {{cite:a3303f8cdb4fde4497c2b2537ef89630e2fe6a51}}, {{cite:5d21f73541e00aa70b862503a22f91294e5e793d}}.
For example, “cat” and “dog” are among the first concept classes that many children learn to identify. Most humans expand these concepts as concept drift occurs, e.g., incorporating many atypical dog breeds into the “dog” concept, and also incrementally learn new concept classes, e.g., “horse” and “sheep,” as they acquire more experience. Although this concept learning procedure occurs continually in humans, continual and incremental learning of concept classes remains a major challenge in artificial intelligence (AI). AI models are usually trained on a fixed number of classes, and the data distribution is assumed to be stationary during model execution. Hence, when an AI model is trained or updated on sequentially observed tasks with diverse distributions or is trained on new classes, we generally need new annotated data points from the new classes {{cite:89e021a94366fce8223e106b6614d943cdbdd1e3}}
and the model also would tend to forget what has been learned before due to cross-task interference, known as the phenomenon of catastrophic forgetting in the literature {{cite:be9184f1a9ace876fdd3ba4fc0fa7e1cedcb970c}}.
| i | 2e06e1672ee32d5a28da1b0ceb24eb19 |
On the other hand, the results of {{cite:c5af3a7612cdd1221dbb7eafdffb1829aff52bb1}} (9 galaxies ranging between quiescent and starbursting, with 6 having measured halo temperatures) and {{cite:8116fd9d4b9a0378d4deff0c5bbbc7a04cbd1965}} (10 galaxies, 7 of which are starburst) appear mostly inconsistent with our results. Those studies found lower temperatures for both components at kT = {{formula:9e3527b4-57fa-4bef-ba4d-afc04bf712ad}} to {{formula:18bda688-3dcd-4da4-a0a0-3ae0b208f56c}} keV and kT = {{formula:5168f8c8-e50d-4007-80e1-fa0d25870633}} to {{formula:44b97c23-1d48-4ef4-9526-9c2ea56f7c5e}} keV. Only two galaxies from {{cite:c5af3a7612cdd1221dbb7eafdffb1829aff52bb1}} have some observed regions that are consistent with the range of temperatures seen in our warm component. However, those papers specifically extracted CGM regions separately from the disks of the observed galaxies, so they might have missed any hot component that is localized more closely to the disk.
| d | 6554ae8cf2c886a96ca77de6ade9b36c |
Taking energy consumption minimization into account results in new
effects due to the additional terms in the learning rule. First, mutual
information reaches lower saturation values, representing the trade-off
between the quality of information inference and smaller synaptic
weights, which limit energy consumption. Also, the probability of
energetically expensive network responses decreases, and these become
less probable than energetically cheaper responses. Regularly occurring
inputs tend to evoke energetically cheap responses involving fewer
neurons. Finally, taking input noise into account modifies learning
so that very rare inputs are ignored, if they are so rare that they
can't be distinguished from the noise. If the number of neurons in
the network is large enough to allow for a unique representation of
every input, i.e. {{formula:cc7b7c2a-1686-4816-a326-3c4dd5bb87f0}} is approximately 1 or 0, then from Equation
(REF ), it follows that {{formula:31435f56-b69b-4991-9dfb-426790abe743}}
with {{formula:3e9277dc-53b7-48f0-95ca-f0db8279f893}} for {{formula:d302c18f-9050-4fdf-9de3-4d98f873519c}}, where {{formula:a8bb8409-fb98-4fe8-9515-f7d9d2fdc612}} only
reduces {{formula:7fae6e18-a83d-4bec-831d-746ab90bdf61}}. A similar exponential dependency is obtained in
{{cite:1c67689b9e4bc70a9dcfbf37aa30576960784afe}} when the network optimizes information representation
under energy constraints for the probability of a random neuron to
exhibit a given rate value. Both results can be seen as a Huffman-like
{{cite:91c1cefe1c30aeb30da49dda5d3927a463da609c}} economical coding scheme, whereby energy savings are
also taken into account.
| d | e1ae3d0653a186fbbd1fa1e7ae597368 |
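The Huffman analogy above can be made concrete with a short sketch that computes code lengths from symbol probabilities, using the standard greedy heap construction; this is illustrative only and not taken from the cited works:

```python
import heapq

def huffman_code_lengths(probs):
    # Returns the Huffman code length for each symbol index.
    # Each heap entry is (total probability, list of symbol indices);
    # merging two subtrees lengthens every code inside them by one bit.
    heap = [(p, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, s1 = heapq.heappop(heap)
        p2, s2 = heapq.heappop(heap)
        for i in s1 + s2:
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, s1 + s2))
    return lengths
```

Frequent symbols receive short codes, mirroring how regularly occurring inputs evoke energetically cheap responses in the scheme described above.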
There are several directions of future work on configuration models of random hypergraphs.
Beginning with theory, many classical asymptotic results on dyadic configuration models invite generalization.
These include probabilistic characterization of component sizes; cycles and parallel edges; and the diameter of the connected component in various regimes.
We also highlight two applications of potential interest.
The first is motif analysis.
A network motif is a subgraph that appears with higher-than-expected frequency in a given network {{cite:828995c9976461f5641dcb14af1f8eca91cb6632}}, relative to a given null model.
Considering the explicit dependence of this definition on the null, we conjecture that motif-discovery algorithms based on polyadic nulls may highlight importantly distinct structure when compared to dyadic nulls.
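A minimal sketch of motif scoring under a degree-preserving null (triangle counts against double-edge-swap randomizations); the function names and the swap-based null are illustrative stand-ins, not the configuration models defined in this paper:

```python
import random
from statistics import mean, pstdev

def count_triangles(adj):
    # adj: dict node -> set of neighbours (undirected, simple graph)
    return sum(1 for u in adj for v in adj[u] for w in adj[v]
               if u < v < w and w in adj[u])

def degree_preserving_null(adj, n_swaps, rng):
    # Randomize while keeping the degree sequence via double-edge swaps.
    adj = {u: set(nbrs) for u, nbrs in adj.items()}
    edges = [(u, v) for u in adj for v in adj[u] if u < v]
    for _ in range(n_swaps):
        i, j = rng.sample(range(len(edges)), 2)
        (a, b), (c, d) = edges[i], edges[j]
        # rewire (a,b),(c,d) -> (a,d),(c,b) only if the graph stays simple
        if len({a, b, c, d}) == 4 and d not in adj[a] and b not in adj[c]:
            adj[a].remove(b); adj[b].remove(a)
            adj[c].remove(d); adj[d].remove(c)
            adj[a].add(d); adj[d].add(a)
            adj[c].add(b); adj[b].add(c)
            edges[i], edges[j] = tuple(sorted((a, d))), tuple(sorted((c, b)))
    return adj

def motif_z_score(adj, n_null=50, n_swaps=100, seed=0):
    # z-score of the observed triangle count against the null ensemble
    rng = random.Random(seed)
    observed = count_triangles(adj)
    null = [count_triangles(degree_preserving_null(adj, n_swaps, rng))
            for _ in range(n_null)]
    sd = pstdev(null) or 1.0
    return (observed - mean(null)) / sd
```

The z-score quantifies "higher-than-expected frequency": a large positive value flags the candidate subgraph as a motif relative to this particular null.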
A second promising application is in hypergraph clustering and community detection.
A recent paper {{cite:f1a7f30129fb0258ea79ec8cc4b5292c2c86792a}} offers a definition of modularity — a common quality function for network partitioning — based on a polyadic generalization of the Chung-Lu model {{cite:8af0098b22d0c4a113ee5171431efbc745f687df}}.
In this case, the modularity of a given partition may be computed analytically.
The same calculations used to prove thm:analytic can also be used to show that the stub-labeled configuration model will give an asymptotically equivalent expression.
However, for the large class of data sets more appropriately modeled by vertex-labeled nulls, other methods may be necessary.
We anticipate that pursuing these tasks will pose interesting theoretical and computational challenges.
| d | acb0f16c931d120cde21f35ad2b76cc7 |
The majority of existing DL based JSCC approaches are designed to operate at a specific SNR {{cite:b8f83df083fa8d8aea5161fd0bec438137005ba2}}, {{cite:dbdd45e759b0213daf73c37c18abce15ed0f9b70}}, {{cite:cf3328abb00c6aa86ba6835d0f3adafb3eaf3098}}, {{cite:516225037b3baa8a50fe70a4a461617fe0b4d8da}}, {{cite:c00739f898f564d5727f12c4d099d82553afface}}.
There are also some recent DL based JSCC techniques that operate over a range of SNRs, but these involve optimizing a series of networks for specific SNRs during the training stage and selecting the network suited to the current SNR conditions during the testing stage. This leads to serious drawbacks, including higher computational requirements during training and higher storage demands during testing.
| m | 0d8d89cbf5d914841471710b023b5379 |
Quantum computing promises enormous computational power that can far outperform any classical computational capability {{cite:0e462bd9f0b3ea01ad7893b852bd0a9a4fe9aa37}}. In particular, certain problems can be solved much faster than with classical computing, as demonstrated experimentally by Google for the task of sampling from a quantum state {{cite:ed34e37cac808630194ae1b18ffd8743e8a932a2}}.
Thus, the first important milestone {{cite:ed34e37cac808630194ae1b18ffd8743e8a932a2}} in quantum technology, the so-called “quantum supremacy”, was achieved as defined by Preskill {{cite:b25382592022bd7d194674a8b526f818d3e4bb5f}}.
| i | 6ef68f4dbf9272e3c6b928d2620936fe |
With the understanding that the black hole spacetime appearing in the gravitational path integral represents the (maximally mixed) ensemble of black hole microstates, the results of Refs. {{cite:1538d2390d6ba9a578f2dfd594f96651403632e1}}, {{cite:5b4348b2abe6553774e7dd3c1bfe90c55844b5d4}} can be understood in a simple manner.
| m | 3b6262a8d0b3e9b4795207d8544ae9ce |
We utilize the mean absolute error as it has been found to lead to less blurry images {{cite:9be09589b254c830ac12573e0da0db020418ee4b}}.
| m | e463b06f15799a91b9ba4ecc6707c228 |
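For reference, the loss mentioned above is simply the average of per-element absolute differences (a plain-Python sketch):

```python
def mean_absolute_error(pred, target):
    # L1 loss: average of per-element absolute differences
    assert len(pred) == len(target)
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)
```

Unlike the squared error, the L1 penalty does not disproportionately punish large residuals, which is the property usually credited for the less blurry reconstructions.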
Current methods {{cite:3911d168ef22872c786d4e41541790e86808f0cd}}, {{cite:5a58f63d602d78bc6baacb83128e308dbfcf3e4e}}, {{cite:8928b51defa1fdfa716b9507ee7a9e7b3316ae99}} identify the outlier source classes by introducing soft class weights. This means that some degree of transfer from samples in the outlier classes remains possible. Our technique eliminates this issue with a thresholding mechanism that maximizes the between-cluster variance and binarizes the quantification of the transferability of samples from the source domain. We divide the source classes into two groups: {{formula:f9f742d3-cd5a-4e86-b0fc-9663f2f12643}} and {{formula:da6d317c-17bf-436e-904b-197d5c424a1f}}. {{formula:f13884ac-73fa-49ca-b936-5bfa87718020}} denotes classes with weights {{formula:97fc353f-3d87-4e40-9844-af6582fa2d21}} and {{formula:941b3b59-431f-4c7b-992d-ee12325cec70}} denotes classes with weights {{formula:05d69f08-1cda-4edd-aecf-aae5f279c6a2}}. The probabilities of class occurrence {{formula:671130d7-0e3a-4b2e-b11d-6bc2d7f4551d}} and {{formula:2c64b285-85f7-4c94-82fe-1f62656a56b8}} and the class mean weights {{formula:c21224bc-a697-4e8e-a2b4-f0c0328fb8f4}} and {{formula:f3dfe883-716d-41cb-a260-c2ea3fd8106b}} are defined as follows:
{{formula:5c990a4d-c47b-4742-87d3-e2fe55128083}}
| m | 95d9655a83525ca18dc3725b07e39311 |
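The thresholding step described above can be sketched with an exhaustive Otsu-style search that maximizes the between-cluster variance; the helper names and the brute-force search are illustrative, not necessarily the authors' implementation:

```python
def otsu_threshold(weights):
    # Choose t maximizing p1 * p2 * (m1 - m2)^2, the between-cluster
    # variance of the two groups induced by the split at t.
    best_t, best_var = None, -1.0
    for t in sorted(set(weights))[:-1]:     # candidate thresholds
        low = [w for w in weights if w <= t]
        high = [w for w in weights if w > t]
        p1, p2 = len(low) / len(weights), len(high) / len(weights)
        m1, m2 = sum(low) / len(low), sum(high) / len(high)
        var_between = p1 * p2 * (m1 - m2) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def split_classes(weights):
    # True = transferable (shared) class, False = outlier source class
    t = otsu_threshold(weights)
    return [w > t for w in weights]
```

The binary split, unlike soft weights, assigns outlier classes exactly zero influence.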
The analysis of the long-time behavior of infinite-dimensional dynamical systems, including the two-dimensional Navier-Stokes equations (2D NSE), is well investigated in {{cite:64515fccc570a6e86d17604413d83e7acd54839c}}. The existence of a global attractor for the 2D NSE on some unbounded domains, such as Poincaré domains, is obtained in {{cite:6a6a09252a7ed728ff443e4d6a0e46ca0542d6a3}}. Following the work {{cite:6a6a09252a7ed728ff443e4d6a0e46ca0542d6a3}}, the existence of global attractors for the two-dimensional convective Brinkman-Forchheimer (CBF) equations in {{formula:2f17208d-020a-44be-969e-09bb01b85511}} for the absorption exponent {{formula:39ff7ed8-51d1-4acd-9b63-c62cecf0b0a8}} and the external forcing {{formula:e07dec2a-a07a-4fbb-a3c1-6cc101461841}} (from the Gelfand triple {{formula:b19616a2-9ca5-4569-8e6b-4bba7ebb3bb6}}, see section for the functional framework) in Poincaré-like domains is proved in {{cite:c2966b7ba86e04f6ad2712d5593945f4530eda0a}}. The upper semicontinuity of the global attractor with respect to the domain, that is, when the domain changes from a bounded to an unbounded (Poincaré) domain, is also established in {{cite:c2966b7ba86e04f6ad2712d5593945f4530eda0a}}. In {{cite:4487a7471a93a069bb32a0b0253c2881ee1e14ed}}, the author extended the work {{cite:6a6a09252a7ed728ff443e4d6a0e46ca0542d6a3}} and showed that if the external forcing term is in the natural space {{formula:40e056ef-cca0-4599-9fa1-d35b41a393e0}}, then the global attractor for the 2D NSE is compact not only in the {{formula:021ead07-9d7a-4129-8e0d-cab9f0edabac}} -norm but also in the {{formula:78273ff4-c696-4243-b061-ad4d8266fe93}} -norm, and it attracts all bounded sets in {{formula:c61a419a-6ed4-47c8-880f-ff3b74d7f5c3}} in the metric of {{formula:79159d23-e4c4-4d7d-ad45-b95b5960ca42}}.
Whenever the forcing {{formula:0a799eed-0a3a-423f-82fe-5bec4d9bea19}}, the existence of {{formula:40901e20-8914-43e5-9d39-1f1871636e5c}} -global attractors for the 2D CBF equations in unbounded domains for the absorption exponent {{formula:1198ea1f-1ac7-4d98-85c9-f50f1b73048a}} is proved in {{cite:97147b0335a36b8c01e444bbb1a5c57754cf4043}}. It is well investigated in the literature that a large class of stochastic partial differential equations (SPDEs) generates random dynamical systems (RDS) (cf. {{cite:d3a26017d73fec1f32d8601ca45093bbafed7b0c}}). The analysis of infinite-dimensional RDS is also an essential branch of the study of the qualitative properties of SPDEs (see {{cite:a4f2df3ca947028ba87d3306aabea6842ed2fc04}}, {{cite:42a8419b59421eaa18161fa73423659650a0b086}}, {{cite:482183e68eb304fdd512c22e0f4a6e950135252d}}, etc. for more details). The random dynamics of the 2D stochastic Navier-Stokes equations (SNSE) has been studied in {{cite:a4f2df3ca947028ba87d3306aabea6842ed2fc04}}, {{cite:dd7d27ce5f2877ef94cdb90ae34ccc195f9c3914}}, {{cite:09c520950ead5b9865d5d434a7d686b0afb9edb3}}, {{cite:7f43e9982bcb141ada14e6e099aaadc457456963}}, {{cite:cd49fada896caba79c66755ea49cb2be40278cb1}}, {{cite:963d85f31333b159f5eda9a5902f14f9f76fb97f}}, {{cite:9442d9a630d76bb982e7ac3a27467f7b1c496950}}, {{cite:482183e68eb304fdd512c22e0f4a6e950135252d}}, etc. and the references therein.
| i | def58fcc2af5f02229218a5d62973d2a |
In this paper, we implemented several existing GNN models, and benchmarked on different datasets for link predictions. We not only
reproduced the results in {{cite:585348bfc8162e21de47c05b7f798a0c6db9150e}} and {{cite:2804d72460614de9b5677f4ae848d03ceaf66382}},
but also provided a fairer and more systematic comparison.
Our experiments show these GNN architectures perform similarly on other benchmarks for link prediction tasks.
There are several interesting directions to pursue in future work. First, the datasets we benchmarked on are still relatively small; in the future, we could evaluate the models on much larger graphs, especially ones with real-world applications. A second interesting direction would be to implement more recently developed GNN models. Furthermore, we could design and develop our own GNN architecture and benchmark it on link prediction tasks.
| d | aac4cf8ce86a49103ffc99c75d046c0d |
Recently, the role of magic in certain many-body systems known as symmetry-protected topological (SPT) phases (we remark that such phases are also classified by group cohomology {{cite:3055c1cf467d8cd1261f2799cafe996feac14590}}) has been studied {{cite:576f89704569bb2d14076f55406b8c005e7f9eea}}, {{cite:605629756d8916e5db6c08a27125f2d6e11af120}}, {{cite:88a1076903099e8a088aae766d297abec5a23bb3}}, whereby all states within a phase of matter possess magic. Such SPT phases have also been identified as resources for MBQC {{cite:1a6b35a8cd0c2672f7864de95aee99bed3a35076}}, {{cite:79239bf13095d1746c14162f3884aee2ccecb9bd}}, {{cite:8bfc21e7055d97e9466d7daac457e308f219f9e3}}, {{cite:7c20f69f4a9875a8244eccc731cc350dedd0e027}}, {{cite:396b8268fa1c4ec7f0aa81279560562392b0a9b0}}, {{cite:abc038f37dc17917d043591bec62c7a304b308c6}}. It would be interesting to study the role of many-body magic for computational universality, particularly with the example of Ref. {{cite:8bfc21e7055d97e9466d7daac457e308f219f9e3}}, which is universal with only Pauli measurements. Further, it would be interesting to consider the role of contextuality in the fault-tolerant setting – particularly fault-tolerant MBQC {{cite:9e12c25f35d538a339fe4d58b55de904727d9224}}, {{cite:fdc08d6f06e9ab7a38232858220bae87226e4a18}}, {{cite:ca614da29fbc2f28953811ca2ef0890b12f58954}}, {{cite:8be86f3bd22b534f986ba2a01ce67642fd125033}}, {{cite:dc25630268e579ef56ae7c7e2262eb2f8614820b}}, {{cite:f728d056508e9c365dd0f84d68573eb04c8c7c3b}} – where non-Clifford operations require vastly more resources than Clifford operations (and this is indeed the motivation for considering magic as a resource in the present setting).
| d | 718ead3a5262e88323069696089ec3e2 |
On the other hand, in unsupervised detection, detectors are trained only on clean images to identify the ae. These are also known as prediction-inconsistency models, since they rely on the fact that ae may not fool every nn model. The input feature space is often unnecessarily large, and the adversary exploits this to generate the ae. Hence, unsupervised detectors try to reduce the input feature space available to adversaries. To accomplish this, many approaches have been presented in the literature. The fs approach {{cite:e186a3edab7730a54efba7020871bc567556ba62}} measures the distance between the predictions of the input and of the same input after squeezing, and the input is flagged as adversarial if the distance exceeds a threshold. The work in {{cite:e186a3edab7730a54efba7020871bc567556ba62}} squeezes out unnecessary input features by reducing the color bit depth of each pixel and by spatially smoothing adversarial inputs. As reported in {{cite:e186a3edab7730a54efba7020871bc567556ba62}}, it does not perform well against some known attacks such as fgsm. Instead of squeezing, denoising-based approaches, such as MagNet {{cite:2c559f102625c1670c0623253088ddc7a70b16c0}}, measure the distance between the predictions of input samples and of denoised/filtered input samples. It was found in {{cite:d4eea043cb84bc3f0a912f479f5b52fc86c0040c}}, {{cite:a4dcd6cc6a2ac160b674f95513c77a2c7472595c}} that MagNet can be broken and does not scale to large images. Recently, a network/model-invariant approach was introduced {{cite:809684692361e5d185df152a6aa9b597063ca83c}}: a nic method that builds a set of models for individual layers to describe the provenance and activation-value-distribution channels, which ae were observed to affect.
The provenance channel describes the instability of the set of activated neurons in the next layer when small changes are made to the input sample, while the activation-value-distribution channel describes changes in the activation values of a layer. The reported performance of this method showed its superiority over other state-of-the-art models, but other works reported that the detector's performance is not consistent {{cite:45dab512a991845246244c88b61f098d6babedf2}}, that it increases the model-parameter overhead {{cite:c017464d9b4ffdcdada4d9da868a3fbcd85b6c28}}, is time-consuming {{cite:809684692361e5d185df152a6aa9b597063ca83c}}, and increases latency at inference time {{cite:0651850c74f2dc4ce3c5e05ca239e15f16631c4a}}.
| m | 4f983a0dfde082852f9f157e977fe7a2 |
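A sketch of the squeeze-and-compare idea used by the fs approach (bit-depth reduction plus an L1 distance between prediction vectors); the helper names and the fixed threshold are illustrative assumptions, not the cited paper's exact settings:

```python
def reduce_bit_depth(pixels, bits):
    # Quantize pixel values in [0, 1] to 2^bits levels (color-depth squeezing).
    levels = 2 ** bits - 1
    return [round(p * levels) / levels for p in pixels]

def prediction_distance(pred_a, pred_b):
    # L1 distance between the model's prediction vectors.
    return sum(abs(a - b) for a, b in zip(pred_a, pred_b))

def is_adversarial(pred, pred_squeezed, threshold=0.5):
    # Flag the input if squeezing changes the prediction too much.
    return prediction_distance(pred, pred_squeezed) > threshold
```

In practice the two predictions come from running the same model on the raw and squeezed inputs; the threshold is calibrated on clean data.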
where {{formula:058e5551-1e3b-4b63-9a6d-e482df64cd62}} is the number of iterations.
If {{formula:3299e99f-8f97-476a-8721-43cde139054c}} is the identity matrix and {{formula:769109e4-51f8-4779-94fd-af67bdce9bd0}}, the resulting procedure is called Gradient Descent (GD), which achieves sublinear convergence for general smooth convex objectives and linear convergence for smooth, strongly convex ones. When {{formula:510caa8b-82e0-4684-98a6-8aa7c3a37477}} is large, the full gradient method is inefficient because its iteration cost scales linearly in {{formula:ae517130-08e1-488f-8177-0b55fc7b5ec9}}. Consequently, stochastic gradient descent (SGD) has been a typical alternative {{cite:221064aa3b2676059b47fad6ba18bb7400f7f8cd}}, {{cite:8f4a70a7b163ef2e3f7775afef4e9ccbc84455cc}}, {{cite:961ba3fd8dbce179bbd13710d508df67783080d5}}. Such a method samples a small mini-batch of data to construct an approximate gradient, achieving a cheaper cost per iteration. However, the convergence rate can be significantly slower than that of full gradient methods {{cite:df6a5720921e02ce63f10ccb4292bfaed8764d68}}. Thus, a great deal of effort has been made to devise modifications that achieve the convergence rate of the full gradient method while keeping the iteration cost low {{cite:7d484769c1ca4d7c3288a7f69d2d7daddda771b4}}, {{cite:7494dfecc91692bfa64b76b08dbd0b0c01c03801}}, {{cite:050a85435e086a0c8bc393b095c93350fe832a28}}, {{cite:8ec488f0d9e316d88d22e89a971904415f59a27d}}.
| i | a1a61d08fcc5b123f3b00e5e16199d42 |
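The mini-batch idea above can be sketched in a few lines (an illustrative one-dimensional version with arbitrary step size and batch size):

```python
import random

def sgd(grad_fn, x0, data, lr=0.1, batch_size=2, iters=100, seed=0):
    # Minimize (1/n) * sum_i f(x; d_i) using mini-batch gradient estimates.
    rng = random.Random(seed)
    x = x0
    for _ in range(iters):
        batch = rng.sample(data, batch_size)   # cheap per-iteration cost
        g = sum(grad_fn(x, d) for d in batch) / batch_size
        x -= lr * g
    return x
```

For f(x; d) = (x - d)^2, the minimizer of the average loss is the data mean, which the iterates approach up to mini-batch noise; this residual fluctuation is exactly the slowdown relative to full-gradient methods that variance-reduction techniques target.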
In Mrk 335, the variability of the UV band is not correlated with the X-ray band on the
timescale of days to months {{cite:98e3b3de1405b5dad552b81f405f1dbf4ac0499c}}.
This rejects models that predict a close link between UV and X-rays: For instance,
models where the observed X-rays are upscattered UV photons; or models where the UV
is reprocessed emission from an X-ray corona heating the disk, which have successfully been applied
to several well-observed AGN that show a close UV–X-ray
correlation {{cite:059c8c75174baea83e1ca56469995dd11ffea86e}}, {{cite:57c3473fb1960895308ac1f818bcc4644c1515a4}}; or models where UV and X-rays
are due to synchrotron radiation (unlikely for Mrk 335 even though it is a radio emitter).
| d | 0cae5353a079e29a53e9abe95e82bd89 |
The SCLD approach to calculating the phonon spectrum has the great benefit that it allows us to trace the vibrational entropy change back to its microscopic origins via an eigenvector analysis. This opens up the possibility of using not only configurational but now also vibrational entropy as a design principle. The good match with the experimentally measured phonon dispersion in this paper demonstrates that the SCLD approach can be applied to orientational disorder and can hence be a powerful tool for studying many plastic crystals. Force fields for many plastic crystals are readily available in, for example, the OPLS all-atom force field for organic and ionic liquids {{cite:f3dfe3c806194e0c8bc1ceb67b6ac8629de384d4}}, {{cite:6e07986a2133073275a0a936f74fe3661400da40}}, {{cite:3cae83a9bdc511e5017da492f7d9753d9077da6a}}, which has been successfully used in previous plastic crystal studies {{cite:321c360991d75c3c0c787da42cf0abdafeea1d0a}}, {{cite:04e08264fba4079132f4001f60cc7eb6ca8eee31}}. The SCLD approach can be readily extended to include quasi- and anharmonic effects by calculating the dynamical matrix from molecular dynamics {{cite:222f938270dcea2580959aa9276da331888fe922}}, albeit at greater computational cost.
| d | f576442d55441d3adf8af250fba28dcb |
In our framework, conviction occurs if the decision-maker's posterior belief about the defendant's guilt exceeds an exogenously specified threshold. Law enforcement is single-minded and only wishes to maximize the probability that the defendant is convicted. In order to obtain a conviction, law enforcement may gather evidence at a cost, which we micro-found via a continuous-time Wald problem ({{cite:11b9711015da2c4f13a6500aae7bb7e265292d59}}). Importantly, all evidence must be disclosed, both positive and negative, as law enforcement is prohibited from hiding exculpatory findings. Thus, in our framework, although law enforcement may be biased, it is always honest: it must disclose exculpatory evidence, and it may not misrepresent or falsify evidence.
| i | aabd214e8345a1c8ceb707e2826a4873 |
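The conviction rule above can be sketched directly in odds form (hypothetical helper names; the threshold is the exogenous parameter from the text, and the likelihood ratio stands in for the disclosed evidence):

```python
def posterior_guilt(prior, likelihood_ratio):
    # Bayes update in odds form: posterior odds = prior odds * likelihood ratio.
    odds = prior / (1.0 - prior) * likelihood_ratio
    return odds / (1.0 + odds)

def convicts(prior, likelihood_ratio, threshold):
    # Conviction iff the posterior belief in guilt exceeds the threshold.
    return posterior_guilt(prior, likelihood_ratio) > threshold
```

Because all evidence is disclosed, exculpatory findings enter as likelihood ratios below 1 and push the posterior down just as incriminating evidence pushes it up.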
Agglomerator brings together concepts and building blocks from multiple methods, such as CNNs {{cite:5070ca90b5645a19228dafe124ddbc20baf3631f}}, transformers {{cite:04e3c9eb6fe3a6c31eb530efacdd4cfb9cfa1f17}}, {{cite:a3cded16abbd0917b6740ce5355d79f4bc6fe38b}}, {{cite:3aabe290ace58a33fb94e9b8b74caf82aa214aae}}, neural fields {{cite:1c5a428bc1dd853ca04df81977b9f4d81c3fa475}}, contrastive representation learning {{cite:d53d10483f2f2fa48108df9be68e963497cc919c}}, distillation {{cite:85d1d243685632d59d3f141bfbbe01dd6138ad9a}}, and capsules {{cite:6cecabcd2eccdbe75359dd433a99557c498482e6}}.
Here, we introduce the mathematical notation needed to explain the details of the main building blocks of the architecture.
| m | f02374bc2fbc25b41c2ffafb4b6bace9 |
Our proof builds on the asymptotically exact analysis of AMP algorithms developed in {{cite:6fdf3868df4ea944b7430156e0dd2e9b1c36c322}}, {{cite:c022cc46d26597ceb4ba0e09af5814e46d4a1a56}}, {{cite:5bb09a103d1bdd62a4ee7535e1f7cb955c71b047}}, {{cite:4f2d1dee154903184517d69b294b96bfbdacdbcd}}. However, we need to overcome three technical obstacles:
{{formula:fdae3c9d-7c26-42a5-a9ab-a5ff21392459}} Show that any GFOM can be reduced (in a suitable sense) to certain AMP algorithms, whose behavior can be exactly tracked. {{formula:d39b3887-9b2f-4067-bef2-082f78c44305}} Show that Bayes-AMP is optimal among all AMP algorithms. We achieve this goal by considering an estimation
problem on trees and showing that, in a suitable large degree limit, it has the same asymptotic behavior as AMP on the complete graph. On trees it is immediate to see that
BP is the optimal local algorithm. {{formula:9e511cf1-d2da-4a24-8f9e-d097af3928d3}} We need to prove that the asymptotic behavior of BP for trees of large degree is equivalent to that of Bayes-AMP on the original problem.
This amounts to proving a Gaussian approximation theorem for BP. While similar results were obtained in the past for discrete models
{{cite:d72c163cb4df2c0a3076a5d56fb371ccdbb43907}}, {{cite:e402f2a08c1903195c28053002729989308caaf8}}, the current setting
is technically more challenging because the underlying variables {{formula:7fe2ef80-aa3c-404c-b8eb-bb01f6182ddb}} are continuous and unbounded.
| d | c892250a6c667257177aa3cba40dc619 |
We run a set of experiments on the CUB200 (CUB) {{cite:098fb2e50aca5af99621e64411fae5ad065809ba}}, CAR196 (CAR) {{cite:2173feeca585748d5a3fafcfe8cea86f069564b0}}, Stanford Online Products (SOP) {{cite:fedbcf201541e93825d158c09c3f59eb6a756f11}}, In-shop Cloth (In-shop) {{cite:c0c3f6ec97a7a2c72639f3f7d2435e26250ad628}} and Hotels-50K (Hotel) {{cite:f69334f5c95dcff70b5362aa458ef7171f870449}} datasets. All tests are run on the PyTorch platform {{cite:faa98f86f1fb8a95607cb11731e6445347a11c43}}, using ResNet50 {{cite:4b4ac658669466c97e11429c7024a766420ab82c}} architectures, pre-trained on ILSVRC 2012-CLS data {{cite:87a74c203e0baeca06353629416c7f845c9b0d79}}. Training images are resized to 256 by 256 pixels. We adopt a standard data augmentation scheme (random horizontal flip and random crops padded by 10 pixels on each side). For pre-processing, we normalize the images using the channel means and standard deviations. All networks are trained using stochastic gradient descent (SGD) with momentum 0. The batch size is 128 for CUB and CAR, and 512 for SOP, In-shop and Hotel50k. In a batch of images, each class contains 2 examples, and all classes are randomly selected from the training data. Empirically, we set {{formula:5f611518-4d9b-42fd-9be3-300a02f88087}} for CUB and CAR, and {{formula:5e46702d-90b5-4110-9e03-cc0476a5a2e0}} for SOP, In-shop and Hotel50k.
| r | 587c4fa26d6723d1b2cb14681110c629 |
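The per-channel normalization step described above amounts to the following plain-Python sketch (the helper name and list-of-channels layout are illustrative):

```python
def normalize_channels(image, means, stds):
    # image: one flat list of pixel values per channel;
    # standardize each channel with its own mean and standard deviation.
    return [[(p - m) / s for p in ch]
            for ch, m, s in zip(image, means, stds)]
```

In practice this is applied after augmentation, using dataset-wide channel statistics rather than per-image ones.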
(1) We prove that the inertial mass of a microscopic particle equals its gravitational mass. This result is an assumption in Einstein's theory of general relativity and is called the principle of equivalence {{cite:dc0685bc1e53f49c526e7fd942170f90a4594c42}}, {{cite:c03a041e73bf3ee325952cb5a79cc4bf79cca231}}, {{cite:3d33c1d4a93c2480767bc1ad25f3656fdb167c24}}.
| d | fac8a39bd28c011517f8faf3e2488b2b |
The motivations for this investigation are twofold. The first is the recent breakthrough observation of gravitational waves {{cite:e44b50a2acc82fdbeaff3eca1c025b9826762f32}}, which makes it realistic to seek their measurable signatures. In particular, memory effects {{cite:39f7425d32027a66fc596b161245761eeb7ecc65}}, {{cite:88f1fcc0c46727ccd120ef28cc8f0f91bcaec968}}, {{cite:764153217faa6cb4a0d51f68b87d1eae55d466dc}}, {{cite:1b6cb67b6c87f11cccc4d8f05fba5d5f33b21fdf}}, {{cite:07a321b85d2e33c228562b2984f4085e9e21ec28}}, {{cite:b77c6bf459e1e8bf0d2d46763bdbd76ebb75bab3}}, {{cite:e9e873aca51d6de4b1da02917a2228d7ac3cff15}}, {{cite:a7465515e8df67296ee403583a5a1e8d5c4c6f81}}, {{cite:6eeadbcf2bfb7ecdbbb6c1fc400116e28eb209c3}}, {{cite:4eccb2e5bb6f8e6f7851920d8cc3021f92cb4efb}}, {{cite:9daf15e9fac88356540e4fb59285ac4162f475b7}} sensitive to the net offset of metric components after a gravitational wave burst (see fig. REF ) may be observable in the near future {{cite:733dd6d11a65f5fb3ecd687ac05783028fd812b8}}, {{cite:bc5b73d2aa4f217ed336825ec0bde745afc00319}}, {{cite:4be0dd2f82804747f7dedf7ce4530f40c5235ac8}}. The second motivation has to do with fundamental symmetries of classical and quantum gravity. Indeed, asymptotically Minkowskian space-time metrics enjoy an infinite-dimensional `Bondi-Metzner-Sachs symmetry' {{cite:b58bedb4328c22b7d1240b7b719bad35b3bff41e}}, {{cite:ff239e34630604f65c0ea2a20ec4a35d0a886bf5}}, {{cite:36428f46e4870f915c9ccda6f2fe35fa0a4e7c7c}}, {{cite:e661f7d9becef872a61bb75dcdadcc32d97c69c3}} whose Noether currents at null infinity were recently related to the displacement memory that affects nearby freely falling test masses {{cite:17fb3e6ce98a207d9aeb1ca6ce6879e1af86c01d}}. 
In a similar vein, the rate of gyroscopic precession found here turns out to coincide with a current {{cite:5e575a0febe08b02cbd2bf0128959e4322a63719}}, {{cite:9c18f5d6c6bf878d9b3521b513e25e920db62506}} for so-called `dual supertranslations' {{cite:ee7ba1a2e1cc753b3cc27d75da02ccf311120d2c}}, {{cite:8ebab7dc61383be6ff1f7a6ff7f4fe1c8910a08b}}, {{cite:712c75c012d504b582278507583babf5449f1bc0}}, {{cite:56a21c12b9712003c7ad3d4376092d710f5bc0e8}}, {{cite:d62d00ffe298f9f2436e6a92be5bc01f0c0ba680}}, {{cite:45f52d12f651bfb36c61407e7d48b7001b15e924}}, {{cite:a424233bc6a27dd9ebaeaaa0ea1c557b821f7704}}, {{cite:383f7b063f934177a13d35bc8107a88744cc6a7b}}. Furthermore, the net change of orientation before and after the passage of waves involves a `superrotation' charge {{cite:13b6ba4b6fa2d7c8cfe6e87d0193f0d386efbbda}}, {{cite:df683af142319778f244c20b3dac93b32c5948a2}}, {{cite:f7fc487ca9dbebb20a8a375210269c67f46e7945}} and a generator of local gravitational electric-magnetic duality transformations. To our knowledge, this is the first appearance of such dual symmetries in a simple local measurement protocol for gravitational waves.
| i | 4f5865f788b7ea5e7378e85991358b8e |
There is one other point to be made about the loss function. The abstract idea of a loss function was developed by
Wald {{cite:139a8e2c12997b9925a70888f4c81067908e9b6c}} as a formalisation of the notion that when solving a data-driven problem,
one ultimately has some goal in mind, and that can be captured by an outcome-contingent utility
{{cite:633864c81bf601df8669614a911afe56365edab9}}, or `loss'.
Thus the loss is part of the problem statement. In contrast, in the ML literature, such as that arising from
{{cite:9d64587f4329838190428858e9ab8f0e7d4e69e4}}, a loss function is considered as part of the specification of a `learning algorithm' (means of solving the problem).
From Wald's perspective, all of the work inspired by {{cite:9d64587f4329838190428858e9ab8f0e7d4e69e4}} is a perhaps not so surprising side-effect of
attempting to solve one problem (classification using 0-1 loss) by using a method that utilises a
different loss. If one tries to repeat the negative example of {{cite:9d64587f4329838190428858e9ab8f0e7d4e69e4}} without the use of a
surrogate, and always in terms of the Bayes optimal, there is nothing to see. When one adds some noise,
the Bayes risk may change, but one will not see the apparent paradoxes of {{cite:9d64587f4329838190428858e9ab8f0e7d4e69e4}}. Recently, there has been a burst of research around new loss functions whose formulation aims to reduce the difficulty of the learning task, some becoming overwhelmingly popular {{cite:dc31b4326fb2648c875d4bba61183d3ddcabd28b}}. One can see the benefits of such a substantial shift from the normative view (of properness) to a more user-centric "à-la-Wald" design, but it usually comes with overloading loss functions with new hyperparameters. Technically, quantifying properties of the minimizers — in effect, answering the question "what can be learned from this loss" — can be non-trivial {{cite:5c0721418ba2a7acc517c30bd23c7be067aec80e}}, but it is an important task: Long and Servedio's result clearly demonstrates that we cannot reasonably stick with the choice "classifier = linear" and "loss = convex" (and eventually "algorithm = boosting") if the data is subject to corruption. One would have many reasons to stick with linear separators, e.g., for their simplicity and interpretability. In such a case, changing the loss, breaking properness and eventually convexity, might just be a requirement.
| d | 618dd8bd4de450f41fb6d235e948552c |
{{formula:6978f4ae-04c3-46ac-aa5c-3b3cfd3436fd}} -order oracle complexity bound. This upper bound corresponds to the {{formula:2bd54e86-4a53-4cc3-8f56-dd3bb7c36ec5}} -strongly convex case lower bound from {{cite:3ddd77b30484bc076f3781862e72f7633d6fa336}} and improves on (REF ) by a logarithmic factor. The dependence on {{formula:a9e5234e-f3a1-4b19-ac9c-4729e53fcd5e}} is {{formula:a4ac7fb1-3b7b-47f4-9199-9b3bfb3ae244}}, as it should be locally for tensor methods {{cite:7b6b87c82f051eb762efbefd856122a78dd297b5}}, {{cite:b0411c34bcf04ddf2777be5f33fc03911a9d5df6}} (see also {{cite:7ccb188a6319381a6a22ab678d05b6a45b8df1c8}}, {{cite:28d105510f1f14dd79f3afbe60a185cb6e321ed3}}, {{cite:4971252fe767d98acb3f81875f15918e7e30b8d5}}, {{cite:b74ce29ab8ed3a73d42161f3bfe9a5bf34e0fbcc}}). But (REF ) describes two regimes: the first term (complexity independent of {{formula:75098c7f-48cb-4b57-8cc0-2bb28db21686}} ) describes the complexity of reaching the vicinity of quadratic convergence, while the second term (complexity {{formula:bd0b2cef-af7a-4196-a170-e0d19f892c22}} ) describes Newton-type local convergence.
| m | f4c44f05607e9dfe63877f5e062cb9c3 |
Subgraph-based GNN training.
The key idea of this line of work is to improve the scalability of GNNs by partitioning the graph into small overlapping batches and then training models on the sampled subgraphs {{cite:a7e0a33caa0d8c48a7940523e7e6ff434868108b}}, {{cite:6c36459183c7ff2b829a9d7158979b49ed73deee}}, {{cite:0b0ee5bb0be9448d12103c14f95dbf85d195e818}}, {{cite:e9377781607131cb502cad2088e76c0ac84be8df}}, {{cite:af0267b636facfcab8f2d334da0b6ca634c061f9}}.
This approach reduces the memory footprint but incurs extra time to compute the overlapping nodes between batches.
Generally, methods in this category are orthogonal to RSC, and the two can be combined, as shown in the next section.
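To make the overlap cost concrete, the following sketch (hypothetical helper names, pure Python; not the implementation of any cited sampler) expands seed batches by one hop and counts the nodes duplicated across subgraph batches, i.e. the redundant work:

```python
def one_hop_subgraph(adj, seed_nodes):
    """Expand a batch of seed nodes with their 1-hop neighbours."""
    nodes = set(seed_nodes)
    for v in seed_nodes:
        nodes.update(adj[v])
    return nodes

def batch_overlap(adj, batches):
    """Count nodes that appear in more than one subgraph batch."""
    subgraphs = [one_hop_subgraph(adj, b) for b in batches]
    total = sum(len(s) for s in subgraphs)
    unique = len(set().union(*subgraphs))
    return total - unique  # extra nodes stored/processed due to overlap
```

On a path graph split into two halves, the two expanded batches share their boundary nodes, which is exactly the duplicated cost the text refers to.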
| d | b1065e36a5d7d5e8e51403bb2eb9f540 |
For the first TTS scenario, we train a Transformer TTS model on our Korean male dataset using two GTX 1080 Ti GPUs.
Compared to the original paper {{cite:6b46cdf1f5c66b5b266be86ab69571e088cc47d8}}, we reduce the number of layers from six to three due to the lack of training data and adopt character embeddings instead of phoneme ones.
Also, as in {{cite:53540515c2700c21f88a79c40b77f9339764ede6}}, layer normalization is applied to character embeddings.
We set the maximum and minimum value of {{formula:aa6d8c3e-3031-4236-a233-b94976fb26f2}} (in Eq. REF ) to 90 and 20, respectively, and reduce {{formula:d654fe19-21ba-45b2-afd7-8e590e8d14bc}} by 1 per epoch.
That is, {{formula:5f8ec39a-8093-437e-81f0-297f705a5661}} is defined as follows:
{{formula:66f6cced-a572-43e0-b52f-3533c2201e14}}
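Assuming the schedule described above is the linear decay clipped at its lower bound (the helper name is ours, not from the paper), it can be written as:

```python
def n_max_schedule(epoch, start=90, floor=20):
    """Linear decay by 1 per epoch, clipped at a lower bound of 20."""
    return max(start - epoch, floor)
```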
| r | 119ced2ee20a8e616335ea5a2a5bc8ca |
To generate robust image representations, AlexNet {{cite:72507a58177009861043e2856be5bf592e5b129c}}, InceptionV3 {{cite:039cf656b45abeb37e26aeb1e0c7a0a1b2b45337}} or ResNet {{cite:43d2da79fff0867a14b424be61a646acfe1cd2c2}} pre-trained on the ImageNet {{cite:a744accc699996b310b6f9ba8039d1e321ee3e50}} database are used. Another option would be to use conventional handcrafted descriptors (like ORB {{cite:0ef5c112e19e9f254f16c76142183b60718c9ef7}} or DSIFT {{cite:2129522a585f0bdb09bedfb42cf7512861352f68}}); however, these are usually outperformed by deep features. The considered network architectures consist of two parts: convolutional layers, which are responsible for extracting image features (the so-called features block), and fully connected layers, which are responsible for classification (the so-called classifier block). The classifier block cannot be used directly because it was trained on other types of images; the features block, however, encodes more general, reusable information. Therefore, removing the classifier block from the network and preserving the convolutional layers allows us to generate robust image features. In the case of AlexNet, we obtain a set of points in 256-dimensional space, the number of which depends on the input image's resolution (e.g., for a resolution of {{formula:7463d46f-ff17-4869-b5c4-d29dd57bd992}} pixels, 169 points ({{formula:33e2d8c1-f1c8-4ba9-9666-b9f2ffb92121}} ) are generated).
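As a sanity check on the 169-point figure: 169 = 13 x 13, and a 13 x 13 grid falls out of standard convolution arithmetic if one assumes the common torchvision layer settings for AlexNet and a 224 x 224 input (the exact resolution is elided in the text); each of the 169 spatial positions then carries a 256-dimensional descriptor. A sketch:

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Output side length of a convolution/pooling layer (floor convention)."""
    return (size + 2 * pad - kernel) // stride + 1

def alexnet_feature_grid(input_size):
    """Spatial side of AlexNet's last conv feature map (final pool omitted)."""
    s = conv_out(input_size, 11, stride=4, pad=2)  # conv1
    s = conv_out(s, 3, stride=2)                   # maxpool after conv1
    s = conv_out(s, 5, pad=2)                      # conv2
    s = conv_out(s, 3, stride=2)                   # maxpool after conv2
    # conv3-conv5 are 3x3 with padding 1, so the spatial size is unchanged
    return s
```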
| m | d3ab55661e6b0f106f9edde4797c69e4 |
We also use the full shape of the BOSS DR12 pre-reconstructed power spectrum measurements {{cite:f164cd15fb63632025beceea91e287ac102d4b31}}. In particular we consider the combination of the monopole and quadrupole of the power spectra of the three different sky-cuts CMASS NGC and CMASS SGC at effective redshift {{formula:b9699e3c-6212-4a8b-a621-bcafb94836f0}} and LOWZ NGC at {{formula:42f055dc-3ccc-4e86-940e-4b03adddeebd}} and we follow the conventions of Refs. {{cite:a27086b7c827064ff6fedeb0ec7a23722f5f1522}}, {{cite:e1fc2c22a50fb9a7e67a4bdec4f86110fc05fa77}}, {{cite:7997af46abc4c6f308e8769815589b290749b466}} for the maximum wavenumber that we consider ({{formula:20cabb67-e169-43f6-b48c-9d62708543c5}} for CMASS and {{formula:71cfc8e5-e81c-4c7a-b8e2-d2d5df667a17}} for NGC). We combine this dataset with the {{formula:cbeb6e0c-d50f-45f5-9def-71d20429a486}} and {{formula:8b63fc56-7c83-4a19-910c-04afe645c7ea}} parameters measured from the post-reconstructed power spectra corresponding to the same sky-cuts, see Ref. {{cite:7997af46abc4c6f308e8769815589b290749b466}} for an explanation of how the covariances between these datasets are calculated. We refer to such a dataset, combined with 'small-{{formula:c4dc2873-8e44-4846-add7-adc702fdcf76}} ' BAO mentioned in the previous paragraph as BAO+FS.
| m | 4844d1324a54ea933dba64e39c02c520 |
Theorem 1 ({{cite:aa6cc9e685861f96a6f78f767d44366c0fca9440}}, {{cite:ef268027be16c63b1cba3dd498a1ad0f54f3f728}})
For any graph {{formula:f3c3bded-3cf5-4f66-9cf3-20a00c61496d}} ,
{{formula:d124382b-d6e0-4747-9719-9790a0c2a6a0}}
| r | 0589e7c0899922642f13ef2c90ffd4bb |
The cluster of particles is assumed to be embedded in vacuum, whereas the excitation field is a {{formula:98279a08-23e8-444d-9443-cd88f448b71c}} polarized plane wave whose propagation direction is defined by the incident angles {{formula:f59152b1-03f7-4919-9abe-898bd5c85311}} and {{formula:16d85929-dcc8-45e4-8a87-be7ad5e3bde6}} . The frequency-dependent permittivity of silicon and gold is interpolated from the experimental data provided in Refs. {{cite:fe21d7dc39da4d8f358118f787d1ef54b22c38d2}} and {{cite:0d3fe3b1aadb7b50732036083e28e3b3810f62ae}}, respectively. The tensor components of the nonlinear surface susceptibility {{formula:41d44df8-df40-4b95-9649-d59a3610ec26}} of gold, as well as its bulk nonlinear susceptibility {{formula:e78b9a34-a3df-478a-995c-18833d44f003}} , are calculated using the free-electron hydrodynamic model {{cite:37d28ce82086505680a65012489d57a4fdb7a738}}:
{{formula:22fc4d38-8352-45d5-b056-970dedb58fcd}}
| d | d2ff105e0b8908f0942c8a1563bd33de |
Ensemble Kalman methods, originated from geophysics {{cite:76e5f49c39ee0af243e7c9d83f9353f8340521a9}}, have achieved significant success in state estimation for complex dynamical systems with noisy observations {{cite:30f87e3638d25c5ed13bfbe899ab0ff2ea56266b}}, {{cite:c4401e7dfa7be0aeb03317995fc807a198f74609}}, {{cite:b97d9bc3426c35818f1bab14155d69374afa360f}}, {{cite:356c37bd3d40378ebea8f28257b8052b42e8c02d}}, {{cite:73de3621a07ad8a8b887e54cba273cd86f007947}}, {{cite:9773c07bc48a3416919421b320fe909336e1b300}}, {{cite:3d56a2914a9afaf107528cd1d615bb95304d422b}}, {{cite:f02effbc7b7c55d2527ad20ab56ef4962fe4b94c}}.
More recently, these methods have been used to solve inverse problems with the objective of estimating the model parameters instead of the states {{cite:d71086ada1ff953bb7ee92abfaf9c8d46cd8ba81}}, {{cite:fffce20961bf24d14da0084fb45519378cd9e763}}, {{cite:7ac715c07c0a0b8cc9e7b1b8bce420bef5baf754}}, {{cite:f5ada496c2e657972d39086cf0634dff5c414a5c}}, {{cite:083ae929b6e545889ec33e867b405fba694f3fa3}}, {{cite:acf25d41aae82fd366629534599b1a66886fd2fb}}, {{cite:7af619ad8a475523af0841184ffca26921f26b89}}.
As gradient-free optimization algorithms based on a small number of ensemble members, these methods have gained increasing popularity for solving inverse problems, since they can be implemented non-intrusively and in parallel.
However, due to the collapse of ensembles {{cite:100b2dd505308dad80721d6d0a5784c902fad767}}, {{cite:894f602e201c29af24041e8c11ac5da6eb18add0}}, {{cite:015c13dd194cefe4c46b3f69addaa21677bda27e}}, {{cite:d1477d01141db32aaf6b6227499ce7086c66e000}}, they tend to underestimate the posterior variances and thus fail to provide a rigorous basis for systematic UQ. Combining Kalman methods with MCMC can alleviate this issue. This approach consists of three stages: (i) calibrating models with ensemble Kalman methods, (ii) emulating the parameter-to-data map using evaluations of forward models, and (iii) sampling posteriors with MCMC based on cheaper emulators. We refer to this approach as Calibration-Emulation-Sampling (CES) {{cite:670e9f798a047ff9327373b2329632543c136766}}.
Two immediate benefits of such a framework are 1) the reuse of expensive forward evaluations, and 2) the computationally cheap surrogates in the MCMC procedure.
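A minimal sketch of the calibration stage, i.e. one generic ensemble Kalman inversion update, might look as follows (NumPy, with a linear toy forward model in the usage below; this illustrates the standard update, not the implementation of any cited work):

```python
import numpy as np

def eki_step(theta, y, forward, noise_cov):
    """One ensemble Kalman inversion update; theta has shape (J, d)."""
    g = np.array([forward(t) for t in theta])   # forward ensemble, (J, m)
    dtheta = theta - theta.mean(axis=0)
    dg = g - g.mean(axis=0)
    J = theta.shape[0]
    c_tg = dtheta.T @ dg / (J - 1)              # parameter-data covariance
    c_gg = dg.T @ dg / (J - 1)                  # data-data covariance
    gain = c_tg @ np.linalg.inv(c_gg + noise_cov)
    return theta + (y - g) @ gain.T             # move ensemble toward the data
```

Iterating this step drives the ensemble mean toward a parameter value that fits the data; the ensemble also collapses over iterations, which is exactly the variance underestimation mentioned above.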
| i | ccf3fcaa9cc0e8f9820e0805bc8a85f4 |
Since a long-run average MDP can be formulated as a linear program
(refer to Chapter 8.8 of {{cite:c279eb9d843ec8cb8b73f1872c88e0f75bdf27a5}}), we can also rewrite the
bilevel MDP problem (REF ) as the following mathematical
program
{{formula:9fd9726e-cf5a-431d-b1e0-7b93e6509a79}}
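To make the occupancy-measure formulation concrete, the sketch below (a toy two-state, two-action MDP with numbers of our own choosing) recovers the best stationary deterministic policy's state-action frequencies x(s, a) and verifies that they satisfy the LP's flow and normalization constraints:

```python
import itertools
import numpy as np

def stationary_dist(P_pi):
    """Stationary distribution of an ergodic transition matrix P_pi."""
    n = P_pi.shape[0]
    A = np.vstack([P_pi.T - np.eye(n), np.ones(n)])  # mu P = mu, sum(mu) = 1
    b = np.zeros(n + 1)
    b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

def best_occupancy(P, r):
    """Gain and occupancy measure x(s, a) of the best deterministic policy.

    P has shape (n_s, n_a, n_s); r has shape (n_s, n_a)."""
    n_s, n_a = r.shape
    best_gain, best_x = -np.inf, None
    for pi in itertools.product(range(n_a), repeat=n_s):
        P_pi = np.array([P[s, pi[s]] for s in range(n_s)])
        mu = stationary_dist(P_pi)
        gain = sum(mu[s] * r[s, pi[s]] for s in range(n_s))
        if gain > best_gain:
            x = np.zeros((n_s, n_a))
            for s in range(n_s):
                x[s, pi[s]] = mu[s]
            best_gain, best_x = gain, x
    return best_gain, best_x
```

The LP's feasible points are exactly such occupancy measures: for every state, the probability mass leaving it equals the mass flowing in, and the total mass is one.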
| r | f631a1298eba90937dc6ad53de93ec35 |
The TD error can be used to update the motivation-dependent Q-function directly or to train neural networks to optimize their policy. The Q-function depends on a new set of variables {{formula:e047710d-3a82-410d-a26b-aac76caefa04}} that evolve following their own rules. These variables reflect fluctuations in physiological or psychological states that substantially change the reward function and can therefore generate flexible behaviors dependent on animals' ongoing needs. We trained neural networks via backpropagation of the TD error (equation REF ), an approach employed in deep Q-learning {{cite:4ef5e172bf41532c8a293f234438fac1f310d993}}. Below we present several examples in which neural networks could be trained to solve motivation-dependent tasks.
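A minimal tabular sketch of motivation-dependent Q-learning (toy states, dynamics and reward of our own invention, not the tasks from the paper) indexes the Q-function by an internal variable and updates it with the TD error:

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_motiv, n_actions = 4, 2, 2
Q = np.zeros((n_states, n_motiv, n_actions))   # Q(state, motivation, action)
alpha, gamma = 0.1, 0.9

def reward(motiv, action):
    # Illustrative need-dependent reward: each internal state makes
    # a different action worthwhile (e.g. eat when hungry, drink when thirsty).
    return 1.0 if action == motiv else 0.0

for _ in range(20000):
    s, m = rng.integers(n_states), rng.integers(n_motiv)
    a = rng.integers(n_actions)                 # random exploratory behaviour
    s2, m2 = rng.integers(n_states), rng.integers(n_motiv)  # toy dynamics
    td = reward(m, a) + gamma * Q[s2, m2].max() - Q[s, m, a]
    Q[s, m, a] += alpha * td                    # TD error drives the update
```

After training, the greedy action flips with the motivation variable in every state, which is the flexible, need-dependent behavior described above.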
| r | d512d40f0a23ed21321dde251ad16b92 |
The same Lindblad equation could have been derived starting from a completely different model, namely a fully quantum mechanical model for the electron on a lattice interacting on each lattice site with a bath of macroscopically many degrees of freedom (so that their spectrum is continuous) that are uncorrelated between different lattice sites. There are several ways of obtaining an effective model for the electron only, in which the bath degrees of freedom are eliminated. The first one is a path integral approach that uses semiclassical approximations, similar to what was done in the famous Caldeira-Leggett model {{cite:9953376e09f241a05a97a4f35b1adaf045514621}}. Such a path integral approach has been performed by {{cite:dd4f0b2c5341fa1d31cdbb8671bc7ac814b1118b}} to calculate the environmentally-induced loss of phase coherence in quantum transport problems. The second class of approaches starts from the combined density matrix of system and bath and takes the trace over the bath degrees of freedom in order to obtain the time evolution of the reduced density matrix for the system alone. The generic result of such an approach is a Lindblad equation if suitable assumptions are made, namely a short-term memory (justifying the Markov approximation), a rapid decay of reservoir correlations (implemented by using a continuum of modes), and a lack of back-action from the system on the bath (so that a product ansatz for their combined density matrix can be made) {{cite:652e0dff809dfd324819b090e7d8fe2db6b2d938}}.
| d | 377e24b27b552bad931fc398bd064aec |
The binding problem is also an important topic in machine learning
{{cite:95f24b0ff9422c1feec45a1217299b2f72c39712}}. In knowledge graph embedding tasks {{cite:e939f2c450629ba9ba038f9cf44d049640fcf91b}},
Nickel and colleagues showed that HRR yields a better generalization
performance than methods based on nonlinear-projection of a concatenated
vector {{cite:709a4a406c11d095b145071c269fa12d4f8edf9e}}. Moreover, in visual question
answering tasks, binding of an image representation and a query representation
is crucial for solving the task. The Hadamard product is often employed
for this binding {{cite:7a2af1914753df23b951764cd8577a18e948d5e7}}, {{cite:bb00b5e66ad98972256ba6daa199f619595fc565}}, but more
elaborate binding mechanisms, such as self-attention on concatenated
vectors, are suggested to improve the learning performance {{cite:40b97ba8c6f4a23dfae80214003e449afbc37aec}}.
| d | 48b2d193a719cfd0b45aed444b60b430 |
The above intuition is presented by Tian et al. {{cite:0e3b3c7d4378cbeffdf32931da18cd228876e06f}} as a proposition, which is defined and illustrated mathematically using the concept of mutual information. Based on this proposition, they studied how different choices of the augmented views affect the quality of the learned representation. They concluded that current contrastive learning is powerful because effective views reduce the mutual information (MI) between views while keeping task-relevant information intact. Moreover, they proposed a semi-supervised method to learn effective views for a given task while keeping the SSL process unsupervised.
| m | 2e08faaf61066f62f7b49fba21f245ec |
We provided a necessary and sufficient condition for
{{formula:eb328f4a-a857-4efc-88f0-c3135c6be2e0}} -differential privacy for mechanisms that add
noise from a symmetric and log-concave distribution (Lemma
REF ). This condition is given directly in terms of the
standard distribution function, the needed scale of the distribution,
{{formula:8b8f129c-f50f-47f2-b1b4-3e0c00924039}} , {{formula:11faf8ad-2b37-4545-9d8e-79dd8c387f5a}} , and the global sensitivity {{formula:d62c41c6-c19b-4010-b084-d1f05c6f1a7d}} of the
query in question. Previously, such a condition was only known for the
Gaussian distribution {{cite:bd87823c212baba18367009acc328c249ce36e60}} and was shown to be
essentially identical for the discrete Gaussian distribution
{{cite:906603d83052112f9b9e217e296df1c36d1cb285}}.
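For the Gaussian case referred to above, the condition can be evaluated numerically. The sketch below follows the analytic-Gaussian-mechanism form of the condition (our transcription, written with only the standard normal CDF): the mechanism with noise scale sigma satisfies (eps, delta)-DP at sensitivity Delta exactly when the returned value is at most delta.

```python
from math import erf, exp, sqrt

def gauss_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def gaussian_dp_delta(sigma, eps, sens):
    """Smallest delta for which N(0, sigma^2) noise gives (eps, delta)-DP
    at global sensitivity `sens` (analytic Gaussian mechanism form)."""
    a = sens / (2.0 * sigma)
    b = eps * sigma / sens
    return gauss_cdf(a - b) - exp(eps) * gauss_cdf(-a - b)
```

For example, the classical calibration sigma = sens * sqrt(2 ln(1.25/delta)) / eps passes this check (with room to spare), while an under-scaled sigma fails it.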
| d | 1bf7c5e8c3c0b7c5a8091acd162c3162 |
The notion of discrete conformal structures on triangulated surfaces is a discrete analogue of their smooth counterparts, conformal structures on smooth surfaces. One motivation to study discrete conformal structures is to compute the conformal maps between planar regions in applications. Thurston {{cite:88ab4c4ff2f2855ac6012331d1474bc6d02e6edc}} rediscovered the circle packing theorem initiated by Koebe {{cite:e6e58c3b4e543fce966a6de378bad0eee3d83f5a}} and Andreev {{cite:a4c36b5842ede3b8cd0d6ad65fd11c9b295ac10f}}, {{cite:8d640f0e9cbe0bc071b6d1d22bfbb36447616b5f}},
and proposed circle packings as a natural discretization of conformal maps. This idea was carried out by Rodin-Sullivan {{cite:00d5d678ef697e921417446dae8d0e4137a31cdb}}. Since then, different types of discrete conformal structures on closed surfaces have been extensively studied in the last two decades, including tangential circle packings, Thurston's circle packings, inversive distance circle packings, Luo's vertex scaling, mixed types of discrete conformal structures, etc.
See, for instance, {{cite:1f32143b2ea063db44768811e13e560719e7c341}}, {{cite:c782996ce1162d1404c6f110993120c16140eeaa}}, {{cite:7ecf96867a63750607d6c837dc2dc46b0b5a089d}}, {{cite:100d3bea61a68480f6b7ed60cee6b4d920ba5884}}, {{cite:3707b8e6c7b0bdfdb1df674d82e9aff5a49c1441}}, {{cite:3477ef76d64c8d66dae165a9e0c8afe250874733}}, {{cite:dc8bd0467e5b1b11d304f52c5369d9e83269cf4f}}, {{cite:bb59435c58410a5ee073e0c730731379e6dab552}}, {{cite:ed1ae9527a9e2064d87ed40710c3fa9750b1e191}}, {{cite:150d9133992e7628fe070e2465a9cdffa568e4c8}} and others.
To find circle packings with prescribed combinatorial curvatures on closed triangulated surfaces,
Chow-Luo {{cite:c782996ce1162d1404c6f110993120c16140eeaa}} introduced the combinatorial Ricci flow for Thurston's circle packing metrics on closed surfaces.
Luo {{cite:3477ef76d64c8d66dae165a9e0c8afe250874733}} further introduced a new type of discrete conformal structure called vertex scaling for piecewise flat metrics on closed surfaces,
and the corresponding combinatorial Yamabe flow deforming discrete metrics within a discrete conformal class.
This new notion of discrete conformal structure leads to rigidity results of polyhedral surfaces with respect to discrete curvature {{cite:3958823583ca85404341933e7e877b03f70ba84b}} and discrete uniformization theorems on closed polyhedral surfaces {{cite:2a8b862948b6b00d02afa72c3e3fc97556531081}}, {{cite:25c84a9a752075f94e5ab6a200edffde48006c01}}, {{cite:708a2b86f69c5ccc527e23c829c59c538e5d0b9d}}.
These works have given rise to a wealth of new results on combinatorial curvature flows and discrete conformal structures on closed surfaces, and have led to various applications in surface matching, surface parametrization, manifold splines and other areas.
See {{cite:a869cb85176c431eda3fde45ab05689917ed33f5}} for a comprehensive survey on this topic.
| i | 6de7301619731e1ff5e28cbfe4ed28d3 |
Fig. REF depicts snapshots of 50 trajectories evolving under the classical counterpart of the Hamiltonian {{formula:587a651d-f9b8-419c-8033-18a055048bd7}} used to generate Figs. REF - REF above, with {{formula:104658df-2964-43b8-88f4-f61a745b0aa7}} .
Initial conditions were spaced uniformly, with respect to the angle {{formula:6c06cec0-a3ba-4ed4-9e46-877900300fdc}} of action-angle coordinates {{formula:e4481c8a-b703-44c6-a112-8a2ab1e5f708}} {{cite:6c40ee19b537a31800ac4c6065a0d47e7157b93b}}, over an energy shell {{formula:dd4de418-aa16-430b-a799-001b91f6dba1}} .
The closed black curves depict an energy shell {{formula:5e259607-ba57-449c-833a-c8c2a12f2c34}} determined by a fixed value of the action, {{formula:9555850a-ad39-47f5-a907-7d691fe1521e}} .
We refer to {{formula:c2bc4b00-d0e4-4983-90c3-315e153abf0f}} as the adiabatic energy shell.
| r | 87fa3e49af21df2d381ee4c90608809f |
Our goal is to compare the performance of two algorithms, one based on a learned representation, developed by {{cite:5e3b64768473760aeb9d9efa17f515dff457aa2c}}, and one based on a hand-crafted representation. To analyse these methods, we use two different tasks: the first is to find the mid-point in space between two locations that have been learned (or are `known') already; the second is to do the same in the orientation domain, i.e. to find the mid-bearing between two `known' bearings. These tasks test for geometric consistency within a representation, i.e., whether there is any implicit knowledge in the representation about locations or orientations other than the ones learned about during training. We also probe the representations more directly, looking for systematic spatial organisation in the arrangement of the learned feature vectors when related to corresponding locations in space.
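For the mid-bearing task, a natural hand-crafted ground truth is the circular mean of the two bearings (a sketch of one reasonable choice, which handles wrap-around at 0/360 degrees; the paper's exact procedure may differ):

```python
from math import atan2, cos, sin, degrees, radians

def mid_bearing(b1_deg, b2_deg):
    """Angular midpoint of two bearings via the circular mean."""
    x = cos(radians(b1_deg)) + cos(radians(b2_deg))
    y = sin(radians(b1_deg)) + sin(radians(b2_deg))
    return degrees(atan2(y, x)) % 360.0
```

Note that naive averaging would place the midpoint of 350 and 10 degrees at 180, whereas the circular mean correctly returns 0.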
| m | fda59d7defa979d0892a7efdbf9f2d36 |
In this section, we investigate the OOD performance of models trained on CIFAR-10 and CIFAR-100 {{cite:18b4bcec05924310d1c37b2b502a2c4744f5f946}} with SupCon, SimCLR and the baseline CE using a ResNet architecture. Due to limited space, we present results with SVHN {{cite:f20729eac76b81704b80a223b3fc300f47a3f655}} and CIFAR-100/CIFAR-10 as OOD datasets; however, we have also conducted experiments with other OOD datasets, such as resized ImageNet and LSUN {{cite:5a4af6375d0c218af1d88d44abc56f161382e8a6}}, and our observations extend to these datasets as well. The results of our study are presented in Figure REF .
| d | 3dfd6689a82e363e5b8cfa5422ddd1b5 |
Although such works achieved significant success, they mainly relied on classical machine learning approaches, which do not incorporate the graph structure of the connectomes; therefore, the local and global topological properties of the connectomes are not leveraged. {{cite:679aae51ef5fd780f72f74f571740fd804654c}} introduced graph neural networks (GNNs) {{cite:37fcb0119cdb93eb098f82960af0b71063bb2e1e}}, a subfield of geometric deep learning, where learning is customized to non-Euclidean spaces such as graphs with complex topologies {{cite:c2993bd5efcd2ab7e0f199ff572a7884cf33449b}}. GNNs are deep neural networks with graph convolution layers. They have already led to significant increases in performance over existing methods in many fields. For example, they have been successfully applied to classification tasks on networks {{cite:5d38f38cc1393c8f6b319257c2249d7924233f58}}, {{cite:58ecc483806c4d994a18884933d1ef5c5b90902a}}, image segmentation {{cite:9ed2513ca10c173d20ace370831c85ca6c44a1c8}}, feature matching {{cite:4173eb8c59ec278b9177358e9f33d5770c46a1ac}}, few-shot learning {{cite:51da6f4f1f50a7dec5952970323bc8e232db89bd}}, {{cite:07f44f06cd705082099c4a730bf52ada6509d918}}, and various graph mining tasks {{cite:7aa8ccf5e3a68f056cd933843b7c45af5e7718ca}}, {{cite:d326f4c9260837f628cc70b882fd93166e3eef70}}, {{cite:ea9c694f2b199b55687cba00485f77cbc6fc64c7}}. A very recent review on GNNs in the field of network neuroscience {{cite:e4268cb45ac00db70b3cdcb335a9bd9543e2e6a0}} examined a variety of graph-based architectures tailored for brain connectivity classification, integration, superresolution and synthesis across time and modalities. However, none of the reviewed methods were designed for brain graph regression for cognitive score prediction.
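The basic building block of such GNNs, a single graph convolution layer with symmetric normalization (in the style of Kipf and Welling; a NumPy sketch for illustration, not an architecture from the cited review), can be written as:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph convolution: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)
```

Stacking such layers lets each node's feature vector aggregate information from progressively larger graph neighbourhoods, which is how local and global connectome topology enters the model.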
| i | e34d1a23910380ff14433b4b9a8c71de |
Many real networks have been mapped so far, but there are still complex systems whose network structure information is entirely missing. For the latter cases, a possible scheme is to first infer their topological structure, especially for directed or causal networks {{cite:dc47a779e34e430d4e5bd7c4f75e3fed6222f0c7}}, {{cite:2ca8c38526a481790f1e5e5893d35d7ff448ff3b}}, {{cite:d4f8043f4ef665b75a950f815ed25c56eec8745b}}, {{cite:ab0effa89773e0f48f705270a6a78a0b9b116f90}}, {{cite:6596a47b047ebc1e8639ea023fc50ce8f08449a7}}, {{cite:e88eef80ea8cde17b4e2b1c9a40349256edca61f}}, from nodes' activity data and then apply our approach to infer the system dynamics. It is worth noting that inferring network structure from nodes' activity data is also challenging, especially when the number of nodes is large {{cite:cda8af0268c75cf5350a4c3555226b31db3d9af3}}, {{cite:2c87585d0f9d3685c3ec106a15dc4d853c8e89cf}}, because the number of parameters to be estimated is about {{formula:e3c381cd-f3f0-4325-ab5e-c762dc99f792}}, where {{formula:a1868863-bd47-4024-bbfa-2094041b5ad6}} is the network size. Therefore, how to simultaneously infer both the structure and the dynamics of large complex systems is still an outstanding problem.
| d | 66d0d94a39fc3e59407ccb2aad1994f4 |
determining (nearly) optimal measurement points is beyond the reach of humans for sufficiently complex systems,
it is important to have a diagnosis system provide appropriate measurement suggestions automatically.
To this end, several (efficiently computable) heuristics {{cite:caf837a20cb5b3e66e5bc2916191c19567e35be2}}, {{cite:d1189095e87b9e2ebdc96b3e9e9da46e4cd793ec}}, {{cite:a91de099952e777bec79667761bfc67fdf677519}}, {{cite:fd6219e0d669351ebeecf4691235c0f19287a0b6}}, {{cite:56729b1a2ecf9c46d92c9c3bd63f2bb5bef5efdf}}, {{cite:015b1307a346285de041e51e2b14046cc8ccb1fe}}, {{cite:5a764192fb70002e188ecddf883ed920fd578fe1}}, {{cite:a83e4e34e888c62e21cda8c2ff70bc6955160c39}}, that define the goodness of measurements based on various information-theoretic considerations, have been proposed to deal with the problem that optimal measurement proposal is NP-hard {{cite:5eb8ff623d089803526b6448e06ac4ce36ea0873}}. These heuristics, in principle, assess the impact of measurements by comparing a-priori (before measurement is taken) and expected a-posteriori (after measurement outcome is known) situations regarding the (known) diagnoses and their properties (e.g., probabilities). That is, the evaluation of the expected utility of measurements requires the knowledge of a sample of diagnoses. This sample is often termed the leading diagnoses and usually defined as the best diagnoses according to some preference criterion such as minimal cardinality (where a minimal number of components is assumed abnormal) or maximal probability. Beside such criteria, for computational and pragmatic reasons, a least requirement is usually that only minimal diagnoses
are computed (cf. the “principle of parsimony” {{cite:c24722a850f0bfed68445f332b4f2821e8265c9a}}). A minimal diagnosis is one that does not strictly contain (in the sense of set-inclusion) any diagnosis.
A generic sequential diagnosis process (see Fig. REF ) can thus be thought of as a recurring execution of
| i | 89bd73b1712ec030196e34d3a5ec68af |
for some {{formula:3a599634-0409-440f-b2a5-7eb29794a029}} and {{formula:b50ac852-b8ef-4ac9-9b2d-10d7040877e9}} , and a link function {{formula:aca57599-0c77-463d-911a-540e234a9f63}}
that can be chosen from a variety of possibilities, but is most often
the logistic link function, or the cumulative distribution function of a standard normal random variable (the “probit link”). For more details of such models, we refer to {{cite:b2f26b35e8182de5ab47b520508cb44cdd2c11d2}} and {{cite:57f1ebd6a4b2a6125092a207206244ec9a360479}}. One drawback of note in applying logistic regression in this setting is that changing the set {{formula:0ab7eb87-ead3-428b-a587-9244cdc63bed}} necessitates refitting the model, which can be computationally cumbersome, and further, as a consequence, the resulting estimators of {{formula:1c9d861d-29f1-4bc7-a9c0-5748fa760504}} need not be monotone with respect to increasing (or decreasing) sets {{formula:0c7f83ac-ec75-4b8a-82f3-fd4efd4561bf}} . An approach to adjust such estimators to restore monotonicity is to use rearrangement or isotonization, as discussed in e.g. {{cite:02a948fd901bdf1553158d90b244c32b999c67f1}}.
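Isotonization of a sequence of estimates can be done with the pool-adjacent-violators algorithm (PAVA), which produces the best non-decreasing fit in least squares; a minimal sketch (uniform weights assumed):

```python
def pava(y):
    """Pool Adjacent Violators: best non-decreasing least-squares fit to y."""
    out = []                                 # stack of [block mean, block weight]
    for v in y:
        out.append([v, 1.0])
        # merge adjacent blocks while they violate monotonicity
        while len(out) > 1 and out[-2][0] > out[-1][0]:
            m2, w2 = out.pop()
            m1, w1 = out.pop()
            out.append([(m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2])
    result = []
    for m, w in out:
        result.extend([m] * int(w))
    return result
```

Applied to estimates computed over increasing sets, this restores monotonicity without refitting the underlying model.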
| m | 37d8423025a5167d1f85958a0ce98c18 |
In practice, one can freely select one or multiple condition attributes for primal attribute editing, depending on the targeted editing goals. In general, to achieve accurate and disentangled editing of a primal attribute, the selection of the condition attribute is closely related to the data distribution of the GAN's training set. For example, older people tend to wear eyeglasses in FFHQ {{cite:a88d6f540065139e080e04fc52733668464e60e0}}, so the age and eyeglasses attributes are likely to be entangled with each other in the latent space. One can also follow this strategy for condition attribute selection.
| d | 27d858636f8b6f5db66615f4b558b46c |
where {{formula:aa2ef26c-934f-4e22-ad78-6496fff10d94}} is the maximum number of vertices in {{formula:c4cde88f-84a9-4208-9882-d2b9a8f61d0d}} , {{formula:e51697d6-1dbf-4f66-a100-5b74f33c9b7f}} , {{formula:f485e39c-d017-4d52-937d-d21ac08a1621}} and {{formula:d44f0a8c-da61-4c2d-92e3-9c2d243c4416}} is the matrix multiplication constant {{cite:ba7626dac25860ac2ca5707b98ff526ae696459b}}.
The algorithm succeeds with probability at least {{formula:5233fb70-5b58-43ec-9878-03b706f357f3}} for an arbitrarily large global constant {{formula:8d00d781-d3df-451e-8488-9b5df0aa991f}} .
| r | 94f469fb1a88e7d5c688bb3a3ee5f433 |
The implication of this calculation is that even though particle modes {{formula:2d274208-32ee-4e05-a7b2-362f39d1aa74}} and {{formula:52243357-1664-450c-9fd6-084505934864}} in region {{formula:919baf69-ffaf-4865-aea8-de63eb1465ec}} are initially maximally entangled, the coupling of {{formula:26393c3c-c6ad-4c1b-a18f-88e306eec7b1}} to the modes {{formula:d271971f-e065-4639-9649-a621707aead7}} (BH mode and emitted anti-particle in region {{formula:e0deb435-db9c-4508-88da-65095968748a}} ) by means of the evolution under {{formula:e6709a71-70b8-4164-8282-55ccb60e6834}} , weakens the entanglement between {{formula:82bb6efb-8672-49b7-ac72-2d61189d16ce}} and {{formula:f6e21a82-1ac1-4b4e-84aa-b0bf369da7d7}} as the entanglement between {{formula:9ff59df0-813e-4524-b850-d76b49e544e8}} and {{formula:62931275-8e46-43ca-a1fa-d6fdbee9af15}} grows from an initial zero value. Likewise, the entanglement between the bipartite partition
{{formula:55e94a3d-49db-45ec-9c8d-5d78ad87a8c6}} is reduced from what it would be if mode {{formula:b250db65-4958-4a68-96a1-46bbd1aa3233}} were not initially entangled with mode {{formula:0695f107-6982-4317-9d85-ddaf84a64ac0}} .
This is an alternative way to observe that the BH particle production degrades entanglement (see Brádler and Adami {{cite:9b09318602f4855891ee441befcb24459febfc43}} for a further in-depth analysis)
as the entanglement is distributed between bipartite mode partitions {{formula:840b60e1-5c74-4a1c-8fb5-8a64b5e2c59c}} and modes {{formula:0ecbfe3b-09f4-43a7-b73b-cc8486b8dbdb}} through the common Hawking radiation mode {{formula:99d88ceb-d8fd-4948-a21a-3353666dc7e5}} . The fact that the full pure quantum state is entangled across the horizon, vs a separable state, argues against the necessity for the concept of a BH firewall at the horizon
{{cite:718dea28e49ff0e060ef835d3492c4e9fea4276e}}, {{cite:ad4e798d90b3b7716961cea2b4983cadd03adef1}}, {{cite:b6c42cc2f260db5e5bab82e8eb58ff03368cc65e}}.
| d | 7ebf12f77a2b315086f29efbcfd348e4 |
We compare the visual quality of the reconstructed 3D surfaces between our approach and the other two camera tracking approaches (TSDF tracking and surfel tracking).
We implement TSDF tracking with two configurations:
(1) `TSDF Low-res' (low resolution) which allocates larger voxels, so that the number of parameters used to represent the geometry roughly matches the storage cost of our approach;
(2) `TSDF High-res' (high resolution, also used in the quantitative comparisons in sec:quantity) with a much smaller voxel size for a higher quality 3D surface reconstruction.
fig:icl-compare shows a visual comparison among the different tracking approaches on a scan from {{cite:3868b47090decd805e5bdabc2dc103ba78f945ff}}.
Our method can achieve a more complete 3D surface reconstruction while using {{formula:980d3634-b39d-420f-8c82-c78db3e8acfa}} and {{formula:f75e7022-0242-4790-8817-2ed6ef2a2348}} fewer parameters than `TSDF High-res' and surfel tracking respectively to represent the entire scene.
If the same amount of memory is used as ours, both the camera tracking and the reconstruction quality would be severely hampered for TSDF approaches as shown in `TSDF Low-res' results.
| r | b339f544017c8d263de62c3471b0c882 |
In the literature, several modifications of Newton's method that always yield descent directions have been proposed {{cite:c54e42ff148f44f46cf22666ccea4ca655be5508}}, {{cite:0445ca19e58e24a60514cc43c0cae16601ce74d6}}, {{cite:d688627478efc13294808d415cd1593d82416a82}}. The Gauss-Newton method {{cite:d688627478efc13294808d415cd1593d82416a82}} exploits the least-squares structure of the objective function (REF ) and constructs a positive semi-definite approximation to {{formula:3320d52d-4ffb-4b95-85ac-82334d2226c7}} . Levenberg {{cite:c54e42ff148f44f46cf22666ccea4ca655be5508}} and Marquardt {{cite:0445ca19e58e24a60514cc43c0cae16601ce74d6}} independently extended this method by introducing a damping term in the step equation. This yields the Levenberg-Marquardt method
{{formula:c2e29692-d98d-4b66-b8f3-25d7d0171025}}
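A compact NumPy sketch of the resulting iteration, with the usual multiplicative adaptation of the damping parameter (the details of the update schedule are our choice, not prescribed by the text):

```python
import numpy as np

def levenberg_marquardt(residual, jac, x0, lam=1e-2, iters=500):
    """Minimise 0.5*||r(x)||^2 via damped Gauss-Newton steps
    (J^T J + lam*I) dx = -J^T r, adapting lam multiplicatively."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r, J = residual(x), jac(x)
        dx = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -J.T @ r)
        if np.linalg.norm(residual(x + dx)) < np.linalg.norm(r):
            x = x + dx
            lam *= 0.5      # accepted: behave more like Gauss-Newton
        else:
            lam *= 2.0      # rejected: behave more like gradient descent
    return x
```

Large lam makes the step a short, gradient-descent-like move; small lam recovers the Gauss-Newton step, which is why the method interpolates between the two regimes.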
| m | 8e48645ee617e7171840d1432481bb8e |
The telegrapher's equation {{cite:b3f3e1c80d69ff08efa328b1f5c2421711b82a12}}, {{cite:4c9ebce0fb7479fc968347352e693b5f1c0ea21d}}, {{cite:2de5b2ee9ea346439f186d466a14ef7dc9d06b6a}}, {{cite:f7c987d91045d0420f1a488130834a043afc7669}} may be considered as a generalization of the diffusion equation that overcomes the described deficiencies. Being a hyperbolic differential equation, it exhibits
wave-like features on short time scales while diffusion-like features dominate on longer time scales. In particular, the signal propagation speed is bounded and well-defined.
Like the diffusion equation, the telegrapher's equation may be viewed from a macroscopic, phenomenological point of view and, at least in one dimension, from a microscopic one.
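A minimal explicit finite-difference sketch of a 1D telegrapher's equation, tau*u_tt + u_t = D*u_xx (our normalization, periodic boundaries), illustrates the bounded propagation speed c = sqrt(D/tau) alongside the diffusive damping:

```python
import numpy as np

def telegrapher_1d(u0, tau, D, dx, dt, steps):
    """Explicit central-difference scheme for tau*u_tt + u_t = D*u_xx."""
    u_prev = u0.copy()
    u = u0.copy()                                    # zero initial velocity
    for _ in range(steps):
        lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
        # from tau*(u+ - 2u + u-)/dt^2 + (u+ - u-)/(2dt) = D*lap, solve for u+:
        u_next = (2 * tau * u - (tau - dt / 2) * u_prev
                  + dt**2 * D * lap) / (tau + dt / 2)
        u_prev, u = u, u_next
    return u
```

For stability, the time step must respect the CFL condition c*dt/dx <= 1; on short time scales an initial pulse spreads as a damped wave front rather than instantaneously, in line with the discussion above.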
| i | 515d55855b518e939f88b75e58729df2 |
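The bounded signal speed c = sqrt(D/tau) can be seen in a minimal explicit finite-difference sketch of the 1D telegrapher's equation tau*u_tt + u_t = D*u_xx (all parameter values are illustrative):

```python
# Explicit scheme for tau*u_tt + u_t = D*u_xx on a 1D grid.
# Unlike the diffusion equation, the disturbance spreads with a bounded
# speed c = sqrt(D/tau); here c*dt/dx = 0.5 satisfies the CFL condition.
tau, D = 1.0, 1.0
dx, dt = 0.1, 0.05
nx, steps = 401, 100
u_prev = [0.0] * nx
u_prev[nx // 2] = 1.0          # initial pulse in the middle
u = u_prev[:]                  # zero initial velocity

a_coef = tau / dt**2 + 1.0 / (2 * dt)   # multiplies u^{n+1}
b_coef = tau / dt**2 - 1.0 / (2 * dt)   # multiplies u^{n-1}
for _ in range(steps):
    u_next = [0.0] * nx
    for i in range(1, nx - 1):
        uxx = (u[i - 1] - 2 * u[i] + u[i + 1]) / dx**2
        u_next[i] = (D * uxx + 2 * tau * u[i] / dt**2 - b_coef * u_prev[i]) / a_coef
    u_prev, u = u, u_next
```

Since the stencil couples only nearest neighbours, the numerical support grows by at most one cell per step: after 100 steps the solution is still exactly zero far from the initial pulse, mirroring the well-defined propagation speed of the hyperbolic equation.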
Let us notice that the {{formula:fe0dc627-4c49-4fdf-918d-f4d55175a34b}} family is the same as the one introduced in {{cite:3e2caeb0d6052c44a4672917be4eacfbd2577f98}}. The {{formula:909016cc-9404-49ae-86e8-acba5849dac4}} equations are divided into two subclasses: rhombic and trapezoidal, depending on their discrete symmetries. We remark that all classification results hold locally, in the sense that they relate to a single quadrilateral cell or a single cube. The extension to the whole lattice {{formula:3c1762be-78d8-4623-89cc-0bcef64cdf81}} is obtained through reflection, considering an elementary cell of size {{formula:5c5d5e67-9545-487b-bbb6-08a5456ab03b}} . This implies that the {{formula:54a784ff-fab3-4c6f-bb10-f41af4adcc77}} and {{formula:813c6bca-ac07-4cd0-9c1b-da85d9472e85}} equations as lattice equations are non-autonomous equations with two-periodic coefficients. For more details on the construction of lattice equations from the single-cell equations, we refer to {{cite:558cb9b29aee825ff837c905a59e4118bb0ffc1d}}, {{cite:8ac3ff0f251fcabe3871e698f1ae5e6faac127bf}}, {{cite:f6a83116c3cbefbfe3cc7cc888bc14a241080e96}}, {{cite:3134e8fa283f76ddf999efe78c5ab67bec194b01}} and to the Appendix in {{cite:1eead52b47568dc29729333d6791a1b518c52f6e}}.
| i | 85cedc09bef95050c0b745b00e4a166c |
The obtained penalty factor of around 300 for each additional nucleon is consistent with {{formula:8329ba8e-c45a-43a0-a04c-cc593f938889}} MeV in the equilibrium thermal models. The measured yields for {{formula:6dc0fd12-0f84-46c7-824b-fd1e1663e94e}} He and {{formula:834a67d5-6f48-4368-8426-d71de764f0b8}} nuclei are consistent with the predictions from the various (equilibrium) thermal models (THERMUS {{cite:d233ec7d74fba84dbd1536a8ad202c8fa98d9079}}, GSI {{cite:9705c9d9138c768b66c7f183e32078acac120859}}, {{cite:78d3ab7dde2fb6f33895d923e4b44bacc5f58de6}}, {{cite:0625de8812df20029cc754dc109967a694f7fe91}} and SHARE {{cite:61ee4087fd9a9c58e29ca0589043b78c062d694a}}, {{cite:215fcb29024f9db42bc25158a95b25c3074c0e47}}, {{cite:704e9f9ff7f579107f425f64b08a5b69b0d9cbe9}}) with {{formula:d20520a7-d4ee-4438-b2c5-e685dda55387}} = 156 MeV, as shown in Fig. REF for complete statistical thermal model fits using the available light flavour data measured by the ALICE Collaboration. The fits in Fig. REF extend the simple exponential model (Fig. REF ) by incorporating Boltzmann statistics and degeneracy factors for all particles. If, instead of all listed particles, only nuclei (deuterons, {{formula:e4116d47-7fef-4356-b1a1-0c666ea90ec6}} He and {{formula:eab7406f-44d5-4435-bab1-89b370726665}} He and {{formula:fc60c67f-abfd-488d-8049-558f7ac97e87}} ) are considered for the fit, the resulting temperature is 154 {{formula:72c48244-f652-4034-8210-549c7f483a90}} 4 MeV. The measured yields for {{formula:f55c6bf0-ec32-4a41-a3ca-f063b6d9d611}} He and {{formula:f10a08f3-c893-441d-b029-0e1b92bb1b1a}} nuclei alone agree, depending on the model implementation, within the determined uncertainties with temperatures from 135 MeV to 177 MeV. Taken together, these observations suggest that the relatively heavy {{formula:83dc8f69-b1c0-439d-b606-58fd39c03157}} He and {{formula:4104210d-543a-47b7-bd0a-ce6308ba9f45}} nuclei are also produced statistically at the same temperature as the lighter particles.
{{figure:5a66b1db-142a-4171-ad96-c3324048e3a4}} | r | a012299376f997f7000debecee343d24 |
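For orientation (a back-of-the-envelope estimate, not part of the cited fits), inverting the per-nucleon penalty factor under a simple Boltzmann ansatz gives a temperature close to the quoted chemical freeze-out values; the nucleon mass used below is an assumed average:

```python
import math

# With Boltzmann yields ~ exp(-m/T), adding one nucleon of mass m_N
# suppresses the yield by roughly exp(m_N / T_chem); inverting gives T_chem.
m_nucleon = 938.9        # average nucleon mass in MeV (assumed value)
penalty = 300.0          # measured yield suppression per additional nucleon
T_chem = m_nucleon / math.log(penalty)   # roughly 165 MeV
```

This crude estimate ignores degeneracy factors, quantum statistics, and feed-down, but it lands in the same 135-177 MeV window quoted in the text.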
We propose FedEMD, a light-weight Maverick-aware selection method for Federated Learning. As Mavericks are strategically involved when they can contribute the most, the convergence speed increases, meaning that the learning process terminates faster and more efficiently. Our emulation results show that the proposed FedEMD converges faster than state-of-the-art algorithms {{cite:e53af260fe2340071f56a096446de276205508ba}}, {{cite:88c00a4c1ff104e5ab44722a097b6ff3c056b571}}, {{cite:cac5742b98cca8bf78b3df88988c5b67d5c7cf89}}, by at least 26.9% and 11.3% under FedAvg and FedSGD aggregation, respectively, for a range of Maverick scenarios.
| i | 77c064dc18f516951abc9b2206e45cba |
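A quantity a Maverick-aware selector of this kind could weight by is the earth mover's distance between a client's label distribution and the global one; the sketch below uses the 1D cumulative-difference form over ordered label bins (the distributions are illustrative, and this is not the paper's exact algorithm):

```python
def emd_1d(p, q):
    """Earth mover's distance between two discrete distributions over
    ordered bins with unit ground distance: sum of |CDF differences|."""
    assert abs(sum(p) - 1.0) < 1e-9 and abs(sum(q) - 1.0) < 1e-9
    cdf_diff, total = 0.0, 0.0
    for pi, qi in zip(p, q):
        cdf_diff += pi - qi
        total += abs(cdf_diff)
    return total

# global (uniform) label distribution vs. two hypothetical clients:
# a strongly skewed "Maverick-like" one and a near-uniform regular one
uniform = [0.25, 0.25, 0.25, 0.25]
maverick = [0.85, 0.05, 0.05, 0.05]
regular = [0.30, 0.20, 0.25, 0.25]
```

A selector could then bias sampling toward (or away from) clients with large EMD depending on the training phase, which is the kind of distribution-aware weighting the text describes.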
Now we propose a novel perspective to conduct robustness analysis in FL.
Succinctly, by aggregating the shards first, we can reduce the
non-i.i.d. updates to i.i.d. when the shard size is reasonably large, given some
assumptions on the non-i.i.d. updates. As the first step, we introduce the
assumption on the non-i.i.d. distribution in Definition REF , whose
validity stems from the observation that a major source of
heterogeneity in classification tasks is the unbalanced distribution of data with
different labels {{cite:fa7f865976161b98c887bbabedc1b8b03aa0d366}}, {{cite:58f1775a401afcc8791d2005dcde2d6c391bd75d}}.
| d | 8172fbfdfa5dd8b0e768fd39042a1f8e |
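The variance-reduction intuition behind aggregating shards first — shard averages of heterogeneous updates concentrate around the common mean as the shard size grows — can be illustrated with a toy simulation (the two label-skew "modes" below are an assumption for illustration, not the paper's model):

```python
import random
import statistics

# Toy non-i.i.d. client updates: each client draws from one of two
# label-skew "modes" (illustrative). Averaging within shards of 50 clients
# shrinks the spread by roughly 1/sqrt(shard_size).
random.seed(0)
clients = [random.gauss(1.0, 0.2) if i % 2 == 0 else random.gauss(-1.0, 0.2)
           for i in range(1000)]
random.shuffle(clients)
shard_size = 50
shard_means = [statistics.mean(clients[i:i + shard_size])
               for i in range(0, len(clients), shard_size)]
```

The shard means behave much more like i.i.d. draws around the population mean than the raw updates do, which is the effect the reduction argument relies on.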
Many efforts to characterise the dynamics of BANs have already been put forward. For
example, some studies {{cite:f7259b316d32d6d9eee3a6c15d68c4693c17ee58}}, {{cite:4419fcf4c997b135731b1009403c1344c580c841}} examine the behaviour of
networks composed of interconnected cycles. The modularity of BANs has been studied
from multiple perspectives. In particular from a static point of
view {{cite:47a15e22cd30c7648011022bb376266f23dbb3cc}}, {{cite:e4f8e0a5041356ef966e5fb63164771d805a754e}}, and a functional
one {{cite:b270c985d59987c01dcefd8878e08830adf4fb4b}}, {{cite:f9243bff678a41ac3b0a7aca3a31a0d15ea6a685}}, {{cite:9234adb723dedc777295e4b3544f47decd1f1f92}}. In this paper, we explore a
compositional approach to BANs that allows us to decompose a BAN into subnetworks called
modules, and to compose modules together in order to form larger networks. We define
a module as a BAN to which we add external inputs. These inputs are used to
manipulate the result of the network computation by adding extra information. They
can also be used to interconnect multiple modules, forming more complex networks.
Those constructions resemble the circuits described in Feder's
thesis {{cite:dc116bec18fb42db325e373095af79f70ce9e7a9}}, and modules can be seen as a generalisation of circuits
over any update mode.
| i | 3bac86a610e54e9b0825ba0562ca0426 |
As for unsourced random access, the AP is interested only in the transmitted messages and not in the identity of the devices. In fact, unsourced random access is motivated by practical IoT scenarios, where millions of low-cost devices have their codebook hardwired at the moment of production {{cite:fcd97589f3c5ae9942df79c83c0cf09600fb8745}}, so that all devices share a common codebook. Since the same codebook is exploited, the AP can only decode the list of transmitted messages, irrespective of the identity of the active devices. In this context, the computational complexity can be significantly reduced compared with the case where a unique individual codebook is assigned to each device {{cite:c41203e748ce86e9cd0787496997bf4d96fb224a}}. Therefore, unsourced random access is mainly applied to content-oriented applications. For example, in the quality-inspection process of smart factories, a fraction of the devices send their current reliability information to the AP, while the AP detects all received messages, computes a weighted average of the current information, and generates a performance index. Recently, a low-complexity unsourced random access algorithm based on the coupled CS framework was proposed in {{cite:91cbd0d9f79a8ff9ce84892148f2226b1a3592c0}}. Specifically, the transmission slot is partitioned into subslots and each active device sends a codeword from a common codebook across different subslots. To this end, an inner encoder is needed that maps each submessage into one column of a given coding matrix. Then, at the BS, the inner decoder must identify which columns of the matrix have been transmitted. On the other hand, an outer tree-based decoder is applied to stitch the decoded sequences. However, this work assumed only a single-antenna receiver at the BS, and the results are not applicable to the case of multiple antennas.
Afterward, the authors in {{cite:1b0aa8eea5528a1dd13ae7574bf7c694e818c484}} extended the model in {{cite:91cbd0d9f79a8ff9ce84892148f2226b1a3592c0}} to the case of large-scale receive antenna arrays with Rayleigh fading, where a maximum likelihood (ML)-based activity detection scheme was adopted as the inner decoder of the concatenated coding scheme in {{cite:fcd97589f3c5ae9942df79c83c0cf09600fb8745}}. Specifically, the algorithm in {{cite:1b0aa8eea5528a1dd13ae7574bf7c694e818c484}} avoided the use of a signature sequence longer than the number of active devices. However, it usually requires a large number of antennas at the BS in order to accommodate more active devices {{cite:5a003a58544702ed52a18a7e99f2bb587fbbe334}}. Moreover, for a short subslot length, the number of active devices that can be accommodated is limited by the detection capabilities of the inner decoder {{cite:1b0aa8eea5528a1dd13ae7574bf7c694e818c484}}.
| i | 655d5682efa845a5fdda411b422d2b95 |
These challenges present major barriers to widespread adoption of such learning technology.
To overcome these drawbacks, we propose a new approach to language
induction that takes inspiration from human learning.
We aspire to build a new class of induction systems that adhere to
the constraints {{cite:ef9c7fa33e6302d0d210a4cef7e94a234781f3df}} has enumerated, in
particular that they acquire modular structures in a piecemeal
and incremental way that builds on prior knowledge to guide
learning, so they can acquire expertise rapidly from reasonably
few training cases.
There has been some research into systems that adhere to these constraints
(e.g., Mitchell et al., {{cite:9d5f2298592236976f62068fb6f54b65e4af1cf6}}), but the area
deserves far more attention, especially considering the challenges
faced by popular language learning techniques.
| i | e4105b37d0f95e6941396903f8c90111 |
In the forward pass, the network takes a patch of the DI as input. First, the input data pass through a normal convolutional layer and a max-pooling (MP) layer in parallel, and the outputs are concatenated. Next, the concatenated activations pass through three groups of bottleneck layers. The essential structure of an encoder bottleneck is illustrated in Fig.REF (b). A bottleneck layer is constructed as a small residual block comprising a max-pooling path and a convolutional path. More specifically, the convolutional path consists of two 1{{formula:2f1bd9df-9406-4b9b-abfa-ff75334653b4}} 1 convolutions and one main convolution. The main convolution varies with the function of the bottleneck: it can be a normal convolution, a dilated convolution {{cite:894c6b3a43f821e11901de86086b1514127b7f65}} or an asymmetrical convolution {{cite:dad60a25b7203ff0ccfcedd74e8db3d21c44c243}}. The tensors inside the encoding bottleneck are listed in Table REF .
{{table:f0a6c533-b5fe-4dd8-acba-ac047cfab2ed}}{{table:cad9c089-ecbb-4ab2-921c-4b9d2cf1cb70}} | m | 4b503ce5b39ce410098913a71e4b35ae |
where {{formula:47992c09-89bd-459e-ad41-c024c76bab3a}} is the associated graded algebra of the universal enveloping algebra {{formula:d4005900-9ac9-4295-8531-1f2e0e3773f2}} of the Lie algebra {{formula:7edd35be-0e7c-4d6d-890e-c18b96269bba}} and {{formula:028ac8c8-40d9-4bc4-8d17-6dca843a1d5e}} is the symmetric algebra of {{formula:46841d5a-9a20-4105-a3f6-bd57d15a7f03}} . For a smooth Poisson algebra {{formula:a5ecefaa-4484-44a8-bed7-d963a1f01643}} , a
similar result holds {{cite:5290cd4a50317e307711dd38332e7d97c57ac991}}:
{{formula:354418ac-e764-4566-9822-1b26f51419e2}}
| i | 6b1405f8cf555fb439c25da57442d095 |
There are several theories with two time dimensions. For example, 11-dimensional extended supersymmetry in M-theory is really a 12-dimensional SUSY with an SO(10,2) symmetry.{{cite:5953d4db4a7b90b62af3a8d60ddf34886cea6f09}}, {{cite:3ee8602aa984a0995959e383bd6135a1fb2d3b97}}, {{cite:1ca9c296be68f15790135adcbe2ca3e76b584d90}}, {{cite:af2649e283c4634086899d2abd30b105fad6649d}} F-theory in twelve dimensions (12-D) is similar.{{cite:22cba272007df69fe5d364e38993bdbbdebfb5ed}} But these second time dimensions are compactified. Köhn has taken a different approach and added a second time dimension to the Einstein-Friedmann equations.{{cite:db00b6d4ad4188724882dcc005c4bb7fefaa05be}} This enables him to solve the cosmological constant problem, but the second time dimension, whilst not compactified, is on the spatial scale of the Planck length. Bars and his colleagues have done the most work in this area of two-time physics.{{cite:0ab037d6d83595c596709d493a4a67c05839a6d0}} However, neither of his two time dimensions helps explain the phenomena which interest us. Furthermore, there does not appear to be any GUT which predicts a second time dimension which flows from the future to the past.
| d | 77521358ff270b5a25ad9ad5295fea38 |
To estimate phonon frequencies we employed the “small-displacement” approach {{cite:50201545ff972acdae600d6be2eac3b8521e7221}}, in which the force-constant
matrix of the crystal is calculated in real space by considering the proportionality between the atomic displacements
and forces when the former are sufficiently small (in the present study this condition was satisfied for atomic
displacements of {{formula:9ee897e4-ca48-41e3-8613-1504f743fcc4}} Å). Large supercells containing hundreds of atoms were employed to guarantee that the
elements of the force-constant matrix presented practically negligible values at the largest atomic separations.
The computation of the nonlocal parts of the pseudopotential contributions was performed in reciprocal space in
order to maximise the numerical accuracy of the computed forces. Once a force-constant matrix was determined, we
Fourier transformed it to obtain the phonon frequencies for any arbitrary {{formula:f05f2768-9b7a-4723-9ab9-d4d0cc6ef1cf}} -point in the first BZ. This latter
step was performed with the PHONOPY code {{cite:a48efa36d16adf474292533a1708a53d592819ff}}, in which the translational invariance of the system was
exploited to ensure that the three acoustic branches were exactly zero at the {{formula:21909aff-ae9d-4801-bcc3-23b8d2592c4f}} point. Central differences
for the atomic forces, that is, both positive and negative atomic displacements, were considered.
| m | 55ce4bd139eb051dcd801e0facbcef67 |
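The essential mechanics of the small-displacement approach can be sketched on a toy 1D monatomic chain (harmonic nearest-neighbour springs; all values illustrative): displace one atom, read off a row of the force-constant matrix from the induced forces, then Fourier transform it to obtain the phonon dispersion. The acoustic frequency vanishing at q = 0 is the translational-invariance condition the text mentions.

```python
import math

# Small-displacement sketch for a 1D monatomic chain: nearest-neighbour
# springs of stiffness K_true, mass m, lattice constant a_lat, N atoms.
# Analytically, omega(q) = sqrt((2*K/m) * (1 - cos(q*a_lat))).
K_true, m, a_lat, N = 4.0, 1.0, 1.0, 32
delta = 1e-4                       # a "sufficiently small" displacement

def forces(u):
    """Harmonic nearest-neighbour forces for displacements u (periodic)."""
    return [K_true * (u[(i + 1) % N] - 2 * u[i] + u[(i - 1) % N])
            for i in range(N)]

# displace atom 0 and read off the force-constant row Phi[j] = -dF_j/du_0
u = [0.0] * N
u[0] = delta
Phi = [-f / delta for f in forces(u)]

def omega(q):
    """Phonon frequency from the Fourier transform of the force constants."""
    dyn = sum(Phi[j] * math.cos(q * a_lat * j) for j in range(N)) / m
    return math.sqrt(max(dyn, 0.0))
```

In a real 3D calculation Phi is a matrix over atoms and Cartesian directions and the supercell must be large enough for Phi to decay, but the displace/read-forces/Fourier-transform pipeline is the same.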
Our algorithms universally apply to any single-photon source, regardless of its operating mode. We enable synchronization in post-processing without modifying the quantum source on the hardware side and without synchronization strings. These algorithms can be seamlessly integrated into state-of-the-art quantum communication schemes using correlation events to find the initial absolute timing offset. Processing of the arrival-time statistics during the session helps to keep the clock frequency locked without sharing any secure data bits. We show that the requirements on the clock performance can be more flexible as compensation mechanisms counteract and balance strong drift and clock frequency differences. This feature enables standard computers to be connected to future up-scaled quantum communication networks {{cite:8751d53c3d8a2f09c0d56b4916362eeda0ed3273}}, as ultra-precise clocks become redundant. Here, our methods represent important advances for secure clock synchronization {{cite:75fc4591f6233cb512d55ed0a381416bf937f948}} and global precision time distribution {{cite:af0eab50bff39e5758616f8d5b4c36c935af1d2f}} and show how single photons become both information and timing carriers.
{{figure:94d39a3f-31f6-4161-a4e1-6f020d02075b}} | d | 3ee626be6c15c5021f6831867f89ccf3 |
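The initial absolute-offset search described above can be illustrated by a toy coincidence maximisation over candidate offsets (the time-bin units, event counts, and noiseless channel are illustrative assumptions):

```python
import random

# Recover the absolute timing offset between two photon-arrival records by
# maximising the number of coincidences over candidate offsets -- a discrete
# cross-correlation of the two timestamp sequences.
random.seed(1)
true_offset = 37                                  # offset in time bins (assumed)
alice = sorted(random.sample(range(10000), 400))  # sender-side detection bins
bob = [t + true_offset for t in alice]            # receiver-side: pure shift

alice_set = set(alice)
def coincidences(offset):
    """Number of Bob's events that line up with Alice's after shifting."""
    return sum((t - offset) in alice_set for t in bob)

best = max(range(100), key=coincidences)
```

In practice the receiver record would also contain losses, background counts, and clock drift, so the correlation peak is broadened rather than exact, but the peak-search principle is the same.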
In this work, we have presented a DICE-based offline constrained RL algorithm, COptiDICE.
DICE-family algorithms have been proposed for off-policy evaluation {{cite:c07e209f7eb346bc26cc74e849b59761bd4d1d35}}, {{cite:a50ddc171ea16825b7e6c93b4173a16262ab191f}}, {{cite:4300c889bace7776ce933a2c43cc4063824bacd7}}, {{cite:0a91c99d0b68e364f984724499c51abf76de167a}}, {{cite:26845a8568c9ae7bc1891dc98a0b664c7ba17fcd}}, imitation learning {{cite:fa8229cca64a30e7ce41249e2183bc2c6481cf61}}, offline policy selection {{cite:6b52d528f29f3e76ac716e60858df3314bea1bd0}}, and RL {{cite:659a091e195bee64a0142825b89c430cb872cd83}}, {{cite:c43918b371f80453f1aaf49b1bbea465d386ffef}}, but none of them is for constrained RL.
Our first contribution was a derivation that constrained offline RL can be tackled by solving a single minimization problem.
We demonstrated that such an approach, in its simplest form, suffers from constraint violation in practice.
To mitigate the issue, COptiDICE instead constrains the cost upper bound, which is estimated in a way that exploits the distribution correction {{formula:e7ff1354-ddd5-47aa-9436-a22711a7a4bf}} obtained by solving the RL problem.
Such reuse of {{formula:57d9f4a7-9979-44de-9d1b-0d8176a52b53}} eliminates the nested optimization in CoinDICE {{cite:26845a8568c9ae7bc1891dc98a0b664c7ba17fcd}}, and COptiDICE can be optimized efficiently as a result.
Experimental results demonstrated that our algorithm achieved better trade-off between reward maximization and constraint satisfaction than several baselines, across domains and conditions.
| d | fb7fbe6e2209f0082780aa84fc10aebc |
In this context, there is a natural need to simulate the transient state dynamics of condensed matter systems under laser excitation, both for understanding the underlying mechanisms of out-of-equilibrium properties and for correlating the microscopic electron dynamics with the macroscopic measurements of observables such as current and absorption. The modeling of time-resolved experiments introduces additional complexity. In such experiments, the probe pulse provides information on the out-of-equilibrium system for a specific time delay between the pump and probe pulses. However, the probe pulse must be included in the model, as it contributes to the dynamics and is directly linked to the measurement of observables at particular time delays. Furthermore, the typical intensities of IR/mid-IR ultrashort pulses enable nonlinear interactions that must be accounted for in the model. Also, pump-probe schemes can be viewed as nonlinear schemes, as they require the absorption of at least two photons at two different times. Last but not least, in all non-metallic two-dimensional materials known to date the optical response is dominated by excitonic effects. This is, to a good extent, due to a suppressed screening of interactions in low dimensions, which facilitates the binding between electrons and holes {{cite:beb1b2cc0c2da8f195056c77ee805e07973c8d32}}. Excitons can be considered as quasi-particles composed of an electron-hole pair bound via Coulomb interaction. Hence, electron dynamics simulations must be able to describe the formation of excitons in order to properly describe the light-matter interaction. In this context, we aim at covering all these demands and we present in this manuscript a theoretical approach that allows us to simulate electron dynamics in realistic condensed-matter systems driven out of equilibrium as well as to model ultrafast/time-resolved spectroscopy experiments.
| i | 7465ca4f92c52523ef151a8901801457 |
Degree Distribution. The degree distribution of a graph is the distribution of the number of edges connecting to a particular vertex. Barabási and Albert initially discovered that the degree distribution of many real world graphs follows a power law distribution such that the number of nodes {{formula:ccff2716-6023-44ba-94ee-6f40da0accfd}} where {{formula:4eb758e7-448d-4da9-b758-7b2c0ef0bcb5}} and {{formula:30c1bc8b-9e5c-4146-860e-2d952b84926b}} is typically between 2 and 3 {{cite:806464862c33d80bdd5932625c98b1cf1ce6fce6}}.
| r | 52dc19f45b64fa3ef6536eadf783d872 |
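For illustration (a continuous-approximation sketch, not tied to the cited works), one can sample degrees from a power law by inverse-transform sampling and recover the exponent with the standard maximum-likelihood (Hill) estimator gamma_hat = 1 + n / sum(ln(k_i / k_min)):

```python
import math
import random

# Sample degrees k ~ P(k) proportional to k^(-gamma) for k >= k_min via the
# inverse CDF of the continuous Pareto distribution, then recover gamma with
# the maximum-likelihood (Hill) estimator.
random.seed(0)
gamma, k_min, n = 2.5, 1.0, 20000
ks = [k_min * (1.0 - random.random()) ** (-1.0 / (gamma - 1.0))
      for _ in range(n)]
gamma_hat = 1.0 + n / sum(math.log(k / k_min) for k in ks)
```

Fitting the exponent by MLE rather than by a least-squares line on a log-log histogram is the usual recommendation, since the histogram tail is noisy exactly where it carries the most leverage.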
Therefore, we first analyze the classical part of the QAOA on the Qiskit quantum simulator {{cite:8fafad2cb9eff7095f38a5eed6da9ec44f983013}}. In particular, we analyze what makes good {{formula:1680fdc0-e7db-4c1d-ba82-2fc07dfd3558}} and {{formula:8b70a68b-cf41-4c4d-a16a-a24619caa381}} . Afterwards, we investigate the performance of different classical optimizers. The FOURIER strategy {{cite:172a44478f5385c7610645872ef3bdd536dc61d6}} is a recent suggestion to improve optimization in the high-dimensional parameter space, by using a heuristic.
| r | 63cd1bcd92c9c855b0848e337b1a0af2 |
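The FOURIER heuristic replaces the 2p angles (gammas, betas) by q << p Fourier amplitudes (u, v), so the classical optimizer searches a much smaller, smoother space. A minimal sketch of the mapping (following the parameterisation of Zhou et al. as we recall it; the amplitude values are illustrative) is:

```python
import math

def fourier_to_angles(u, v, p):
    """Map q Fourier amplitudes (u, v) to 2p QAOA angles (gammas, betas):
    gammas from a discrete sine basis, betas from a cosine basis (sketch)."""
    q = len(u)
    gammas = [sum(u[k] * math.sin((k + 0.5) * (i + 0.5) * math.pi / p)
                  for k in range(q)) for i in range(p)]
    betas = [sum(v[k] * math.cos((k + 0.5) * (i + 0.5) * math.pi / p)
                 for k in range(q)) for i in range(p)]
    return gammas, betas

# with a single positive amplitude per basis, the schedule is smooth:
# gammas increase and betas decrease with the layer index, an
# annealing-like pattern often seen in good QAOA parameters
gammas, betas = fourier_to_angles([0.8], [0.6], p=10)
```

Optimising q amplitudes instead of 2p angles both shrinks the search space and biases it toward smooth schedules, which is the stated motivation for the heuristic.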
Conventional ML models, such as SVMs and LDAs, utilized for sEMG-based hand gesture recognition typically work well when dealing with small datasets. These methods, however, depend on manual extraction of handcrafted (engineered) features, which limits their generalizability, as human knowledge is needed to find the best set of features {{cite:6637e0ceba0f222a3b9a5c1099388398d27161ba}}. Increasing the number of utilized electrodes and the number of gestures entails extracting more features; the feature extraction process therefore becomes significantly complex and time-consuming, because more trials and effort are required to boost the discriminative power of the model. Dependence on engineered features is partially or fully relaxed by the use of DNN-based models. Among the most frequently used DNN architectures for the task of hand gesture recognition are CNN-based frameworks. For example, Reference {{cite:3971b4278e7510a51ed909ab0bef59b43bd487b7}} converts sEMG signals to 3D images and uses transfer learning to feed them to a popular CNN trained on a database of natural images. CNNs, however, are designed to concentrate on learning spatial features of the input signals and fail to extract temporal features of the sEMG data. To overcome this issue, researchers turned their attention to hybrid CNN-RNN frameworks designed to take both spatial and temporal information of the time-series sEMG datasets into account {{cite:b234cb2205eaed952a346980e2dfb928db728ae8}}, {{cite:a39acc4f7b816b92638e6837f842ba8af7efeb7b}}. For instance, Hu et al. {{cite:b234cb2205eaed952a346980e2dfb928db728ae8}} applied an attention mechanism on top of a hybrid CNN-LSTM model to perform hand gesture recognition based on sEMG signals with relatively large window sizes (i.e., 150 ms and 200 ms). They achieved a classification accuracy of up to {{formula:9a7d02bd-29a8-4bdd-81d9-6679fa5ffdb2}} using the largest window size.
In {{cite:a39acc4f7b816b92638e6837f842ba8af7efeb7b}}, a dimensionality reduction method is proposed and reported to enhance the classification accuracy when used with a hybrid CNN-LSTM architecture. In this framework {{cite:a39acc4f7b816b92638e6837f842ba8af7efeb7b}}, the classification accuracy is {{formula:ee8d6384-7b65-468b-9e7e-841dacc17f26}} on the same dataset as that of {{cite:b234cb2205eaed952a346980e2dfb928db728ae8}} for the 250 ms window size. Nonetheless, besides not allowing full parallelization of the input, hybrid CNN-RNN frameworks are usually computationally demanding and reveal important limitations with respect to memory usage and large training times. In this paper, we aim to eliminate the complexity of simultaneously exploiting CNNs and RNNs by introducing a Vision Transformer-based (ViT) {{cite:4450459891a31660fcb8b07fe8bbe7739f09a9ac}} architecture to be applied on HD-sEMG signals and to efficiently deal with the above-mentioned constraints.
| i | a6bdfe39c8a6cdc36e496b09d639c90f |
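The core operation such a transformer applies across sEMG time/channel patches is scaled dot-product attention; a dependency-free sketch (toy matrices, not the cited architecture) is:

```python
import math

def softmax(xs):
    m = max(xs)                      # subtract max for numerical stability
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(Q, K, V):
    """Scaled dot-product attention over lists of row vectors: each output
    row is a softmax-weighted convex combination of the value rows."""
    d = len(K[0])
    out = []
    for q in Q:
        w = softmax([sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                     for k in K])
        out.append([sum(wi * v[j] for wi, v in zip(w, V))
                    for j in range(len(V[0]))])
    return out

# toy example: sharp (large-norm) keys make the attention weights nearly
# one-hot, so each output row copies the matching value row
out = attention([[10.0, 0.0], [0.0, 10.0]],
                [[10.0, 0.0], [0.0, 10.0]],
                [[1.0, 0.0], [0.0, 1.0]])
```

Because every patch attends to every other patch in one matrix product, the whole input window is processed in parallel, which is the parallelization advantage over recurrent hybrids noted above.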
Linearisation is done around the current iterate {{formula:0ce5edc2-eb41-448e-a06f-78c13ba6abd2}} and the next iterate is the solution to the approximate problem {{cite:10891aedc9d2862f1eec38e96f66db8970767dfa}}.
| m | a6574a2c6cc4c74d385cc04c5241d023 |
Since most physical systems are inherently non-linear in nature, non-linear field theories are of interest to different branches of mathematical physics. The main reason to consider non-linear electrodynamics (NLED) is that the structures of these theories are considerably richer than that of the Maxwell field, and in special cases they reduce to the linear Maxwell theory (LMT). Various limitations of LMT (the self-interaction of virtual electron-positron pairs {{cite:7cdc6ebb6aab1efa637edd8e4d0758602b166a04}}, {{cite:1a8e62bfe39a4712d3e24f4ae3b80dfc6b60b545}}, {{cite:e2d5cb99fa7221aeb596943946c5d5263b6a8a39}} and the radiation propagation inside specific materials {{cite:216e6b359a1e823211aed346b5a26cb0b4162c17}}, {{cite:d3df9c061f9574074bb1670af0409e54eee1d670}}, {{cite:2fa10f48fb3d7967a997980abd479f7b6c7525e5}}, {{cite:576ed2e6b9d29136e7492ef6978fbebfba6617f9}}) motivate one to consider NLED. The authors in {{cite:fc65c7c8d69fb7ca0b770b2e4eb12e6117d31e3f}} showed that NLED objects can remove both the big bang and black hole singularities. Moreover, from an astrophysical point of view, one finds that the effects of NLED become quite important in superstrongly magnetized compact objects, such as pulsars and particular neutron stars (also the so-called magnetars and strange quark magnetars) {{cite:3824443855b1b4aa99d81651f8fb9bacbd261c97}}. Recently, the authors in {{cite:e0358ce00b2e4f167c79093281ff2f747316e691}} presented the {{formula:8b5c6b37-93b0-45d5-9859-3685f91f0c01}} -dimensional topological static black hole solutions of Einstein gravity in the presence of the mentioned NLED and checked the first law of thermodynamics. Furthermore, they studied the stability of the solutions in both the canonical and grand canonical ensembles and analyzed the effect of the non-linear charge correction on the thermodynamic properties of the black hole.
Therefore, a natural question arises: does a dS spacetime with a non-linear charge source have thermodynamic properties similar to those of an AdS black hole? In this work, we regard the higher-dimensional dS spacetime with the non-linear charge correction as an ordinary thermodynamic system by considering the interplay between the two horizons, and we mainly investigate the phase-transition properties of the four-dimensional dS spacetime. The effect of the non-linear charge correction on the phase transition is also analyzed.
| i | 32d4e8f01b27150df05a655b7e801e48 |
Bakshy et al. {{cite:ff028fefcd80cbf26f9c0bc29e7afd7c3b38a118}} propose a partisan `alignment score' that indexes websites on a continuous scale from {{formula:d6633b72-0890-4e12-b203-665afdbddaa4}} (liberal) to 1 (conservative) based on the relative frequency with which webpages of these websites are shared on Facebook by self-identified liberal or conservative Facebook users.
Budak et al. {{cite:46969ecf555c8fcc3811fbfb31f3a5550242f846}} propose a `partisanship score' based on the depiction of the Republican Party and the Democrat Party in political news articles. The score indexes (online) news outlets on a scale from {{formula:319b2e68-c8a2-42d2-8b58-5f9d95e90b27}} (left leaning) to 1 (right leaning).
MTurk Bias Score by Robertson et al. {{cite:a223ab08ac2ac2a8da856218f597589170e84c63}}: The authors use human raters on MTurk to code a subset of websites used in their main index (see below) on a five-point Likert scale from -1 (liberal) to 1 (conservative).
Pew Research Center ({{cite:5cceddbc2e88660a4476df33e1c680c3524d171c}} adapted in {{cite:a223ab08ac2ac2a8da856218f597589170e84c63}}): Mitchell et al. from the Pew Research Center use a survey with several policy-related questions to map survey participants on a five-point scale from consistently liberal to consistently conservative and study which news outlets the respondents trust most. Based on these survey data, Robertson et al. {{cite:a223ab08ac2ac2a8da856218f597589170e84c63}} create an index on a liberal-conservative scale from {{formula:9b76dfb8-0a6c-41b3-822d-c4b18db0990e}} to 1, reflecting which online news outlets tend to be trusted by liberals/conservatives. We use this index as provided by {{cite:a223ab08ac2ac2a8da856218f597589170e84c63}}.
Robertson et al. {{cite:a223ab08ac2ac2a8da856218f597589170e84c63}} propose a `partisan audience bias score` based on registered (Democrat or Republican) voters' sharing of web domains on Twitter. The score scales from {{formula:88dcf47f-ed49-4b0e-9370-8245056d8f96}} (the domain of a website is exclusively shared by registered Democrats) to 1 (the domain of a website is exclusively shared by registered Republicans).
| r | 71629883dc72809598c9a3273d54dd4f |
Modeling Long-term Dependencies. Sst {{cite:7df476d2533e786936263c8c9a344312405851eb}} and SS-TAD {{cite:55ecc75624ffb2801d206a06dfdc8f872ccc2e5e}}, which are RNN-based methods, achieve relatively lower results as they cannot generate flexible proposals. PGCN {{cite:241fa106ef3c03436f53631caac3ee4ed0e990eb}}, G-TAD {{cite:ff8a85b47835b13527f21b90322f1515e547907c}}, BC-GNN {{cite:9cb2bce57fe2d6e79b50b003c3a33f488937c0c3}}, AGCN {{cite:67d70e0bbf25a9ddb8aaa08659c2f21b15a048cc}}, ATAG {{cite:235bf3f3e32b5c8fedac2a6d078ea2ea2afc65aa}}, and VSGN {{cite:62f187aa6d617bd68c179b93f4d1f2f1e630061c}} are graph models that capture dependencies between proposals or video
segments. Among them, VSGN {{cite:62f187aa6d617bd68c179b93f4d1f2f1e630061c}} achieved the best performance by exploiting correlations between cross-scale snippets (original and magnified) and aggregating their features with a graph pyramid network. AGT {{cite:a01884f0f12b1a754a6b34538d23c09b3db2fa30}}, RTD-Net {{cite:108e3e28f39826664e9c4dde0e7dad9d95bcf7ab}}, ATAG {{cite:235bf3f3e32b5c8fedac2a6d078ea2ea2afc65aa}}, and TadTR {{cite:46e95e0b7cf3321b71f49262672deeb573bd1479}} use transformers to model long-range dependencies. Among them, RTD-Net {{cite:108e3e28f39826664e9c4dde0e7dad9d95bcf7ab}} achieved the best results (on THUMOS14) by customizing the encoder with a boundary-attentive architecture to enhance the discrimination capability of action boundaries.
| m | b567d1acb8c8959854b43ef7cdb5e8c7 |