Most of the surveyed SSL models contain a stack of Transformer layers following the convolutional encoder. Since previous studies have shown that the last layer is not always the most useful for a given task, we adopt the weighted sum approach following SUPERB {{cite:b286eaeecf0e18f8d846745ac7000d2ef5535da8}}. This allows the model to emphasize or de-emphasize information from different Transformer layers. The parameters of the SSL models are fixed, and only the feature weights and vocoder parameters are optimized during training (unless stated otherwise).
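The weighted-sum readout over Transformer layers can be sketched as follows. This is a minimal NumPy illustration with hypothetical shapes; in the actual SUPERB setup the weights are learned jointly with the downstream model rather than fixed as here.

```python
import numpy as np

def weighted_sum(layer_feats, weights):
    """Combine hidden states from all Transformer layers of a frozen SSL model.

    layer_feats: (num_layers, time, dim) per-layer outputs.
    weights: one unnormalized scalar per layer; softmax-normalized so the
    combination is convex and the model can emphasize or de-emphasize layers.
    """
    w = np.exp(weights - weights.max())
    w = w / w.sum()
    return np.tensordot(w, layer_feats, axes=1)  # -> (time, dim)

feats = np.random.randn(12, 50, 768)   # e.g. 12 layers, 50 frames, 768-dim
w = np.zeros(12)                       # uniform weighting at initialization
out = weighted_sum(feats, w)           # with uniform w this is the layer mean
```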
All graphs considered in this paper are finite, undirected, and have no loops or multiple edges. Let {{formula:aa34b887-966a-473d-b3a8-e6e95975cef2}} and {{formula:33f71035-5f67-4bfe-9ea3-871cd1c3b512}} denote the sets of vertices and edges of {{formula:3efe14cd-d315-477e-8ab5-b388ebe40eb5}} , respectively. If {{formula:f96377fb-fc33-4707-ba8a-6f56e9c164ac}} , then {{formula:81dc5cc9-9d9e-4432-84f6-302d8ec7c687}} denotes the subgraph of {{formula:25ecbc2a-ee6b-4396-a276-4fce598e05c1}} induced by {{formula:f76546b9-7a36-42d6-aabe-76519432666b}} . The degree of a vertex {{formula:417886ed-85d6-4f63-b053-edc0cca9343a}} is denoted by {{formula:5c98a710-01ff-4232-ae37-83974e020e0d}} , the maximum degree of vertices in {{formula:f13fe08a-a609-4551-9833-8046e8fc5db0}} by {{formula:ad903099-c179-4b13-9146-78467d307e40}} , and the chromatic index of {{formula:8323ade7-22d0-4d67-9eb7-c2c971a50c62}} by {{formula:50ff305c-86c5-4eae-8b6f-919559cbef73}} . The terms and concepts that we do not define can be found in {{cite:801a77b34bbf4c00fd2ad8a334bcdb6661f61f59}}, {{cite:fd672c2a300c29937c395028ed49bf6c3471a566}}, {{cite:8922aba2fe9686fb8434ad4029582b8d2b36400f}}.
A generative model is a set of probability distributions that models the distribution of observed and latent variables.
Generative models are used in many machine learning applications. One is often interested in performing inference of the latent variable given an observation, i.e. obtaining the posterior distribution. For complex generative models it is often hard to calculate the posterior distribution analytically. The field of variational Bayesian inference {{cite:d5b2c776fa9d39b583a80231b0eb6529a977cfbf}} studies different ways of approximating the true posterior. One approach within this field is called amortised inference {{cite:612a2603baf22518ab39e6d34717184137bb43c7}}. This approach distinguishes itself through using one set of parameters for recognition that is optimised over multiple data points. This can be contrasted with “memoryless" inference algorithms, such as the message passing algorithm {{cite:7b332d0247755bb71bb591ecc04b82e9af398d7c}}, {{cite:e48a421ec35eb649ef5cbaa0a4b9ac9a4c7a625d}}, which finds a separate set of parameters for every data point. Both the variational autoencoder (VAE) {{cite:82d110a20fff19a08e4dd9ae5a68c93688c45186}} and Helmholtz machine {{cite:01cb7f3852d84786d5822f6ada03057fa5b8e6d4}} are examples of amortised inference. In their most general form these consist of a Bayesian network that is used to model the generative distribution. A second network, called the recognition model, is used to model the posterior distribution. Both these networks have the same set of nodes, namely the union of the observed and latent variables. However, in the generative network the arrows point from the latent to the observed nodes but in the recognition network it is the other way around. The recognition network is therefore in some sense an inversion of the generative network. In many applications, one simply flips the direction of the edges of the generative network to obtain the recognition network. 
However, as the simple example in Figure REF shows, this does not guarantee that the recognition model is actually able to model the true posterior distribution of the generative model. In this paper, we study the necessary and sufficient properties of the recognition network such that we do have this guarantee.
We first discuss these properties in terms of d-separation, subsequently in terms of perfectness, and finally in terms of single edge operation using the Meek conjecture {{cite:89b8ead96836da03da506951cfa204d4e53b6f76}}.
{{figure:aa1fb591-aa28-41a6-bf72-ea966df3b0f4}}
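The naive edge-flipping construction mentioned above can be made concrete. The following sketch uses our own graph representation (a parent-set dictionary), not code from the cited works; as the text notes, the flipped graph need not be able to represent the true posterior.

```python
def flip_edges(parents):
    """Given a DAG as {node: set of parents}, return the graph with every
    edge reversed -- the naive way to obtain a recognition network from a
    generative one."""
    flipped = {n: set() for n in parents}
    for child, ps in parents.items():
        for p in ps:
            flipped[p].add(child)
    return flipped

# Generative model z -> x1, z -> x2 (one latent, two observed variables).
gen = {"z": set(), "x1": {"z"}, "x2": {"z"}}
rec = flip_edges(gen)   # recognition network: x1 -> z, x2 -> z
```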
When we glimpse different manifestations of quantum chaos like hydrodynamics and ETH connecting to the emergence of RMT, it suggests to us that a larger synthesis may be possible. Certainly there are many connections between chaos, random matrix statistics, and eigenstate thermalization, e.g. as reviewed in {{cite:8f4c84b1e58c012485fd29493f3bcf11daf1c431}}, as well as connections to notions of complexity, e.g. {{cite:2b9ff823afd8d61fab75399f6b895e4f479c0bb6}}. However, work remains to understand how all the different timescales obtained from various manifestations of chaos fit together, e.g. {{cite:f6245cb9c714fbe997df84e7dbe95182c09f704e}}. We hope to elaborate on these points in future work.
We address these issues by using a two-stage training procedure, similar to {{cite:bddcc304887ccba9b439e749d74280b240095d05}}, {{cite:da79d303dbafc5065f8c91ec4b4498a5a3ca5ac4}}:
Table REF and Table REF additionally compare the driving score, route completion, and infraction score of the presented approach (InterFuser) to prior state-of-the-art on the CARLA Town05 benchmark {{cite:e5a0d0f1fac96b48033a9bcc849ad7fb92a7676c}} and CARLA 42 routes {{cite:2166f672dbb17261b5f11524f4ba119ea07ec10d}}.
{{table:4c179aef-0aa9-4d4b-af69-9666f8043f68}}{{table:11e848d4-25bc-45aa-b0f3-2819011e2cc7}}
Comparing our method to others for SER, we see that we are slightly outperformed by VGGish {{cite:8126b98f165b8ffed6aec5972826c0cd545ec14c}}, {{cite:20f39418d3c6e2b2482c8d3d7d9b1c6c25d22d97}}, according to results taken from {{cite:cb95ccfbde6c3a231a13d49986f0d5cf8741e969}}, which was trained on millions of manually annotated audio files with pre-defined categories. This shows that our approach, which only takes advantage of small-scale content
with its original tag metadata, is very promising for learning competitive audio features.
However, our model is still far from reaching the performance of OpenL3 or the current SOTA, DeepConv with data augmentation.
Similarly, in MGC, the sampleCNN classifier pre-trained on the Million Song Dataset (MSD) {{cite:d6838dc320c98da1cae95f8ea386a4871da99add}} produces much better results than our approach.
However, all these models have either been trained with much more data than ours or use a more powerful classifier.
Finally, the NSynth dataset was originally released to train generative models rather than classifiers.
Still, the results from {{cite:62057c15e4be5ea4158129af4379b423548594c0}} show that our approach, trained using around 7% of the training data, is only slightly outperformed by a CNN trained with all the training data (smallCNN).
Theorem {{cite:c85e857b4272ccf5fe3bfb544c727b71f9190126}}: Each positive definite kernel defines a reproducing kernel Hilbert space (RKHS).
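Positive definiteness of a kernel means every Gram matrix it generates is positive semidefinite, which can be checked numerically. A small illustration with an RBF kernel of our choosing:

```python
import numpy as np

def gram(kernel, xs):
    """Gram matrix K[i, j] = kernel(xs[i], xs[j])."""
    n = len(xs)
    return np.array([[kernel(xs[i], xs[j]) for j in range(n)] for i in range(n)])

# The Gaussian (RBF) kernel is a standard example of a positive definite kernel.
rbf = lambda x, y: np.exp(-0.5 * (x - y) ** 2)
xs = np.linspace(-2, 2, 8)
K = gram(rbf, xs)
eigs = np.linalg.eigvalsh(K)   # all eigenvalues nonnegative (up to round-off)
```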
In general, we observe very similar posterior distributions generated by the two different methods. One can additionally compare the reconstructed observables, as shown in Fig. REF for the example of the energy spectrum. In comparison with Fig. REF , one can see that both methods yield good agreement between the modeled spectrum and the benchmark simulation. The same applies to the {{formula:519d698d-3318-4c0f-89f8-12dff818cad5}} histograms (not shown here). The level of agreement can be quantified by calculating the deviance {{formula:8047adc2-ecdc-4a40-96ca-227550a25931}} {{cite:bf362e16338265d8192d7aec39ae8bd33d224f53}}, which is two times the negative log-likelihood ratio of the model and the saturated model that would describe the data perfectly. For the two observables, the energy spectrum and the {{formula:2ca9386c-101c-4867-a090-e91eb6780556}} distributions, we use the likelihood functions that enter directly into the MCMC sampling, as given in sec. . For the MCMC we achieve a deviance of {{formula:6d529c99-885f-48cd-8b01-c32ea375f85b}} and for the cINN {{formula:f389475d-e794-4842-877f-134e949105fc}} . We obtain reasonable, quite similar values for both methods, indicating a good description of the observables.
{{figure:cf6203b5-25e0-4a55-9bce-fa3110a38764}}
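The deviance defined above (twice the negative log-likelihood ratio against the saturated model) can be sketched for Poisson-distributed bin counts. The Poisson assumption is ours for illustration; the paper's likelihood functions may differ.

```python
import numpy as np

def poisson_deviance(counts, model):
    """Deviance D = 2 * (ln L_saturated - ln L_model) for Poisson bin counts.
    The saturated model predicts each bin exactly (mu_i = k_i), so the
    log(k!) terms cancel in the ratio."""
    k = np.asarray(counts, float)
    mu = np.asarray(model, float)
    with np.errstate(divide="ignore", invalid="ignore"):
        term = np.where(k > 0, k * np.log(k / mu), 0.0)  # k log(k/mu), 0 if k=0
    return 2.0 * np.sum(mu - k + term)

obs = np.array([10.0, 7.0, 3.0, 0.0])
assert poisson_deviance(obs, obs) < 1e-9          # saturated model: D = 0
d = poisson_deviance(obs, np.array([9.0, 8.0, 3.0, 1.0]))  # mismatch: D > 0
```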
It is well known that the Blaschke product {{formula:9ba77fbe-d1b2-4380-a236-cc36284f5b42}} given by
(REF ) converges absolutely for {{formula:d85ed874-1347-4f90-9ac2-a2a009da54a9}} and satisfies {{formula:8c81d52c-e818-49ed-a95e-61063bcdbea1}} with {{formula:3b2eef0d-cfd4-4fd8-af03-4a46bd1a8665}} , since {{formula:84827c65-62a9-4200-86de-405eafaeb3b9}}
{{cite:31219dc0516d2a93020c2ee544d46f14e3d956f3}}. The Blaschke product {{formula:d513c27f-4b47-4769-9807-23b97488e729}} has the standard Taylor series
at {{formula:6df8144b-4ed4-419f-9f11-da3d37fea51b}} :
{{formula:c3d5a092-b6d1-4ff9-806a-9a439789b806}}
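For finitely many zeros, the stated properties of the Blaschke product (modulus below one inside the disk, modulus one on the boundary) can be verified numerically. The zeros below are chosen by us for illustration.

```python
import numpy as np

def blaschke(z, zeros):
    """Finite Blaschke product with zeros a_k in the open unit disk:
    B(z) = prod_k (|a_k|/a_k) * (a_k - z) / (1 - conj(a_k) z)."""
    out = np.ones_like(np.asarray(z, complex))
    for a in zeros:
        out *= (abs(a) / a) * (a - z) / (1 - np.conj(a) * z)
    return out

zeros = [0.5, 0.3 + 0.4j]                       # illustrative zeros, |a_k| < 1
assert abs(blaschke(0.2 + 0.1j, zeros)) < 1     # maps the disk into itself
assert abs(abs(blaschke(np.exp(0.7j), zeros)) - 1) < 1e-9  # |B| = 1 on the circle
```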
which solves the time-independent Fokker-Planck equation {{cite:4ae8a63c43e9089311df70459be57877885579af}}. Note that {{formula:7bb3ceb0-bfd7-41f7-8b7c-69be168d0a0b}} and, by extension, {{formula:c13ca8fb-daf9-4fbc-b49b-51648e87195b}} are symmetric under the
change {{formula:86c7d963-ac88-4c1a-9d7b-eac9c9ecc57b}} and {{formula:da555554-ecaa-4769-bdf7-714786b43f9c}} due to the symmetric force {{formula:94e0519f-76a5-4e9e-a7d7-6f80b8b7e204}} .
Thus we can confine the analysis to the case {{formula:d85c2362-4f5a-4871-a0a0-06767cfe2715}} without loss of generality.
Looking Backward. Production systems were one of the first AI research attempts to model cognitive behaviour and form the basis of many existing models of cognition. However, in traditional symbolic AI, both the key entities and the rules that operated on the entities were given. For AI agents such as robots trying to make sense of their environment, the only observables are low-level variables like pixels in images. To generalize well, an agent must induce high-level entities as well as discover and disentangle the rules that govern how these entities actually interact with each other. Here we have focused on perceptual inference problems and proposed NPS, a neural instantiation of production systems by introducing an important inductive bias in the architecture following the proposals of {{cite:48bf3e4265a073298f62bde3fbd61ba3300ea2d0}}, {{cite:869d916d2fb03c83229fc862ff21c48af5121202}}, {{cite:32f149f1c0954d296d479667109ace72a0022e2c}}.
We would like to make some closing comments. First, we
notice that the discrepancy between topological phases characterized
by {{formula:0dd78b0b-2c72-4009-8617-13b4938ceb65}} vanishes if {{formula:9adcd281-d8a7-4ec7-b877-ee8aa4ed4271}} . Moreover,
in the black brane background (REF ), the CS brane is expected
to be embedded at {{formula:b2b03178-7083-4d38-8952-6b6190729093}} to minimize its energy, which leads to
a vanishing {{formula:45210c3d-f50b-4fd1-b285-2a9b83019f15}} . Since the black brane background corresponds
to a dual theory at finite temperature, the topological structure
of the vacuum may therefore disappear. In this sense, our
model might provide a holographic interpretation of why the topological
aspects of hot QCD due to instantons are so difficult to measure
in experiment {{cite:80fbc4d1e4de41b9edcdde85ee84ffd0870d716c}}, {{cite:5df3cb7fb28f796a6a972296f55abdb552a70106}}, {{cite:c7034fd42abce2785d1ed49cc0efbdbc1e41b912}}, {{cite:1466de038b5d38c63c23650a535fde3ee61d2ce1}}.
Approach overview.
We adopt multi-task unsupervised tracklet learning,
with each task dedicated to an individual camera view,
as in {{cite:a56d4d62dd4cf00337bafbd832bbb0bc943b0ce7}}, {{cite:0ca9b1efb241f410f698787dd5c435097aacb150}}.
In particular, we first automatically
annotate each tracklet with a unique class label
per camera.
Each camera is therefore associated with an independent class label set.
In multi-task learning, we ground all the branches
on a common re-id feature representation.
This model takes individual frame images {{formula:aa20f246-d9ad-407a-9021-6b3544f1136a}} as input
(Fig. REF (a)) rather than tracklets,
which makes it possible to
model noisy image frames within tracklets.
After extracting the frame features with the backbone CNN,
we aggregate the frame features from the same tracklet into a tracklet feature.
In particular, we propose an adaptive sampler that generates two neighbour sets:
per-camera neighbours and cross-camera neighbours.
Based on these two neighbour sets,
we propose per-camera and cross-camera image-to-tracklet selective matching loss functions to learn the feature representation against noisy tracklet data.
An overview of our STL is depicted in Fig. REF .
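The frame-to-tracklet aggregation step can be sketched as follows. Mean pooling followed by L2 normalization is our simplifying assumption for illustration; the selective matching losses may weight frames differently.

```python
import numpy as np

def aggregate_tracklet(frame_feats):
    """Aggregate per-frame CNN features (num_frames, dim) into a single
    tracklet feature by mean pooling, then L2-normalize so that matching
    can use cosine similarity."""
    f = frame_feats.mean(axis=0)
    return f / np.linalg.norm(f)

frames = np.random.randn(16, 128)   # e.g. 16 frames of one tracklet, 128-dim
t = aggregate_tracklet(frames)      # unit-norm tracklet representation
```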
We also study how the entropy bias is affected by adding a threshold bias term or ReLU-activated hidden layers. One of our main results, Theorem 5.5, proves that adding layers to a feed-forward neural network with ReLU activations makes the bias towards low entropy stronger. We also show empirically that the bias towards low entropy functions is further increased when a threshold bias term with high enough variance is added. Recently, {{cite:b7077e073599b9b2ad525c4c53c4c50b8b08a2cd}} have argued that batch normalisation {{cite:647069f04b34fc03060181a106e4a2c09c4e84c2}} makes ReLU networks less likely to compute the constant function (which has also been experimentally shown in {{cite:67ebeda305366d73ac48dff648facb86ad2d10f7}}). If batch norm increases the probability of high entropy functions, it could help explain why batch norm improves generalisation for (typically class balanced) datasets. We leave further exploration of the effect of batch normalisation on a-priori bias to future work.
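The bias towards low-entropy functions can be probed empirically along these lines. The experiment below is a small illustration of our own design (network sizes, weight scale, and the absence of a bias term are our choices), not the paper's setup.

```python
import numpy as np

def label_entropy(bits):
    """Entropy (in bits) of the 0/1 label distribution a network induces
    over all inputs; the maximum, 1 bit, means a perfectly balanced function."""
    p = bits.mean()
    if p in (0.0, 1.0):
        return 0.0
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def random_relu_net(n_in, width, depth, rng):
    """Threshold the scalar output of a random ReLU network at 0 over all
    2**n_in boolean inputs. Gaussian weights scaled by 1/sqrt(fan_in)."""
    X = np.array([[(i >> j) & 1 for j in range(n_in)]
                  for i in range(2 ** n_in)], float)
    h = X
    for _ in range(depth):
        W = rng.standard_normal((h.shape[1], width)) / np.sqrt(h.shape[1])
        h = np.maximum(h @ W, 0.0)
    w = rng.standard_normal(width) / np.sqrt(width)
    return (h @ w > 0).astype(int)

rng = np.random.default_rng(0)
ents = [label_entropy(random_relu_net(5, 16, 2, rng)) for _ in range(200)]
mean_entropy = float(np.mean(ents))   # typically well below the 1-bit maximum
```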
Our simulations show that massive clusters form rapidly in converging regions on timescales of Myrs, in agreement with previous work {{cite:52e93ede490f7c4e9f34e1aec8a54bc60b3efb92}}, {{cite:ae58d659f58da580508c04648c20736e2266c2ac}}. However, whilst these previous simulations only considered simplistic cases of two converging flows (or colliding clouds), we have shown that these are valid in a galactic context. For Region 1, we find that clusters of mass {{formula:92cd9797-3af2-41c0-b407-379a8ff52a74}} M{{formula:5a6ff1f2-f52f-4687-b34b-c34b3eb4f980}} can form on timescales of order 1 Myr. Region 1 represents the single most extreme part of our galaxy model in terms of density (the criterion for selecting the region was based on divergence, but Region 1 is also particularly dense) and gas velocities, so this is where we expect massive clusters to form. These massive clusters form at the hubs where large-scale filamentary structures join together, as observed for the W49A starburst in the Milky Way {{cite:abe49e6e5a1a65c2d60e4f2e027fdbc174a9072b}}, and analogous to massive star formation on smaller scales within molecular clouds {{cite:3d2c12f3d2593e68080cda2c1361a6126df130c1}}, {{cite:868656138d2f011ef16b2df7ed14fdfc57dab429}}, {{cite:7b2706daa6122b066fa74611df965e2d6afd6b7c}}, {{cite:037da8d5975b8c3394dcfd8234bfb81fd3b713f5}}, {{cite:39fa243545d67b18095c530d73a1f17ed5e7a0bf}}, {{cite:eba56853539024e910d6dcde9adad2b4dd8bf5c4}}.
For Region 2, we also see massive clusters forming, but they are comparatively less massive and form over longer timescales of around 3 Myr. Although dense, Region 2 is not as exceptional as Region 1 and is comparable in properties to numerous massive, higher-density GMCs in the Milky Way {{cite:5f328d9cf5828486b22e8cd50c3a9dd7df3b5709}}, {{cite:8ecb9270f926c3692c81607a42cc747342ec42cd}}, {{cite:c48c0c89f44db4fb9cade3c8138f7351960478c7}}.
Comparing the three cases: Table REF shows the testing accuracy of the classifiers in the three cases under a strong privacy guarantee on the STL10 dataset. We have two observations from the results. First, we observe that Case III achieves significantly higher testing accuracy than the other two cases. For instance, the testing accuracies are respectively 0.237, 0.142, and 0.956 for Case I, Case II, and Case III on the STL10 dataset. The reason is that the pre-trained encoder in Case III can extract high-quality features, enabling DP-SGD to train more accurate, differentially private classifiers.
We note that our observation is consistent with prior work {{cite:aceea868ef5c43fd6f6c99e1d1ed5cebe1d08174}}, which also found that a pre-trained encoder can improve the accuracy of a differentially private classifier. However, our evaluation is more comprehensive, since we compare training differentially private classifiers using both linear probing and fine-tuning when a pre-trained encoder is available. Second, Case II achieves much higher testing accuracy than Case I under no privacy guarantee (see Table REF ), but we find that the testing accuracy of Case I is similar to or even higher than that of Case II under a strong privacy guarantee. The reason is that a deep neural network classifier in Case II has many more parameters, so DP-SGD introduces larger noise during training, which overwhelms the expressiveness advantage of a deep neural network. Our observation indicates that, without a pre-trained encoder, one can train a simple linear classifier instead of a complex deep neural network to (possibly) achieve better accuracy under a strong privacy guarantee.
{{table:b961af79-963d-4aee-bbe7-785035fbabcb}}
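The linear-probe training of Case I can be sketched with a manual DP-SGD step over frozen encoder features. This is a conceptual illustration only: the clipping norm, noise multiplier, and the absence of a privacy accountant are our simplifications, not the paper's training code.

```python
import numpy as np

def dp_sgd_step(w, feats, labels, lr=0.1, clip=1.0, sigma=1.0, rng=None):
    """One DP-SGD step on a linear (logistic) probe: clip each per-example
    gradient to norm <= clip, sum, add Gaussian noise, average, and update."""
    rng = rng if rng is not None else np.random.default_rng()
    grads = []
    for x, y in zip(feats, labels):
        p = 1.0 / (1.0 + np.exp(-x @ w))         # sigmoid prediction
        g = (p - y) * x                          # per-example gradient
        g *= min(1.0, clip / (np.linalg.norm(g) + 1e-12))
        grads.append(g)
    noise = rng.normal(0.0, sigma * clip, size=w.shape)
    g_avg = (np.sum(grads, axis=0) + noise) / len(feats)
    return w - lr * g_avg

rng = np.random.default_rng(1)
X = rng.standard_normal((32, 64))    # 32 examples of 64-dim frozen features
y = (rng.random(32) > 0.5).astype(float)
w = dp_sgd_step(np.zeros(64), X, y, rng=rng)
```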
Generation of the DVS spike responses, along with the filtering, was implemented entirely in Matlab, while the SCNN and its various layers were coded in Python using the Nengo-DL library {{cite:9df7a60fdd474690dd95db8e9f45e69b0eb41a55}}. The generated .mat files of the dataset were loaded into the Python network using the SciPy library {{cite:68a064c4a804db638b2d5c826229db07b15f757d}}, and the experiments were carried out on a GPU accessed via Google Colaboratory {{cite:aed2c53b25b6f3f1d2215c38c62fb71c61846a58}}.
Assume that {{formula:2e3092ed-6cff-4677-8723-ecf9deb1ffb6}} is always Hurwitz and {{formula:721a5007-6f3e-4864-9afe-dd1a5e71bd31}} for some {{formula:9bee70ba-15d3-4448-926d-862c0435316f}} and all {{formula:72cd1a05-24c3-45bf-ab1d-44183e794187}} . In this case we can pick {{formula:080a2132-21e4-480e-b676-fb793b258b23}} , which implies that {{formula:961feb4f-d29c-400f-b93c-99045399263f}} . If in addition we assume that the system is unperturbed, i.e., {{formula:1b63f018-0456-4291-8eda-6ade9b4b162a}} , then (REF ) reduces to
{{formula:2910657a-ab18-4fd7-96cc-4879ae7ce6ad}}
Note that the upper-bound on {{formula:982bf182-210b-40f8-b4d4-501a0bd47ece}} stated in Theorem implies that {{formula:7775a491-ad32-4853-9b8f-6571d1fcadbf}} . Thus we recover exactly the same criteria as in {{cite:380ebe9fb382eb944f3f0803baa35b002af62927}} for testing global exponential stability of LTV systems with bounded total variation. Furthermore, if {{formula:d93cbdb0-1b07-42c0-9a22-433c2aa2fa20}} is continuous, then this result becomes the same as {{cite:342d75632a20dc0f40d79498d47c73f3a8e20b75}}.
Now we assume {{formula:d6e730b1-f8c2-49e1-9b33-1318cadb6611}} is a constant Hurwitz matrix. By picking {{formula:dfbc253e-2dfe-4bbe-9e2d-8220285408e4}} , we have {{formula:b72a3cad-54f2-4610-a93e-2d6469fe9367}} and {{formula:1e359701-91ec-4730-a09a-093cf8de8d96}} so {{formula:aaa1b336-847b-4a11-bb78-4a36c315dc06}} . Moreover, the time-invariant Lyapunov function {{formula:954d5896-6683-44ac-80e6-a09aa712a937}} has the property that
{{formula:b38de734-db9f-49fb-8568-e525e98125cf}}
with the parameters {{formula:59a29347-4ee3-40d2-853c-ef566f30345a}} .
In the presence of perturbation, the condition (REF ) reduces to
{{formula:b6884772-503e-442e-b6ea-51004ad22b4f}}
Moreover, the upper-bound on {{formula:8b5f0258-4e68-4f20-afa4-d009bb136d33}} implies {{formula:a022bd21-778f-4db4-9ef1-ee235d345d48}} . This is exactly the same result as {{cite:ec3a3e2211f082c7f7b4d920c9c690ae6f0714c2}} for showing global exponential stability with respect to a neighborhood of the origin for a perturbed system.
Lastly, consider a switched system with linear subsystems
{{formula:0c2826fa-056b-44e5-a5b8-b1782fb62dd9}}
where {{formula:2eec1e08-b8ae-4ef4-84ee-c52032678cf8}} is a piece-wise constant function. We assume that there exist {{formula:f3963439-0aef-4774-b58c-9828e687603a}} and a partition {{formula:da35ebc3-5235-44f1-8362-7a7171ba5bc9}} such that {{formula:9e48ee51-0eb8-4e6e-869a-9900e8c58bb8}} for all {{formula:96286ec5-3856-4e9d-9add-a966916b6987}} and {{formula:63dfdcdd-8f28-4f38-8572-6c2636cb434d}} for all {{formula:e0940464-29ba-4981-ac36-794c2d0d4410}} . In other words, not all subsystems are assumed to be stable. In this case we pick {{formula:0642151f-bbf9-43ec-ae58-7461cacd0cbb}} . This implies that
{{formula:f07f1ed8-bc4a-4dba-a2a8-1893fc92f317}}
where {{formula:853d9611-802e-426c-a402-7940f1f50b66}} is the indicator function for {{formula:94c9702e-56ec-4b8e-9087-0771b2ffe206}} . On the other hand, since {{formula:bc5a4bae-68ad-4734-b0da-173c9da63034}} is also a piece-wise constant function,
{{formula:090ec5e2-8ed3-45b6-8b4f-09330f1adb43}}
where {{formula:fa5c57cf-976a-417e-a85c-75fbe760157f}} denotes the cardinality of a set and {{formula:5300ed38-4dd7-471e-8748-b898b5847adf}} . As a result, when the switched system is unperturbed, a sufficient condition for (REF ) to hold is
{{formula:8fb187d5-063e-48f2-8ddf-7086abf631d0}}
Note that the two terms on the left-hand side of (REF ) account for the total time that the switched system dwells in an unstable mode and the total number of switches over an arbitrary time interval, respectively. Therefore, the condition (REF ) has the flavor of the mixed average dwell-time and average activation-time conditions used to guarantee stability of switched systems {{cite:9d206c5024dde70e93665b8a78347d013146d462}}, {{cite:866589a60b12a57a05d9c8123064cf15fef0de68}}. Due to the choice of Lyapunov functions, the quantifiers in our estimates may not be exactly the same as those used in other results.
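The dwell-time intuition behind this condition can be illustrated with scalar modes, where the state norm is an exact exponential of the time-weighted sum of mode rates. This toy example is ours, not the paper's.

```python
import numpy as np

def switched_norm(rates, durations, x0=1.0):
    """Norm of x(T) for a scalar switched system x' = a_{sigma(t)} x with
    piecewise-constant rates a_i held for durations tau_i:
    |x(T)| = |x0| * exp(sum_i a_i * tau_i)."""
    return abs(x0) * np.exp(np.dot(rates, durations))

stable, unstable = -1.0, 0.5
# Mostly in the stable mode: the dwell-time balance favors decay.
decay = switched_norm([stable, unstable, stable], [4.0, 1.0, 4.0])
# Dominated by the unstable mode: the balance fails and the state grows.
growth = switched_norm([unstable, stable, unstable], [4.0, 1.0, 4.0])
```

Here `decay = exp(-7.5) < 1` while `growth = exp(3) > 1`, mirroring how stability hinges on the fraction of time spent in unstable modes rather than on every subsystem being stable.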
On the other hand, for the named entity recognition and keyword extraction tasks, we followed the same procedure as in our previous work {{cite:327ece3635736cfd274794441d50f59d52801785}}; no new layers were added to the model, and a linear layer was used for word- and sequence-level tagging. In these two tasks the results differ more, and there is a significant gap between AraLegal-BERT and the rest of the models. AraLegal-BERT outperformed ARABERT-v2large {{cite:aae3cbc3ccb67b86848f1efa4a7d4c3c797c8618}} by almost 21% in F1-macro on the named entity extraction task, as depicted in Table REF , while the difference is much greater for keyword extraction, where AraLegal-BERT outperformed the best baseline, ARBERT {{cite:a947d995c2732183035957a8f8486ff40c804026}}, by almost 26% in F1-macro, as shown in Table REF .
Finally, interpolation-based re-ranking, which combines the benefits of sparse and dense scores, significantly outperforms the BERT-CLS re-ranker and dense retrievers. Recall that dense re-rankers operate solely based on the dense scores and discard the sparse BM25 scores of the query-document pairs. The superiority of interpolation-based methods is also supported by evidence from recent studies {{cite:b7cf99b288cc64518e02315ab901715ebdf0a2be}}, {{cite:e25e98c6792fb0127bb93dd4dbc1c3b4cce56d49}}, {{cite:956c99613d1136c4f6a746a6a91113330ea22e47}}, {{cite:d8378b54ee7b4cf937812c4c3920b1aa0558f2dc}}.
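Interpolation-based re-ranking amounts to ranking by a convex combination of the sparse and dense scores. A minimal sketch, with score values and the interpolation weight `alpha` chosen purely for illustration:

```python
def interpolate_rerank(bm25, dense, alpha=0.5):
    """Re-rank documents by alpha * BM25 + (1 - alpha) * dense score.
    Both score dicts are assumed to cover the same candidate set and to be
    comparably scaled (in practice they are normalized first)."""
    fused = {d: alpha * bm25[d] + (1 - alpha) * dense[d] for d in bm25}
    return sorted(fused, key=fused.get, reverse=True)

bm25 = {"d1": 0.9, "d2": 0.2, "d3": 0.5}
dense = {"d1": 0.1, "d2": 0.7, "d3": 0.8}
ranking = interpolate_rerank(bm25, dense, alpha=0.5)  # "d3" ranks first
```

Neither signal alone puts `d3` on top, but the fused score does, which is the kind of complementarity the interpolation exploits.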
Music Information Retrieval (MIR) is a growing domain of audio processing that aims to extract information (labels, symbolic or temporal features) from audio signals {{cite:cd82cae8a46f6611cc8ec4771907bde8db081deb}}, {{cite:1a3d39807ad390a8003d252ddabac24cbc685b47}}. This field embeds both musical and scientific challenges paving the way to a large variety of tasks. Such abundant industrial and creative applications {{cite:a28e21b30d56b24d22abae3d55afe37003aad8c0}} have attracted the interest of a large number of researchers with plentiful results. Among the diverse sub-tasks included in MIR, music transcription comprises an active research field {{cite:ec58d3e1684f26dbfc9f63d62cbe4a6412f19975}}, {{cite:3c94d895f3b37f86bf53bc354e1b9015e49dd3ac}} which is not only interesting by itself but finds generic applicability as a sub-task for other MIR objectives (cover recognition, key detection, symbolic analysis). Music transcription can be described as associating symbols to audio signals composed of one or more musical instruments. Thus, this field embeds pitch and multi-pitch estimation tasks but also other musical dimensions, such as dynamics. Currently, most pitch estimation techniques are based on fundamental frequency detection {{cite:20ac83daa2cd4c9d99d2c9ae23ccbcee63230e02}}. However, such approaches may prove insufficient in multi-pitch contexts, where the need for more sophisticated approaches appears crucial.
Experiments with relativistic heavy-ion collisions at facilities like the Relativistic Heavy Ion Collider and the Large Hadron Collider provide the opportunity to study deconfined QCD matter which is dynamically evolving in an out-of-equilibrium state.
Over the last years it has become evident that the bulk dynamics of the evolving QCD matter can be described by relativistic dissipative fluid dynamics {{cite:4923dbda7b95d4f90feeaf0fbb169de0422f050e}}, {{cite:e1aac4bb98bf8700125cd007f8aa894a9559982e}}, {{cite:2227fc4fc2f01f69a8818f79071db364e87f14c5}}.
Next, we consider how previously proposed bias mitigation methods {{cite:0870676de9d9c59c5d5c3a780fa7cea04468a80b}}, {{cite:391361a3ec7a4543ceb7f3930fd535fa293d74c6}}, {{cite:f632a5dd25a8279c733ca1d96ae81b89514133c1}}, {{cite:6c47e97f607c191b4285c47bda66666db6ece9b8}} perform when evaluated using multi-attribute bias amplification metrics.
In what follows, we shall also be computing the half-chain
entanglement entropy of the Floquet eigenstates. The procedure for
this is as follows. First, corresponding to any eigenstate
{{formula:b9fbec13-e8d6-4a9a-9722-d043d93a178e}} , we construct a density matrix {{formula:14075f54-a88c-477f-9e46-8f33c3d2dc9c}} which is defined on the full chain
with periodic boundary condition. Next we divide the chain into two
equal halves, {{formula:dbbde3a8-dcf1-484b-99e2-f07de104aa42}} and {{formula:f55f6659-a12e-485a-a574-d1db840ae0e3}} , with open boundary condition and trace
out the contribution of states residing in {{formula:29240080-50da-4fc3-98a3-22cc665b7db1}} . Thus each matrix
element of the reduced density matrix {{formula:090ba29e-30c7-416e-9afa-f6ae595bad71}} after tracing out
{{formula:bfad98d2-25cf-4691-b8bb-96544fece9bb}} can be written as {{cite:7eb228a78ddf9965d5feb3827a428595683da23e}}
{{formula:db16dfcf-19c5-4ba2-a485-1ab16b4d75de}}
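The procedure above can be sketched for a small qubit chain, where tracing out one half reduces to an SVD of the state reshaped into a left-right matrix. This is a generic illustration of ours, not the paper's code.

```python
import numpy as np

def half_chain_entropy(psi, n_sites):
    """Von Neumann entropy of the reduced density matrix of the left half of
    a chain of n_sites qubits in pure state psi (length 2**n_sites). The
    squared singular values of the left-right reshaping are the eigenvalues
    of the reduced density matrix."""
    half = n_sites // 2
    M = psi.reshape(2 ** half, 2 ** (n_sites - half))
    s = np.linalg.svd(M, compute_uv=False)
    p = s ** 2
    p = p[p > 1e-12]                 # drop numerical zeros before the log
    return float(-(p * np.log(p)).sum())

# Bell pair on 2 sites: maximally entangled, entropy = ln 2.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
# Product state |00>: zero entanglement entropy.
prod = np.array([1.0, 0.0, 0.0, 0.0])
```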
One of the most fundamental tests of the Copernican principle comes from observations of our motion with respect to the cosmic microwave background (CMB) rest frame, which induces a kinematic dipole that has already been observed in the CMB {{cite:bb7047d87f2da309314b5c61cb61399002cf63cb}}, {{cite:b5f9c0cea911a7b266c54a44ac4fb447cedaa229}}, {{cite:d4d4965a1c7ed80869f77f2bc17050233a66d761}}, the local bulk flow {{cite:a19eb0292484ea3702684d1825a7c22f912ce5a8}}, {{cite:7b15df1739874fed3d5d69b7e1115e3eacd4916e}}, {{cite:374d2fe3248d46f040f237c7d93597ee8af30495}}, {{cite:335f56085470d9156e368154b1fa701e49f3047c}}, X-ray clusters {{cite:af5d1763a3112af2948f5c5ba8f39b720d838d8d}}, {{cite:a9b06c2d7e4e879f0e1f17058571d337490f5fe4}}, type Ia Supernovae (SNe) {{cite:59bef8557822410abd5ff47794d8969348ea8559}}, high redshift radio sources {{cite:ac2920330028beb5511156e3053fd06ece3572fa}}, {{cite:68c36d28107f6078262d18dd52bc15b83af4d90b}}, {{cite:2cf68fd742a1836bfb343b947c1e6a09cd4f26a0}}, and distant quasars {{cite:a18cbdf1f8c18ea474943b438cf51bb768ed0864}}. Many of these observations have intrinsic systematic errors that have to be taken into account {{cite:5f4760b0315ef07219bc0075d827a18dc1a03a8d}} in order to avoid theoretical biases. Another route is to perform null tests of the FLRW metric, see {{cite:26e4de9464d5c95a6cc9baf50227c0eacd6cb1c6}} for recent forecasts.
Global contrastive
Global contrastive methods treat every image as its own class, while artificially creating novel instances of said class through random data augmentations.
In this work, we evaluate contrastive methods using Momentum Contrast (MoCo) {{cite:e45a78943275672b459c6ab2a8d8d932eb2e738d}}, and specifically MoCo v2 {{cite:6ffb9e54a6782c912e8a10af2d965de684acf726}}.
These methods formulate contrastive learning as dictionary look-up, enabling the construction of a large and consistent dictionary of size {{formula:a559f717-91cf-434e-899a-6847bc72a5e1}} without the need for large batch sizes, a common challenge amongst dense prediction tasks {{cite:be0477f0e77f90820b69d8d6fc6fe6aa4459a0f4}}.
MoCo is optimized using InfoNCE {{cite:6419f5a562d53903fb550244df7e1ed51e5a5883}}, a contrastive loss function.
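The InfoNCE loss for a single query can be sketched as below. Temperature and dimensions are illustrative; MoCo's actual implementation operates on batches with a momentum-updated key encoder and a queue of negatives.

```python
import numpy as np

def info_nce(q, k_pos, k_negs, tau=0.07):
    """InfoNCE for one query: cross-entropy of a softmax over similarities,
    with the positive key as the correct class. All vectors are L2-normalized
    so the logits are cosine similarities divided by the temperature tau."""
    q = q / np.linalg.norm(q)
    keys = np.vstack([k_pos] + list(k_negs))
    keys = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    logits = keys @ q / tau
    logits -= logits.max()            # numerical stabilization of the softmax
    return float(-np.log(np.exp(logits[0]) / np.exp(logits).sum()))

rng = np.random.default_rng(0)
q = rng.standard_normal(128)
negs = [rng.standard_normal(128) for _ in range(8)]
loss_easy = info_nce(q, q, negs)      # positive aligned with the query
loss_hard = info_nce(q, -q, negs)     # positive anti-aligned: higher loss
```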
Two key processes allow the NS/BH to accrete mass at a high rate and launch the jets during this common envelope evolution (CEE): the formation of an accretion disk and neutrino cooling of the accreted mass. The density gradients in the envelope and in the core lead to a non-axisymmetric accretion flow where the NS/BH accretes more mass from the denser (inner to the orbit) side. The higher mass inflow from one side results in a net specific angular momentum of the accreted gas that is sufficiently large to form an accretion disk around the compact NS/BH (e.g., {{cite:3354346739883b33cada68198ffb7d60867ad53b}}, {{cite:043bd2b8f24161c5688c6f2ab825ea85e9429a26}}, {{cite:e8dfa173a80c916e4cd6066879c22cb1d489cc8b}}, {{cite:dab3560a25693be9c79791e5833c49e197d9b1c1}}, {{cite:d838bd4e6ab0c940b157c561bc3f078e03a02ed0}}, {{cite:7b3e76064b34afc6c3bcf40977865596fb0610ad}}).
Efficient neutrino-cooling by the dense and hot accreted mass for accretion rates of {{formula:f1f6c41e-9363-4bbb-a454-e8e84d383361}} prevents the buildup of high-pressure around the mass-accreting NS/BH and allows the high mass-accretion rate to proceed {{cite:80b9d4fe302e150a68bfe9bc9707b1d82ea8103a}}, {{cite:815892d6c13c17aeb4c8a8faa719eb9f803d9845}}, {{cite:98c98e99580485991e134feecd3b8824dc41ca0b}}. The jets themselves remove some more energy from the accreting object (e.g, {{cite:729f07e6fa2198ea32830a3379dae8061a81b767}}, {{cite:b7ee11c841754df4837d1bd1296abf4e4fecd63d}}, {{cite:160c00afb0c7ed357a914980401830bc38c5420e}}). A black hole accretor also accretes some of the energy of the accreted mass (e.g., {{cite:a43ad745e5a8381fd8b7aff6e35b7991a3fd95bf}}).
Embedding Algorithms. We use both fastText {{cite:67bc7f0a1363a87170d01874cecc412829d4cb23}} and Skip-gram word2vec {{cite:b443d6e885a1ccf779642cf913d51a18a03c03f2}} embeddings, two of the most common embedding algorithms. Word representations in fastText are composed of both a word and the ngrams, or subwords, that it contains, which may cause bias to be acquired and encoded differently in fastText than in word2vec (as was found in Lauscher2019AreWC, discussed in more detail in Section ).
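Both algorithms train on (center, context) pairs drawn from a sliding window; the generation of those pairs can be sketched generically as below (this is not tied to either library's internals; fastText would additionally decompose each word into character ngrams).

```python
def skipgram_pairs(tokens, window=2):
    """Enumerate (center, context) training pairs within a symmetric window,
    as used by Skip-gram word2vec."""
    pairs = []
    for i, center in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

pairs = skipgram_pairs(["the", "cat", "sat"], window=1)
# [('the', 'cat'), ('cat', 'the'), ('cat', 'sat'), ('sat', 'cat')]
```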
| m | 5b1ffda177bbe40ed26ea0c2169e2be7 |
In this section, we describe our proposed method for sequential scene flow estimation and sequential point cloud forecasting.
Our model solves the defined tasks by exploiting several properties of point cloud sequences {{cite:1eabb218f706966be9ac71f3434826803aaa8504}}, {{cite:c9d82c918e7aea44deb704847ac5ff4c10756dc0}}:
| m | 9534fbc7c25bf4ca75963c35b65842d4 |
Owing to the use of an optical cavity with near-perfect temporal and
spatial mode matching, a memory efficiency of 67{{formula:a26abd2e-1ead-47b7-9fe7-901d7c145d14}} 1% and excess noise
close to the QNL have been directly measured. For a set of input coherent states
with mean photon numbers ranging from {{formula:39dabdaf-de8d-48f4-a961-878292e8b790}} to {{formula:d30add7f-51a5-4a29-b73b-51f158b64141}} , the deterministic average fidelities exceed the benchmark
fidelities; the memory has thus entered the quantum regime. Because
the atomic cell is placed in a single-layer magnetic-field shielding barrel
in the present experiment, the memory lifetime is short, on the scale of
microseconds, due to residual magnetic-field noise. If
the magnetic-field noise is further reduced by employing a multiple-layer
structure, the lifetime should increase considerably {{cite:9c9fc98229d77792b371db3dcb5feca61af3390e}}.
Alternatively, it has been demonstrated that an anti-relaxation coating
on the inner surface of the cell wall provides an effective approach
to extend the memory lifetime of warm atoms to the scale of milliseconds {{cite:b057084e3cc0f2e88297c78c6c077d7b081c50de}}, {{cite:d20cdfa3c17faa69e552707f1a59007de4ecc0e4}}. We believe that if the above-mentioned feasible techniques are applied
in our system, a quantum memory with a longer lifetime can be
demonstrated with the cavity-enhanced warm atomic system. The presented
approach is applicable to a variety of other physical platforms, such as
trapped ions {{cite:e326364bea4dd74e335da7136b75a0e7185a3614}}, {{cite:4ab13840c58d1bd26ff15cab0cb63a9bfda2e4fb}}, {{cite:63196836b8617b5aba0590b78aa0d7025ceb2eb0}}, {{cite:639898400e6552a0cebf4fba337c12e62a198775}}, superconductors {{cite:f92cc300c9d0ac1e1383737b6e0c381593d96a09}}, {{cite:e4ff3227c315bb71917e5c9391d7c47e0c9adf05}}, {{cite:e5726c7537489134f51ed8b276897ed28f50ac97}}, solid
states {{cite:51f5d7c1c26f22c290f1608b1c121837e1392486}}, {{cite:54e9106e52129152d704ef02af66b7c018f1b046}}, {{cite:7b6fc0d3fa4112a23d20cf7053cccdc2bbd87c58}}, {{cite:d9a64ea5f65e4b9f211286bcc197f132fc2af76e}} and optomechanics {{cite:697e9fe4e9a6d0132c042c438470ee305f3bf608}}, {{cite:9b3e3a8e8b691ce9398644545825ad935e3a53bf}}, {{cite:b4272c4265101a1915f68a1c3db6acd70fe393ec}}, {{cite:4efd4b046c474a2e1d0aa3cd8471022484759113}}, {{cite:0c5ad4cae24752b9d51e2c8e47f6207ce592418c}}.
| d | e79eca5a9e3bd4070946ad20c822993b |
Since the target samples have no ground-truth labels, self-training methods {{cite:15c24148416d61638663e406365a5830ceb67ecc}} use inaccurate pseudo-labels to compute the cross-entropy loss. Optimizing {{formula:4f1ab6e3-0080-4ac7-b2e6-ce060bdd92f3}} can therefore be especially helpful in the UDA setting. Indeed, {{formula:b67af031-d2ca-40e4-9953-3434196e4f25}} is adaptive w.r.t. the input {{formula:ffdbb192-ad9d-4cd7-9e9b-d56a0b2e64df}} and the network parameters {{formula:5da1a2f8-9880-4ba7-992b-bf1dca9ca674}} , and is independent of the inaccurate pseudo-labels, which makes it an ideal regularizer for self-training-based UDA.
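The contrast between the pseudo-label cross-entropy and a label-free regularizer can be sketched as follows. The paper's regularizer is the formula above; here the prediction entropy serves as a hypothetical stand-in, since it likewise depends only on the input and the network parameters, not on the pseudo-labels:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def pseudo_label_ce(logits):
    """Self-training loss: cross-entropy against the argmax pseudo-label,
    which may be inaccurate on unlabeled target data."""
    p = softmax(logits)
    pseudo = p.argmax(axis=1)
    return -np.log(p[np.arange(len(p)), pseudo]).mean()

def entropy_regularizer(logits):
    """Label-free term (illustrative stand-in): depends only on the input
    and the network parameters, never on pseudo-labels."""
    p = softmax(logits)
    return -(p * np.log(p + 1e-12)).sum(axis=1).mean()
```

Minimizing the first term reinforces possibly wrong pseudo-labels, while the second term shapes the predictions without ever referencing them.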
| m | ad03a9e20f0a7b73a8d0a587d0ef879a |
The interplay between the predictive distribution and the restriction property of the DP is the cornerstone of the “indirect" proof-method of Theorem REF . This method imposes two strong constraints in the choice of the prior distribution: i) the predictive distribution induced by the prior must depend on the sampling information {{formula:3b1f4280-8093-451c-bd36-6a1b3cd4003b}} through simple statistics of {{formula:e7768940-dde0-49cc-9cf8-2bf0c1e44f75}} , whose marginalization under the prior is doable explicitly; ii) the prior distribution must have a restriction property analogous to that of the DP prior. Discrete nonparametric priors obtained by normalizing homogeneous completely random measures {{cite:85c87cc391581ade0a53a5403f6ca1951c8c6766}}, {{cite:3e9ac5e48a7c3710ea2f5b7545a45ccc78ceb9eb}}, {{cite:29038487b06cbbefdf78a55b27c7268562750be3}}, {{cite:694002e75f5d4bd4cd0ba530d824fcb483606ffd}}, {{cite:fd2729ad0ca31d69bbfc3f921dc776727c83b1bc}} are the sole nonparametric priors satisfying the constraint ii); this follows from the fact that completely random measures have a Poisson process representation admitting the Poisson coloring theorem {{cite:e8ecf8857e4b5a2b4cbac785afc94be7dc3e406e}}. However, from {{cite:bbebce1edde81fbb39ff799bd751acf78e95cc2d}}, it follows that the DP is the unique normalized homogeneous completely random measure which satisfies the constraint i). The DP prior is thus the unique discrete nonparametric prior which satisfies both the constraint i) and the constraint ii). Because of this peculiar feature of the DP prior, the “indirect" proof-method can not be used when considering other nonparametric priors in BNPs. In this respect, the most popular generalization of the DP prior is the PYP prior. From {{cite:4bf54512edf6df9ccaab7225030f40ca4376f3db}}, the PYP is the unique discrete nonparametric prior satisfying the constraint ii). 
However, the PYP prior does not belong to the class of priors obtained by normalizing homogeneous completely random measures, and hence it does not satisfy the constraint i).
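The constraint i) can be made concrete through the sequential (Chinese-restaurant) predictive scheme: under the DP the probability of a new cluster depends on the sample size alone, while under the PYP it also depends on the current number of distinct clusters. A minimal simulation sketch:

```python
import numpy as np

def crp_sample(n, alpha, d=0.0, rng=None):
    """Sequential sampling from the Pitman-Yor predictive distribution;
    discount d = 0 recovers the Dirichlet process predictive."""
    rng = rng or np.random.default_rng(0)
    counts = []                                   # customers per cluster
    for _ in range(n):
        k = len(counts)
        # existing cluster j with weight n_j - d, new cluster with alpha + d*k
        probs = np.array([c - d for c in counts] + [alpha + d * k])
        probs /= probs.sum()
        j = rng.choice(k + 1, p=probs)
        if j == k:
            counts.append(1)
        else:
            counts[j] += 1
    return counts
```

With d = 0 the number of clusters grows logarithmically in n, whereas for d in (0, 1) it grows polynomially, one visible consequence of the PYP predictive depending on more than a simple statistic of the sample size.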
| m | ece406ed65c881b54d99daac77c24d08 |
In the polar coordinate plane (see the left panel of Fig. REF ), the coordinate origin with zero eccentricity is no longer a stationary point. This differs from the prograde case, where the zero-eccentricity point is always a saddle point in the resonant model ({{cite:8affdf86431621e709851efc5125a7f2aefbe86a}}, {{cite:16ca3ce1aa22f8498694998e7fb2dd5cfbc18f50}}).
{{figure:c551d471-caee-4f87-94c4-02d3eff8c82d}} | r | ed3d3c874b061e9e62212cace7aa3e6d |
Evaluation is done on datasets covering a variety of objects and settings (i.e., Named Datasets). CelebA {{cite:f10593d487c39ebee5da6ca41394ddc0feda8d1b}} is a large-scale dataset of faces. LSUN {{cite:562a1b50f866d8f7ddfe777060d8a756b27363c9}} contains ten different scene categories, from which we use the images of bridges, churches, and kitchens. The Stanford cars dataset contains different types of vehicles.
| r | c0f1892691c00f1ff0152a4d2b473f5b |
Although recursive least squares successfully estimates the elements of the
noise covariance matrices, it does not guarantee SPD estimates of the
covariance matrix. Convergence of the estimates to the true covariance
matrices is guaranteed provided the matrix {{formula:f8cd2dd1-02c1-413b-922f-2a49a93ff844}} is persistently excited.
However, the transients matter when the covariance estimate is
concurrently used to estimate the state vector. In this case, the filter may
suffer a loss of observability or, worse, receive negative
information updates by virtue of a non-positive-definite noise
covariance matrix estimate. As a result, an SPD noise covariance matrix
estimate is crucial for a stable adaptive Kalman filter. To this end, a
geometric optimization approach that respects the geometry of SPD matrices is
introduced here. A brief summary of the geometry of SPD matrices is provided
below (for a comprehensive review, see, e.g., {{cite:e38c8e2e0c17369b818b8629760a58e243f2cb3c}} for SPD
matrices and {{cite:6cfd5092c7da7a1232d78c8cf9dc4384d3cacb58}} for Riemannian optimization methods).
| m | 8e4747226d526ff4537d3c81a5b84d46 |
Though weak limits are known for the optimal value in the regularized optimal transport problem, less is known about the distributional limits of the optimizers themselves. In this paper we provide the limits of the empirical solutions of the primal problem (plans/couplings), of the dual problem (potentials), and of the celebrated Sinkhorn divergence (e.g., {{cite:b97065a8bef5b9981c3d1d654115e1d853b61d4e}}).
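For concreteness, the entropically regularized primal problem is typically solved with Sinkhorn's matrix-scaling iterations; a minimal sketch (fixed iteration count, no log-domain stabilization):

```python
import numpy as np

def sinkhorn(a, b, C, eps, n_iter=500):
    """Entropic OT between histograms a, b with cost matrix C: alternate
    scaling of the Gibbs kernel K = exp(-C/eps) until marginals match."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]     # regularized optimal coupling
    return P, (P * C).sum()             # plan and its transport cost
```

The empirical plan `P` and the dual potentials (log u, log v up to scaling) are exactly the optimizers whose distributional limits are studied here; the Sinkhorn divergence is built from such regularized costs.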
| i | 3534906ca49d7160c18c36bdc9e3665d |
It has been a long-standing problem in space-time physics to resolve the mysterious relationship between information and gravity. In connection with this fundamental question, a recent fruitful approach is to examine the complementarity between quantum entanglement and geodesic structures in the context of the holographic principle {{cite:894513dc80cedc6f3d905af342451dc5df27aa4b}}, {{cite:58d8db33b6c20d4b00fb109505524d36c992f7d5}}. This complementarity was strongly motivated by the so-called Ryu-Takayanagi (RT) formula, in which the entanglement entropy in a holographic quantum field theory is proportional to the area of the minimal (extremal) surface in its gravity dual, called the RT surface {{cite:debbf0157d5a94687bb6bbd68c1b3a726139d735}}.
The RT formula is essentially a holographic extension of the famous Bekenstein-Hawking formula for the black hole entropy {{cite:f869324844cc45605b78954f60e1998019ac1979}}, {{cite:35c2aeeb4e85847b8bf8ed4e92772c42f72e8b7f}}, {{cite:eb9ebfba29488a43ebe2798191631cad117edb33}}. A very important feature of a black hole is the presence of radiation of Hawking pairs inside and outside the event horizon. The theory is then described by a Bogoliubov transformation, as in superconductivity, connecting the two sides of the event horizon. Thus, the RT surface should be characterized by the condensation of entangled pairs of elementary objects, which carry critical information about the holographic spacetime. Therefore, characterizing the surface by extraction of the entangled pairs is crucial for a comprehensive understanding of the RT formula, including the large body of past research. In this paper, we address this issue in terms of quantum operational techniques.
| i | d8d11d192065524fa9c5a62614a8b5a8 |
Face recognition (FR) has been well investigated for decades.
Most of the progress
is credited to large-scale training datasets {{cite:0b05fbca8d18058a2539aa8ff1052bcaa52c8c1e}}, {{cite:43de7bd21483eaace5d20c907d778ee75ebe069b}}, resource-intensive networks with millions of parameters {{cite:858da22842ad9969a7288cfd170f2120b8e16673}}, {{cite:2439fbc9a7783ae461f894c3517d277587a1e5de}} and effective loss functions {{cite:29805c783803ba3f38d74b34614f05f37ed69fc8}}, {{cite:4c272acf42fb83ca9dcdc9886294dd74e5cee568}}.
In practice, FR models are often deployed on mobile and embedded devices,
which are incompatible with the large neural networks (e.g., ResNet-101 {{cite:858da22842ad9969a7288cfd170f2120b8e16673}}).
{{figure:dfd9d477-8e4a-4eeb-a760-ce5d638bdc87}} | i | cdbadbd92d3b7b94a902b5e9db78115a |
In this work, we have presented a comprehensive exploration of the magnetorotational instability (MRI) in collisionless pair plasmas via fully kinetic Particle-in-Cell (PIC) simulations. With a shearing-box setup implemented in our relativistic PIC code Zeltron ({{cite:d6dc2f0a4139a1edef0afce03bfd3b4f1e5f7815}}), we have carried out a vast array of 2D runs, exploring an unprecedentedly large parameter space. In particular, we have conducted very large-scale 2D simulations with a macroscopic-to-microscopic temporal-scale separation up to {{formula:4a0e43a4-4b8d-4876-b6a1-e631cb3c07ba}} times larger than in previous works, and with system sizes up to twice as large as the largest simulations presented in the literature. In addition, we have carried out large-scale 3D PIC simulations of the MRI in pair plasmas, achieving for the first time a global mesoscale dynamics akin to that observed in MHD works. To study the axisymmetric MRI with PIC, we have resorted to the established 2D shearing-coordinate approach employed in previous publications ({{cite:27416fc64cd1fa963dfef71bb2d9225b79d25e43}}, {{cite:c5a7af251b93ff7f5107b344534cf3efb3f1745d}}); for 3D nonaxisymmetric simulations, we have developed and applied a novel “orbital-advection” formulation of the shearing box that simplifies pre-existing methods ({{cite:ee15d2039ad8ea2dd08381ac6d6948934e67b22b}}, {{cite:8f5a22640cbad2f5c22686a7f3d2ac751c3b8f17}}) and is simpler to implement numerically.
| d | 762cf812ad7539ad0a50524198c49383 |
The obtained distribution of waiting times (with logarithmic bins) is shown
in Figure REF (left column), while the right column reports
the same distribution divided by the bin size;
the latter distribution is proportional to {{formula:e3cd1575-d7f9-4ed0-9546-3091cf5ac08f}} .
Fitting functions are superimposed on the data.
The fitting function shown with a dashed line represents
a set of overlapping bursts of flares (the multi-loghat distribution described in Appendix A). The parameters
of the fitting functions are reported in Table REF .
The multi-loghat distribution is a Poissonian process: the events
within a burst are uniformly distributed.
There is good agreement between the data and the multi-loghat model for {{formula:aae54413-109c-44c3-b21e-76243bcdfe42}} larger than 1 day. For short waiting
times ({{formula:073f970c-ff14-49f4-b793-c22bdb374de2}} ), the multi-loghat distribution, like all distributions based on Poisson processes {{cite:5eb97354042448597c19eee35e5260502bb104a0}},
shows an exponential profile {{formula:158fb6c8-b613-4258-948a-12f540f97186}}
(where {{formula:b5d52f22-2e2f-43ae-9b98-eb53d3087750}} is the typical timescale of the distribution).
Hence, the fitting distribution reported in the right column of Figure REF tends to a constant.
The experimental data show this trend (Figure REF ), but for
{{formula:8bf83e70-2e46-41a5-924a-177af8767be6}} the data deviate from the typical Poissonian profile and show an increasing trend for decreasing values of {{formula:d609a4b3-c6a9-432f-a3c2-288e4d7af9a6}} .
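The short-waiting-time behaviour expected from any Poisson-type model is easy to reproduce numerically. The sketch below (generic, not the authors' multi-loghat pipeline) draws exponential waiting times, bins them logarithmically, and divides by the bin size; for waiting times much shorter than the mean, the resulting density is approximately constant:

```python
import numpy as np

rng = np.random.default_rng(1)
rate = 2.0                                    # events per unit time (illustrative)
dt = rng.exponential(1.0 / rate, 100_000)     # Poisson-process waiting times

# logarithmic binning, then division by the bin size, as in the analysis above
bins = np.logspace(-3, 1, 40)
counts, edges = np.histogram(dt, bins=bins)
density = counts / np.diff(edges) / dt.size
# for dt << 1/rate: density ~ rate * exp(-rate * dt) ~ constant
```

A genuine deviation from this flat short-time plateau, as seen in the experimental data, therefore signals non-Poissonian clustering at small waiting times.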
| r | 28c8f9ce2952d8863c4d092772c61d85 |
As described in Sec. , we find a linearly increasing complexity (or decreasing close to the recurrence time). However, the way the linear increase arises from the bulk perspective is different in our case than in previously proposed holographic complexity measures. In the latter case, the holographic dual to computational complexity lies within a bulk region that is spacelike to the boundary time slice in which the target state is defined. The linear increase in this case is essentially due to the increasing size of the wormhole interior.
In our work, we are integrating the quantity {{formula:3155d63b-adb7-420c-b381-d6380ce5373b}} from (REF ), which in this case is constant, over time.
Therefore its bulk dual contains geodesics which probe the bulk region lying between the two time slices where the reference and target states are defined, see Fig. REF .
Another difference is that we find a computational complexity which is UV finite (another UV-finite complexity proposal, generalizing CA, has been studied in {{cite:db56349e38bfdf064cc7f1725a7daa184689bad3}}).
This is related to the fact that our reference states are energy eigenstates or a TFD state at {{formula:5b65c1eb-260b-4f69-98f4-9855711c706e}} , instead of the spatially unentangled state proposed as a reference state for the earlier complexity proposals in {{cite:7a28a583e12ad2fc2cc6787bf1e513adf5e1e494}}. Indeed, the analyses in {{cite:9b94b70d8a76d8dbd68e65bc823e6ad2fb9766e4}}, {{cite:ab895e2a0cdc1408a5115ccd8ff48dbf4d1b2f8a}} showed explicitly that, starting from a spatially disentangled state, one can mimic the leading divergence of the holographic complexity proposals in the setting of free quantum fields. Therefore, our bulk dual to computational complexity realises the features expected from complexity calculations in finite qubit systems in a somewhat different way than the conjectured holographic complexity proposals.
In contrast to them, however, we have derived the relation between the boundary complexity and its bulk representation from first principles.
{{figure:720a5b54-f1d3-4df4-95c5-643dd163345b}} | d | 4f9dbed92ef3ec54914e5bf2eadc3d64 |
A black hole mass this large in Leo I is not expected from extrapolation of any of the standard black-hole to host-galaxy correlations. Of course, these small systems do not necessarily need to follow the trends seen in normal galaxies, but the black hole mass reported here does stand out. {{cite:aa68442155297b200aff5bd316e29a133173efa5}} explore extrapolations of black hole correlations down to globular cluster scales; using a velocity dispersion of 12 km s{{formula:8de78d70-d121-4f0b-9fa1-a54b300a4d13}} , Leo I has a black hole mass a factor of 100 larger than the extrapolated trends predict. On the numerical side, {{cite:31efe04751be7afda2e92e41c5096f6cc5de65c6}} consider different scenarios for the formation of a black hole in Milky Way satellites and place the likelihood of one of them hosting a black hole of the size found here below {{formula:6b99eb89-ae03-4aed-8dac-b0991a11295e}} , though this result also depends on the initial seed mass (see also {{cite:fca6ef53fd19c266cb26d7e38dedf1d36f8fb7a7}}).
Runaway mergers of stellar-mass black holes are unlikely to produce such a black hole in so small a galaxy, since the initial mass function required to reach the ratios seen in the models would be more top-heavy than chemical abundances and star-formation-history studies suggest. An alternative explanation for the abnormally large central black hole may come from the recent study of Leo I's star formation history by {{cite:617aa42a0d5b863c56025eaee93977f3ecefe647}}. The authors identify a period of quenching from {{formula:92936648-cf6c-4999-b1fc-d44bfb2c92c3}} followed by re-ignition until almost the present day, when ram-pressure stripping may have shut it down as Leo I fell into the Milky Way. While the authors speculate that this re-ignition at intermediate redshifts could be due to a past merger with a smaller dwarf, it could also be consistent with gas accretion and potential active-galactic-nucleus feedback, lending support to the high {{formula:883cca53-da4a-48a6-8884-af77c6b635ce}} values presented here.
{{cite:a364aede75350fe70f2ec7052fb8719d3c32ec53}} also suggest that dwarf systems may in fact host significantly larger black holes than the host-galaxy black-hole relationships predict. A larger sample of black-hole masses and limits measured in dwarf galaxies will be important for exploring this.
| d | 39609f7ae17b06c7d945e728dcaff36e |
Intuitively, the initial term {{formula:81af2ffc-15bf-4c95-ac15-447ac4b6b76a}} in the regret upper bound of Theorem REF describes the regret incurred during the “burn-in” period of exploring the space of contexts, and it does not contribute to the asymptotic regret growth. Note that we consider running Thompson sampling from the very beginning, without an explicit random exploration phase, in contrast to most existing algorithms; the distinction between the burn-in phase and the subsequent phase is only a construct of our theoretical analysis. Furthermore, the constant {{formula:ffc50191-5b1b-4fa0-b6e6-0e0e59946853}} plays the role of the gap parameter that commonly appears in problem-dependent regret bounds {{cite:d805e7b89175a6c943ccaad794c8a44471b06885}}. Note that, for {{formula:ccfb189a-a6c3-4899-b1eb-cc2eb836e503}} , we get a problem-independent regret bound of order {{formula:5564ab07-6cb0-4638-81a6-b035b59e16b9}} . The appearance of the {{formula:9b606669-c30b-456d-a69a-1201f651c3ee}} term is not surprising, as the condition {{formula:33721f9a-c31a-445c-b155-9a93bb1db28a}} imposes no prior knowledge on arm separability. Thus, in the worst case, the context vectors may lie arbitrarily close to each other, making the bandit environment harder to learn. In contrast, as {{formula:1335f2bb-5cdf-4b45-9b6f-27f4bb96aa3e}} increases, the optimal arm becomes more distinguishable from the sub-optimal arms and the bandit environment becomes easier to learn. As a result, the effect of the time horizon becomes less severe as {{formula:77992f28-cf2d-45cc-8135-e4f39c41bbbd}} increases. In particular, when {{formula:4895c15c-8a56-4945-939d-9f9d93abee47}} , the time horizon does not affect the asymptotic growth of the regret bound. Finally, as we mainly focus on the case where the number of arms is very small, the quantity {{formula:d8ae6f54-8b4e-4f6e-9921-a2011138b982}} has roughly an inflating effect of {{formula:ab764782-7eab-4c00-adeb-24a7d40511a2}} on the regret bound.
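The scheme analysed above, Thompson sampling run from the very beginning with no explicit exploration phase, can be sketched for a linear-Gaussian model (a generic linear Thompson sampling sketch; the paper's exact posterior and constants may differ):

```python
import numpy as np

def lints_choose(contexts, mu, Sigma, rng):
    """One round: sample a parameter from the posterior N(mu, Sigma) and
    play the arm whose context scores highest under the sample."""
    theta = rng.multivariate_normal(mu, Sigma)
    return int(np.argmax(contexts @ theta))

def lints_update(x, r, mu, Sigma, noise_var=1.0):
    """Conjugate Bayesian linear-regression update after reward r at context x."""
    prec = np.linalg.inv(Sigma) + np.outer(x, x) / noise_var
    Sigma_new = np.linalg.inv(prec)
    mu_new = Sigma_new @ (np.linalg.inv(Sigma) @ mu + x * r / noise_var)
    return mu_new, Sigma_new
```

Exploration here is implicit: early on the posterior is wide and the sampled parameters vary a lot (the burn-in behaviour described above), and as the posterior concentrates the algorithm automatically exploits.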
| d | 775c84bd85b2ce4f1742dc58990e7239 |
This work was supported in part by the National Key R&D Program of China under contract No. 2017YFB1002202. (Corresponding author: Zhenzhong Chen, E-mail: zzchen@ieee.org)

In the era of information overload, people are inundated by large amounts of online content. Consequently, recommender systems play a pivotal role in modern web applications due to their ability to help users discover items that they may be interested in from a large collection of candidates. Based on how recommendations are made, existing recommender systems can be categorized into three classes {{cite:d017a213d7f215b99aa8a713b1f8327f20050ad3}}: collaborative-based methods, content-based methods, and hybrid methods. Collaborative-based methods {{cite:04d1da83da85e106235d8f1632555575854a2a7f}}, {{cite:c97f028d03e96beb0e99770a32d5e7cb670dfacd}} predict user preferences by exploiting their past activities, such as clicks or ratings, so the recommendation quality relies heavily on peers with similar behavior patterns. Content-based methods {{cite:cfa600ff72b565fc8a9d588f4aec79f1530c4703}}, {{cite:e742fbceb7c4fe67aff371741b65761218845c11}}, on the other hand, make recommendations based on users or items that share similar features. Hybrid methods {{cite:2f82b5547b5dd5b1f4849467cad079dc6b1f44a3}}, {{cite:7d10a28a9ae62fb15af4092ef4f3fbe21e3a712d}}, {{cite:78f5111e28291733493d3919d6280877cd3f13dc}} combine the advantages of both worlds, comprehensively considering collaborative information and user/item features to generate more precise recommendations.
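The collaborative idea, scoring items through peers and items with similar behavior patterns, can be illustrated with a minimal item-based scheme (a generic sketch, not a method from the cited works):

```python
import numpy as np

def item_cosine_scores(R, user):
    """Item-based collaborative filtering sketch: score items for `user`
    by cosine similarity between the item columns of the rating matrix R."""
    norms = np.linalg.norm(R, axis=0) + 1e-12
    S = (R.T @ R) / np.outer(norms, norms)    # item-item similarity
    np.fill_diagonal(S, 0.0)                  # ignore self-similarity
    return S @ R[user]                        # weighted vote of rated items
```

Items co-rated with the user's favorites receive high scores purely from behavior patterns; no item features are used, which is exactly the limitation that content-based and hybrid methods address.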
| i | bd457b2a36670e11572e69f4b4b336a5 |
Computational learning systems driven by the success of deep learning have achieved great success in several data mining and learning domains, such as computer vision, natural language processing, clustering, and many more {{cite:e4e8c6dfeaeab086c545d2f21a0ebfec37e9360d}}. However, although deep models have demonstrated promising results on stationary data, they are susceptible to catastrophic forgetting when applied in dynamic settings: new information overwrites past experiences, leading to a significant performance drop on previous tasks.
| i | bfe8e05dd05c962170cd796cc58c1d9d |
where {{formula:6ec19495-8a39-4d56-8cbe-eedaee88886e}} . According to the experimental data for 28 GHz channels in {{cite:2a7557207aa1281fd0e0a47c416406edd91dcd7b}}, the parameters in (REF ) are set to {{formula:a53cfdef-ae16-4e5c-9168-df2213d25330}} for a line-of-sight (LOS) path of {{formula:8a5087a4-b691-4552-85e3-c8ac20c4781e}} , and {{formula:da70b81a-0329-473d-a927-1ba2e0a81ab1}} for its non-line-of-sight (NLOS) paths. To evaluate the effectiveness of different IRS reflection pattern designs more accurately, we assume that each path of {{formula:8a9d01ce-ba0b-4256-9496-5ec327be9139}} is an NLOS path that passes through tinted-glass walls and thus experiences an additional penetration loss of {{formula:aeb1fb4f-a30d-4ab7-ad3e-a9fddda6b5cc}} {{cite:a168f02cea105336131f44cd8079c9f9e9c64f2d}}. The element spacing and noise power are set to {{formula:e12cd823-3961-40c2-b045-c0883b69bc55}} and {{formula:50fda10e-874a-44e3-aa5f-5753492076a5}} , respectively. Lastly, all the simulation results are averaged over 10,000 channel realizations.
| r | a5304fd9e19d7a957984fc7a27120b7e |
Another option to reduce spin precession due to the prevalent electromagnetic fields would be to place a foil (e.g. made of Carbon) that is able to shield part of the fields. This setup would, however, be more in line with RPA {{cite:e2adbb4ebf0c109a555adfa00f6485a36d0d276f}}.
| d | 95992065b45c4f9fe7bb8454b1326147 |
Actually, one may work in broader generality and consider similar problems for any monotone property of words (sets of words closed under alphabet permutations and under taking factors). One natural example from outside the pattern-avoidance setting is words avoiding abelian squares (words of the form {{formula:458d6d9a-c368-4e91-ade0-8b68bc6f0ea7}} , where {{formula:822dbe9e-e472-42f1-b8a2-1fbe50b526ba}} is any permutation of {{formula:866eca99-9e15-4fb3-894a-8489d7f5d7a0}} ). It is known that there exist infinitely many abelian square-free words over a 4-letter alphabet, as conjectured by Erdős {{cite:f29482dc3c7d6311dad0d93f4c0ab3cd6a920a21}} and proved by Keränen {{cite:07bb314beb2fe98cba522fcb2709fa635a005fa5}}. Ter-Saakov and Zhang found in {{cite:114a66b14b54e29f9f82ea91309ca43ac4ba53a5}} the shortest extremal abelian square-free word over four letters:
{{formula:4ee273bb-a366-4e89-86c8-7d29c93e6ff2}}
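A brute-force check for abelian squares, included here only to illustrate the definition, can be written directly from it:

```python
def has_abelian_square(w):
    """True iff w contains a factor xx' where x' is an anagram of x."""
    n = len(w)
    for i in range(n):
        for m in range(1, (n - i) // 2 + 1):
            # compare multisets of letters of the two adjacent halves
            if sorted(w[i:i + m]) == sorted(w[i + m:i + 2 * m]):
                return True
    return False
```

For instance, `"abcacb"` contains the abelian square `"abc" + "acb"`, while `"abc"` itself is abelian square-free; verifying Keränen-type words of course requires the infinite constructions cited above, not this finite check.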
| d | 45c63bd352ed47600198aa9950c4adb8 |
Skyrmion number {{formula:793eccad-0fcd-40b6-a89c-c3fad7f7983c}} and injective scalar value {{formula:8a3fb698-0773-42f6-b9fa-4dc70a59761c}} . In order
to support the discussion of skyrmionic textures, the topological
skyrmion number {{cite:113461f3977d1ef9b928e8ca154677c88c4b238b}}
{{formula:f594ea99-bbed-4f27-94f8-6fd67d8e28fc}}
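On a discrete grid, the integral defining the skyrmion number reduces to summing the topological density n . (dx n x dy n) / (4 pi). The sketch below evaluates it for a hypothetical axisymmetric skyrmion profile (the profile and its parameters are illustrative, not taken from the text) and recovers a charge of magnitude approximately one:

```python
import numpy as np

# illustrative domain half-width, grid size, skyrmion radius, wall width
L, N, R, W = 8.0, 400, 2.0, 0.5
x = np.linspace(-L, L, N)
X, Y = np.meshgrid(x, x, indexing="ij")
r = np.hypot(X, Y) + 1e-12
theta = 2.0 * np.arctan2(np.exp((R - r) / W), 1.0)   # polar-angle profile
phi = np.arctan2(Y, X)
n = np.stack([np.sin(theta) * np.cos(phi),
              np.sin(theta) * np.sin(phi),
              np.cos(theta)])                        # unit spin field, shape (3,N,N)

dx = x[1] - x[0]
dnx = np.gradient(n, dx, axis=1)
dny = np.gradient(n, dx, axis=2)
q = np.einsum("ixy,ixy->xy", n, np.cross(dnx, dny, axis=0))
Q = q.sum() * dx * dx / (4.0 * np.pi)                # skyrmion number
```

The sign of Q depends on the orientation conventions; its magnitude counts how many times the texture wraps the unit sphere.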
| m | 66eba197904358011dc4201f7416289c |
As the binary classification method, we choose logistic
regression (LR), and use the implementation of it available from
scikit-learn.https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html
We consider LR a good choice, given that it is a probabilistic
classifier that already provides fairly well-calibrated posterior
probabilities (which is of fundamental importance in PCC, PACC, and
SLD), and given that, as indicated by previously reported
results {{cite:7433f89dd7c19c770f81fe66d69b832857008962}}, it tends to perform well. A set of LR
classifiers is used when testing the binary relevance (BR) method
described in Section REF .
As the multi-label classification method, we adopt
stacked generalization {{cite:1acdc298acaf7b0334f8b7f310c2981a4c4b49c7}} (SG – see
Section REF ). We use our own implementation
(since the implementation of stacked generalization available from
scikit-learn only caters for the single-label
case)https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.StackingClassifier.html,
that relies on 5-fold cross-validation to generate the intermediate
representations (in the form of posterior probabilities) given as
input to the meta-learner, concatenated with the original input
features. The base members of the ensemble consist of binary
logistic regression classifiers as implemented in
scikit-learn.
As the binary aggregation method Q, we experiment with
all the methods covered in Section REF , i.e., CC,
PCC, ACC, PACC, SLD. For all these methods we use the
implementations made available in the QuaPy open-source
library {{cite:7433f89dd7c19c770f81fe66d69b832857008962}}.https://github.com/HLT-ISTI/QuaPy
As the multi-label aggregation method, we use the
regressor-based strategy for quantification (that we dub RQ)
described in Section REF . We implement this
method as part of the QuaPy framework. For training the base quantifier {{formula:ec89f159-61b7-4fed-a4d8-50ab791166d9}} we experiment again
with all the methods covered in Section REF ,
i.e., CC, PCC, ACC, PACC, SLD, while as the internal
regressor which receives its input from the base quantifier {{formula:2c1ae2d8-5752-44a0-9d30-c28e091ba77c}} we use linear
support vector regression (SVR), for which
we use the scikit-learn
implementation.https://scikit-learn.org/stable/modules/generated/sklearn.svm.LinearSVR.html
As the held-out validation set {{formula:2db8655c-39d4-4495-b0c1-4f74beae73b2}} needed for training the
regressor we use a set consisting of 40% of the training
datapoints, chosen via iterative stratification {{cite:5be2de284afbe3d7d4fb3cc7f1ce0931e247b520}}, {{cite:01dfeea030c605f4cb89e0a397507bd1ed887f98}} as implemented in
scikit-multilearn.http://scikit.ml/stratification.html
We call this aggregation method SVR-RQ.
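The binary aggregation methods listed above differ only in how individual predictions are pooled into a class-prevalence estimate; the core of CC, PCC, and ACC can be sketched in a few lines (an illustrative re-implementation, not the QuaPy code):

```python
import numpy as np

def cc(y_pred):
    """Classify and Count: prevalence = fraction of positive predictions."""
    return y_pred.mean()

def pcc(posteriors):
    """Probabilistic CC: average the positive-class posterior probabilities."""
    return posteriors.mean()

def acc(y_pred, tpr, fpr):
    """Adjusted CC: correct the CC estimate with the classifier's true and
    false positive rates, estimated on held-out data."""
    return np.clip((y_pred.mean() - fpr) / (tpr - fpr + 1e-12), 0.0, 1.0)
```

ACC illustrates why calibration and held-out estimates matter here: a biased classifier gives a biased CC estimate, and the tpr/fpr correction undoes that bias in expectation.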
| m | 32091054f8264418a2453fa2ae619d86 |
The above power inequality was first conjectured by Halmos, and a delicate
proof was given by Berger (see {{cite:7977d9f98f0beaf03a278bbdcdbcb03922fdf5ef}}, {{cite:be4716bea5f864687d73be9c6a2f3b149634be59}}). Since then,
generalizations to polynomial and analytic functions on the unit disc have been obtained.
In 1966, Pearcy {{cite:0e4df4ce01a6385f4a5219389712d08bb8edaad2}} gave an elementary proof of this result. However, the
reverse power inequality does not hold in general. Indeed, for the nilpotent matrix
{{formula:42b7e943-542c-4cf9-8a86-60381e416203}}
we have {{formula:08a18d4e-c705-4108-8ec1-ab7bceda9e44}} but {{formula:3fe44e5f-f767-495f-adfa-f5157ddf73b1}} .
The reader is referred to {{cite:006f54942c18924dcf96549138484a6eca13b6f1}}, {{cite:bdeefcd5968184af21367479c32c45d7a3ab8fed}} and
the references therein for various applications of numerical radius inequalities.
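The nilpotent counterexample can be checked numerically using the standard characterization of the numerical radius as the maximum, over rotations e^{i theta}, of the largest eigenvalue of the Hermitian part (a grid-search sketch):

```python
import numpy as np

def numerical_radius(A, n_theta=2000):
    """w(A) = max over theta of lambda_max( (e^{i theta} A + h.c.) / 2 ),
    approximated on a uniform grid of angles."""
    w = 0.0
    for t in np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False):
        B = np.exp(1j * t) * A
        H = 0.5 * (B + B.conj().T)           # Hermitian part
        w = max(w, np.linalg.eigvalsh(H)[-1])
    return w

# the 2x2 nilpotent shift: w(N) = 1/2, while N^2 = 0 so w(N^2) = 0,
# hence w(N)^2 = 1/4 > 0 = w(N^2) and the reverse inequality fails
N = np.array([[0.0, 1.0], [0.0, 0.0]])
```

For this particular matrix the Hermitian part has eigenvalues ±1/2 for every angle, so the grid search is exact rather than approximate.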
| i | 9cbe6951708028f932bbb37ef50b9a02 |
We further validate our proposed co-occurrence-based mixing technique on the salient object detection (SOD) task and present a comparison with existing mixing methods {{cite:6360c52c2e70a627413f73639ab73873ef1fcf1c}}, {{cite:35d204dfe0b81ac68bf1ea6c9373fa2d8ef807a7}} in Table REF . As for semantic segmentation, we train the DeepLabv3-ResNet50 {{cite:fcd8499967a579777f9791a303ac548d5d0d61d1}} network with various mixing strategies on the DUT-S dataset {{cite:f34d1b669ef25974386156846247cd8506b00076}} and evaluate on the ECSSD dataset {{cite:f6862a00caef8135fcb29552cf0f5572d7799ad9}}. Since the DUT-S dataset does not provide semantic segmentation ground truth, we cannot directly apply our co-occurrence-based image blending technique during training. To this end, we first generate pseudo semantic labels for DUT-S simply by passing the images through the DeepLabv3-ResNet50 network trained for semantic segmentation on the PASCAL VOC 2012 dataset. While the boundaries of the generated pseudo-labels are not perfect, the predicted class labels can still be used as image-level labels in the image blending process.
| r | fb417fb6a07b3fae4c237caff22c7c94 |
Scale-free graphs can be easily generated using a well-known technique called
preferential attachment {{cite:13bd0ef03de63b67546412163aefc6e173df72d9}}.
In the simple serial Barabasi-Albert (BA) model {{cite:3b8fa15978ec81913ffe77d0837dabc44cd1e08c}},
a scale-free graph is constructed, starting from a small clique, by repeatedly creating a
vertex and attaching it to one of the existing vertices with probability proportional
to that vertex's current degree.
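The serial construction just described can be sketched in a few lines; keeping every edge endpoint in a "degree-weighted urn" makes uniform sampling from the urn equivalent to preferential attachment (generalized here to m edges per new vertex; m = 1 matches the description above):

```python
import random

def barabasi_albert(n, m=2, seed=0):
    """Serial BA model: start from an (m+1)-clique, then attach each new
    vertex to m distinct existing vertices chosen with probability
    proportional to their current degree."""
    rng = random.Random(seed)
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    urn = [v for e in edges for v in e]     # each vertex appears deg(v) times
    for v in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(urn))     # degree-proportional draw
        for u in chosen:
            edges.append((v, u))
            urn.extend([v, u])
    return edges
```

The urn trick gives O(1) sampling per edge and is the standard way to implement the "rich get richer" rule without recomputing degree sums.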
| m | 9ed9018099f5a821fb9f6a46ccc83701 |
The paper is organised as follows.
Our notation for the 2HDM-II is set up in
Section .
In Section
theoretical constraints like perturbativity, vacuum
stability, and unitarity are studied, while Section introduces
electroweak precision observables in the form of the
oblique parameters.
Constraints stemming from the SM Higgs production and decay at the LHC are studied in
Section .
Most of our observables stem from the quark flavour sector, see
Section .
In Section REF
we investigate leptonic and semi-leptonic tree-level decays, where the charged Higgs is predominantly responsible for the new contributions.
In this section we further study a potential pollution of the determination of CKM elements from semi-leptonic and leptonic decays by the 2HDM-II.
{{formula:1ac94183-7330-4334-8f02-7d3f55f14a94}} -mixing is considered in Section REF and loop-induced {{formula:8a31c976-2da4-4dae-bc6f-5c4ad54e2771}} quark decays are studied in Section REF .
Section REF contains a brief description of the {{formula:b2a1f137-b7ab-4933-8bc3-d9caf01a5661}} transition, known to give a lower bound on the mass of the charged Higgs boson. The leptonic decays
{{formula:e5248f39-59b8-4697-bda3-b763f7f28f8e}} get also sizable contributions from the new neutral Higgses, see
Section REF .
In Section REF
we study semi-leptonic
{{formula:0640bdf5-a9c4-46ab-98ca-d3333ecd334c}} -transitions, where
the so-called flavour anomalies
have been observed in recent
years. These anomalies are best explained by new vector-like effects, whereas here we consider
the effects of new scalar couplings.
The results of our global fit, combining all constraints discussed so far, are presented in
Section .
In Section REF we comment on the lepton flavour observable, {{formula:a0544c6a-96da-467a-8193-fe546a52ef55}} , the anomalous magnetic moment
of the muon, where recent measurements at Fermilab {{cite:1529e511e5ed867b1a6b81e78262028b10dc31ec}} have confirmed the older BNL
value {{cite:b8267a8476d7cd5ca52c94a1476c38ea1a2ca03d}}. We study two scenarios based on using the SM prediction from the theory initiative
{{cite:c6f8eb3dac74400f551126b49af135fc6c568e9d}} or a recent lattice evaluation {{cite:3c01588e619e1a684438750ab37d8932f020799c}}.
Using the program package BSMPT {{cite:c5e192b4873c12f2e77a2533538d91d09d262384}}, {{cite:6db76f0bf6f63cad5b3180fe2a8243423ddd7fde}} we investigate in
Section REF the question of whether our allowed parameter space can also lead to a
first order phase transition in the early Universe.
Finally, we conclude in Section , while all our inputs are collected in Appendix .
| i | 93314d4a711b9b32847c17b149c5af88 |
Numerical methods to acquire the LE spectrum by evaluating Jacobians from the functional form of the governing equations or by reconstruction of state space from time histories do exist for smooth systems {{cite:cd9629dd99c34ad1a4e203982b3c58b315b0edec}}, {{cite:7f53c0977f763bb8ce5cc2a64ff1230836a7152b}}. However, this is not straightforward for a nonsmooth system as the Jacobian matrix of such systems is ill-conditioned, resulting in large and unacceptable errors in the computational estimation of LEs. De Souza et al. {{cite:9e102862d6ddde93b20031c133edfeef02f3809f}} proposed an algorithm to compute the LE spectrum for impacting systems by analytically reducing the state equations to a transcendental map. In this formulation, the eigenvalues of this map represent the LE spectrum. Jin et al. {{cite:23fdcb9feb9d54662e5be2cbdfe0e844b12f19ac}} obtained the LE spectrum for an impacting system by studying the dynamics on a suitable Poincaré section, where the Jacobian matrix near border collision was evaluated. The work by Stefanski {{cite:22a13189ea110fa0e637c961d662b7663ec6776a}} proposed computing the state difference vector between the dynamics of a pair of coupled PWS systems and deducing its synchronization state to compute the largest Lyapunov exponent. Following this work, Stefanski {{cite:7cf0501ebda107712cc03cb4e48dbc40ae7c7ed2}} proposed a perturbative approach to estimate the dominant LE of two closely spaced PWS systems when one of the systems is perturbed. This method is suitable for systems that possess discontinuities as well as time delays. Recently, Stefanski {{cite:73a2c2d3931bae220626cea75d93bcd2257c45e5}} presented a method wherein the LE spectrum is computed for nonsmooth systems by circumventing discontinuities. This has been done by considering the value of the function at the previous time-step and continuing integration of the perturbed trajectory until it encounters a discontinuity. Balcerzak et al. 
{{cite:401b729c5c83199d025ce6197aed30d1ff625843}} presented a method to compute LEs applicable to continuous-time dynamical systems as well as discrete maps by estimating the Jacobi matrix at the instant of border collision. A method based on the scalar product of a system’s perturbation and its derivative also exists {{cite:1bcef3a909fd41d561c13de88d246e022001704c}}.
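To make the Jacobian-based baseline concrete for a smooth system, here is a minimal sketch; the logistic map and every parameter choice are our own illustration, not taken from the works cited above. The largest LE is estimated as the trajectory average of log|f'(x)|, which for this map is known analytically at r = 4 to equal ln 2.

```python
import math

def largest_le_logistic(r=4.0, x0=0.3, n=50000, transient=1000):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    estimated as the trajectory average of log|f'(x)| = log|r*(1-2x)|."""
    x = x0
    for _ in range(transient):  # discard the transient before averaging
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return acc / n

le = largest_le_logistic()  # the analytic value at r = 4 is ln 2
```

For nonsmooth systems the difficulty discussed above is precisely that f' (or the Jacobian) is undefined or ill-behaved at the switching surface, which is why the cited works replace this direct average with maps, Poincaré sections, or synchronization-based estimates.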
| i | 97514a4628de10c65c748c483fc02991 |
BTL-Uniform. We generate synthetic data using the Bradley-Terry-Luce (BTL) model. Under this model, each arm {{formula:b18387d5-3a96-4c35-96f6-67843d76a960}} is associated
with a weight {{formula:fe05b5ec-bf57-4de3-81eb-5a61d0fde770}} (sampled uniformly in the interval {{formula:bb46df3f-9002-4de3-af58-fb1c20c251f4}} ), and we set {{formula:8b5f323f-4153-4ff7-a19b-225ae16512a1}} . We set the number of arms {{formula:a13f33f5-9dd1-42c4-9bb1-9ac706622457}} . Note that the data generated in this way satisfies SST and STI {{cite:ec9dfd53f976c07c92918573dd2a1d6c67e931ec}}. We refer to this data as {{formula:1a229a67-89f7-4bf3-9e16-34cff9c5abff}} -{{formula:f204559d-18dc-4d29-9675-b9fb65dbb616}} .
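A minimal sketch of this data-generating process; the weight interval [0.5, 1.5] below is an illustrative stand-in for the interval given in the paper (a placeholder here), and the helper names are our own.

```python
import random

def make_btl_instance(n_arms=10, lo=0.5, hi=1.5, seed=0):
    """Sample BTL weights uniformly on [lo, hi] and build the pairwise
    win-probability matrix p[i][j] = w_i / (w_i + w_j)."""
    rng = random.Random(seed)
    w = [rng.uniform(lo, hi) for _ in range(n_arms)]
    p = [[w[i] / (w[i] + w[j]) for j in range(n_arms)] for i in range(n_arms)]
    return w, p

def sample_duel(p, i, j, rng):
    """One noisy comparison: True iff arm i beats arm j."""
    return rng.random() < p[i][j]

w, p = make_btl_instance()
```

By construction p[i][j] + p[j][i] = 1 and a heavier arm beats a lighter one with probability above 1/2, which is why BTL data automatically satisfies ordering conditions such as SST.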
| r | 28ff8d626be6d795693d592da6e617f2 |
{{formula:7388c85b-3df6-467b-b48d-72824a8b1371}} Zeilberger's creative telescoping {{cite:145e4b3f67cf7c37dd82942fe1c5c7a2820e22e6}}: Given an algorithm {{formula:f6323b08-7303-4de2-9bb1-1ff36aa994eb}} that computes a bivariate sequence, find an algorithm {{formula:0791248b-a36a-432e-819d-f9e08dd30b50}} that is not more complicated than {{formula:8e0eb07f-70d0-4108-b664-d947252807c8}} , and algorithms {{formula:80990c4c-09b3-48ea-9684-9f6bca6837c5}} (for univariate sequences), such that
{{formula:053d17ee-54a2-49ce-acf5-26beb8e257d0}}
| i | 75d5e0c2c4fc7c8c678677df21d265f1 |
where {{formula:e56ecdd5-3443-4189-b7b4-a6c34405e5f3}} is the electromagnetic coupling constant at {{formula:5f556bb2-bfe1-49c3-9051-a8c53516f497}} . The input values for {{formula:b9f788c9-b770-4c10-b87d-797705309586}} and {{formula:9bbca45c-d5e1-4b29-93d9-f5c91ab3944c}} are taken from the potential-model calculation {{cite:4fb0f126c4d3c21202e3ac17c94118227082cfb1}}. We use the two-loop formula for the running strong coupling constant
{{formula:dc83ffbd-e760-4795-863d-004abb4b4f88}}
| r | cf9d3d7b03cd3da265b80bc36db51665 |
We additionally report results in terms of V-Measure (VM,
{{cite:8988acda1f730ed3f72bd405cd1d7d8aab334234}}), which is an information-theoretic measure. VM
is analogous to F-measure, in that it is defined as the weighted
harmonic mean of two values, homogeneity (VH, the precision
analogue) and completeness (VC, the recall analogue):
{{formula:1aabc041-0052-40ba-b3ab-405a169356ac}}
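A self-contained sketch of the computation, assuming the standard entropy-based definitions of homogeneity and completeness from the cited work (VH = 1 - H(C|K)/H(C), VC = 1 - H(K|C)/H(K), with V their weighted harmonic mean):

```python
from collections import Counter
from math import log

def v_measure(labels_true, labels_pred, beta=1.0):
    """V-measure: weighted harmonic mean of homogeneity (VH) and
    completeness (VC), each defined via conditional entropies."""
    n = len(labels_true)
    joint = Counter(zip(labels_true, labels_pred))
    c_marg = Counter(labels_true)   # class marginal
    k_marg = Counter(labels_pred)   # cluster marginal

    def entropy(counts):
        return -sum(c / n * log(c / n) for c in counts.values() if c)

    h_c, h_k = entropy(c_marg), entropy(k_marg)
    h_c_given_k = -sum(m / n * log(m / k_marg[k]) for (c, k), m in joint.items())
    h_k_given_c = -sum(m / n * log(m / c_marg[c]) for (c, k), m in joint.items())
    vh = 1.0 if h_c == 0 else 1.0 - h_c_given_k / h_c
    vc = 1.0 if h_k == 0 else 1.0 - h_k_given_c / h_k
    if vh + vc == 0:
        return 0.0
    return (1 + beta) * vh * vc / (beta * vh + vc)
```

A relabeling-invariant sanity check: a clustering identical to the gold classes up to a permutation of cluster ids scores 1, while collapsing everything into one cluster scores 0.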
| m | 35357c29139e5ab523bf80c82f6bcd80 |
The mechanism described here could be valid, however, even if general covariance is broken by quantum effects.
For instance, it could arise dynamically in a cosmological scenario.
In a “long” high cosmological constant phase in the early universe, natural in the slow-roll inflation scenario {{cite:b2f2d19ee3958850348234da25e360bf759ea114}}, the topological sector of the QCD vacuum would have time to decohere if this phase is also deconfined (either due to temperature or a dS constant above {{formula:745f30b8-3797-45b9-b913-06055eca76b9}} ).
Afterwards, the decohered {{formula:4cbbd89f-cd92-4440-9012-a9d03b608f35}} state would be “locked”, since long wavelength colored perturbations which cause tunneling would be below the confinement mass gap.
Such a dynamical non-perturbative QFT problem is of course well outside this work's scope, but qualitatively this scenario might be implementable.
| d | 489ec80414c94721163d0f10c006cac3 |
The audio segments are processed by extracting 32 Mel-frequency cepstral coefficients (MFCC) features. The input was augmented with time shift perturbations in the range of T = {{formula:b36a89b1-e608-4976-9452-a3f8cc7ca3ec}} ms and white noise of magnitude {{formula:9ec6f17b-b106-43a9-a25f-ad0eb5725be7}} dB with a probability of {{formula:886ed2fd-7ae5-4406-8c8b-37f318328852}} . Additionally, SpecAugment {{cite:432edf5b61ef5de4a6030ba861d130e86aaf0d3a}} was applied with 2 continuous time masks of size {{formula:73158076-f56b-4436-9f16-8f9eefcd814a}} time steps, and 2 continuous frequency masks of size {{formula:0a9c02fd-472c-4777-922d-c83165781d26}}
frequency bands. SpecCutout {{cite:87db69d200ec95fbd8a6834ac110d5a2de5dc398}} was also used with five rectangular masks in the time, and frequency dimensions as in {{cite:4e8134703454b7c92fb5ce16205616a869d02373}}. The model was trained with the SGD optimizer with momentum = {{formula:55bf9c48-dd0b-40b6-a3dc-6f81b9ec60a3}} and weight decay = {{formula:5f4a39a5-1346-45e7-87f5-bec266d8763a}} . We utilized the Warmup-Hold-Decay learning rate schedule {{cite:2fd309321d263ddccbfb5870e04033d4b616428d}} with a warm-up ratio of 5%, a hold ratio of 45%, and a polynomial (2nd order) decay for the remaining 50% of the schedule. A maximum
learning rate of {{formula:5414f225-dfc8-4fc0-8165-0d09c642ef00}} and a minimum learning rate of {{formula:822d258e-312f-43c4-9114-4bbc41787877}} were used. We trained all models for 150 epochs on a single NVIDIA GeForce GTX 1080 Ti with a batch size of 128. The model was implemented and trained with NeMo {{cite:93b55e2fca6a7d76384ca960b4c8146e3b03b2b2}}.
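An illustrative sketch of the SpecAugment masking operation on a spectrogram; the mask counts and widths below are placeholders, not the exact values (which appear as formulas above), and the function name is our own.

```python
import random

def spec_augment(spec, n_time_masks=2, time_width=25, n_freq_masks=2, freq_width=15, seed=0):
    """Zero out random contiguous time and frequency bands of a
    spectrogram given as rows spec[freq][time]; returns a masked copy."""
    rng = random.Random(seed)
    n_freq, n_time = len(spec), len(spec[0])
    out = [row[:] for row in spec]  # copy so the input is untouched
    for _ in range(n_time_masks):
        t0 = rng.randrange(max(1, n_time - time_width))
        for f in range(n_freq):
            for t in range(t0, min(n_time, t0 + time_width)):
                out[f][t] = 0.0
    for _ in range(n_freq_masks):
        f0 = rng.randrange(max(1, n_freq - freq_width))
        for f in range(f0, min(n_freq, f0 + freq_width)):
            for t in range(n_time):
                out[f][t] = 0.0
    return out

spec = [[1.0] * 100 for _ in range(32)]
masked = spec_augment(spec)
```

SpecCutout differs only in masking rectangles that are bounded in both time and frequency rather than full bands.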
| m | e79378a90404bb588b789d7c93fba1a9 |
See § for the proof.
Note the inverse condition number {{formula:1319c2c8-c80f-499d-a30c-4cd3a9c6bffe}} yields a tight threshold on the sensitivity of the population for the stability of .
Our result can be viewed as the multi-agent extension to {{cite:1f552f770c67429cf522505f8d28b13796997aa4}}.
Notice that while {{formula:d504a306-ef10-41a1-96a0-e976e3ef97ab}} may not solve (), it yields an approximate solution to the latter, depending on the magnitude of {{formula:3c53cd57-d4d3-431b-90d9-8e7cec080ecc}} ; see {{cite:1f552f770c67429cf522505f8d28b13796997aa4}}.
| r | 1fc7abf0d44dd62068227c01a180dde5 |
With the upper bound of the Bernstein entropy given in (REF ), the bracketing integral in Theorem 5.11 of {{cite:a6e58cd1726641a71c4ef5cd14ac0c43b6bba093}} may be bounded by
{{formula:33f0e318-7b17-4757-9923-57fdaef0a482}}
| r | 53630a220cd69e264c459b677de6af8e |
Here, “resnet20” is a Hessian for the ResNet20 network {{cite:ab5d63e2af385f17090221fe00fd79867ff92ff3}} trained on the Cifar-10 dataset.
To apply the Lanczos algorithm to this example, we use a slightly modified version of PyHessian {{cite:9875d0ed2ae27b8be859748a6fa14735e59c56ee}}.
The “California” and “Erdos992” examples are graph adjacency matrices from the sparse matrix suite {{cite:a5479bbdaf17268d9389c0cfc7ea1bb618f47972}}, the “MNIST cov” example is the covariance matrix of the MNIST training data, and “uniform” is a synthetic problem with 5000 eigenvalues evenly spaced between {{formula:fef74613-bf5c-4195-814e-d41e3336ac39}} and 1.
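The core iteration can be sketched as follows (a dense, illustrative version of our own; in the deep-learning setting tools such as PyHessian apply the same recurrence matrix-free through Hessian-vector products). The Ritz values, the eigenvalues of the tridiagonal matrix built from the recurrence coefficients, approximate the operator's spectrum.

```python
import numpy as np

def lanczos_spectrum(matvec, n, m, seed=0):
    """Run m steps of the Lanczos iteration (with full reorthogonalization)
    on a symmetric operator given only through matrix-vector products, and
    return the Ritz values: the eigenvalues of the tridiagonal matrix."""
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    Q, alphas, betas = [q], [], []
    for j in range(m):
        w = matvec(Q[-1])
        alpha = float(Q[-1] @ w)
        w = w - alpha * Q[-1]
        if j > 0:
            w = w - betas[-1] * Q[-2]
        basis = np.column_stack(Q)
        w = w - basis @ (basis.T @ w)  # full reorthogonalization
        alphas.append(alpha)
        beta = float(np.linalg.norm(w))
        if beta < 1e-12:               # invariant subspace found: stop
            break
        betas.append(beta)
        Q.append(w / beta)
    k = len(alphas)
    T = np.diag(alphas) + np.diag(betas[:k - 1], 1) + np.diag(betas[:k - 1], -1)
    return np.linalg.eigvalsh(T)
```

With m much smaller than n the extreme eigenvalues converge first, which is what makes the method practical for the large Hessians and adjacency matrices listed above.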
| d | 01a0c5de749d3c9dbb0de504a7f3db92 |
Considering that the applicability of traditional two-equation eddy viscosity models such as the {{formula:3103c20d-8303-42c1-9052-3d4ea2edbefe}} and the {{formula:10fdd6fc-90bf-436e-a646-3f9bbe241cbc}} is limited depending on the flow (see {{cite:76f29e3904de37d7bf6034acf9ee85aa56b7ddba}}, {{cite:7d3a5d173bfd87fd2ac14c54a52142f2c5e91e21}}, {{cite:6a03882ce448add40a2e0718f8e6c78680ad43d5}}, {{cite:23bd54a005b1bf6b79e5f7cb9d1b0b29043dfa27}}), researchers have started working on applying machine learning (ML) and artificial intelligence (AI) based algorithms to develop closure models for RANS that can potentially work in different flows, including those for which applying traditional models may be troublesome (see {{cite:252d1c07aaede9801f2f7d3c3b3d1d8b2d68a35a}}, {{cite:0e25cdcbb9d24e4e7a217569a09cf1f586316429}}, {{cite:7b4001bfb6898c140d7625de9ad5bb539e8e033b}}, {{cite:c7c0421debdd58e57584849b922627a2234d1932}}, {{cite:f3b6861354dcccb2399b06c4ac1185b0b8ef32bf}}, {{cite:b8773dfb9995fca89979217bf396db00bec441f8}}, {{cite:164b2696a50371081ee003202c25bd185af49dbc}}, {{cite:a9354497f8ea491bd92c7a711b7eb0a366526002}}). {{cite:0e25cdcbb9d24e4e7a217569a09cf1f586316429}} proposed the Tensor Basis Neural Network (TBNN) to learn the coefficients of an integrity basis for the Reynolds anisotropy tensor from {{cite:a77236f39e0c6503b52c0b97fadf77c65b87b4d8}}. Inspired by the TBNN, {{cite:7b4001bfb6898c140d7625de9ad5bb539e8e033b}} used a fully-connected neural network to model the Reynolds stress for fully-developed turbulent channel flow and incorporated different physics embedding techniques to the baseline model, such as friction Reynolds number injection and boundary condition enforcement, amongst others. 
{{cite:ab96ac16e03fa215ac9782c2a77c932873478070}} further built on the work by {{cite:7b4001bfb6898c140d7625de9ad5bb539e8e033b}} and proposed a better-performing baseline convolutional neural network (CNN) model which also incorporated physics embedding techniques and tested its applicability to a number of one-dimensional turbulent flows. Additionally, {{cite:ab96ac16e03fa215ac9782c2a77c932873478070}} discussed several interpretability techniques to drive the model design and provide guidance on the model behavior in relation to the underlying physics of the problem. Other researchers such as {{cite:aa6e4cefa50864d352e506b5b21ff84c0a7ccfa7}} have also explored machine learning model interpretability in a turbulent modeling context. A survey on data-driven turbulence modeling can be found in {{cite:6f753e88dd2981767d0fded3187927ba047ffcc4}}.
| i | 8ecd04d4f8613334050ccc3d909ff18e |
with the initial conditions {{formula:256af270-9c6f-4cfc-b610-dec76a201964}}
and {{formula:7dd19cf4-f797-478c-a394-0b747bfc8793}} . We remark here that the first-order
derivative {{formula:f7622cd2-303a-453d-82ea-6b5f7d2e7dca}} is not continuous at {{formula:b3191045-4c32-4c0e-9ec3-fa9531997de2}} ,
noticing {{formula:455bc8f3-2e7a-4b1e-9661-d3f9f7c1d43f}} . The initial rate {{formula:5b2039bf-e7e4-43f4-b527-c923410ed16e}}
is updated until the solution of Eq. (REF ) satisfies
the boundary condition {{formula:d738f694-5c7f-42a2-a7a6-96f01acd054c}}
The shooting method can be realized by using the Euler algorithm to
solve Eq. (REF ) and Newton's method {{cite:4c840826449964ec6346ce0b1263b02685030125}}
to approach the final condition {{formula:5d3d6450-9b29-4897-be70-f052a7308cd4}}
To update the initial rate {{formula:19eefed8-73bf-4a0b-b294-02b85a51ac77}} , we treat the protocol as a
function of the initial rate, i.e., {{formula:5f113e36-f8e0-4f58-9132-893cafca193c}} ), and
define {{formula:b43f498d-4655-4ea1-8b54-2d2c2bf2c526}} .
At the final time {{formula:69708a96-37d2-4660-b1b7-6210959dae46}} , it follows that in Newton's method, the
solution of the equation {{formula:b59d5185-bd4b-4bfa-ad2d-89d3550a02ec}}
is approximated as the solution of the equation
{{formula:9e001465-27cf-46ec-a8bc-ea2245ddc00a}}
| m | d8fa2f84e6967226cad6a36f5f9792d4 |
Table REF illustrates the improved performance by visual clues, such as biLSTM-CRF vs. biLSTM-CRF with image and BERT vs. RpBERT.
The inputs of “biLSTM-CRF” and “biLSTM-CRF + BERT” are text only, while those of other models are text-image pairs.
“biLSTM-CRF w/ image at {{formula:3ccf8e53-cc5f-432e-b4e6-6bb0d7046a51}} ” means that the image feature is placed at the beginning of LSTM before the word sequence, similar to the model in {{cite:777b47f78c654d683c16e26fb3a8ea09a2e36729}}.
“biLSTM-CRF + RpBERT” means that the contextual embeddings {{formula:e6d733f3-3a7f-4924-866c-fe0a75daa65c}} with visual clues are concatenated as the input of biLSTM-CRF, as clarified in the section of “Multitask Training for MNER”.
The results show that the best “+ RpBERT{{formula:42d24e11-50ae-4c5f-938e-d084a84f7d01}} ” achieves increases of 4.5% and 7.3% compared to “biLSTM-CRF” on the Fudan Univ. and Snap Res. datasets, respectively.
In terms of the role of visual features, “+ RpBERT{{formula:6ff8fcf9-ec3d-4cd4-90b5-ae8476601468}} ” improves on “+ BERT” by approximately 2.5%,
a larger gain than those of the biLSTM-CRF based multimodal models such as Zhang et al. zhang2018adaptive and Lu et al. lu2018visual over biLSTM-CRF.
This indicates that the RpBERT model can better leverage visual features to enhance the context of tweets.
{{table:348d6669-b482-47bd-bc1e-4c810001b02e}}{{table:c94a91d3-c4b3-45cd-85cc-1aaccaf11a19}}{{table:23d6561d-1f61-4603-92a5-4357f7d7f0b4}} | r | d1a043f11553d8feedc7a4a603828b54 |
As we see in Table REF , SLICER outperforms all other approaches in the literature by a significant margin. Results of COLA and BYOL-A were borrowed from their original papers. SimCLR was proposed as the pre-training approach in {{cite:93047bfe2e1e1de77448520baeb92f8e711c5966}}. We attribute the gap in results from the original paper to the powerful encoder proposed in the paper. However, as stated earlier, measuring the effect of change in encoders is beyond the scope of this paper. MoCo, inspired by {{cite:09ec00faeb5dc1da7112bdad28040b60244263d4}}, can be viewed as SLICER without symmetric cross-contrastive instance-level learning and cluster-level contrastive learning. Table REF shows ablations on various novel components in SLICER. Starting from MoCo proposed in {{cite:72d74c6d4f8e09656bdd6a7273bd8a7fa4a75053}}, we get a 0.4% average boost by first introducing symmetricity in a cross-contrastive setting, followed by a 1.2% boost on adding cluster-level contrastive learning, and finally another 0.4% with k-mix.
{{table:a65736d6-e3dd-4bf7-a8bb-17427c3951f3}} | r | 2449f95652d19d58724c4b8ffca92e46 |
Notice that the complex plane {{formula:cd5bb6f6-5623-42b8-8e55-49ba2518d663}} and compact Riemann surfaces punctured at finitely many points satisfy the assumptions in Theorem REF . Moreover, the objects are analytically stable with respect to the same metric {{formula:db9f4ef2-9b01-4476-8a71-eebd4ffb05fa}} . For the higher-dimensional case, under some extra technical assumptions (see {{cite:0e5b3aa8f3ba7e6e423613325c8e235b010af8d1}}, {{cite:f7afb2c0be249464baaba41414ed8d20d0afa0ff}}), the direction from {{formula:51c622ad-011e-42b0-afb0-17568b36f5b4}} -analytically stable and irreducible Higgs structures to {{formula:68e95c92-85ef-4659-ac0d-68a959ae871f}} -analytically
stable and irreducible flat structures can be completed using the results of Simpson {{cite:0e5b3aa8f3ba7e6e423613325c8e235b010af8d1}}, Mochizuki {{cite:f7afb2c0be249464baaba41414ed8d20d0afa0ff}} and the discussion as
that in {{formula:196a33c6-9b0b-45d2-890c-0962617131bf}} of Theorem REF . Conversely, starting with a {{formula:e7ec8a17-9dde-4ddf-b0f8-f2249077af60}} -analytically stable and irreducible flat structure
and using Proposition REF (or Proposition REF ) instead, we can obtain an irreducible Higgs structure. In the process, the analytic estimates in Theorem REF remain valid, but it is not yet clear when the resulting Higgs structure is stable with respect to {{formula:79ea1609-9df7-4b4d-9108-5cd9cbf78035}} . We will further discuss this problem in the future.
| r | 773000e5e006f9d4ed8b55d05cd71056 |
Most of the literature on streaming algorithms implicitly assumes that the stream updates do not depend on previous outputs of the algorithm or on the randomness produced by the algorithm.
This assumption may not be realistic in many situations: for example, when the data is chosen by a malicious adversary in response to previous outputs, or when data characteristics change based on previous outcomes in some complicated or unpredictable way.
As a result, the last couple of years have seen substantial progress in the systematic investigation of adversarially robust streaming algorithms {{cite:e05aee3f5dd9df40e753d0524de53d9ffb663287}}, {{cite:a77f4fc5cb6f46613c9e3606860e4a60c871f870}}, {{cite:c96f8827e5f5385ac3533220c4285366f40e2cce}}, {{cite:bf58ba9562b38dacb7695b7cb6a257c411ca70ee}}, {{cite:abddad153156bbf399551f57e7762d63d4f402a9}}, {{cite:5d4926aa42c7f1146f5c0bb70bfc39feb06c2835}}, {{cite:ec469bc21aeb58e72fb364d53dfb7db11f284cf8}}, {{cite:89b3fcb7bc5941921fd722bdc5fe9c372ede7a72}}, which preserve their correctness guarantees even for adaptively chosen data and are thus especially suitable for these interactive settings.
| i | b570ff26857bcc652a57ece934d0976b |
If the problem (REF ) - () has
a global solution, then the feedforward controllers are given by the
equation (). However, it seems challenging to show it
has a global solution even though it has a unique classical solution within some time from 0 to {{formula:8924d056-437d-40d9-b393-c7453ec668ff}} (see, e.g., {{cite:1b0ba9bc0e92e632a6cbe09f740b1bcab9e7e00d}}).
| d | 2045784e8e4aab12f1aa7ce0880c7e47 |
We attempt to simply combine methods for learning with label noise including Co-Teaching {{cite:7ecd9d4ce9c8f497f6a6c84d340eecc6499b5648}}, Co-teaching+ {{cite:28fd0149fb7c0888b914af038f992db195471dec}}, JoCoR {{cite:8a00e132dd5b7eaa1613d906e1715160195b014b}}, Co-Matching {{cite:67386ae4d851eacdb2386eece3e5d22ec9701909}}, and methods for learning with long-tailed distribution including RS {{cite:c363b92a50f7ab5ca31b08d6caec69664d48166c}}, RW {{cite:5291f39df9ae9c0ba7a033d68eb350d569c8b571}}, LS {{cite:c2ce96c174e0b732069a7a6fd80096086c64d689}}, and LDAM {{cite:f3e8cfe2d57aa5e32930518ad07f70b7008bf140}}.
We also implement variants of our method via combining the cross-augmentation matching strategy with the above four methods for learning under long-tailed distribution.
Experimental results on CIFAR-10 with {{formula:b1fa92f2-8ab2-4563-bbaf-356373f8b95a}} and {{formula:081e9f2d-1c3e-43cd-93ce-28881f9c56da}} are reported in Table REF , and variants of our method outperform other methods consistently. For example, when incorporated with LDAM, our method achieves 34.44%, 29.43%, 18.54%, and 17.47% higher accuracy than Co-Teaching, Co-Teaching+, JoCoR, and Co-Matching, respectively.
{{table:e808331a-b222-4a05-a1bb-00b325531d0a}} | m | 0c3dcd086f606d20dc2faf589f669e61 |
General relativity can be considered in space-times of various
dimensions. Gravity is richer in higher dimensions, as black-hole solutions develop nontrivial properties in general
dimensions {{cite:b28f11433d64bded21a5a24e9efed2f90719023e}}, {{cite:2557a43f12a3ff7e651c77791a29beaa531a6203}}. It is therefore important to validate our current
understanding of the connection between the quantum scattering amplitudes and
classical general relativity in general
dimensions {{cite:2f31f2dfce737d5e5eebaa4481f7b4911d473484}}, {{cite:b7f9784c24567f21d341ddde11cfaa4f50b94a0d}}.
By reproducing the classical
Schwarzschild-Tangherlini metric from scattering amplitudes in four,
five and six dimensions, we validate the procedure for
extracting the classical piece from the quantum scattering amplitudes.
The method can be applied to derive other black-hole metrics, like the Kerr-Newman and Reissner-Nordström metrics by
considering the vertex function of the
emission of the graviton from a massive particle with spin and charge {{cite:e2fd0e2b5e74f0fded1dfc9faa5dc966c008cedc}}, {{cite:921692d55723a37a364fd5031722c5feba644928}}, {{cite:be6c7f5c06f72cca15037aaeca48c36a4de47255}}, {{cite:ce2ffff4e315db58d3096c20146666c91249f626}}, {{cite:969ca584285cd97a4b02e875e38f4ec5836b1ac6}}, {{cite:84135d54b6ddf81f54e40477e5de716eae29d72b}}, {{cite:23056aca626f9b6b0686f82397bb4ea1e094b3b7}}, {{cite:31f0e9df37edca5ee1ece9b11e5d9e40c11d97e8}}.
| d | 0717f9e41e52242c92bfd2da2009989f |
which is the best reconstruction according to the chosen kernel. The kernel can also be interpreted from a Bayesian perspective; in fact, the same solution would be obtained by assuming that {{formula:7cc74057-006f-4b7c-bec9-d2ff9e1c128d}} is a Gaussian Process ({{cite:2916aefba0f4325ce392254bb1141090469a98cb}}) with zero mean and covariance function {{formula:252070e1-900c-4cda-aecf-c9b699f127b8}} .
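A minimal sketch of this kernel reconstruction read as a GP posterior mean; the RBF covariance, length scale, and jitter below are illustrative choices of ours, not the paper's kernel.

```python
import numpy as np

def gp_posterior_mean(xs, ys, x_new, ell=0.5, noise=1e-6):
    """Zero-mean GP regression with an RBF covariance: solve
    (K + noise*I) alpha = y on the training points, then predict
    f(x) = k(x, X) @ alpha, the kernel-optimal reconstruction."""
    xs, ys, x_new = (np.asarray(a, dtype=float) for a in (xs, ys, x_new))
    k = lambda a, b: np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)
    K = k(xs, xs) + noise * np.eye(len(xs))  # jitter keeps K well-posed
    alpha = np.linalg.solve(K, ys)
    return k(x_new, xs) @ alpha

xs = np.linspace(0.0, 3.0, 8)
ys = np.sin(xs)
pred = gp_posterior_mean(xs, ys, np.array([0.7, 1.9]))
```

The same linear system appears in kernel ridge regression; the GP view adds the Bayesian interpretation of the kernel as a prior covariance.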
| m | 14989d1d53a68c91b37e6697e2bb569f |
Therefore, we can simultaneously optimize {{formula:193afc11-f800-499c-a87c-759f77bfea0e}} with standard cross-entropy loss, and optimize {{formula:5607c5a9-29a9-4bb8-b2b8-df021388dc69}} with Stochastic Gradient Langevin Dynamics (SGLD), where gradients are taken with respect to {{formula:5f45b4d0-429b-4c91-9176-93d78615f985}} {{cite:ec13428f5435a1aed91ea056eddc8186423f1dcf}}.
{{table:e896be06-aafc-40c9-b11a-705af288be22}} | m | c014fd1f4c3f71d11681647b454e1730 |
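The SGLD update itself is compact; the sketch below is our own illustration with a standard-normal target and the full (not minibatch-stochastic) gradient, purely to show the basic iteration.

```python
import math, random

def sgld_samples(grad_u, theta0=0.0, eps=0.05, n=40000, seed=0):
    """SGLD iteration: theta <- theta - (eps/2) * grad U(theta) + N(0, eps).
    For small eps, the iterates approximately sample exp(-U). Here grad_u is
    exact; in practice it is a noisy minibatch estimate of the gradient."""
    rng = random.Random(seed)
    theta, out = theta0, []
    for _ in range(n):
        theta += -0.5 * eps * grad_u(theta) + math.sqrt(eps) * rng.gauss(0.0, 1.0)
        out.append(theta)
    return out

# target exp(-U) with U(theta) = theta^2 / 2, i.e. a standard normal
samples = sgld_samples(lambda th: th)
```

The injected Gaussian noise is what turns a gradient-descent step into an approximate posterior sampler, which is the role SGLD plays for the parameters above.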
For general {{formula:1536ed28-b554-433d-b289-b41985eccf67}} , {{cite:c22dc34ac70c06e8e0992df6b025c1c5d3325a23}} derived the LSD of the sample covariance matrix whose Stieltjes transform {{formula:6198eb51-c249-4b3e-9adc-c6999dd0dfe0}} is given by the Marčenko–Pastur equation
{{formula:bac2d1fa-3fdd-4e1f-8540-55e47f7d7935}}
| d | 7394a2c61b87723ab254966bf30d2bf8 |
Comparing Case III-LP with Case III-FT:
Figure REF compares the testing accuracy of Case III-LP and Case III-FT on STL10 dataset when {{formula:ded3dc1f-6671-4fb2-8757-b8c45d0b59fb}} decreases from {{formula:94b2974c-8a9b-4138-99a1-bc8d538d6b70}} (no privacy guarantee) to 0.2 (strong privacy guarantee). In these experiments, we use the image encoder {{cite:dac2102238b4161d41e178df4412bd6d0aaff3cd}} pre-trained by Google on the ImageNet dataset, which is publicly available on GitHub {{cite:a7be16bf9cca2b71bac242f4ab7e4d6d689cf359}}.
We did not use OpenAI's CLIP encoder because its neural network architecture is a vision transformer, whose attention pooling layer is not supported by the public implementation of DP-SGD. In particular, Case III-FT uses DP-SGD to update both the pre-trained encoder and the last fully connected layer; since the public implementation of DP-SGD does not support vision transformers, we were not able to perform the Case III-FT experiments using the CLIP encoder. We note that we use Google's pre-trained encoder only in the experiments in Figure REF , and we perform all other experiments in Case III using CLIP.
| r | 353bf16c62bfb4f4786d540005d5fdd0 |
DA is divided into variational and sequential methods {{cite:0603e18cf26e7cafc18e30f6e183888171f78f13}}. In the variational approach, an adjoint model needs to be formulated and solved, which is a computationally expensive approach. The Kalman filter, as a sequential DA approach, is a recursive algorithm that estimates the state of a system using models of the uncertainties, measurement noise, and process noise {{cite:0603e18cf26e7cafc18e30f6e183888171f78f13}}. In different applications, the Kalman filter has been extended into several variants, such as the unscented Kalman filter {{cite:48af827b1a07d3a974d3fc299e9d74f13ea459d6}}, the extended Kalman filter {{cite:2ee50ca324165e9b9cb36944342910d7d96faf6c}}, and the ensemble Kalman filter {{cite:2db30fad09216537a3322c0d40cc496f2202f0ca}} (among others). In the cardiovascular fluid mechanics community, DA has been used in different problems. A popular application is using DA to estimate the boundary conditions needed in CFD simulations (e.g., lumped parameter networks) {{cite:3bd2519a303f1273272657a2e28122c35d43efe9}}, {{cite:eed9c554790469c7c0c55118b1f8419391b46000}}, {{cite:82e2b700247788260f4d19eba38faf39b0661081}}. Merging 4D flow MRI and CFD data using variational {{cite:4543860930f56c7721e6b18f8919948ff9d9977d}} and sequential {{cite:4fd846af010fae75a8c1edcb363ffbcd1c2b1fe9}}, {{cite:fc953933327d8f5a4cf98c20aba6c47168298665}} DA approaches is another trending research area. Marching large nonlinear hemodynamic systems forward in time in DA modeling is a major challenge. The unscented Kalman filter has been proposed to overcome the nonlinearity issue {{cite:48af827b1a07d3a974d3fc299e9d74f13ea459d6}}; however, it requires repeated sampling of the large nonlinear system. The ensemble Kalman filter method has been proposed to solve the computational storage problem of large systems; however, it relies on random Monte Carlo sampling and therefore has a high computational cost {{cite:4fd846af010fae75a8c1edcb363ffbcd1c2b1fe9}}.
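As a concrete illustration of the recursive predict-then-correct structure shared by all these variants, here is a scalar toy filter of our own (unrelated to the hemodynamic models discussed): the hidden state is assumed nearly constant, q and r are the process and measurement noise variances.

```python
import random

def kalman_1d(measurements, x0=0.0, p0=1.0, q=1e-4, r=0.1):
    """Scalar Kalman filter for a (nearly) constant hidden state:
    predict with process-noise variance q, then correct each noisy
    measurement using the Kalman gain."""
    x, p, estimates = x0, p0, []
    for z in measurements:
        p = p + q                 # predict: state assumed constant
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # correct with the innovation z - x
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

rng = random.Random(42)
true_state = 1.0
zs = [true_state + rng.gauss(0.0, 0.1 ** 0.5) for _ in range(300)]
estimates = kalman_1d(zs)
```

The extended, unscented, and ensemble variants cited above replace the linear predict/correct steps with linearizations, sigma-point propagation, and Monte Carlo ensembles, respectively, but keep this same recursion.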
| i | 9460b3c0fc69b541c0e96cccdccf9279 |
By Young's inequality, we have {{formula:e5baddaf-8d0f-4b0d-8e01-48e17fd939cb}} for any {{formula:e39cd092-2c2d-48b3-885e-9cf24743adb6}} , and {{formula:61f83a6f-0b69-46bb-a8b5-de9eae3043bb}} with {{formula:c521fdba-b1ce-4f8a-85b3-389f6b163982}} (c.f., {{cite:71d8c55b139128a982b4cfbfb7966a1cf39cb783}}). Set {{formula:db7caa2c-2a11-44d2-a70d-2d8456ec10e1}} . Then, {{formula:46e1a73c-176e-4d4c-9c21-f8d11fcc8b2a}} , and we obtain
{{formula:9ce74d7f-8460-4e99-82e6-123af8bcfd42}} . Therefore,
{{formula:b0979d0a-4204-426a-93d4-8b0a9e2e92b3}}
| r | f49c523f28eff4a59b140f290ba965c9 |
Basic GNN.
As Gilmer et al. {{cite:355cd36627bab358614699a5e614b9ff86357d66}} point out, the critical aspect of a basic GNN is that neural message passing exchanges and updates messages between each node pair using neural networks.
| m | ff7d050e69e5013c16a25c6bcbd8ccca |
We reproduce VKD, PKD, MiniLM, and RKD, but MiniLM {{cite:f89956cb20271d78a4f6789c4031c022138309da}} performs KD at the pre-training stage.
For consistency with VKD, PKD, and FSD, we apply MiniLM method on the fine-tuning stage.
Previous work {{cite:8069f39df498809acbef735990cfc53bfae6180a}} uses 6-layers of BERT model (BERT{{formula:b9688bdc-f2aa-451e-92a3-b3d009d7ad52}} ) as a student and we implement distillation experiments with the same student architecture.
We utilize parameters from 1st to 6th layer of pre-trained BERT{{formula:b84eed34-d8ef-406b-91f7-01f1994f763e}} to initialize BERT{{formula:5806a57b-5e79-4cf6-a6f8-4c85a683aa7e}} .
Fine-tuning for VKD, we conduct each task with {{formula:5c27f9e9-c400-44d7-bada-6672b81e7727}} from {{formula:362ee736-7b1d-4a34-ac98-8456d854b8f6}} , temperature {{formula:ef2327a4-b91a-472f-bdbd-e8fdcfe55cbe}} from {{formula:600aaac0-2a75-4b46-9a28-a53d40646fad}} , and learning rate from {{formula:55fcb720-04c5-4e14-841c-cc35428defdd}} 1e-5, 2e-5, 5e-5{{formula:8b85506c-0b90-41a1-b055-8ef01acccb49}} to search for the best model.
Additionally, to reproduce RKD {{cite:4a1c686e16877813512d07da4ae720feb7d664ce}}, we set the hyper-parameters for its angle and distance losses from {{formula:57b119c0-d812-4e7d-ac9f-80c53199e622}} , and {{formula:8f070950-27c7-4af0-960b-44c771779d4d}} .
| m | bb7d863ed87d5a12125f75ec91634cdc |
In this paper, B-splines are used to approximate {{formula:77cd7311-20e7-49bf-9dbd-2a5253d024db}} . For notational convenience, assume, without loss of generality, that all B-splines of order {{formula:70450a9f-b366-43f6-a918-db851f11b68d}} are defined on an extended partition associated with a uniform partition of {{formula:7e79eb15-9c35-416e-a255-ccb47500ec67}} knots. Following {{cite:080eb3e959b2934b2f3110d0e4e444304d5d9daa}}, we denote the B-spline basis functions on the {{formula:1cb1f25b-cdb9-4c71-b210-fc4d75f70c4f}} th component of {{formula:2cfe4277-cc93-49da-bc2e-fd1d5505e9ba}} as {{formula:a69e6b8c-eaf4-45ac-8938-4bc515ff6d0a}} . Furthermore, define
{{formula:b4245afc-17d0-4606-bc0d-5bac8b9c0e23}}
| m | eff6b4c15134bbb0b3eb005f6df4a6fb |
Our method of orthogonal correction is easy to adapt to different types of biased associations, such as the good-bad notions attached to different races {{cite:864617e74f9a36bf6aa70ad348f11d604682043d}}, {{cite:790dccd5ce71241498e94208d9fd129eec4d404c}} or religions {{cite:3379b3c009bc3e3791877e5dc1eaa0131f0d62d8}}, etc. With fewer words available to build templates from, creating metrics is harder, which makes a comprehensive evaluation of bias reduction or information retention more difficult for these types of biases. Though we see the correction method as widely applicable to other associations and embedding types,
we leave that for future exploration.
{{figure:be9cd85f-79f7-4479-946d-df68b484056b}}{{table:a957eed2-775d-404a-9a04-a1ada8df310a}}{{table:a3360f47-7e5c-4aa7-b7e7-733ee740ce77}}{{table:dc3a4106-e915-4749-9167-a26ca029e29e}}{{table:9ec17f0e-881d-43ac-95b0-d1204ad6d7aa}}{{table:ce14f856-2c14-4031-8f0d-5cfd566e35c3}}{{table:aacc3c20-de85-49a7-9d16-29b8b8ee901f}}{{table:8281b753-56ad-4f7b-a9c0-eeb37b99e503}}{{table:78a91a42-7b70-4799-a03b-26d511b26357}} | d | 9093172b08f316e0a642b7426705c247 |
For paragraph generation, as shown in the upper part of Table REF , it is clear that models with a single LSTM decoder perform much worse than those with a hierarchical LSTM decoder. Note that the only difference between Ours-no-Attention and CNN-RNN {{cite:0320bf640ee1fd2e542cb9b48105124e81611c8b}} is that Ours-no-Attention adopts a hierarchical LSTM decoder while CNN-RNN {{cite:0320bf640ee1fd2e542cb9b48105124e81611c8b}} adopts a single-layer LSTM. The comparison between these two models directly demonstrates the effectiveness of the hierarchical LSTM. This result is not surprising since it is well-known that a single-layer LSTM cannot effectively model long sequences {{cite:f714e12ec0e59b1f65bf1836d2e270ee2cf3268f}}, {{cite:8b06c453e74bebde78efb101227ecb8d8ec593b2}}. Additionally, employing semantic attention alone (Ours-Semantic-only) or visual attention alone (Ours-Visual-only) to generate topic vectors does not seem to help caption generation much. The potential reason might be that visual attention can only capture the visual information of sub-regions of the image and is unable to correctly capture the semantics of the entire image. Semantic attention is inadequate for localizing small abnormal image regions. Finally, our full model (Ours-CoAttention) achieves the best results on all of the evaluation metrics, which demonstrates the effectiveness of the proposed co-attention mechanism.
| r | b458043f11e273b33f88465a38ebdf47 |
With Lemma 3 at hand, Theorem 1 now follows by a direct application of {{cite:763e6b3c65550bc547f5e97e92fffb2a4527672e}}. {{formula:59a336ba-7dff-4845-8579-af0445da1f55}}
| r | bca2767020c7d4761206e5e8c1913947 |
Theorem 3.1 (Length distortion: Mean)
Consider a ReLU network of depth {{formula:c57dd68a-7a19-4cb9-bae5-e8fc15608423}} , input dimension {{formula:57cc0e4e-4a78-4e0e-b5bb-54c39bdb9cf0}} , output dimension {{formula:23690649-6c03-4bd7-be3f-83ad96506a01}} , and hidden layer widths {{formula:3e5bac8a-e952-4558-badb-5895b913fd5e}} , with weights given by standard He normal initialization {{cite:0823cac81c0ca73772c0f1ec0bea7410aa440c67}}. The expected length distortion is upper bounded by {{formula:4342afd1-45e7-4184-ad30-24485a5a45ad}} . More precisely, we have:
{{formula:abea3af8-8977-41c3-8a48-28b382ff2e8e}}
| r | b2327e893e93c9c506c7a1974d689513 |
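The quantity the theorem bounds can be probed with a minimal Monte Carlo sketch, under the assumption that length distortion is measured as the output/input length ratio of a short input segment; the function names and the finite-difference approximation below are ours, not the paper's:

```python
import numpy as np

def relu_net_output(x, weights):
    """Forward pass through a bias-free ReLU network (ReLU on hidden layers only)."""
    h = x
    for W in weights[:-1]:
        h = np.maximum(W @ h, 0.0)
    return weights[-1] @ h

def expected_distortion(depth=5, n_in=16, width=32, n_out=16, trials=200, seed=0):
    """Monte Carlo estimate of E[output length / input length] for a short
    input segment mapped through a He-initialized ReLU network."""
    rng = np.random.default_rng(seed)
    dims = [n_in] + [width] * (depth - 1) + [n_out]
    ratios = []
    for _ in range(trials):
        # He (Kaiming) normal initialization: W_ij ~ N(0, 2 / fan_in)
        weights = [rng.normal(0.0, np.sqrt(2.0 / d_in), size=(d_out, d_in))
                   for d_in, d_out in zip(dims[:-1], dims[1:])]
        x = rng.normal(size=n_in)
        eps = 1e-4 * rng.normal(size=n_in)  # short segment from x to x + eps
        out_len = np.linalg.norm(relu_net_output(x + eps, weights)
                                 - relu_net_output(x, weights))
        ratios.append(out_len / np.linalg.norm(eps))
    return float(np.mean(ratios))
```

Averaging over random weights and random segments gives an empirical counterpart to the expectation in the theorem.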
In order to verify that predictions made by our architecture are based
on salient features relevant to the classification task, we utilize
Class Activation Maps (CAM, {{cite:60b9bfc8afeff7e715a645a9ced6c508b8f64cf6}}). CAMs highlight the most
important image areas in the model's classification process. The
examples in Figure REF show that CAMs indeed focus on
relevant image regions. For instance, coal-powered plants are
identified based on the presence of coal piles, oil and gas power
plants based on storage tanks, solar plants based on their unique
shape and spectral behavior, and hydroelectric plants based on the
presence of gorges, water bodies, and dam structures. In the case of
the background class, activation maps are typically more uniform than
for power plant images due to the lack of characteristic features.
| d | 7288bcce3008d9a59110512df3fc397d |
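Since CAMs are central to this entry, here is a simplified sketch of how a class activation map is formed from the last convolutional feature maps and the classifier weights of one class (assuming the standard global-average-pooling architecture; the function name is ours):

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Compute a Class Activation Map (CAM) for one class.

    feature_maps : (K, H, W) activations of the last convolutional layer.
    class_weights: (K,) linear-classifier weights (after global average
                   pooling) for the class of interest.
    Returns an (H, W) map, min-max normalized to [0, 1].
    """
    # CAM(x, y) = sum_k w_k * f_k(x, y)
    cam = np.tensordot(class_weights, feature_maps, axes=1)  # (H, W)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam
```

In practice the normalized map is upsampled to the input resolution and overlaid on the image, which is how region-level evidence such as coal piles or storage tanks becomes visible.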
The attention mechanism has greatly advanced representation learning, and the Transformer {{cite:cf11631a2c1ad98c62d01bfb6a2878a48193e21f}}, {{cite:583e94f47edb797fcf6f666d498c2df6b4a30abe}}, {{cite:3fbe60df268a129fdc59350b276f80f90cfb64b5}}, {{cite:a77454d57be3a95725eafb8b4072b268821bbcba}}, built upon self-attention, has established new state-of-the-art results on multiple visual understanding tasks such as object detection, image classification, and semantic segmentation.
{{cite:e2ffe3f9a9762cdbabe697f146090d7db225f037}} presents cascaded Transformers that perform end-to-end regression for human and keypoint detection, first detecting the bounding boxes of all persons and then separately regressing all joint coordinates for each person.
| m | 06e38d02968eaa98e57b2b1add9cb63d |
Several clean and faithful corpora have been collected to tackle the challenge of data infidelity.
TOTTO {{cite:0dde08822ea96bfd6f2144ecad7789c526eb4390}} is an open-domain faithful table-to-text dataset in which each sample includes a Wikipedia table with several highlighted cells and a description.
To ensure that targets exclude hallucinations, annotators revise existing Wikipedia candidate sentences and remove the parts unsupported by the table.
Moreover, RotoWire-FG (Fact-Grounding) {{cite:7fed499b79ab02c96d7e3e32c9574c29623a5fab}} is a purified, enlarged, and enriched version of RotoWire {{cite:6ce3706edceb08f0a1d979ac30f07db3981ad0f6}}, which generates NBA game summaries from score tables.
Annotators trim the hallucinated parts of the target texts and extract the mapped table records as content plans to better align input tables and output summaries.
| m | 82772501859b449189fafae11435240e |
Base Models.
We use two conventional conversation models, Seq2Seq and Transformer. Because the Transformer performs better than the Seq2Seq, we use it as the base conversation model for all of the following methods.
Fine-tune.
We fine-tune the Transformer on the support set of the target speaker to obtain personalized models, denoted Transformer+F. We also increase the hidden dimension of the Transformer to match the parameter scale of the comparison models below, denoted Transformer{{formula:1361a5e1-c05d-405e-8449-da746d85bba0}}+F. Finally, we employ MAML {{cite:54f9e65d1f32e22a58cea8c3eae3df084afb0aa5}} to train the Transformer, denoted PAML {{cite:d67473d95ce8683d288213dfe124c45459576626}}.
Fine-tune+Social Network. {{cite:2e17407e8d9109a521a474d3449bdb9436b4364b}} encode speaker preferences with speaker embeddings. {{cite:61fd83dbca3c0b21de82f292f55333411a1bdbcf}} pre-train node2vec embeddings over a speaker graph as the initial speaker embeddings. Since new speakers are unseen during training, these methods cannot obtain embeddings for new speakers. We adapt them to our task by aggregating a new speaker's neighbors' embeddings and then fine-tuning the embeddings on the support set. We denote the resulting models Speaker+F+SN and VHUCM+F+SN, respectively.
Ours.
Ours denotes our full model, and Ours{{formula:ee337d1c-6209-4a5b-b0bb-40e7c4448677}} SelfEmb is our model without using the conversation-conditioned embedding of the target speaker.
{{table:cfa9e0c8-5329-4665-bf3b-53382cfa3744}} | m | 5ddfe5e7948b38486bf56df31e7b0184 |
Our analysis resolves an apparent contradiction in research on outlets that share misinformation. If false stories and clickbait generate greater engagement simply because they are more attractive to readers, as some studies suggest {{cite:f27b810f7337a3a7c5a43e978e2d2865e6758d55}}, then there is no motivation for a news outlet seeking clicks to share anything true, and readers who engage with their output should not value accuracy. Yet misinformation sites often share true news items, and many people who share misinformation nonetheless strongly value accuracy {{cite:293bf884548f7fa8d5115435b5c643d363e3f124}}. Our model resolves this contradiction by demonstrating that greater engagement with false news – which appears to indicate a preference for misinformation – can actually arise as a consequence of coercive dissemination strategies by fake news outlets,
rather than indicating reader preferences or news content appeal. This phenomenon does not exclude the possibility that novelty may also drive engagement with fake news {{cite:f27b810f7337a3a7c5a43e978e2d2865e6758d55}}, but our work shows that high levels of fake news engagement can arise even in the absence of intrinsic appeal or motivated reasoning, and especially when transmitters can micro-target their content.
| d | 880b7120857623b74789b485b810e3ab |
In practice, however, due to the complexity and the slow generation of EMRI waveforms computed using BH perturbation theory, almost all parameter-estimation studies done so far made use of approximated – but fast to generate – waveforms {{cite:6e168dcc98cc3a28d4aa4f0812aa12c6906f25ee}}, {{cite:8b8a3ab014ca984bbe4fcbeea9ecf4b24cd968d9}}, {{cite:76ef3c9f9a601b6979c285f29d91c348af8d9e27}}, {{cite:0c06fbe0ae69957bab753da0d46be82680891d58}}, {{cite:472be223914121c6e894d7b6f8eedfaeef0ed2be}} (commonly known as “kludge” waveforms {{cite:6e168dcc98cc3a28d4aa4f0812aa12c6906f25ee}}, {{cite:27d29b93c6f88051afb2771595f83635f7c5f9d4}}, {{cite:a74b3ac067fbe8e6fc792f4b5339828c6a361ff8}}). In fact, techniques to generate fast and fully relativistic EMRI waveforms have only recently started to be developed {{cite:163807dfe55609d7c491afae67198b90d51242ee}}, {{cite:ce555a32fc85b99f6b75457922c0a8d31cacdd64}}, {{cite:b0e14bc4588cbd0b57da3ceb1a820205dfb353f8}}, but so far fully Bayesian studies with these waveforms have only been done for a nonspinning secondary in eccentric orbits around a Schwarzschild massive BH {{cite:b0e14bc4588cbd0b57da3ceb1a820205dfb353f8}}.
| i | d1bfb17f3f4eb022808830cbb0d5ee87 |
This result exemplifies the widely known fact that multi-fluid systems are intrinsically non-adiabatic. While dissipative physics is a customary feature in modelling the dynamics of real fluids, such aspects are not present in the standard cosmological model. However, the phenomenology associated with cosmological bulk viscous models is quite abundant in the literature {{cite:948cacb3a474019bbb5e21767b262c22d52fdbe4}}, {{cite:af79d15900d052472ac4e0853dc101f6de0a92d1}}, {{cite:e32c775fe40f3e57a4739565307ffa57bb24f72d}}, {{cite:7204474d3e705c898f30bd47dc6cb1c0d94d4c06}}, {{cite:025d6c8d3a3926204b28bdb82da299e39a744441}}, {{cite:b33e5d9cb7c0392fcbf9c911c77a3f4d8e2ef735}}, {{cite:eb78290b1c8c6e39350c06874e12cf1939bc7ba5}}.
| i | 88e83af4d44173f448dae32935ec1dc8 |
Concept-based methods aim to address interpretability in DNNs by extracting human-understandable concepts from a model's latent representations. Liu et al. {{cite:07a8efb007238079cb79eeae12ded09606b5121b}} propose a model distillation method based on unsupervised clustering that produces an Intrinsic (interpretable by design) surrogate model. Kim et al. {{cite:d2ac1541d5735f8ff4d823b66bc9307431764f2e}} introduce Concept Activation Vectors (CAVs), which use directional derivatives to represent human-understandable concepts from a model's activations and quantify the influence of a concept on the predictions of a single target class. Pfau et al. {{cite:0a13101c19fda32b319308734fa321be717458d0}} build on TCAV by providing global and local conceptual sensitivities and accounting for the non-linear influence of concepts on a model's predictions. Lucieri et al. {{cite:dc589f871a41f5ee9a296225c4ba7e64e386ed3b}} explore TCAV in the context of skin lesion classification using an InceptionV4 model built by the REasoning for COmplex Data (RECOD) Lab. Ghorbani et al. {{cite:ad4c29e543c3b37ee0416aa42557fbe3b795273d}} propose Automatic Concept-based Explanations (ACE), a novel method that uses image segmentation and clustering to extract visual concepts used by a model.
| m | f401d7c4dfe61c047ee563e1361069b7 |
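To make the CAV/TCAV idea concrete, here is a simplified numerical sketch: the CAV is taken as the normalized mean-difference direction between concept and random activations (TCAV proper trains a linear classifier and uses its normal vector), and the TCAV score is the fraction of examples whose class-logit gradient has a positive directional derivative along the CAV. All names are illustrative:

```python
import numpy as np

def concept_activation_vector(concept_acts, random_acts):
    """Simplified CAV: normalized difference of mean activations between
    concept examples and random examples (a common shortcut for the
    linear-classifier direction used in TCAV)."""
    v = concept_acts.mean(axis=0) - random_acts.mean(axis=0)
    return v / np.linalg.norm(v)

def tcav_score(gradients, cav):
    """Fraction of examples whose class-logit gradient (w.r.t. the layer
    activations) points in the direction of the concept, i.e. has a
    positive directional derivative along the CAV."""
    return float(np.mean(gradients @ cav > 0))
```

A score near 1 means the concept consistently pushes the class logit up; a score near 0.5 suggests no systematic influence.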
where the second equality holds because {{formula:a55e832b-e345-48b6-aa4e-c3020711afd0}} , and the third equality follows from the substitution {{formula:47830bae-e7e0-4033-ac5e-5ea2abbe02d0}} . We know from {{cite:f993d3ad017c9bebd32d2919877651ac88543224}} that the conjugate of the Burg entropy is given by
{{formula:fb388e93-7eb1-4cb1-8b22-7eedfaa79f51}}
| r | dce121ff2109e271a6cbef77a8f346f2 |
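For reference, taking the Burg entropy to be the negative-log function on the positive orthant, its convex conjugate can be computed directly. This is a standard calculation and a plausible reading of the conjugate referenced above; the document's own formula placeholder may differ in sign or normalization conventions:

```latex
\phi(x) = -\sum_{i=1}^{n} \log x_i \quad (x > 0),
\qquad
\phi^{*}(y)
  = \sup_{x > 0}\Big\{ \langle y, x \rangle + \sum_{i=1}^{n} \log x_i \Big\}
  = \sum_{i=1}^{n} \big( -1 - \log(-y_i) \big), \quad y < 0,
```

since each coordinate supremum is attained at $x_i = -1/y_i$, which requires $y_i < 0$.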
The equilibrium state of a beam in plasma is difficult even to characterize analytically {{cite:c2479bb4c46bc24b2e7c45249523bd86cb42ff21}}, let alone analytically examine its response to perturbations.
So the only way to study beam resilience is through numerical simulations. For this, we use the quasistatic axisymmetric code LCODE {{cite:870ff3e412293f6d026c44f1700fdfe3083d7526}}, {{cite:50cfaa2364356b323474c6c2a44b09505c4a6948}}.
To create an equilibrium beam, we inject a beam with a Gaussian radial profile into the plasma and simulate its propagation until its shape stabilizes.
This brings our study closer to experimentally realizable conditions in which an equilibrium beam cannot be pre-formed outside the plasma.
To produce a train of several bunches, we inject a long constant-current beam into the plasma and allow the seeded self-modulation to cut the beam into equilibrium micro-bunches (figure REF (a)).
The longitudinal plasma density profile {{formula:291f7250-1eca-4230-bf71-0fa2640b5d0c}} in this case must have a small density step-up at some distance from the entrance (figure REF (b)).
Otherwise, in a constant-density plasma, the beam loses too much charge, transforming into a bunch train {{cite:00afb65a2027dd1b5d853cbebbb22996643d7561}}, {{cite:fc9455cfa3ed7883549b11c49edf14ff13c70d4f}}.
| m | 7c77e59b08ed59f03e2e018e9e5bd9a5 |
Finding the maximum clique is an NP-hard problem {{cite:ba9214719534a46c13d22eda9c101ee76fbe85a5}}. We ran the maximum clique algorithm {{cite:6b1d6f6637f1c41c5c35cc18db518ec9b3f7bf53}}, {{cite:2e0e1da543eb605f8297409b406649cc9a9a235e}}, {{cite:fbc4722f4a5aaee141a5a8167295d384ebee00b6}}, {{cite:f2aedef203ae248116b50aa02badac2bf8e20d07}} as implemented in NetworkX {{cite:0a44a49ebffaed19d608588ef72abb8ea9f0fca4}} on all three social network graphs. However, when running on {{formula:7becaa1a-9c02-4c53-96ab-2f3890e132e2}}, the program ran for several days and then crashed due to insufficient memory. A similar problem occurred when using the maximal clique algorithm {{cite:6b1d6f6637f1c41c5c35cc18db518ec9b3f7bf53}}, {{cite:2e0e1da543eb605f8297409b406649cc9a9a235e}} implemented in NetworkX.
| m | 2cf9effb8332ad7afae6f411ec02a6e4 |
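As a concrete sketch of the NetworkX routines in question, the snippet below runs the exact maximum-clique search and the maximal-clique generator on a toy graph (the toy graph is ours, not one of the three social networks; on large graphs both calls can exhaust time or memory, as the entry reports):

```python
import networkx as nx

# Toy graph: a 4-clique {0, 1, 2, 3} plus a pendant edge (3, 4).
G = nx.complete_graph(4)
G.add_edge(3, 4)

# Exact maximum clique (exponential worst case). With weight=None every
# node counts as 1, so the maximum-weight clique is the maximum clique.
clique, size = nx.max_weight_clique(G, weight=None)

# Enumerating maximal cliques is a generator, so it avoids materializing
# all cliques at once; the number of maximal cliques can still be huge.
maximal = list(nx.find_cliques(G))
```

The distinction matters for memory: `find_cliques` streams maximal cliques one at a time, while the exact maximum search keeps substantial branch-and-bound state.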
Using the fixed-point formulation of ADMM, (REF ) can be written as a unified {{formula:a2c67aaf-a7a5-4e62-ba29-59a107615ce2}}; let {{formula:2c50b410-266b-4f4a-b014-2a122ef47247}}, so that {{formula:0841773c-8545-4d34-8596-d676c99dc2f5}}. Convergence of (REF ) is obtained if and only if {{formula:b3f2852b-63b0-4b3c-bd73-2621f02ed19e}} converges to 0. The convergence of (REF ) follows from the convergence of the inexact Krasnosel'skii-Mann fixed-point iteration ({{cite:bc7b595e66fd28fcbe0bf170b08896ba37d2b56f}}, Proposition 5.34). A detailed convergence analysis of {{formula:b8692738-4274-46d3-902b-9fb511de6022}} is given in {{cite:a415c6dbf4f86f572a7259c6323e73db28198ef5}} (Proposition 4.2).
| m | e038669ac119c8a4dcbe074e047cfa28 |
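The fixed-point scheme referenced here can be illustrated with a minimal sketch of the (exact, for simplicity) Krasnosel'skii-Mann iteration; the operator `T` and all names below are illustrative, not the paper's ADMM operator:

```python
import numpy as np

def km_iteration(T, x0, lam=0.5, tol=1e-10, max_iter=1000):
    """Krasnosel'skii-Mann iteration x_{k+1} = (1 - lam) x_k + lam T(x_k)
    for a nonexpansive operator T; for lam in (0, 1) it converges to a
    fixed point of T whenever one exists."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_next = (1.0 - lam) * x + lam * T(x)
        if np.linalg.norm(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Example: T is a contraction (hence nonexpansive) with fixed point x* = 2.
T = lambda x: 0.5 * x + 1.0
x_star = km_iteration(T, np.array([10.0]))
```

The inexact variant cited in the entry allows an error term in each application of `T`, with convergence preserved when the errors are summable.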
The second ingredient entering our theory is the
requirement that the degrees of freedom in phase space
evolve in time subject to the constraint of constant thermodynamic temperature.
The temperature control of the classical degrees of freedom is achieved in silico
by means of the Nosé-Hoover chain algorithm {{cite:0b2dfa0cd87cb187f1862d5aceb0db92abb19684}}.
We find such an approach advantageous because the formulation
of the Nosé-Hoover chain algorithm can be realized within the theoretical framework given by
the quasi-Lie brackets {{cite:72265facd3438fc5eb5f05fc8dbf8170f501dc53}}. Such brackets allow one
to generalize the Nosé-Hoover chain algorithm, originally formulated for
classical systems only, to the more general case of the operator-valued
formulation of quantum mechanics {{cite:fe27980f3b131fbbf496a2602039b9ce27cf989a}}, {{cite:4ae1bb2127a68a9871de78e130041b420b30fb2f}}, {{cite:9e0b3991d85006344ad90fdcf0f72b280c8a3d7a}}.
| i | 9d0f66954d9e44f2651b4fc58f21f322 |
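As an illustrative sketch only: a single Nosé-Hoover thermostat (not a chain) coupled to a 1D harmonic oscillator, integrated with explicit Euler. The cited works use Nosé-Hoover chains and higher-order integrators precisely because a single thermostat is not ergodic for this system; all names and parameter values below are ours:

```python
def nose_hoover_step(q, p, xi, dt, m=1.0, k=1.0, kT=1.0, Q=1.0):
    """One explicit-Euler step of a single Nose-Hoover thermostat coupled
    to a 1D harmonic oscillator. The friction variable xi grows when the
    instantaneous kinetic energy exceeds kT and shrinks otherwise, which
    drives the time average of p^2/m toward kT."""
    dq = p / m
    dp = -k * q - xi * p          # Hooke force plus thermostat friction
    dxi = (p * p / m - kT) / Q    # thermostat equation of motion
    return q + dt * dq, p + dt * dp, xi + dt * dxi

# Short trajectory from a cold start.
q, p, xi = 0.0, 1.5, 0.0
for _ in range(20000):
    q, p, xi = nose_hoover_step(q, p, xi, dt=1e-3)
```

A chain attaches a second thermostat to `xi` (and so on), which restores ergodicity for stiff or harmonic systems.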