text: string (lengths 54 to 548k)
label: string (4 classes)
id_: string (length 32)
In many of the above mathematical works that study the dynamics of collapse, both in dispersive systems such as the NLS {{cite:3545a9ea6be6137ddc444c5b74fbe376b840e949}}, {{cite:c39b57f18b8a26a03ec56035e67985272da3cc5c}} and in dissipative systems such as reaction-diffusion ones {{cite:2ef4036177681cd5b4393e4ed00719daa0cde1e2}}, the emphasis is on identifying the solution in a frame where it becomes steady, namely a self-similar (or “co-exploding”) one {{cite:2be2622e635f85bccfb9c8c5993137fee2121080}}, {{cite:7a706970a968bb3a0dc55332acb4f571b80dbeb0}}, {{cite:c4e4405074265b51d8709c95d1223c747e1b0f02}}, {{cite:3a5424f8fd716ad3fd01214c55df1541aa274c77}}, {{cite:8c715cfc016f742a4a2011ec6179c2f2ca617d88}}. A similar approach is leveraged in dynamical systems and partial differential equations (PDEs) when exploring traveling waves, which are identified as steady solutions in a so-called co-traveling frame. In such settings, a natural next step is to explore the spectral stability of the solutions in that frame {{cite:e6344d536e4c63ddc81f738573aca9df71516ee4}}, {{cite:51cb0956784b22eadae87515ab09646d180a1e98}}. However, in the realm of self-similar solutions, far fewer studies appear to explore the spectral properties of the wave in the co-exploding frame {{cite:ce3848e1d976764e0384b97fad63e007e4f2ff98}}, {{cite:ce6d717b6b5ec52a68bc2f369b4e1f6f9de12e63}}, {{cite:3889e4f271b3fd342322e7f8a5514a40d771bb13}}. Indeed, in the context of the NLS, the only prior approach that spectrally explores the collapse problem is the earlier work of some of the authors {{cite:3889e4f271b3fd342322e7f8a5514a40d771bb13}}. In a recent work, we revisited this topic, examining the self-focusing problem as a bifurcation problem and identifying its effective normal form {{cite:16855c0b340af644eb60f4d137d68e8e49df9b5a}}. In the present study, we complement this approach by systematically examining the spectrum of the self-similarly collapsing solitary wave.
i
21bf655f3e18e825817800ec9781f4f2
In this section, numerical results are presented to evaluate the performance of the proposed algorithm. All results are obtained by averaging over 500 channel realizations. Unless stated otherwise, we assume {{formula:6de6945b-f0d3-4589-a6e4-1292afe1617c}}; the BS, RIS 1, and RIS 2 are located at (0 m, 0 m), (40 m, 10 m), and (40 m, -10 m), respectively; and the users are randomly distributed in a circle centered at (50 m, 0 m) with radius 5 m. The carrier frequency of the mmWave system is 28 GHz, and the corresponding large-scale fading parameters are set according to Table I in {{cite:c547f80a7b1dd21ec246ed6fb2c7bce6fdabb1ef}}. The other simulation parameters are {{formula:605fcc8b-2ff1-4ef8-945d-7e4bbe13afb8}} W and {{formula:af4a8e9b-733a-4d4e-bcd0-8d5dc59d4513}} dBm. For simplicity, it is assumed that the blockage probabilities are equal, {{formula:3a6e0675-a846-4229-9d27-da0f308741cb}}, and that the target SINR of all users is {{formula:f80c65f5-36d7-4fcf-bfbd-3e6b1867a2f1}}, leading to the minimum target rate {{formula:49655b86-81e9-456f-b3fc-326c7bb83952}}. To evaluate the performance of the proposed BSGD algorithm, we consider three benchmark schemes: 1) RIS-random: {{formula:85ba5c11-67d3-47dd-b8d9-d2fb27876ab9}} is designed randomly; 2) RIS-non-robust: the beamforming is designed by setting the blockage probability to zero; 3) Non-RIS: RISs are not deployed in the network.
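As an illustration of the stated geometry, the following minimal Python sketch (the user count, seed, and helper names are illustrative assumptions, not taken from the paper) drops users uniformly in the disc of radius 5 m centered at (50 m, 0 m) and records the node positions from which the link distances entering the large-scale fading model are computed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Node positions in metres, as stated in the setup.
bs, ris1, ris2 = np.array([0.0, 0.0]), np.array([40.0, 10.0]), np.array([40.0, -10.0])

def drop_users(n_users, centre=(50.0, 0.0), radius=5.0):
    """Drop users uniformly at random in a disc around `centre`."""
    r = radius * np.sqrt(rng.uniform(size=n_users))   # sqrt gives uniform area density
    theta = rng.uniform(0.0, 2.0 * np.pi, size=n_users)
    return np.stack([centre[0] + r * np.cos(theta),
                     centre[1] + r * np.sin(theta)], axis=1)

users = drop_users(4)                 # user count is arbitrary in this sketch
d_bs_ris1 = np.linalg.norm(ris1 - bs) # example link distance for the fading model
```

In an actual run, one such drop would be generated per channel realization and the results averaged over the 500 realizations.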
r
ee028afa541dea124bd16b308a2211d9
More precisely, since we can write a first-order logical formula specifying that {{formula:77f2ac39-5b6d-41b3-981d-b57d195a4077}} is a {{formula:c78017a9-a147-43dc-905a-99f1b3cf9dc6}} -composition of elements of {{formula:9d88dd26-599b-4cd1-aa8b-2114f28ea7e1}}, we can obtain a linear representation for {{formula:1b8d97eb-2eae-40c1-88af-d13ba102403e}} from the DFAO for {{formula:41026ad5-38fe-4d9c-95a9-d5019d4433fc}}. Then, using a simple construction based on block matrices, as in {{cite:12f3be8a2acffadfe9d027b0b9641e97dfd6ee2a}}, we can find a linear representation for the difference {{formula:7bbc26a4-1ecf-42a5-8c5b-426ef405d85a}}. This can be carried out using the Walnut software system, originally designed by Hamoon Mousavi {{cite:c4042071b2c1de9c7207eca09a594edf69c2ca5e}}, and available for free at
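A hedged sketch of the block-matrix construction (toy helper, not Walnut's interface): if f(w) = u μ(w) v and g(w) = u' μ'(w) v' are linear representations, then stacking the matrices block-diagonally and negating one output vector yields a linear representation of f - g.

```python
import numpy as np

def difference_representation(u, mu, v, u2, mu2, v2):
    """Combine linear representations (u, mu, v) and (u2, mu2, v2) of f and g
    into a single linear representation computing f(w) - g(w)."""
    U = np.concatenate([u, u2])
    V = np.concatenate([v, -v2])        # the minus sign realises the difference
    M = {}
    for a in mu:                        # one block-diagonal matrix per letter
        z1 = np.zeros((mu[a].shape[0], mu2[a].shape[1]))
        z2 = np.zeros((mu2[a].shape[0], mu[a].shape[1]))
        M[a] = np.block([[mu[a], z1], [z2, mu2[a]]])
    return U, M, V

def evaluate(U, M, V, word):
    """Evaluate a linear representation on a word (an iterable of letters)."""
    out = U.copy()
    for a in word:
        out = out @ M[a]
    return out @ V
```

Deciding whether such a difference representation is identically zero is then a standard linear-algebra computation.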
r
422cfb910c75c85fb1cad91d3627e092
there exists {{formula:70c26795-593d-44c9-ab8a-189b7588e3f3}} {{formula:06f789b7-d443-411a-945c-9dd4db45beb0}} {{formula:e2bc7a59-231c-4431-9f85-883d2e9d5acb}}. For {{formula:89180007-5fee-4f60-98fb-9638332ca3f3}} and {{formula:26c388ff-51ec-4e4d-a643-baaaeb4a4933}}, Lemma REF {{formula:fa226281-07c1-4c3f-a019-4b6973d43357}}-{{formula:b19534f0-3619-43b0-b893-6a3a085cde86}} yields that {{formula:4796e608-dd29-4f7c-aad2-ea52de535e24}} {{formula:f2532325-dd46-43c3-86f9-378fa13cdccc}} {{formula:1d0c23ab-c577-4578-8927-a1babc713980}}, {{formula:862db7af-5ad1-42b2-9dc5-333b2ce95275}} {{formula:30e9b009-fa48-463c-9e7d-d3eef04b9f1d}} {{formula:e283dc65-8a3b-4e8a-b6de-829ea9284cf9}}. This and an elliptic regularity result {{cite:557351da201c481e98477988c8438276a31490ef}} imply that there exists a unique solution {{formula:9bb10125-611a-4d0f-a009-231143a7c025}} of () and a constant {{formula:f23c75f5-c182-410f-af5f-99225e16769f}} such that {{formula:c479dab3-054a-4caf-a132-2d54cb5e0a01}}
r
dec6f98e1e4f7160c243404f56fda474
In a real urban traffic environment, there may be thousands of intersections that must cooperate to optimize transportation. An action taken at one intersection affects both its local traffic pressure and the road conditions at other intersections. The state transition function and reward function of the traffic environment depend on the phase selection of all intersections, making the traffic environment dynamic and unstable. Therefore, how to apply RL in such a complex environment has become a challenge. Specifically, the urban traffic control problem, which aims to minimize the average travel time of vehicles through multiple intersections, can be defined as a multiagent cooperative problem. The direct application of single-agent RL algorithms to the multiagent environment may create new problems, such as the non-stationarity problem {{cite:10fae42887f2e51c5e4d620b404c1c507fa8df46}}, {{cite:f0d24d7293c99b7506affccc7414860153dd8d66}}, which often leads to non-cooperation, or even failure to converge, when an RL algorithm learns a joint optimal policy in a multiagent reinforcement learning (MARL) environment.
i
10f6999b01b9768506280a3aa262b7c0
In neural systems, multiple neurons are often driven by one external event or stimulus; conversely, multiple neural inputs can converge onto a single neuron. A natural question in both cases is how the multiple variables hold information about the singleton variable. In their seminal work {{cite:c811b5019052e8a3545aea7bf05a74e877ca53d4}}, Williams and Beer proposed an axiomatic extension of classic information theory to decompose the mutual information between multiple source variables and a single target variable in a meaningful way. For the case of two sources {{formula:573b6c8e-912a-4b4b-baf8-f51fca0ac7a7}}, their partial information decomposition (PID) amounts to expressing the mutual information of {{formula:41d6862c-54cf-464b-9435-2e78a3717f10}} with a target {{formula:ab4b8ad5-6582-4d2c-aa31-1e5e6167535f}} as a sum of four non-negative terms, {{formula:19495962-9494-4927-a6e8-96bcf7425e54}}
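For orientation, in the standard Williams-Beer notation (assumed here for exposition; the paper's own symbols sit in the formula placeholder above), the four non-negative terms are the unique contributions of each source, their redundancy, and their synergy:

\[
I(S_1, S_2; T) \;=\; \mathrm{Unq}(S_1) + \mathrm{Unq}(S_2) + \mathrm{Red}(S_1, S_2) + \mathrm{Syn}(S_1, S_2).
\]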
i
c03f67a72342c7fbd3d8f25abcae8d4e
Electro-optical (EO) materials are of vital interest in almost all areas of photonics and telecommunications. Their use rests on their optical properties, with a linear modulation of the refractive index under an applied electric field. This is the Pockels effect, a detailed development of which can be found in reference {{cite:283d044cbbaf47f013186cdf67e5e5662dd170ae}}. The most widely used EO material is still lithium niobate {{formula:76a695dd-2fe9-4efb-b17e-2455182e3801}} (LN), but various alternative materials have been developed, in particular thin films of {{formula:3c8b917f-053a-4e35-9fe2-a2d18d85fbf9}} or {{formula:640d35bc-c891-4127-9a96-8c6c50fc4756}} (SBN) {{cite:882e84e7fcba357513557dfc195f21b5e28ee2ca}}, {{cite:d995bc8974e4d032f5f01eb5978cdff56cecb438}}, or semiconductors such as gallium arsenide (GaAs) {{cite:fc3eb8d8f4ac5edf90f93947d8bfef0252429d86}} or, to a lesser extent, gallium nitride (GaN) {{cite:a46f5992e35003780d279cbfedff51dd66754074}}, {{cite:2d37dc6277781bce48e77d82283f73417301e775}}. The precise and reliable determination of the electro-optic coefficients is therefore essential for characterizing these materials, optimizing them, and integrating them into applications. This is particularly critical for materials such as III-V semiconductors, whose EO coefficients are very low compared to those of LN. Cuniot-Ponsard et al. proposed an experimental procedure to reliably measure the EO (and converse piezoelectric) coefficients using a Fabry-Perot interferometric setup, with an example shown in fig:structure. In the present work, I implemented the first open-source code to analyze and extract the EO coefficients from reflectivity measurements using this experimental procedure. The variation {{formula:8d43cd7d-995c-4b95-9b6b-05aa5ebc3114}} of the reflectivity {{formula:93d1b32e-ab5d-4df6-8ec1-ed59bb58ed79}} of such a multilayer planar structure (as shown in fig:structure) for the transverse electric (TE) polarization is given by: {{formula:5b2d3be2-dcf2-4dfb-80e7-8941a2f3d9c2}}
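For context, the Pockels effect invoked above is the linear index change under an applied field; for an effective EO coefficient r, refractive index n, and field E, it takes the textbook form (stated here for orientation, not as the paper's working equation):

\[
\Delta n \;=\; -\tfrac{1}{2}\, n^{3}\, r\, E .
\]

The smallness of r in III-V semiconductors is precisely what makes a reliable interferometric determination of the coefficients so demanding.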
m
a9d476a4a55d6bd29bcbf0ac23a98955
The mixed-valent and spin states of Ni in the ternary La-Ni-O system tend to form charge and spin order. The emergence of the two types of order has been verified in hole-doped La{{formula:95985e59-d2aa-4a0a-922c-ded8f5b7ec38}} Sr{{formula:6eb0374a-c533-42de-98bc-cbd33b39d6df}} NiO{{formula:ff1fe993-8df0-42d4-8151-9e8499c264b1}} ({{formula:25d76dba-1f3d-434a-b709-ed7779d2d6e8}} 1/4, 1/3, and 1/2), La{{formula:52fa7081-d418-4c94-9f0f-b5be590b036d}} Ni{{formula:6d98db07-39ec-43ae-a539-f75af1a51101}} O{{formula:63b1b613-eb4b-4740-b412-0a51250cab13}}, and La{{formula:6b147df1-ecb6-4532-89bb-0e6a9fde54cc}} Ni{{formula:2e70c071-611a-4db7-9f70-c8d99ed5f303}} O{{formula:b8436b84-bdfe-45ab-9149-943f49a13fd9}} {{cite:0ecec7a7db9514a961848cad72f893c81f47595f}}, {{cite:fdf94340333df83e08e9f83e5bef4f80b26f0291}}, {{cite:4e8eccddf70b579718f35f4a43e9929300c9b3c7}}, {{cite:a988a6bf29fd4caea9ff1a80c80af18407210a3f}}, {{cite:18b736a97a1800ae7be998d68f156c0695c12e01}}. For the trilayer RP compounds, the ratio of the magnetic Ni{{formula:b74c2c5a-2306-457d-9110-e522d2054a31}} ({{formula:12e94a01-d3da-44cf-97be-b5698ba20067}} ) to nonmagnetic Ni{{formula:913f8272-7b06-4081-84d7-c160bd9adea4}} ({{formula:da903997-b7bb-4f60-a78d-ed9df575a12d}} ) is {{formula:ccdd9d3e-8963-41db-a933-ae6f37e03c2e}}; for the bilayer RP compounds, this ratio drops to {{formula:84436561-9e23-4ec3-96b8-43279e6b9a08}}. The formation of charge order is thus expected, while the spin order may be weaker in the bilayer compounds. The changes in resistivity, susceptibility, and specific heat at 153 K are indeed less pronounced in La{{formula:2a4b5954-1ab6-4d39-bc34-5956c03fcb43}} Ni{{formula:c2a8f4e3-48f5-46fb-be84-4b13661aeca2}} O{{formula:496db0ae-e80f-4ec6-b6d6-c74ac6014f62}} single crystals than in La{{formula:607b3cee-b684-4b75-93b3-2f1fdb6d3cba}} Ni{{formula:1e53cee3-7027-463b-96f5-cb82c73ff719}} O{{formula:f1f2ad9b-e1ac-4633-9077-7721c489e8e9}}. The possibility that the charge order emerges in the bilayer La-Ni-O system without being accompanied by spin order cannot be ruled out. In this case, the anomaly in magnetization could be due to charge-spin interactions. Our Raman scattering measurements on La{{formula:59820e8e-e085-4a18-ae84-92fd723812b8}} Ni{{formula:aeee4913-3f54-4081-83bf-9cf376e99267}} O{{formula:ca7f247f-9a22-48e7-acb2-21e324b2bec1}} also reveal an anomaly at {{formula:4e73fce7-5b33-42f3-a298-15285afdc7d2}} 150 K in the position of the peak at {{formula:bdea6161-b1f9-4f91-8dc8-8efceec0d513}} 597 cm{{formula:53075d32-87f0-4a80-b410-3d1ae6b41768}} (data not shown). Based on previous studies, the anomaly in resistance at {{formula:2dd6badc-e1b7-4111-affb-b93391bd7e42}} 110 K may be related to a change of carrier concentration induced by a structural evolution with temperature {{cite:8e0892c62338fe1f70a4d72c0ddd3123443a61cf}}, {{cite:2f2c449e9bc3322fbded5d473eb95dc859a2ca77}}, {{cite:d1c6b5253e1e85d5aafdf64317b017dc3345e4e8}}, {{cite:512ec1e9a2bd96a4e669f6c612d7e1e4964074ad}}, {{cite:2bfd0d600cb312f7b842cb3dd659de8bc103aeec}}.
d
f68f9f227e1de6547740e5825245b7d3
The corresponding minimum cost flow problem has unit node capacities (at most one unit of flow through each node) and node weights equal to the negated weights of the balls, so it is equivalent to finding {{formula:cab172c2-eae2-4fe1-9b1b-0a2045d54ad5}} node-disjoint paths from {{formula:9ae04451-cf01-4cb0-9a61-3267cb7efe55}} to {{formula:214451c9-6173-4f0f-ae32-6d0be9bc5864}} (the paths share only the nodes {{formula:c3296efa-c7a0-46c5-adb8-891d1f1137e1}} and {{formula:f58067af-664a-4993-b04f-9701dadf4132}} ) such that the total weight of the selected paths is minimised. This problem can be solved in {{formula:4a56d0c3-838f-4da4-be35-ccb947cea123}} time by the successive shortest path algorithm {{cite:018b7b1b10e634b4b415a68465410917f618af22}}. The quadratic dependence on {{formula:59f79ebe-2ad9-40a4-a064-f23180a1590d}} is due to the fact that the graph {{formula:89665a79-96a5-4d85-8a6c-e4a261c9ffee}} can have a quadratic number of edges. Looking into some technical details, graph {{formula:87dd304d-f5bf-4915-bdf9-daa200300037}} actually has {{formula:af06819b-029c-4580-b46b-74f4e046ae80}} nodes, since each node representing a ball is split into two nodes (connected by an edge), as in the standard reduction from node capacities to edge capacities.
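A hedged sketch of this reduction using networkx (the function name, graph encoding, and super source/sink are illustrative assumptions): each ball node is split into an in/out pair of capacity 1 and cost equal to the negated ball weight, and exactly k units of flow are forced from s to t.

```python
import networkx as nx

def k_disjoint_min_weight_paths(nodes, arcs, weights, s, t, k):
    """Min-total-weight k node-disjoint s-t paths via min-cost flow."""
    G = nx.DiGraph()
    for v in nodes:
        cap = k if v in (s, t) else 1          # paths may share only s and t
        G.add_edge((v, "in"), (v, "out"), capacity=cap,
                   weight=-weights.get(v, 0))  # negated ball weight as cost
    for u, v in arcs:                          # original arcs: out-side to in-side
        G.add_edge((u, "out"), (v, "in"), capacity=1, weight=0)
    # A super source/sink forces exactly k units of flow through the network.
    G.add_edge("src", (s, "in"), capacity=k, weight=0)
    G.add_edge((t, "out"), "snk", capacity=k, weight=0)
    return nx.max_flow_min_cost(G, "src", "snk")
```

The returned flow dictionary can be traced edge by edge to recover the k paths.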
i
5b125b9c0bcae9bcb7fb4b641a387f3b
Theorem REF indicates the necessity of a finite penalty factor {{formula:dad5cb59-85f5-41b1-83d5-6aae98e851fc}}; otherwise the approximation error cannot be controlled. On the other hand, we have to restrict policy updates to the neighborhood of the last policy {{formula:de963531-85a8-4860-9313-8d0e5060015c}} (i.e., {{formula:621c6823-ec29-4407-86b4-5aea8a55ea9b}} ) and improve the policy iteratively. Here, we incorporate the trust-region constraint via the clipped surrogate objective {{cite:ba782690df0cca7a1d9fd37c7eb451600f8e616b}} in the approximate exactly penalized problem (REF ) to guarantee a proximal policy iteration. The final optimization objective, abbreviated as P3O (penalized PPO), is then derived as: {{formula:3bad23e7-e488-4fa7-91bb-8434f6a8062f}}
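For context, the clipped surrogate objective referenced above has the standard form (with probability ratio r_t(θ) = π_θ(a_t|s_t)/π_θold(a_t|s_t) and advantage estimate Â_t; notation assumed here):

\[
L^{\mathrm{CLIP}}(\theta) \;=\; \hat{\mathbb{E}}_t \Big[ \min\big( r_t(\theta)\,\hat{A}_t,\; \operatorname{clip}\!\big(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\big)\,\hat{A}_t \big) \Big],
\]

which keeps each update within an ε-neighborhood of the previous policy and thereby plays the role of the trust-region constraint in P3O.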
m
f28070b9ca4a31f94de2ae31f70f07e3
Since our proposed method is the first work that constructs visually protected images for DJSCC transmission, we compare it with two perceptual image encryption methods, i.e., the learnable image encryption (LE) method {{cite:c9fa804f82ff0a0345ed03f2e9f484738bca6719}} and the pixel-based image encryption (PE) method {{cite:be01b7036583b7aa69b498f01c987b701731e14a}}, which were designed for the image classification task. The classification accuracies of ResNet-20 {{cite:e5f25799078e4c6874f058b2585149babc3084e5}} based on the LE method and the PE method are 87.02% and 86.99% on the CIFAR-10 test dataset, respectively {{cite:2a306e22297258d4349ddc5452c3fc82e6a6a9cb}}. {{figure:988665a5-032e-4136-813a-c97843645985}}{{figure:6efe0cf4-c20b-45e6-bea7-3ad88a4cc75f}}
r
2815508698e45b233a4167d335097524
Implicit in the causal effect analyses is that inferences are done conditionally on finding the margin. As argued in other contexts (e.g., {{cite:e199300e5cc5f2ed99c03106dd958c07f70cfea6}}), there exist many modes of performing inference in causal analyses. This issue should be explored further as well. In particular, the recent literature on post-model selection inference in {{cite:eb2ff7ed17f41b4217e3e20afbdeee75ff8a1dd8}} could potentially be extended to this setting.
d
75382ea2d47f7599d4b71c73a66953f8
We present qualitative results in Fig. REF and compare with the state-of-the-art approach of Choutas et al. {{cite:f85770fc30d1fd362d8bdf3c5e014420838475ea}}. Despite much faster inference, our model gives results of equal visual quality. In the first row we show that our model captures detailed hand poses while {{cite:f85770fc30d1fd362d8bdf3c5e014420838475ea}} gives an over-smoothed estimate; this is because we utilize high-frequency local features extracted from the high-resolution hand image. In the second row, we demonstrate that our hand pose is consistent with the wrist and arm, while the result of {{cite:f85770fc30d1fd362d8bdf3c5e014420838475ea}} is anatomically incorrect; this is due to our utilization of body information for hand pose estimation. We demonstrate in the third row that, with variations in facial shape and color, our approach provides highly personalized capture results, while {{cite:f85770fc30d1fd362d8bdf3c5e014420838475ea}} lacks identity information. In Fig. REF we compare the face capture results of coarse and tight face crops. The result on the loosely cropped image already captures the subject very well (left), and a tighter bounding box obtained from a third-party face detector {{cite:b8526e095b31044d98d28d54a1a7686b7148ddc3}} based on the coarse crop further improves the quality (right). Unless specified, the results presented in the paper are all based on tight face crops. As our approach does not estimate camera pose, for overlay visualization we adopt PnP-RANSAC {{cite:d447fca2045a975b504ffef740bc0eecaab67b4c}} and PA alignment to align our 3D and 2D predictions; a sketch of this step follows below. The transformations are rigid and no ground-truth information is used. Please refer to the supplemental material for more results. {{figure:19ea8c94-2e6a-4631-b6cb-1a768bb34f1f}}{{figure:0b18124b-0bef-4dff-950c-148b053e0e5a}}{{figure:e01d9fe4-ef41-45dd-855b-f45da0b7aac1}}
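A minimal sketch of such an alignment step with OpenCV (the pinhole intrinsic with a guessed focal length is an assumption for illustration; the paper's exact alignment procedure may differ):

```python
import cv2
import numpy as np

def overlay_alignment(pts3d, pts2d, img_w, img_h):
    """Rigidly align predicted 3D keypoints to 2D predictions for overlay,
    assuming a simple pinhole camera with a crude focal-length guess."""
    f = float(max(img_w, img_h))                # crude focal-length guess
    K = np.array([[f, 0, img_w / 2.0],
                  [0, f, img_h / 2.0],
                  [0, 0, 1.0]])
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d.astype(np.float64), pts2d.astype(np.float64), K, None)
    return ok, rvec, tvec                       # rigid rotation + translation only
```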
r
20841c3fc4ffa9005eb90b759a2cf98b
where {{formula:f939b52c-b40d-4a2f-b972-1d3ebbe75e43}} is the set of non-descendants of {{formula:c231df78-e3c4-4f45-b6fd-5db8afa0963f}} ; for details, see {{cite:2596ea62411952459752598c6fd5dc39cd72f8a7}}.
d
683b70f840f2cb9b51b75356a75df933
Overall superiority of LAMDA with varying budget. In Fig. REF, we compare the performance of LAMDA and the existing approaches under varying budgets for each of the OfficeHome, OfficeHome-RSUT, VisDA-2017, and DomainNet datasets. (Unfortunately, {{formula:baa1e7eb-2fdf-4fbe-9a30-3fd674c6333a}} VAADA {{cite:cb6246063a13fb951699fa68dc7142a7d0876a0c}} requires infeasible memory consumption on DomainNet and VisDA-2017; in the supplementary material (Sec. REF ), we report its performance on the subset of DomainNet scenarios that our resources allow.) Note that each method is equipped with its own domain adaptation technique (e.g., VAADA {{cite:2331ed846a7c60caf218c24986b71ee57842342d}} for {{formula:a10ccd51-0b90-4ac3-9750-13439f2d307c}} VAADA, and MME {{cite:b70650646bb0841e57ddb53accda3f2c84b83d5b}} for CLUE) and classifier (i.e., the cosine classifier for LAMDA). We evaluate these methods while varying their adaptation techniques or classifiers to examine the contribution of their components thoroughly. The results show that LAMDA clearly outperforms the previous arts in every setting on all the datasets. In particular, LAMDA with only a 2% budget is often as competitive as, or even outperforms, the methods with a 10% budget. The performance gap between LAMDA and the other methods increases as the budget increases. This suggests that LAMDA utilizes the budget effectively in both ways: label distribution matching and supervised learning. {{table:bf5c5e8e-c60d-461c-a75e-4c3a25fd6be1}}{{table:cb0c52b4-1ca4-4e50-bf58-558a5d49cd08}}
r
717f7741888c2585bbb7f0af89e620fb
Detecting Out-of-Scope Intent in Conversational Language Understanding. To validate the method beyond image modalities, we also evaluate SNGP on a practical language understanding task where uncertainty quantification is of natural importance: dialog intent detection {{cite:7fa21086f122930f55f631f5f77915ddf0d71f20}}, {{cite:0638bf6d5c9bb07f37385ae8ad70bde5e7537396}}, {{cite:f87fcd6e3bdfbf2ab552880cf394e7207f2dbf9c}}, {{cite:98c7d8f4da8e4c5f811f4143a90038166c87eb50}}. In a goal-oriented dialog system (e.g., a chatbot) built for a collection of in-domain services, it is important for the model to understand whether an input natural utterance from a user is in-scope (so it can activate one of the in-domain services) or out-of-scope (where the model should abstain). To this end, we consider training an intent understanding model using the CLINC OOS intent detection benchmark dataset {{cite:7fa21086f122930f55f631f5f77915ddf0d71f20}}. Briefly, the OOS dataset contains data for 150 in-domain services with 150 training sentences each, along with 1500 natural out-of-domain utterances. We train the models only on in-domain data, and evaluate their predictive accuracy on the in-domain test data, and their calibration and OOD detection performance on the combined in-domain and out-of-domain data. The results are in Table REF . As shown, consistent with the previous vision experiments, SNGP is competitive in predictive accuracy when compared to a deterministic baseline, and outperforms other approaches in calibration and OOD detection. {{table:d3c121c0-28e0-4852-a2a2-25aada96dc15}}
m
da366a95491282b49e77219158b670a0
The fact that we consider complex-valued “para-Hermitian” structures implies that, unlike the usual literature on DFT and related matters, we can analyse different signatures of the metric by imposing different reality conditions. Since we work in a 4-manifold {{formula:f2f63f0f-4afa-42e6-8d9d-7bd578d321c0}} (which we assume to be orientable), this means that we will deal with Lorentzian, Riemannian (or Euclidean), and split (or neutral) signatures. The Lorentzian case is relevant, of course, for general relativity (and in particular for the so-called hyper-heavenly spaces {{cite:69ca16cc311ce7754c6148814e0152b3263bdbfb}}); the Riemannian case is relevant because of, e.g., the Atiyah-Hitchin-Singer approach to twistor theory {{cite:3bd6397084a24a02f071f97a86ce5a0316f084bc}}; and the split case can be related, since the work of Ooguri and Vafa {{cite:4c87916add38f6a5b0cdb8e99197b40e25f3b789}}, to the geometry of strings with {{formula:18869b3b-efeb-4586-8178-d62ff3e11bc3}} supersymmetry. (It is also related to the LeBrun-Mason twistor construction in split signature {{cite:7784ae656e43ae1709310153947604babfdcd002}}.)
r
42d1c372074a014ac2b719ab0d6d7b55
We want to apply the results from the previous sec::errorest to PINNs as introduced in {{cite:2ec07b90ba80c72a0306f8d02749ceb25d0a250f}}. Therein, the data-driven loss used during the training of feed-forward NNs is supplemented by a contribution reflecting the PDE (the physics-informed contribution), yielding {{formula:6e732422-a934-4fb4-817e-53a0899420d9}}
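A minimal PyTorch sketch of such a composite loss, for a toy 1D Poisson problem u''(x) = f(x) (the PDE, network size, and equal weighting are assumptions for illustration, not the paper's setting):

```python
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))

def pinn_loss(x_data, u_data, x_colloc, f):
    """Data-driven MSE plus a physics-informed residual for u''(x) = f(x)."""
    loss_data = torch.mean((net(x_data) - u_data) ** 2)

    x = x_colloc.clone().requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    loss_pde = torch.mean((d2u - f(x)) ** 2)

    return loss_data + loss_pde   # relative weighting is a modelling choice
```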
m
14df1816b2787cb6c434490fdb777204
In our model, the maximum energy is estimated under the assumption that the particle Larmor radius is less than the termination shock radius (see Eq. (REF )). Two parameters are involved here. One is the magnetic fraction {{formula:fe752461-60c4-4923-b24f-1ffea5aaa054}}, which is limited to a small value: {{formula:f1a9d10d-46a7-4e40-a3d5-2532d8dde6e4}} (i.e., {{formula:c095f062-beb7-460f-8e7a-51ea5faba4fb}} ), consistent with the result derived from axisymmetric two-dimensional simulations {{cite:6754923709e89b5a12d524213d2bb3754aa27471}}. In this case, the energy fraction occupied by the electrons (protons) is {{formula:01ae21d6-0530-4741-adc5-1d2d98946d4e}} ({{formula:29ca7193-670d-4114-b90e-15d09a81d307}} ). The other parameter is the ratio {{formula:a93c33e7-331c-45a8-91ef-bdb30c4135e8}} of the particle Larmor radius to the termination shock radius, which is set to {{formula:125aba35-d275-411c-a48e-acd1bbf814fb}} here. Although the value of {{formula:479cb6a0-e814-457f-aa34-773c086421b1}} adopted here is slightly larger than those previously used {{cite:c8dbd978b4ecb0c5280581e6d2c19df624e4713c}}, {{cite:2f4a01c9251f2f4893c12367148e9bbfc01ee87b}}, {{cite:9d67a0c6c5f9818d4b2aa26b555171fa9f3d657e}}, {{cite:7a085ee85e4821a6b369a43aad9ff8b34edad5e5}}, we deem our choice reasonable.
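Schematically, the constraint referred to above is of the Hillas type (charge q, magnetic field B at the shock; this is a generic restatement for orientation, not a quotation of Eq. (REF)):

\[
r_L \simeq \frac{E}{qBc} \;\le\; \varepsilon\, R_{\mathrm{TS}} \quad\Longrightarrow\quad E_{\max} \simeq \varepsilon\, q\, B\, c\, R_{\mathrm{TS}},
\]

with ε the Larmor-to-termination-shock radius ratio discussed above.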
d
b712d3924b874a4a712e251fa7b3bc95
Comparison to state of the art. We compare WikidataRec against the following methods on Wikidata-14M: BPR-MF {{cite:847d9c713a863733acce2ad130d0a0a76cc73c30}} is a representative collaborative filtering model that uses matrix factorization (MF) as the underlying predictor and is optimized with a pairwise ranking loss; it is suitable for recommendation scenarios with no explicit editor feedback and yields personalised ranked recommendations. eALS {{cite:298ee1e8214c75b13a4df190250343614581f9b5}} is also an MF-based method, optimized with a square loss; it treats all unobserved interactions as negative instances and weights them non-uniformly by item popularity. GMF {{cite:9263bcec4060e7deab2508249ffdf05ac46887e9}} is a neural network-based collaborative filtering method that implements MF with a cross-entropy loss; it embeds each item and editor in the network and computes their element-wise dot product to predict the relevance score. YouTube-DNN {{cite:f243d32e519fad0fc4dfc57122229d4e5d6bc8b0}} is a neural recommender for YouTube videos, using deep candidate generation and ranking networks; it uses a hybrid of users' activities and content information of users and items and directly learns their low-dimensional representations. In this paper, we adapt YouTube-DNN with our item content and relational representations.
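For reference, the pairwise ranking objective underlying BPR-MF has the standard form (notation assumed here; σ is the logistic sigmoid and Θ the model parameters):

\[
L_{\mathrm{BPR}} \;=\; -\!\!\sum_{(u,i,j) \in D_S}\!\! \ln \sigma\big( \hat{x}_{ui} - \hat{x}_{uj} \big) \;+\; \lambda \lVert \Theta \rVert^2, \qquad \hat{x}_{ui} = \langle \mathbf{p}_u, \mathbf{q}_i \rangle,
\]

where each training triple (u, i, j) pairs an observed interaction (u, i) with an unobserved one (u, j).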
r
eb3176d6fb076eb9d0977876abb3232d
3D human digitization has drawn significant attention in recent years, with a wide range of applications such as photo editing, video games, and immersive technologies. To obtain photo-realistic renders for free-viewpoint videos, existing approaches {{cite:dca0b4144d0ea95212d9b5ed7b641ff6e97bf217}}, {{cite:a52a3770dd8a095d1ab049985974e8f4be87f23e}} require complicated equipment with expensive synchronized cameras, which makes them difficult to apply in realistic scenarios. To date, modeling the detailed appearance of dynamic clothed humans, such as cloth wrinkles, and facial details, such as eyes, from videos remains a challenging problem.
i
ccf868112be5c996b98e04f1d0d20071
If we include the discovery of the nova shell around IPHASX J210204.7+471015 by {{cite:4d8c65fbca3cb78720a2f75f15c520b87f57e172}}, who used a similar setup, in our calculation, then we have two NLs with shells out of a total of 57 NLs. Assuming the {{formula:433f834a-cfcf-45a4-b36e-d85175a84951}} 100-year visibility window discussed in the previous paragraph, this would imply an NL lifetime of {{formula:05b8c190-0be9-48cd-a545-73870bf435e1}} 3,000 yrs (the visibility window scaled by the detected fraction: (57/2) × 100 yr ≈ 2,850 yr). This result is broadly consistent with the order-of-magnitude estimate of 1,000 years derived by {{cite:afeb1ea2bf4844ce542922f581d33b465a04a299}} for the NL phase of CVs.
d
f81b49d19aa19a70b9cf6600dcca947d
In most cases of practical interest, the solution of problems involving many interacting quantum particles rests on some kind of approximation {{cite:4eb92499095d5353cc6ac79a904d79d6b95a72b1}}, {{cite:d9e95ff81f265318fc671dfd2acead12f378d659}}, {{cite:e1fe9c96a93f8de860cdbb55dd979486a84144fc}}. It is not surprising, then, that an important fraction of the theoretical work in Condensed Matter Physics is concerned with the development of such approximations and the testing of the predictions derived from them. Many of these techniques are similar in spirit: they improve over the simplest Mean Field approximation to the problem by including correlations and/or interactions between clusters of variables. Among the most celebrated approximations, although certainly not the only ones, we could mention the Effective Field Theory (EFT) {{cite:81b06384e5825a7f09662570141540f3ccc952de}}, the Cluster Mean Field (CMF) {{cite:85dacafe53778754dd8b85166550193d65a48e72}}, {{cite:c24e15be4702554664fc785e2dfcaf419217d7a6}} and the Correlated Cluster Mean Field (CCMF) {{cite:22d20e9b643da99326ef4181518a38f8fc38ead1}}, {{cite:fb0700a9822778c846d4f64823bf330a9aacf21c}}, {{cite:fe0a74eab51f11a6271e1ef7797a322458b75b57}}. The Quantum Cluster Variational Method (QCVM) {{cite:d974bb6a587f88285b9e19605e88e9294bc3392f}} fits into this family in a novel way: it allows for the presence of disorder in the model, establishes a clear distinction between average-case scenarios and computations on single instances, and connects with message-passing algorithms developed in Computer Science and Information Theory {{cite:f2ce1e6d23a2c663ab2fe1cf4777f871eb51c1f9}}.
i
a3a6b1d6c5eb8ad5b32bcb2d25e93089
Limitations and future work. The presented work has limitations, with several potential improvements that are the subject of our future work. First, dual-domain self-supervision requires an accurate k-space signal for loss computation. Increased noise in undersampled data may impact training performance, and thus pre-processing via denoising could be applied to stabilize the self-supervised training. Second, the proposed conditioning method requires well-aligned MC-MRI. As misalignment between MC-MRI contrasts may degrade DSFormer performance, image registration should be included in the pre-processing to eliminate this effect {{cite:636939e8a5b886a7a0192167e2fea8eae99a1ab1}}, {{cite:836c52ae3ff1351019e71be548e201ce78c2ee1f}}, {{cite:9e10e6f15e6787cd91ba9047821fee2f573374d8}}. Third, DSFormer currently only considers accelerating the target MRI in MC-MRI while acquiring a fully sampled reference MRI. DSFormer could be extended to simultaneously accelerated MC-MRI for all contrasts, which is an important direction for future work. Importantly, our experiments benchmark DSFormer's performance only on healthy human data, and it is plausible that pathologies (e.g., lesions) may lead to drastically different appearances in some modality pairs. In future work, we will investigate reconstruction on pathological data with heterogeneous tissue appearance.
d
c3aece4a232592a1b353ca3c649603dd
While we have presented a number of corrections for dogmatism, we have not presented any coherent framework for deriving corrections. This presents an important question: are there any objective Bayes principles which automatically lead to priors which adequately account for dogmatism? Certain strategies, such as using Jeffreys priors, cannot work because they usually imply that REF holds. By contrast, other objective principles which are not parameterization invariant and do not necessarily imply REF , such as priors constructed from decision theoretic principles, entropy maximization, and reference priors have some chance of working {{cite:649038d1eafa80abb0d1b9bcf951a8838ca65edf}}. With the exception of entropy-maximizing priors, the computational difficulty of implementing these priors makes numeric experimentation difficult. Interestingly, entropy maximization with respect to the distribution of the observed data can be used to generate models which possess very strong Frequentist properties, but these models are (in our opinion) lacking a satisfying justification.
d
b3023186b7df9b2b2bad2059e03ff315
The proposed model learns the structural information of light fields without requiring ground-truth disparity maps or hand-crafted features. The model is generalisable to different light field image formats, limited only by the availability of training data. Since the model optimises not only for pixel-to-pixel distortion but also for structure, it may yield a higher MSE and lower PSNR values. This is, however, only a hypothesis, and we have not performed extensive evaluations in this direction. Apart from objective tests, subjective evaluations need to be carried out to judge whether the structure is well captured. Overall, we show that this is a viable research direction that requires further exploration and targeted efforts to develop optimal architectures. In the future, we plan to extend the model to accept the entire 4D light field to increase compression performance. Further, the idea of hyperpriors {{cite:f7e3ad3460ade7e24792e0659c050ad90a6bad5e}}, which is an instantiation of side-information, can be incorporated.
d
e1bad6f120f1ad410111b41091e8e347
The problem framing in fig:pipeline, the formulations in sec:methodology, and the analytical framework presented over hyperparameter settings above naturally extend to any ML system (white-box or black-box) that has hyperparameters, {{formula:3634d9db-9867-411f-bf95-67d22c89562c}}, or more generally, up-stream factors that affect a final model. The specific analyses presented in our paper, however, are bound by the choices made during the model zoo construction (in {{cite:94e6ff4dcc37f803d7401d2d0b09296b8bf7edca}}), e.g., the choice and range/values of hyperparameters, and thus the interpretation must be limited to the domain of {{formula:daa9cfbc-b36c-4050-8325-f98b87cec2eb}} that we tested. For instance, while the model zoo offers an extensive number of models, their architecture is kept constant across all models (3 CNN layers, {{formula:e62aabed-d34a-4cf8-9e30-9dcbc0267cdb}} ). Further studies on larger and more complex models (e.g., {{cite:eea8e91ceb9a3339786f98b066f673fb715edf24}}, {{cite:4b8d79ab2800a04d39a79126b2b458392aad7954}}) or similar analyses when the training dataset is (adversarially) changed (e.g., {{cite:6c52dd5d11912aa389a39e5928a5a7955013255c}}) across different stages of training could reveal interesting insights. Finally, extending our work to uncover the effect of hyperparameters on other types of explanations would be interesting, e.g., influential samples {{cite:455654459a5bdb6a6c1a6c9cdca9b8bc19af8788}}, Shapley values {{cite:e997c405458d6371efbc1b26ddec63d0018c54ed}}, concept-based methods {{cite:fd451e66069e325d9be8f7db620945c7c5beb4e1}}, surrogate-based methods, and recourse-based explanations and recommendations {{cite:f38775d1df819d3dacd89d1a6dce27d5c4c38112}}.
d
bb98c1be105233f5c07281f7bd522ab5
The first historically discovered gauge theory was classical electrodynamics, formulated by J. C. Maxwell {{cite:0ed9513f94793c0ea198bba1faf66898c32654ce}}. The existence of a gauge symmetry within the Maxwell equations initially did not appear to be a fact of huge importance. Only the papers of Weyl on the unification of electrodynamics with general relativity consciously introduced the notion of a local symmetry into theoretical physics {{cite:628c8f02683fc68c45d69962b56cf1424b2d7c72}}. In his seminal paper {{cite:628c8f02683fc68c45d69962b56cf1424b2d7c72}}, Weyl introduced the term "gauge transformation" and, in particular, gauge invariance (German: "Eichinvarianz"). However, perhaps the greatest success of gauge theories came in 1954, when C. N. Yang and R. Mills introduced non-abelian gauge theories to describe the strong interaction confining nucleons in atomic nuclei {{cite:09048fd4f315f1f23ab667426a1c83c46b853e8f}}. These theories came to be known in the literature as Yang-Mills theories, after their inventors. Ever since then, Yang-Mills theories, and gauge theories in general, have been one of the main objects of study in theoretical physics {{cite:671bebd8d11159fc92a72f788ad5efd1c3969f51}}, {{cite:615feea65c3ebe701a05437bf181bbf0016fad03}}. Arguably, the most important discovery within this field was the emergence of the Standard Model of particle physics.
i
6def4c8d3180fe5f43a715186ad6d902
The discovery of a Higgs boson of mass 125 GeV at the CERN Large Hadron Collider {{cite:bc6682fea7bcb91f8699e731b6bce452a89bf17b}}, {{cite:f6e8ddb3169c6b2aec9726c2215fea5751635326}} has completed the particle spectrum of the Standard Model (SM) of particle physics, and has established the Higgs sector as the origin of electroweak symmetry breaking (EWSB). Nevertheless, numerous open questions remain unsolved by the SM, and point to the need for Beyond-the-Standard-Model (BSM) physics. Among these issues, one can mention, on the theory side, the hierarchy problem, the lack of a description of quantum gravity, or inflation. Additionally, a number of experimental results have contradicted prior assumptions – upon which the SM was built – such as (to cite only a few) the need to realise baryogenesis, the evidence for dark matter and dark energy, or the discovery that neutrinos have tiny masses.
i
6292dd3f57b0ff85a9a94cadb28e51c2
We first compare the outputs of TorchRadon and Astra Toolbox {{cite:d5f824306576b509dc7abdaedef89a22bd254ca9}}, {{cite:43157643c34fb3cdc3d5a8f66d32d1500286d718}} on Radon forward and backward projections and filtered backprojection.
r
92f596ebdf80f12b4d04db6f12ac00c7
The optimal control problem () is discretized using a sampling time of 10 min and a control interval of 1 h. The problem is set up using CasADi {{cite:27805e994829653c07906b691c857cf639ddd8f7}} and solved using IPOPT. Fig. REF presents the states {{formula:29bffa8d-b5dc-4b47-bfa8-ecbc2c83e3e4}} of the system. {{figure:9497337b-360f-445b-a99b-189813a2f9d2}}
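A minimal CasADi sketch of such a discretization (the scalar dynamics, horizon, and cost below are placeholders, not the paper's model): 10-min explicit-Euler steps with a zero-order-hold input over each 1-h control interval.

```python
import casadi as ca

dt, steps_per_interval, horizon = 600.0, 6, 24   # 10 min, 6 x 10 min = 1 h, 24 intervals
opti = ca.Opti()
X = opti.variable(1, horizon * steps_per_interval + 1)
U = opti.variable(1, horizon)                    # one input value per hour

f = lambda x, u: -0.1 * x + u                    # placeholder dynamics

for k in range(horizon * steps_per_interval):
    u_k = U[:, k // steps_per_interval]          # zero-order hold within each hour
    opti.subject_to(X[:, k + 1] == X[:, k] + dt * f(X[:, k], u_k))

opti.subject_to(X[:, 0] == 1.0)                  # arbitrary initial state
opti.minimize(ca.sumsqr(X) + ca.sumsqr(U))       # placeholder objective
opti.solver("ipopt")
sol = opti.solve()
```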
r
670effc8fb265543ed50aa8691640017
where {{formula:e058833e-9072-47b9-854a-0899a77abe9e}} with {{formula:97cae08e-b922-4099-a6b5-e6f19bdc8ac4}} being the dimension of the Hilbert space. For a static Hamiltonian system with eigenvalues {{formula:f7f448de-a4cb-4f45-b5e2-8043efaff446}}, we have {{formula:a75b1f48-42ab-44cd-b5a0-e862879a572a}}, and the ratio can serve as a probe of the phase transition between the ergodic and MBL phases {{cite:806859680282db89e483659cdeddad04fcf33166}}, {{cite:481d95f5d27289026346757edb3695d8370a50de}}. In the ergodic phase the energy level spacings satisfy the Wigner-Dyson distribution with {{formula:71a05c06-bd3f-43f4-93e6-9cc84e34fbc6}}, whereas in the localized phase they follow the Poisson distribution with {{formula:3dd0acab-aae1-4475-a0be-2f15bd730d28}} {{cite:806859680282db89e483659cdeddad04fcf33166}}. This quantity has also been applied to study disordered Floquet systems {{cite:92bf1b98c8927f9268daa5833ec34efbc18638da}}, {{cite:25db850a1e2e6866ec24ccfb5be3ace5011afd92}}.
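For completeness, the consecutive-gap ratio standardly used in this literature is defined from the level spacings δ_n = E_{n+1} − E_n as

\[
r_n \;=\; \frac{\min(\delta_n, \delta_{n+1})}{\max(\delta_n, \delta_{n+1})},
\]

with mean values ⟨r⟩ ≈ 0.53 in the (GOE) Wigner-Dyson case and ⟨r⟩ ≈ 0.39 in the Poisson case.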
m
a58fe27dc06aa90141d53b53e0df736d
Topics for future research and broader issues in SARS-CoV-2 seroprevalence studies merit mention. In this paper, Wald-type confidence intervals are considered, which have known limitations {{cite:01406e7b304078a8ac88e42eb36b41758d8838e2}}, {{cite:71d1cf7a27e80cdd3b7d065fa3065281bc9ee544}}; alternative types of confidence intervals could be considered based on the bootstrap {{cite:f46171200ec41b8fa79aa2e470bb45fdb10ad1ae}}, Bayesian posterior intervals {{cite:f507e5ae2f255c75e0398d6831f69d410159490b}}, or test inversion {{cite:2cba4b078a7d938e9cea92c02e01d3841505d456}}. While the approaches here estimate seroprevalence at a fixed point in time, seroprevalence is a dynamic parameter. For analysis of studies with lengthier data collection periods, extensions of the estimators in this paper could be considered which make additional assumptions (e.g., smoothness, monotonicity) about the longitudinal nature of seroprevalence. Another possible extension could consider variations in assay sensitivity, which may depend on a variety of factors such as: the type of assay used; the recency of exposure, infection, or vaccination of an individual; disease severity in infected individuals; the type and dose of vaccine for vaccinated individuals; and so forth. Where additional data are available related to these factors, then extensions of the standardized Rogan-Gladen estimators which incorporate these additional data could be developed. As an alternative to standardization, inverse probability of sampling weights {{cite:7a7a63a713810f1433fca3f0baf9e7da54865323}} or inverse odds of sampling weights {{cite:caaa2c75eb86c65fb718b54fe7396927a75fed4e}} could be considered. Standardization and weighting methods may possibly be combined to create a doubly robust Rogan-Gladen estimator.
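For reference, the basic (unstandardized) Rogan-Gladen estimator that the standardized versions build on corrects the apparent prevalence for test sensitivity and specificity (notation assumed here):

\[
\hat{\pi} \;=\; \frac{\hat{\rho} + \widehat{sp} - 1}{\widehat{se} + \widehat{sp} - 1},
\]

where ρ̂ is the apparent (test-positive) prevalence and se and sp are the assay sensitivity and specificity.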
d
1f21b62c33a295676a6c82a5f959f8aa
Fully homogeneous nets and implicit bias. Suppose the network that we use is fully homogeneous and we have an over-parametrized training problem. For such classification problems, it has been shown that gradient descent methods {{cite:33133d17bef1d0540dfc8bce0dd13a63cec9fccd}}, {{cite:904c4570977723b72a1482f415c678c40ccd2539}}, {{cite:ccd7dbf8429df75434a05878fa7bae9cdacda250}}, {{cite:ad9c0931e11abbd5aca6514f788a0bda0d392a31}}, {{cite:088c897e01b1a721f244533b69c2a3746e803dcd}} have good implicit (inductive) bias properties. Specifically, it has been shown for fully homogeneous nets that gradient-based methods asymptotically attain weight directions that maximize the normalized margin (“max margin"): {{formula:97ceebc7-9e0a-4616-b62b-39945dfa73a1}}
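One common formalization of this statement (notation assumed here; the paper's own expression sits in the formula placeholder above): for an L-homogeneous network f, the normalized margin is

\[
\gamma(\theta) \;=\; \frac{\min_i\, y_i f(x_i; \theta)}{\lVert \theta \rVert^{L}},
\]

and the weight direction asymptotically approaches (KKT points of) the problem of maximizing min_i y_i f(x_i; θ) subject to ‖θ‖ ≤ 1.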
m
cdfd9038080f615eecbc557e02cee14d
The Vecchia approximation offers an alternative to inducing points, but without introducing auxiliary quantities. Although it is sometimes cast as a novel modeling framework rather than an approximation {{cite:1425f72d04716ca97a2e6fa9a10392d932d82880}}, a key advantage is that it doesn't fundamentally change the underlying kernel structure – at least not in the way inducing points do. Rather, it more subtly imposes sparsity on the inverse Cholesky factor. Although Vecchia can be higher on the computational ladder ({{formula:c1acb8d5-ffad-4ef5-9543-8f132a339883}} ), it is able to provide good approximations with {{formula:30197baa-06a1-415e-88f6-76fd2a5932d2}} much smaller than that required of inducing points, without the “blur” or the hassle of tuning the locations of {{formula:bd1af3b9-69bb-483a-9950-8e866b210132}} quantities. {{cite:b0e5f0cc044578b97e83256ea24bcc97960d339c}} entertain Vecchia in lieu of inducing points for ordinary GPs via VI, with favorable results. It may only be a matter of time before Vecchia is deployed with VI for DGPs. We prefer MCMC for its UQ properties.
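For orientation, the Vecchia approximation replaces the joint density with a product of small conditionals (notation assumed here): given an ordering of the n observations and conditioning sets c(i) of at most m earlier indices (e.g., nearest neighbors),

\[
p(y_1, \dots, y_n) \;\approx\; \prod_{i=1}^{n} p\big( y_i \mid y_{c(i)} \big), \qquad |c(i)| \le m,
\]

which is exactly what induces the sparsity in the inverse Cholesky factor mentioned above.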
m
7d2f6873026451d01d34b433e8162d23
In the last couple of decades, exact renormalization group (RG) equations {{cite:714d4a4d57b7ad50dce61749557941f8ec35e016}} have become a powerful tool for the nonperturbative investigation of both renormalizable and effective quantum field theories. The framework for the exact RG of Euclidean quantum gravity {{cite:0ee3ee86144f46d713162438c9969217b76cecc2}}, {{cite:2b78434412663994cdae05263504f409936c0ac1}} has opened up the possibility of investigating cosmological models {{cite:04acdc87012633333244fc91d04a46ad4069657d}} in a systematic manner. An essential ingredient of this framework is the effective average action {{formula:0ab39719-fa22-4dbd-ba22-6cedf14b706e}} {{cite:7f6a2d8d01e31c475e95c72e1b231327f52bdb85}}, {{cite:7434d56c7c4849b98add61974f1c117c817ed6b7}}, {{cite:f22e2a49c4eb6af1c2e71a05a876f8e5a66c0ca1}}, {{cite:97012916820504487ceb22f8452246e5c4cdd3ab}}, describing all gravitational phenomena, including the effect of all loops, at a momentum scale {{formula:f8c0aedf-b27c-431a-9998-1474ddd31af8}}. Assuming that there is an ultra-violet fixed point much higher than {{formula:0a58c2bd-d077-4014-a296-aecc2c88664b}}, all quantum fluctuations with momenta {{formula:5e232c1d-bf94-48ec-af4f-431550486226}} are included, while modes with momenta {{formula:5c66eed4-f189-41e2-90bb-6fca63d71a98}} are suppressed by an infrared regulator. The form of this effective average action is {{formula:7c799f66-8ca7-4cc2-9378-a6fad3536fb4}}
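The effective average action obeys an exact flow equation, whose standard (Wetterich) form is, with R_k the infrared regulator and t = ln k (quoted here for context):

\[
\partial_t \Gamma_k \;=\; \frac{1}{2}\, \mathrm{Tr}\!\left[ \left( \Gamma_k^{(2)} + R_k \right)^{-1} \partial_t R_k \right].
\]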
i
9ab501b2f82db27db123e69cdec18a0a
Although we have focused on the resonant contribution to the GWs, there are non-resonant contributions. Here, we comment that the non-resonant contribution to the GWs does not affect our discussion on the detectability in SKA. This is because the non-resonant contribution is dominant for smaller {{formula:a3a58162-e26d-48bd-9135-56837f817217}} with the abundance smaller than the peak of the resonant contribution by {{formula:0bd47ba3-0513-427d-90b2-79e3057f7b5d}} , as shown in Refs. {{cite:a06cf982b5d3178bbe22af8682554cfed4ba57ac}}, {{cite:794e23578b4439e5f2e05e2eda73b04f7bbcc599}}.
d
625de5126447f84a8b164df1326826cb
DAB-DETR, DN-DETR, and DINO. DAB-DETR uses a 4-D vector that represents an anchor box as the object query. Results are listed in Table REF . For the C5 model, the model trained with Group DETR obtains {{formula:751f42ea-c7e4-4dd7-b6b5-2e5d46e5064c}} mAP and {{formula:66375054-2485-4d49-802b-ef29c2e5e183}} mAP gains over the baseline under the 12-epoch and 50-epoch training schedules. For the DC5 model, the model trained with Group DETR outperforms the baseline by {{formula:145bedbd-107c-4c65-8318-5598d149639d}} mAP and {{formula:02e1089b-e5c3-414e-b48c-d0a1f5b492a7}} mAP under the 12-epoch and 50-epoch settings, respectively. We also apply the proposed method to a stronger baseline, DAB-Deformable-DETR {{cite:44e39f968c9ccbae37583214ddf2eb62c3feb1d5}}, which uses multi-scale features and deformable attention {{cite:1e646cb7f38ab129ac3c960549a68101554443ed}}. Group DETR improves this strong baseline by {{formula:d8e5a308-4313-4cd3-880b-f4457554073e}} AP under the 50-epoch setting. We also conduct experiments on DN-DETR and DINO. The results show that the proposed Group DETR obtains non-trivial improvements ({{formula:4f163983-3032-4e70-a97b-d0da2229fcd0}} mAP and {{formula:1d751631-2ec6-4561-b58d-837d4ae365f7}} mAP) over these methods, which verifies its generality.
m
891764b36e1e49f2ab5473478ecea3d5
where {{formula:be552889-8ad3-46b4-99d7-c05da4abbda2}} represents the extra cosmological parameters {{formula:1e81bad1-b246-4caa-bd2e-c2bf1490962e}} and {{formula:a61c9aba-fcf7-48e1-9252-f3731b81c445}}, which are marginalized over a large range ({{formula:aa92cc68-c12a-4bda-9a00-d21183cc6b61}}, {{formula:3bc9b673-3ef4-4325-9ae1-8debe957474f}}). The likelihood analysis is performed employing a Bayesian Markov chain Monte Carlo approach {{cite:a367db6e95f97487e82db4663107a3b486b0781f}} with the emcee package (https://emcee.readthedocs.io/en/stable/).
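A minimal emcee sketch of such a marginalization (the log-posterior below is a placeholder; the real one would compare the cosmological model to data, with flat priors over the quoted ranges):

```python
import numpy as np
import emcee

def log_prob(theta):
    """Placeholder log-posterior: a broad flat prior box plus a dummy Gaussian."""
    if np.any(np.abs(theta) > 10.0):          # stand-in for the broad flat priors
        return -np.inf
    return -0.5 * np.sum(theta ** 2)

ndim, nwalkers = 2, 32                        # two extra parameters, as in the text
p0 = 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000, progress=True)
samples = sampler.get_chain(discard=500, flat=True)  # marginalized posterior draws
```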
r
7239303fac8f59d43daa18cefe3f235e
Secondly, this set-up somehow bridges recent discussions of averaging {{cite:9be375683e78187900af46050fa842e62c65ae68}}, {{cite:c838fb4c4cb0c963380605f34512a925b03e7e5d}}, {{cite:fd615319bbed9ee7a1c86179cf5e8aa0fe4349c8}}, {{cite:ea2777835658c7fb958bc62f02ab5a5dd5e91cee}}, quantum error correction {{cite:aff5648a69054ad3523efef8c368124f49c36538}}, {{cite:fcdf5687f1955ca8afa06a5ee82e1821937da64f}}, {{cite:4aa363937a3ff6f5a115b8980291b104dcfbff72}} and `third-quantized' gravity {{cite:b5a71d63f5f10dc20079e55abeae37dbc08fd65e}}, {{cite:e8a1c8586af73b5febaf1acc3ec5c2d30ea9b7f3}}, {{cite:82ede9f94a302b396272b4f77ad35a88a03981eb}}, {{cite:d6506e9e89e150920628cb7afd35bb48dd25c65c}}, {{cite:d202d5190de306507147cb6f34f70be2586c46af}}. The most obvious connection is with averaging, since the set-up of two black holes with real-time coupling was the one that was found to give the clearest demonstration of the averaged nature of semiclassical gravity {{cite:fd615319bbed9ee7a1c86179cf5e8aa0fe4349c8}}. The one advantage of going from bra-ket to ket-ket wormholes is that now the wormhole has a Hamiltonian interpretation and so is easily related to discussions of quantum error correction.
d
6c144215ff82ffa8c66b0cf96a76b2e9
A useful operator in this context, as defined in {{cite:cbdab04fedadd794643ce050ac0e62017ea8f526}}, is the following {{formula:41fe4c95-3219-4897-b2fa-f7455c510635}}
d
2754405cb1fa201376426bd18be295c1
More recently, the computational task of sampling from generic closed periodically-driven quantum many-body systems has also been proposed as a demonstration of quantum supremacy {{cite:7020de47e26457e6a084550af03bc088872e1570}}. This opens the road for a plethora of analog quantum simulators based on periodically driven cold atoms and ions to be used for such tasks {{cite:23544534cd569d69fe6ef0bed7575348dd3e7318}}, {{cite:51b74f7bb9a0b3217539acbce3737f7b58c444b5}}. The computational complexity of sampling from such dynamics is intimately related to the ability of the closed system to thermalize under the combined influence of interactions and the external drive. Once the system has thermalized, the associated temperature is effectively infinite, resulting in a quantum evolution capable of exploring its entire Hilbert space {{cite:b44155a79a501ae50b9f8385fd78b95b4b61b313}}, {{cite:66a9f558788f0235ce62383dcf01c54bd7e8477b}}, {{cite:7d3f3ad527a127fbc76ff83fd0b52d26e5ff6871}}, {{cite:7a68894d0a713e9ebeff6dcad65a930939cc854a}}. This is in contrast with the driven many-body localized (MBL) phase, where the presence of disorder prevents the system from thermalizing and limits the quantum evolution to a restricted portion of the Hilbert space {{cite:7a68894d0a713e9ebeff6dcad65a930939cc854a}}, {{cite:7d3f3ad527a127fbc76ff83fd0b52d26e5ff6871}}, {{cite:ffdf8661b0d3e74cff1b20cdd393afb25e1c9062}}, {{cite:5161f40faa40bba7708e3791dc1661d363dba57c}}, {{cite:9601a703907d7243106a29ac9df39098d11f0dbc}}, {{cite:8e3888b2a15d31685b6df28ad7c2576b37c7b141}}. Efficient theoretical representations exploiting this fact suggest that sampling from MBL dynamics is tractable for classical computers {{cite:3416335f3dea74f778a03d4f6ae6b102dd6d844f}}, {{cite:2531537876cad482be92b789cff8b2eb3069cce2}}, {{cite:978b9330b214b5d6837aacdf0464af3432740b97}}. Therefore, the experiment we proposed in Ref. {{cite:7020de47e26457e6a084550af03bc088872e1570}} to achieve quantum supremacy can be interpreted as a protocol to probe how much of the Hilbert space has been explored under a specific driven quantum dynamics. This ability to tune in and out of the chaotic evolution underlying the thermal phase by controlling the level of disorder has already been exploited as a potential application in the context of quantum machine learning {{cite:d2437baa8676b836fbdba84a120ce91fa4cbec07}}.
i
55dfad41866aa4b6c98f9ee77664c0a4
See {{cite:b31e77ccbfae8674d8b837499355db71eaed84a8}} for more detail on the subject of metrics, and other distance-like functions, on probability measures. See {{cite:2ed6449679b339c086e2ecb7084ced5b2ac3b671}}, {{cite:6bf7b3a7d5bdc3de08757d54a614fad04a42807d}} for more detailed discussions on the well-posedness of Bayesian inverse problems with respect to perturbations in the data; and see {{cite:4bec13aff013ed72d1ee47e5b8d1a6ec919430cf}} for applications concerning numerical approximation of partial differential equations appearing in the forward model. Related results, but using divergences rather than the Hellinger metric, may be found in {{cite:f1d13c01397f6881d71efe541994460f02ca31f9}}. The paper {{cite:759b3e278bd5fbc8c1a1c8bab8aa1e9d5e90a822}} contains an interesting set of examples where the Meta Theorem stated in this chapter fails in the sense that, whilst well-posedness holds, the posterior is Hölder with exponent less than one, rather than Lipschitz, with respect to perturbations.
d
12c37fcfd52a1738ae30b7bbf3336e18
The proofs of Theorems REF and REF utilize the linear programming approach (see {{cite:fba4d6811c29f463912bbefda52e2323555a09fd}} and references therein), except that here we do not need to show the positive definiteness of the interpolating polynomial. We next state a solution to Problem REF that follows from Theorems REF and REF and the universal optimality result from {{cite:fba4d6811c29f463912bbefda52e2323555a09fd}} for the minimal energy problem.
r
791ca2e589af690c49f4d4efc6a6114a
Many studies concentrate on the relation between the NS spin direction and the kick velocity direction (e.g., {{cite:1c1c3473b97dc9dc68d57a67d76ebc23e8239889}}, {{cite:b97e03e26eed605faceaddd6ac4b4f6b2e1ac138}}, {{cite:ed8cca2ce03ba6c34be27f69480b8d17e7b8dd96}}, {{cite:78f596b555019bb258410a2d62b0205dd3a60294}}, {{cite:024b6679ed48a77383abeaa041833c8c50d250f4}}, {{cite:a9425c6ca600471048c8536a6d9c31135eb254d5}}, {{cite:f902298940ff1246b9bd6008ceba87e49900cb18}}, {{cite:f3d009d74c1b5cb8180591df7fabceb338975df0}}, {{cite:0c2483b619db99dc893d23f565eb6d0f377a337f}}, {{cite:c27621239d8984010d962f2ca6b78cf56cd81ede}}, {{cite:4c128a0582dff69d56e051a8b60d6154e908e107}}). Some studies expect that in the delayed neutrino explosion mechanism the kick velocity and the NS spin axis are at small angles with respect to each other, i.e., aligned (e.g., {{cite:9699a0dbc4441f9afb3cb01278d9f6a607827379}}, {{cite:749718be36c5dcdb4c300dc333ccc957598e4534}}). In a recent study, {{cite:749718be36c5dcdb4c300dc333ccc957598e4534}} propose that tangential vortex flows of the gas that the NS accretes after it acquires its kick velocity can explain spin-kick alignment.
i
9088b04ab2659d725f85246f4f8acf5b
Different Distillation Strategies. Table REF shows the results of distillation using different strategies, i.e., layer-to-layer vs. last-layer distillation, and MSE loss vs. NCE loss. We first study the effect of our proposed visual token alignment by applying the attention distribution and hidden embedding distillation losses only to the textual token part: e.g., using the “textual-to-textual” attention sub-matrices and their corresponding textual token embeddings. The second line in Table REF is the result of textual distillation, which shows a slight improvement over the VLP baseline. Following previous language distillation works {{cite:0978ce2fadf673cdda41fbc8ffabd4378459c032}}, we also conduct layer-to-layer attention and hidden distillation between Teacher and Student, and observe inferior performance compared with the last-layer strategy. Beyond that, the layer-to-layer method can also be severely limited by architectural mismatches {{cite:7ddf862a4e60e57defc4e7502504cd1cbed679cd}} (e.g., different numbers of layers and attention heads). “NCE + Last-layer” represents the results of DistillVLM using our proposed contrastive objective function, which uses negative samples for the alignment learning. We find that contrastive learning leads to slightly better results than the MSE loss. To this end, we study the difference between using token-wise embeddings and the mean-pooled layer-wise embedding for contrastive learning, and observe that learning with token-wise embeddings gives much better results, which differs from the observation in {{cite:2815219feb5d6d0e7a9ba15344c091e44dce730e}}. However, applying the mean-pooled embedding with the NCE loss mitigates this issue and gives on-par results with the token-wise NCE method (see the last two lines in Table REF ). We further provide ablations of VL distillation for the downstream tasks in the appendix.
r
8ae8255f4989a2ff06c56d226ba54f1f
We present the class-averaged AUROC scores for the SIMO-based evaluation in Table REF . Here the optimal method for MNIST is a discriminative AE with {{formula:382f31b2-43fc-442d-8afd-2a34e07da2f3}}, {{formula:2116c9e6-9cb3-41d0-98bc-a536e5b912eb}} and {{formula:a4612a0e-e87f-4f52-985c-fa63881af3b4}}, and for CIFAR-10 we find the optimal method to be a vanilla AE with {{formula:9e50954d-f044-4eab-9900-588ac1661bd9}}, {{formula:0031d70d-1a56-4868-8b1a-036845a025ed}} and {{formula:d9d9dc4c-7dfb-475d-99c2-c2e9186d78a9}}. Furthermore, we find the best-performing method on F-MNIST to be a VAE with {{formula:ffe82178-251a-45fd-b1bb-0f66bc7fb1de}}, {{formula:8b85ac7b-1d77-4c53-867c-b716848b8167}} and {{formula:ef4458fb-14c9-47ef-9c85-a441ec8fa09b}}. For the MVTec-AD dataset we use a discriminative AE with {{formula:736af7fb-9e84-4762-87ce-fd03ee247ad8}}, {{formula:7d3037d4-bbd1-4c55-8553-0fd35c80d87b}} and {{formula:3da4280c-1311-4fc9-9b7f-8da6692ee31e}}. It is clear that the attention-guided VAE (CAVAGA) {{cite:87cd52b95a3d6ede992f3d723afb5282f424aafb}} method performs best on MNIST, whereas DKNN {{cite:7ad447b039f73a9bcb7f24b91f7715592cd6c13e}} performs best on CIFAR-10. However, it is evident that the NLN-enabled autoencoding models offer increased performance over existing autoencoding and ResNet-based architectures for both the F-MNIST and MVTec-AD datasets in the SIMO context.
r
dea071e32e82b8970c1b5dd0e4fabea2
In contrast, learning-based methods use policies to define human-like behaviours, which are usually learnt from human demonstrations by matching feature statistics about pedestrians. These methods apply machine learning techniques such as Inverse Reinforcement Learning (IRL) to model the factors that motivate people's actions instead of the actions themselves. An experimental comparison of features and learning algorithms based on IRL is presented in {{cite:1861de544a479f7492c989f2fc41eafaf30891b7}}, where the authors conclude that it is more effective to invest effort in designing features rather than in learning algorithms. More recent approaches include the use of deep reinforcement learning to impose restrictions {{cite:0922b0441362eee70e605960720e584149f012ae}} (e.g., passing on the left side of people) instead of learning the features that describe human paths.
i
2edc9f4718f30b1c34249ddc6e24ea44
In this context, we refer to the class of domain decomposition methods {{cite:9951302cd7a5fe58aa418eb5f8021719f2d1ed9a}}, {{cite:5d90815693e9073cb893ac003ccdbb8e9d02f780}} that are based on such a splitting and instead solve small local problems on (overlapping or non-overlapping) sub-domains. The resulting global model then only inherits a few degrees of freedom from the local problems.
m
066ddcc8688006c7b6500d50ea92d8a0
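As an illustration of the idea, the following is a minimal sketch of the (multiplicative) alternating Schwarz method for a 1D Poisson model problem with two overlapping subdomains. The cited methods are far more general; the grid, subdomain split, and iteration count here are arbitrary illustrative choices.

```python
import numpy as np

# Model problem: -u'' = 1 on (0, 1), u(0) = u(1) = 0, centered differences.
n = 99                       # interior grid points
h = 1.0 / (n + 1)
f = np.ones(n)
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

# Two overlapping subdomains (index sets sharing an overlap region).
dom1 = np.arange(0, 60)
dom2 = np.arange(40, n)

u = np.zeros(n)
for _ in range(50):          # outer sweeps
    for dom in (dom1, dom2):
        r = f - A @ u                         # current global residual
        Aloc = A[np.ix_(dom, dom)]            # small local problem
        u[dom] += np.linalg.solve(Aloc, r[dom])

x = np.linspace(h, 1 - h, n)
exact = 0.5 * x * (1 - x)                     # exact solution of -u'' = 1
print("max error:", np.abs(u - exact).max())
```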
Datasets: All SR models were trained on the training set of the DIV2K {{cite:cf12b139a1ed7cc2b19b4107013e510c2e7e4f87}} dataset with 800 training images. For evaluation, four benchmark datasets, Set5 {{cite:71722170d68d59dbd50d8af686f9627d4c49298f}}, Set14 {{cite:7e894253cb63557a08d90245350df725f8cc7c26}}, B100 {{cite:06a438eaeea7b357aef0913b818aae73445887aa}} and Urban100 {{cite:ab2a4806817935c98d3b7f6f34ae512a087be390}}, are employed as test sets, and the PSNR and SSIM indices are calculated on the luminance channel (a.k.a. the Y channel) of the YCbCr color space.
m
32df2ffd288de2b8829fcf1099cd8883
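For reference, a common way to compute Y-channel PSNR uses the ITU-R BT.601 luminance transform; this is a sketch, and border-cropping conventions (which vary between SR papers) are omitted:

```python
import numpy as np

def rgb_to_y(img):
    """Luminance (Y) channel of the ITU-R BT.601 YCbCr transform, inputs in [0, 255]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return 16.0 + (65.481 * r + 128.553 * g + 24.966 * b) / 255.0

def psnr_y(sr, hr):
    """PSNR between a super-resolved image and its ground truth on the Y channel."""
    mse = np.mean((rgb_to_y(sr.astype(np.float64)) - rgb_to_y(hr.astype(np.float64))) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```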
An immediate example is the family of all graphs having no copy of a fixed graph {{formula:c9e2b6a6-a735-45cc-bff8-25c335cb48d0}}, a central problem in extremal graph theory. Here, if {{formula:0eedc15a-214b-4bb8-8a97-d09b17f741f4}} are the corresponding Turán numbers, then {{formula:2a410bd8-29d3-4020-ba06-7fa05602d1b7}} is the smallest {{formula:e64c36bd-fc16-47ab-a86c-84084748d105}} such that the pair {{formula:51df9579-3fb6-42e2-8f8a-9a8bab46378a}} is non-feasible, and of course these “H-free families” are all non-feasible families {{cite:6faabe9bf1968193c4d73d33d041a3de92d024d6}}, {{cite:38223839bae9cc57b7505ccdec7c1d2d09956ca3}}. A simpler example is the family of all connected graphs. Here we know that given {{formula:0fd23431-1512-4d69-b6cb-cc1e38897817}} and {{formula:38c95591-3a0d-4fb9-b102-57f20e153b5b}}, the pair {{formula:ea869fdc-33ea-47e7-ac5d-85282c1f8aa3}} is feasible, namely realized by a connected graph; however, for {{formula:86aea29d-4366-4fb5-9f3d-a9ee939aa24e}}, no pair {{formula:dda46034-bca2-4380-8311-b7d96d579e60}} is feasible, and the maximum value of {{formula:11cf2bb7-b01d-4597-a237-eea5c773efcf}} for a given {{formula:5b115e10-ecdf-4734-8707-14f0561400df}} that gives a non-feasible pair is {{formula:d51f60fa-5ebf-4744-aa46-2e188e2b62c4}}.
i
117ed6507c989012ebec1ea4f32291e2
Our evaluation of the sentence embedding method of {{cite:a7bcdb56f24d90ed09862b3882b7440ad07055b6}} (SIF) has succeeded in (1) reproducing established findings about the method at hand within our scientific and technical domain with its specific data structure, (2) exposing some shortcomings within the evaluation set, and (3) showing the applicability of the method for our intended use case.
d
2cc79028472daa8fb985fa7185ef4bec
We compared our method with state-of-the-art occluded Re-ID methods on the widely used occluded Re-ID datasets Occluded-Duke, Partial-REID and Occluded-ReID, including PCB {{cite:f5f280904d46d276d2c04fc5b9559495cbc41919}}, PGFA {{cite:1cfa8b927044d770d8bc34ebbb35e944098d775a}}, OAMN {{cite:722a30f8417f1bb380821aa8d5a1dd4150a90b2d}}, MoS {{cite:2e376ebcbf24ab64fc87710bc3055d4c0fa8c3d1}}, ISP {{cite:e0f181e7170413edfc2f5cf08210c31296b49aac}}, ViT Base {{cite:b331993e629b95cf8495771ed46d2550d46a4458}}, PAT {{cite:a718257f521d7efcb3c6db5cecd8901c55cdfb28}}, FED {{cite:4b37c43c84019f7c0445f19a2de9120d2375e449}}, TransReID {{cite:402a1a2401830948d9f4df1ecdea3b0d6d4fd663}}, and three methods using auxiliary clues: HOReID {{cite:18ecf0b2d8d8d72020160c2816263b17324adcfb}}, PVPM {{cite:e183d1b6e264b8b29115687c5f6ac93d3cf8df28}}, and PFD {{cite:17ffff4db828f998473df5e0e6b5250ecca1ddcf}}.
r
3d1730574da6ff6dfb355900de635723
We are conducting the first large-scale Chandra/ACIS survey of the spiral galaxy population in the Virgo cluster. This project complements and integrates the AMUSE-Virgo survey, which studied the early-type galaxy population of Virgo a decade ago. We selected a sample of 75 galaxies, including all the largest and most active spirals ({{formula:fcb5143b-404d-4de3-bd91-4a7e94bc4827}} 20 late-type galaxies with SFR {{formula:d62fbf19-0d40-44d3-bc46-ec0d8d4bbac4}} 1 {{formula:1a6ccaec-bea8-49d6-ad79-49f44c155fce}}), plus a selection of fainter ones. As detailed in Section REF, our sample is skewed towards brighter objects, in the sense that about 60 of our 75 galaxies have SFR {{formula:e6dcb351-e21b-4936-a158-99d01ca13aeb}} 0.3 M{{formula:95a8a88e-5823-4f56-82dd-e40ee2b21a1a}} yr{{formula:e4482933-bac3-4f39-ab0e-2eb41092c982}}. This was done to provide a contrast with the early-type galaxy sample {{cite:da37e941c3bdfbf1d7e14099d828eca70e038a0d}}, {{cite:97d71f87ee38ad5c7afa45a31f781fb4f971336c}}, {{cite:7d8d4a8b84b1965824e56d167f92066df149c361}} and to aid the discovery of luminous XRBs.
d
ff6b9bd914dcc4e5b82697007ed98ea9
Data. Between January 2015 and September 2019, patients who underwent imaging for a lymphoproliferative disease were identified. Initially, 584 T2-weighted abdominal MRI scans were downloaded from the National Institutes of Health (NIH) Picture Archiving and Communication System (PACS). Radiology reports for these studies were also downloaded, and natural language processing {{cite:997125493d0853702b1e100463956e849fe15fd4}} was used to extract the nodal extent and size measurements from these studies. A radiologist performed a quality check on the collected data and removed incorrect annotations and scans containing only one LN annotation (either the LAD or SAD measure). This process yielded 421 T2 scans (n = 421 patients), which were divided into training ({{formula:9807ab63-ce32-44de-ab54-99f60d46e6d5}} 60%, 254 scans), validation ({{formula:8184e72b-2e83-4e15-a8b3-6a4b81ba9a9b}} 10%, 45 scans), and test ({{formula:957c35f4-e7e4-4a06-b5ff-e41211bbde69}} 30%, 122 scans) splits. The training and validation sets consisted of only 2D key slices with 1-2 LNs annotated with both the LAD and SAD, while the 3D extent of all LNs in the test set was fully labeled. N4 bias normalization {{cite:649b5e2774961adad83523c13d5de8c947f5f0f2}} was first applied to the scans, followed by normalization to the [1%, 99%] voxel intensity range {{cite:02b6b9d7fb3a49403cc4a2460759ea5bf0343bd0}}, such that the contrast between bright and dark structures in the slices was boosted. The resulting scans had dimensions in the range (256 {{formula:c07f0f7a-dcf9-4629-bee7-845bd52529fb}} 640) {{formula:a9b89fd7-cb26-45df-8328-8acfaa5b7416}} (192 {{formula:676aedf3-dc69-435c-a5e4-985383e93066}} 640) {{formula:3cfac4e3-cfcf-4351-b9f7-54f89b50c2f2}} (18 {{formula:9b9c6e98-5986-4f56-b7cd-c266a02656a0}} 60) voxels.
r
5e2649580c54f00fb62e6eb7131ffc13
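A hedged sketch of this kind of preprocessing pipeline, using SimpleITK's N4 implementation with an Otsu foreground mask; the mask choice and default filter parameters are assumptions, since the study's exact settings are not specified:

```python
import SimpleITK as sitk
import numpy as np

def preprocess(path):
    """N4 bias correction followed by clipping/rescaling to the
    [1%, 99%] intensity range (a sketch of the described pipeline)."""
    img = sitk.ReadImage(path, sitk.sitkFloat32)
    mask = sitk.OtsuThreshold(img, 0, 1)          # rough foreground mask
    img = sitk.N4BiasFieldCorrection(img, mask)   # N4 bias field correction
    arr = sitk.GetArrayFromImage(img)
    lo, hi = np.percentile(arr, [1, 99])
    arr = np.clip(arr, lo, hi)
    return (arr - lo) / (hi - lo)                 # rescale to [0, 1]
```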
When there is only a single item, i.e. {{formula:ae5d6603-089b-4e5f-a52d-2baf42722cb0}}, then {{formula:e12b76a6-ffbb-4895-8fcd-9e389fa53d95}} is simply the probability of the set {{formula:cfacd43b-7014-4886-b31c-373ec88679e2}}. It is easy to see that this function is submodular, which means that {{formula:be8ae9b1-eb8d-4c1f-932c-46b85dd9fb92}} is a polymatroid. Given a finite set {{formula:df6d5e57-3ae0-452c-bc1b-a7228c6882df}}, called a base set, a polytope {{formula:1c2e6728-29ea-49f1-b2e9-b953d38df0df}} is called a polymatroid if there exists a cell function {{formula:b05d2199-85e9-4985-9d82-e4aa6a2839a9}} such that {{formula:f101c727-f34b-4744-a281-45c949c3eae1}}. The standard single-item Border's theorem follows by observing that the extreme points of a polymatroid have a known characterization via a certain greedy algorithm, and that such vectors are in fact realizable by a certain greedy allocation rule (also known in the literature as hierarchical allocations), which will be discussed formally below (see for example {{cite:d6171b8aa5d4dcbb8b4756e197a407c415d18c20}}, Theorem 44.3). Since all extreme points of {{formula:6b05a91b-0e60-4017-bb0f-22f469b3ee80}} correspond to realizable interim allocations, every point of {{formula:9e756db7-48c3-41f5-a6e8-bc86177a9b8d}} is in fact realizable.
r
67e677a2ef1f4e9be6f7f6d2f98059fb
The generalisation of this work to nonlinear iterative methods, such as CG {{cite:e33350fc7c750d8f545c6b5b6aa3b8781ffb6ce2}} and other Krylov methods, is of interest. These methods are more widely used than stationary iterative methods in modern applications, owing both to their faster convergence and to the fact that they only require access to the action of {{formula:a302fe85-2f02-4b76-9b0b-23e2fcb10296}}, rather than needing to interrogate and modify the elements of {{formula:d03c4c1d-af00-4c21-84d4-5369dd889780}}.
m
92f64e3c8eaf53bfd375b8dc5caa43d9
To guarantee that the CBF QP is always feasible, a control invariant set is needed, defined as a set in which the state of the dynamic system can be kept indefinitely. The concept of control invariant sets has been studied under various backgrounds and names, such as the viability kernel {{cite:6a5ae7244f2f5e062691a4bc4459c8b832a911af}} and the infinite-time reachable set {{cite:d77af7be011181a0276cdcebcb3dfa9c61dea609}}, and various methods have been proposed to compute control invariant sets depending on the system dynamics; see {{cite:1052ddd398b82a422e5bcd9dfafcd249f37bf7e8}} for an overview. Unfortunately, the computation of control invariant sets is notoriously difficult. Even for simple cases such as linear or polynomial dynamics, computational tools such as Minkowski operations {{cite:23037f7d62da0ce38d92cc6eda07c059d059d3f8}}, robust linear programming {{cite:2e157a0b2003f57fb073b0119e594b63d93d88b5}}, and Sum-of-Squares {{cite:91b14c36b771a0d4c22c49fb9a33f8710dfc1068}} do not scale well. For general nonlinear dynamic systems, the standard tool for computing invariant sets is the Hamilton-Jacobi PDE {{cite:2b2f8684f6627200920fc237a3d21ff97e751aae}}, which typically cannot scale beyond state dimension 4 due to its exponential complexity.
i
bf40febbce8eca244081e30b93501e9a
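To illustrate what the CBF QP refers to, here is a minimal sketch for single-integrator dynamics and a circular obstacle. The dynamics, barrier function, and class-K gain `alpha` are illustrative assumptions, not the paper's setup:

```python
import cvxpy as cp
import numpy as np

def cbf_qp(x, u_des, x_obs=np.array([2.0, 0.0]), r=1.0, alpha=1.0):
    """CBF QP sketch for single-integrator dynamics x' = u.

    Safety set: h(x) = ||x - x_obs||^2 - r^2 >= 0 (stay outside a disk).
    CBF condition: dh/dt = 2 (x - x_obs)^T u >= -alpha * h(x).
    The QP returns the control closest to u_des that satisfies it.
    """
    h = float(np.dot(x - x_obs, x - x_obs) - r ** 2)
    u = cp.Variable(2)
    constraints = [2 * (x - x_obs) @ u >= -alpha * h]
    cp.Problem(cp.Minimize(cp.sum_squares(u - u_des)), constraints).solve()
    return u.value

# The desired control pushes toward the obstacle; the QP caps the unsafe component.
print(cbf_qp(x=np.array([0.5, 0.0]), u_des=np.array([1.0, 0.0])))
```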
where {{formula:e946901e-b9b1-480f-a932-1bc34c02e91a}} and {{formula:54921bd8-c3d5-4a07-b33f-13a2293eba50}} ({{formula:34dec47e-a185-4d16-bf7b-86def942c11f}} being the number of active flavors) are the one-loop and two-loop {{formula:a4f3bd68-67e3-496d-a3b9-f85c33559fca}}-functions. According to {{formula:78c92e71-7ac5-4f9b-b004-9b907f6041f9}} {{cite:660b54e2cefa776fac2869ddf2dc73f8a89c6cac}}, we obtain {{formula:ef4f5a49-073e-489a-9615-ccce5f3270a6}}.
r
541cfda7650daf65427ebc96382e019c
Studying the collective behavior of individuals in a large group has long been an important research area of statistical physics and related fields. The question of “collective action,” the tendency for individuals in a group to forgo short-term selfish behavior in favor of long-term group benefit, has been extensively discussed and examined. Of particular interest is classifying the environmental factors that foster cooperation within a group, particularly in the case of the public goods game and the Prisoner's Dilemma {{cite:deac40e0b676f0f1d78337f70ce255efb4fd292a}}. A plethora of studies use networks to model the social structure of a group, and the exact topology of the network can have a profound impact on cooperation within it {{cite:9ff04f9deed99246181bda465b514579ad30de6d}}, {{cite:3d8d63e6a3f2ec7fda8b3c6629375a68551f2abd}}, {{cite:42b40feed15cf26b62318d19532f5b7a3f480b02}}, {{cite:a22658c69563ce28ccbc8fbbf197a18af577a815}}. Additionally, empirical research uses human trials to examine how humans behave rationally (or irrationally) when actually playing public goods games with others {{cite:b320be2678a1741ab00c0b962ca235c41a9b89a4}}.
d
4365c223d0bd846099ba81e563088f6b
Our results resolve an emerging puzzle in legal NLP: if legal language is so unique, why have we seen only marginal gains from domain pretraining in law? Our evidence suggests that these results can be explained by the fact that existing legal NLP benchmark tasks are either too easy or not domain-matched to the pretraining corpus. Our paper shows the largest gains from pretraining documented for any legal task, comparable to the largest gains reported by SciBERT and BioBERT {{cite:4fbd29ef6700d417cb02b47c7274da0872631fa4}}, {{cite:73f57d6bbd0726819914667347fe157792718cd7}}. Our paper also shows the highest performance documented for the general setting of the Terms of Service task {{cite:9af1b5253cf971360b03d148479e6fcd8467cb55}}, suggesting substantial gains from domain pretraining and tokenization.
d
b846672428b6e1f4d85b644935c474e2
In addition, the path loss is modeled as {{formula:4fd556d2-d382-4a94-8e86-50a3df88a095}}, where {{formula:44f10268-5236-4d09-8f8c-a0124f72963e}} is the distance, and {{formula:d3f14214-5dc6-48a6-91ed-4f3c5fc73ad7}} and {{formula:551b1391-4d1c-420d-aca7-91c4f17dd772}} are set to {{formula:57fa3949-7562-422f-bbbe-68d8cec2e225}} and {{formula:3caaaf15-60ba-4835-8322-2d75573bb81c}}, respectively {{cite:5b105bddfeffdb8f519c2fb23585cae19e96fbfc}}. The path loss exponents are set to 3.5, 2.5 and 2.8 for the BS-user, BS-RIS, and RIS-user channels, respectively. Moreover, the Rician channel model is considered in the simulation studies. The BS-user channels are assumed to be Non-Line-of-Sight (NLoS) channels because of the complex wireless propagation environment between BSs and users. Since the RIS is usually deployed on high buildings, there exist strong Line-of-Sight (LoS) channels between the BSs and the RIS. Furthermore, the RIS-user channels consist of both LoS and NLoS links, since the RIS is deployed near the users. Thus, we set the Rician factors to 0, {{formula:43986c14-d8d8-4f7b-89d1-13dfdc708731}}, and 1 for the BS-user, BS-RIS, and RIS-user channels, respectively.
r
9490c48c8dd21f9b4de006bc011875d9
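A small sketch of how such a Rician channel with K-factor K is typically generated; the all-ones LoS component is a simplification, since in practice it is built from array steering vectors:

```python
import numpy as np

def rician_channel(n_tx, n_rx, K, rng=np.random.default_rng(0)):
    """h = sqrt(K/(K+1)) * h_LoS + sqrt(1/(K+1)) * h_NLoS.

    K = 0 gives pure Rayleigh (NLoS) fading; K -> infinity approaches
    a pure LoS channel.
    """
    h_los = np.ones((n_rx, n_tx), dtype=complex)   # placeholder LoS component
    h_nlos = (rng.standard_normal((n_rx, n_tx))
              + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
    return np.sqrt(K / (K + 1)) * h_los + np.sqrt(1 / (K + 1)) * h_nlos
```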
Boolean Networks (BNs) were introduced for this purpose by Kauffman {{cite:5d8439bcd521410e030f789ead6a9f3e1a63f4cb}}. In brief, a BN comprises a set of Boolean variables, each variable representing the on/off state of a gene, while interactions between genes are expressed by Boolean functions. It was found that even randomly generated BNs exhibit behaviour reminiscent of gene regulatory networks, with naturally arising attractor states which represent cell types or the phenotype {{cite:61a53279b51d015959125dbfdb6eb5e660fff51c}}, {{cite:cb2231d6e8c6691b5da9b4bc6204b389b058d5d8}}. This explains the popularity of BNs for modelling gene interactions {{cite:17b9c53df889e19045eeec1a55407df9577d205f}}, {{cite:0b0ae535198fff79cf05d09fa4e65310c82dca5f}}.
i
9edd2d8f266ffd9b6e2a262a8a33544b
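To make the BN picture concrete, here is a minimal sketch of a random Kauffman-style Boolean network with synchronous updates, iterated until it falls onto an attractor cycle. The network size, connectivity, and update scheme are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8          # number of genes
k = 2          # inputs per gene (a classic NK-style random network)

# Each gene reads k random genes through a random Boolean truth table.
inputs = np.array([rng.choice(n, size=k, replace=False) for _ in range(n)])
tables = rng.integers(0, 2, size=(n, 2 ** k))

def step(state):
    """Synchronously update all genes from their inputs' current values."""
    idx = (state[inputs] * (2 ** np.arange(k))).sum(axis=1)
    return tables[np.arange(n), idx]

# Iterate until the trajectory revisits a state: that cycle is an attractor.
state = rng.integers(0, 2, size=n)
seen, t = {}, 0
while tuple(state) not in seen:
    seen[tuple(state)] = t
    state = step(state)
    t += 1
print("attractor period:", t - seen[tuple(state)])
```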
Dark energy beyond the cosmological constant is usually modelled with a single scalar field, with prominent examples being quintessence {{cite:cdf1897b585fb7e9e322e4eb035a0ee132e59b2f}} and scalar-tensor gravity {{cite:9f8a8527a427c810e94d970434722fbe083193f1}}. Quintessence is the simplest dynamical dark energy scenario, assuming a minimally coupled canonical scalar field with a pre-specified potential. Successful acceleration requires the potential to be nearly flat, with a mass scale of the order of the Hubble constant ({{formula:602f1e6d-1b48-4502-86ad-a731eb01aae5}} eV). As a result, the quintessence field acts as a smooth, unclustered energy component on observationally relevant scales {{cite:19b82b54db4b085dfe0f9573edeb50a98a59421f}}, and the primary prediction is a modified expansion history, uniquely determined by the shape of the potential.
i
4efeba8da27eb92a0c22a71fb55696f7
Finally, we can turn our attention to the convergence properties of the regularization method. To this end, we use an exposition based on {{formula:baf7ba22-10d8-4c30-83bb-38e57f7e38ab}}-convergence (cf. {{cite:792f8a0a113aa40ba08349de0f827eaa27b6c85f}}):
m
5a80679bf2dd24f52719199051a18a44
Google considered up to a depth-five QAOA ansatz on three separate classes of graph problems {{cite:ca9abd3579f18785579404036879a77891f43738}}. Under experimental noise, Google was able to realize functional performance with the depth-three ansatz, whereas depths four and five were critically limited by noise. The depth-three ansatz involves classical optimization over only six real parameters, three of which are constrained to the interval {{formula:6a00ebc8-7f3c-4b8e-9c1b-2b2e8029d4af}}, while the remaining three lie in {{formula:abfabd41-6b95-49bb-8da0-72a50e717a50}}.
r
4f2bee3e6a62096fd02e8be48dfa0b4c
DeepLab v3+ {{cite:2d11c48e621525337638853c8cad3e619f3ed003}} is a state-of-the-art backbone network for semantic segmentation tasks. We evaluate this network here to show that generic semantic segmentation networks do not generalize well to curvilinear structure segmentation.
m
3101c5efc5204cead79ce1936568123f
A fraction of unexplained variance of {{formula:34a46bc3-df7f-4085-bdf0-a6cae12764ee}} is not immediately recognised as a good result. For example, the works {{cite:45d7f7ad90872a29d7e492b8c5ff0237ce48b51c}}, {{cite:e5893a880587a52041da9967f6a5974980db3886}}, {{cite:15e07bacfa0b4a9c4f3cd0dfde02c0314e66ddba}} achieve an unexplained variance on the order of {{formula:d309df90-463f-41c7-ad40-89cff161e04b}} when modelling 2D fluid flow around a cylinder as well as mean sea temperatures. The main difference is that our data is extremely sparse, that there are no high-resolution grids to train the models on, and that wind data in particular is known for high variance over time and space. The authors of {{cite:45d7f7ad90872a29d7e492b8c5ff0237ce48b51c}} point out that their model generalises poorly to unseen data if the flow is non-stationary. It stands to reason that for their models to achieve the same efficiency on the sparse wind data used in this report, higher-resolution in-sample measurements or simulations would be required for training. That being said, we know from Figure REF that the wind measurements are highly correlated over time, with clear seasonality. The models used in {{cite:45d7f7ad90872a29d7e492b8c5ff0237ce48b51c}}, {{cite:e5893a880587a52041da9967f6a5974980db3886}}, {{cite:15e07bacfa0b4a9c4f3cd0dfde02c0314e66ddba}} are trained on multiple time samples, whereas the spatial interpolation models used in our work only use the time aspect for hyperparameter optimisation. Therefore, extending the spatial interpolation models to take time into account could improve the results.
d
48efc84a26eca8cd7a86d1a9fd50c172
Boltzmann machines {{cite:0f7ca75df3ba8acc0b686df826f11a77e464b539}} are multivariate probabilistic models that are widely used in the field of machine learning. Here, let us consider a Boltzmann machine with {{formula:1a74e4d8-ba61-497d-8188-678bd4aa5c25}} binary variables {{formula:d98fd7e7-3aa5-41fe-8b1f-f9168ce96304}} . In this paper, we handle fully connected Boltzmann machines with no hidden variables and no temperature parameter. A Boltzmann machine represents the following distribution {{formula:49061e33-e1ea-4500-afcf-80b11caaa93a}} , which we refer to as the model distribution. {{formula:809b1558-d391-4f0a-b255-93af16668897}}
i
a6ba5693c9888e620ca53aa5dc05a009
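For small numbers of variables, the model distribution can be written down exactly by enumerating all binary states. A sketch follows; the sign and scaling conventions of the energy vary across papers and are assumptions here:

```python
import itertools
import numpy as np

def model_distribution(W, b):
    """Exact model distribution of a fully visible Boltzmann machine.

    p(x) = exp(-E(x)) / Z with energy E(x) = -x^T W x / 2 - b^T x,
    for x in {0, 1}^n (tractable only for small n).
    """
    n = len(b)
    states = np.array(list(itertools.product([0, 1], repeat=n)))
    energies = -0.5 * np.einsum('si,ij,sj->s', states, W, states) - states @ b
    p = np.exp(-energies)
    return states, p / p.sum()

rng = np.random.default_rng(0)
n = 4
W = rng.standard_normal((n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
states, p = model_distribution(W, rng.standard_normal(n))
print(p.sum())   # sums to 1 (up to floating-point error)
```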
Despite the success of reinforcement learning (RL), extensive data collection and environment interactions are still required to train agents {{cite:f4c942f2c1933ab4baad976c6044fef11743fc16}}. In contrast, human beings are capable of learning new skills quickly and generalizing well with limited practice. Therefore, bridging the gap in sample efficiency and learning capability between machine and human learning has become a main challenge in the RL community {{cite:1f7226f94c6e4a8603a4976029b45b5212624edc}}, {{cite:89c8c0a6b5c6693bae8d8b9f9862b89bf55e3de9}}, {{cite:c2913c9c48f4b1454e3a3e442d1a64affb07ca3f}}, {{cite:fb67f901bbbbd5eea7c68fefed2768cea9009577}}, {{cite:c9b3bc09bf7851de1ec9fb24d6bdca72a7d4d468}}.
i
1c0df0253e8f6689f86d1c299d937f92
To evaluate the geometry consistency for a fixed geometry code with varying appearances, we use sparse facial keypoints. We measure the standard deviation of 66 facial landmarks, computed using an off-the-shelf tool {{cite:30f214029befb6e8ddac9b8938c198d6a0c2e486}}, across 100 samples with a shared geometry code and different randomly sampled appearance codes. We render all images in the same pose in order to eliminate additional factors of variance. This evaluation is repeated for 10 different geometry codes, and the error is averaged over these geometry codes and over the 66 landmarks. A lower value of the geometry consistency metric implies that varying the appearance code is less likely to cause a geometry change in the image. While we outperform the {{formula:24750294-914e-4b2c-bd6d-ee79abb3a543}}-GAN baseline, GRAF {{cite:c38acf48e81df5a4606eba7fafd1720cabe53334}} achieves a better score. This is due to the fact that the appearance variations are limited for GRAF, as appearance information also leaks into the geometry component. We further evaluate this using an appearance variation metric on these images. This metric is defined exactly like the appearance consistency metric: for the set of images, we calculate the standard deviation of the average hair color over the 100 images with different appearance codes, and average over the 10 geometry codes. As shown in Table REF, our method achieves the highest value, implying that our appearance component better captures the appearance variations of the dataset. We also evaluate both baselines using these metrics. As expected, the “256-dim” baseline performs similarly to GRAF, while the numbers are similar without the inverse network. {{figure:ad3c8e01-185b-4def-ad43-0be1abd0b782}}
r
07b0047d0024c508c068744745cf948f
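A sketch of how this landmark-based consistency metric can be computed; `generator` and `detect_landmarks` are hypothetical stand-ins for the model and the off-the-shelf landmark tool, and the 256-dimensional codes are an assumption:

```python
import numpy as np

def geometry_consistency(generator, detect_landmarks, n_geo=10, n_app=100):
    """Mean landmark standard deviation across appearance codes.

    generator(geo, app) renders an image in a fixed pose;
    detect_landmarks(img) returns a (66, 2) array of keypoints.
    """
    scores = []
    for _ in range(n_geo):
        geo = np.random.randn(256)                    # shared geometry code
        pts = np.stack([detect_landmarks(generator(geo, np.random.randn(256)))
                        for _ in range(n_app)])       # (n_app, 66, 2)
        scores.append(pts.std(axis=0).mean())         # std across appearances
    return float(np.mean(scores))                     # average over geometries
```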
where {{formula:a4e44b9d-7a77-40f0-b240-1f7bcbd814cb}} is the attitude quaternion (vector part first), and {{formula:bac135c1-4f57-4325-a697-3bef3ac40d6e}} is the Davenport matrix, denoted {{formula:9c445f8d-9570-48ce-bc0a-f83ad3efe297}}. Since {{formula:65057776-1f37-4bb8-9543-94d4c0d0de7e}} is symmetric, the Davenport matrix is also symmetric. By the spectral theorem {{cite:4b694f787ba561b2198b82c7f55323dac6fc1615}}, the eigenvalues of any symmetric matrix are real and the associated eigenvectors form an orthonormal basis. We use this property to ensure a deterministic solution in the next steps of the process.
m
ea24e74c21985129820ff1fa39532a5b
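As an illustration of how this property is typically exploited, here is a sketch of Davenport's q-method, which takes the eigenvector of the largest eigenvalue of the symmetric 4x4 Davenport matrix as the optimal attitude quaternion. Sign and ordering conventions for B and z vary across references and are assumptions here:

```python
import numpy as np

def davenport_q_method(body_vecs, ref_vecs, weights):
    """Optimal quaternion (vector part first) from weighted vector pairs."""
    B = sum(w * np.outer(b, r) for w, b, r in zip(weights, body_vecs, ref_vecs))
    S, sigma = B + B.T, np.trace(B)
    z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
    K = np.zeros((4, 4))
    K[:3, :3] = S - sigma * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = sigma
    vals, vecs = np.linalg.eigh(K)       # symmetric: real eigenvalues,
    return vecs[:, np.argmax(vals)]      # orthonormal eigenvectors
```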
We note that {{formula:e10d7197-55f9-4e82-9fea-574294fd8eb2}} is nontrivial because {{formula:3e9f3408-eb7d-403a-a67a-26c6aa4a982d}}, and a standard rescaling argument enables us to deduce that the Lagrange multiplier {{formula:4589c8ab-5324-445f-a92e-abdc979bf096}} can be chosen as {{formula:5f361011-53cc-44bc-9e48-48e4475697e9}}. Now, let {{formula:dcb41b65-e2d8-44d2-8bfc-1545a7ad6ba3}} be fixed as before. Since the minimization problem {{formula:cb48a830-8d56-47be-a39d-859fcc312bd5}} can be solved for any {{formula:02e96258-b3f4-44dd-9257-c2bcdef71923}}, arguments of smooth dependence on parameters for standard ODEs (see, for instance, {{cite:3c766177ea844cd8ea537c5f7a499d5e75e38ec8}}) guarantee the existence of a suitable open interval {{formula:80773794-1e90-413f-9294-2851d7e0b396}} and a smooth curve {{formula:151412d2-83a5-4d0b-84e8-4eccd079925c}}, {{formula:7fc65734-e9fe-4e63-ac29-c7cd3fee38b4}}, satisfying the equation {{formula:c6389304-e96a-4305-8641-abadf307d3da}}
m
d1c6838ee050183c6c423d758ba8d7c9
We now include two useful results on operator norm bounds of higher-order matrices. The results only require the condition of {{formula:39361f03-2d0d-4030-8363-bd7a69f76e84}}-L4-L2 hypercontractivity (which is directly implied by our design assumption; see for example the sub-gaussian moment bounds in {{cite:53ce4c7085b99f7d30fcf30720cb028ceebc2415}}). Formally, we say a random vector {{formula:37b6dd09-6a9d-4fcb-9498-df0c019cd320}} is {{formula:5fc9145a-ff38-4d70-b35d-d7fbd9d49ed9}}-L4-L2 hypercontractive if {{formula:057b4a77-f916-4371-8619-00759ea42b37}} for all unit vectors {{formula:1d60820d-48e1-4314-8cff-6058f898d646}}. Also note that if {{formula:167c919c-55d1-4d65-b703-e8d57dbcb7a7}} is hypercontractive, this immediately implies that {{formula:2273b074-9ee4-473b-9e5c-e904d9e264ee}}
r
5898bdce4de373534060b1042706e56a
MixUp {{cite:d15f6b0daf3822ffa7bdf12a8d9ad9ce3c618a38}}: We also evaluated the effectiveness of mixup, an effective data augmentation strategy for medical image segmentation, following {{cite:d5af882473d4f47cf68c0018114627801e04c508}}. In this method, we interpolate two labeled images with a coefficient sampled from a {{formula:bdd9e963-f45c-46dc-b651-aeed963a9039}} ({{formula:8ea24653-9294-4401-b98a-07b6f20b0396}}) distribution and enforce the network to output the interpolation of the two annotations as its prediction. We fix {{formula:08457b60-6cb5-45f4-bbb2-754d665cf602}}, and the coefficient weighting the mixup loss is selected by grid search from {{formula:20a52c64-9083-4574-b4a4-edc080ea051c}} to 0.1.
m
69a532af2fa53fc5aa285d78568c22d2
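A minimal sketch of the interpolation step just described; the Beta parameters and the use of soft label maps are illustrative assumptions:

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=1.0, rng=np.random.default_rng(0)):
    """Blend two labeled images and their (one-hot / soft) label maps.

    lam ~ Beta(alpha, alpha); the network is trained to predict the same
    convex combination of the two annotations.
    """
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1 - lam) * x2
    y = lam * y1 + (1 - lam) * y2
    return x, y, lam
```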
While the combined Search + Generation approach is useful overall, the effect is not consistent across relations: performance improves for some relations and deteriorates for others. In §REF we described the techniques we use to reduce the noise coming from additional queries. These techniques, however, are rather basic, and these results indicate that more advanced techniques of the type used in {{cite:87c6edd76a385fde222c73a9fc0d30ae5ddd5e97}} and {{cite:26329581eeeba2bb34b8a8af197521d50e6c3225}} are likely to yield more consistent improvements. {{figure:22b2488c-82da-4db2-8d7b-06f13239a1f5}}
r
864b09b8c51d7b9482fc6e8598b78912
In this section, we provide extra experimental results with extensive hyper-parameter settings. We use 128 clients and a local batch size of 32 in all experiments. Gradual learning rate warmup ({{cite:c142357c77a43a38a065ad97a65fd4fd1d62fb68}}) is also applied to the first 10 epochs in all experiments.
r
e12cabbb67a30451838b78e44fbeaf5b
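A minimal sketch of such a linear warmup schedule in PyTorch; the ramp shape is an assumption, and in the cited scheme the warmup target is the scaled-up learning rate of large-batch training:

```python
import torch

def warmup_lambda(epoch, warmup_epochs=10):
    """Linear warmup factor: ramps the LR to its base value over the
    first warmup_epochs epochs, then holds it constant."""
    return min(1.0, (epoch + 1) / warmup_epochs)

model = torch.nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
sched = torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda=warmup_lambda)
for epoch in range(15):
    # ... one round of local training would run here ...
    opt.step()
    sched.step()
    print(epoch, opt.param_groups[0]["lr"])
```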
Differential geometry has been used extensively in the study of higher order asymptotic inference. While there are many references that could be cited here, {{cite:77f2f3a14301c439aaceb985cd2c1de91fe8b3ab}} stands out for its clear description of the role that geometry plays in understanding and simplifying many areas of statistical inference. This paper's finite sample results are a departure from asymptotics, but we hope it continues geometry's role in providing simplicity and intuition for inferential methods and concepts.
i
db04c37019296e20ca25cd9da62274c1
Model training: For our experiments, four state-of-the-art models based on three architectural designs are trained from scratch. The first architecture, VGG16 {{cite:59bb58165d7385a92149eb018cd389f57a24ac4a}}, is a convolutional neural network (CNN) with all layers connected in series. The second model is the DenseNet-121 architecture described in {{cite:2c55e04c4072f6b8541db09bbbd3dece1d7d5f97}}, with internal dense blocks, i.e., sets of convolutional layers in which previous layer outputs are concatenated to further improve information flow. The last architecture is the Wide Residual Network (WRN) {{cite:93ae3905e41a10da3a1d461a302a83801dca962b}}, which introduced a study of depth and width for residual networks {{cite:a3c1524c3484164aca3f9b80e65d08145841a4fe}}. A residual block consists of a layer that adds the feature input back to its transformed output, as in {{formula:1669e924-ddd5-46d9-8687-0d51899c1206}}, where {{formula:d49f57c0-87f9-4def-90b1-95ca436e81b3}} refers to a set of layer transformations of the input. For our experiments, two model setups are selected for WRN, namely WRN-28-10 and WRN-40-10, with depth factors of 28 and 40, respectively, and a width factor of 10 for both networks.
m
7813558921521f21630401ff1b1df9c2
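A minimal pre-activation residual block in the WRN style; this is a hedged sketch, as the actual WRN blocks also use dropout and width scaling:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """x -> x + F(x), where F is a small stack of BN-ReLU-Conv layers."""
    def __init__(self, channels):
        super().__init__()
        self.f = nn.Sequential(
            nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.f(x)   # skip connection: add input to F(x)

print(ResidualBlock(16)(torch.randn(1, 16, 8, 8)).shape)
```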
Equally unexpected was the observation that increasing the dimension of our embedding reduced the accuracy of the GRU model. Overfitting in the embeddings is a potential reason for the GRU's diminished performance. Empirical evidence indicates that low-dimensional word embeddings are sometimes, though rarely, superior to their high-dimensional counterparts {{cite:c3ef1837eafcb01545899aac7ee1b7d353b5f9e3}}. We suspect that the higher embedding dimensionality did not enhance the accuracy of our model because of the difference in training corpora. Rather than starting with a pre-trained embedding, future research may train a complete embedding for the language.
d
d0874bdc96b48df4fabb2034a761afc5
The Fisher Information Matrix {{formula:f2587eb4-f7d8-419c-b937-392b0f04bb6f}} of a distribution {{formula:4b3e2574-5c54-4ae1-bb4f-7f422c61e503}} w.r.t. {{formula:4c607913-44cf-4dbb-b9f5-1d2f89863074}}, as defined in {{cite:86fa0f90084d596da093aeb7ff04dadfc0ebb756}}, is: {{formula:104af599-43f3-4c8a-aaf4-3ecbc2bdf785}}
m
f7decb4ea5e2049487a74ffff5cadcdd
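For reference, the standard definition that the placeholder formula above presumably instantiates is

```latex
F(\theta) \;=\; \mathbb{E}_{x \sim p(x;\theta)}\!\left[
  \nabla_\theta \log p(x;\theta)\,
  \nabla_\theta \log p(x;\theta)^{\top}
\right].
```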
Decoder: Earlier works using MVT for dense prediction {{cite:f772f85ac6ca256922841b0a494fe5ec8b5cb165}} upsample feature maps at varying resolutions to a resolution of {{formula:e3cdbfb8-560c-4879-8d21-e52629fdda7e}}, and reduce their channel dimensions to {{formula:769f4678-3cbf-40e6-ac0d-0f0060c35f46}} using {{formula:98f0c87b-b9c0-472e-8b7d-49f1b3e36b50}} convolutions. The features are then concatenated and finally fused to predict an output of size {{formula:5dbabea8-bbdf-4862-b7c9-1d39216fbb3b}} for the segmentation task, where {{formula:4b26deb1-cf58-4cde-90e2-3526db101061}} is the number of classes. The feature maps are then upsampled to {{formula:29a708f8-67e0-448f-9c5c-400a1331368b}} using interpolation, which helps produce a smoother estimate. Such a decoder design suffers from the loss of local information due to the smoothing effect of interpolation. Earlier CNN-based works have used the Feature Pyramid Network (FPN) {{cite:44c8e9c1c064d34fe46773cbfa96db9491057797}} architecture design to preserve the local details. We adopt a similar design: a decoder that iteratively fuses feature maps starting from the lowest resolution for MVTs.
m
ac546a2c299bcceadcb9b43b8a2b6abe
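A hedged sketch of such an FPN-style decoder; the channel widths, number of stages, and fusion rule (addition after bilinear upsampling) are illustrative assumptions rather than the paper's exact design:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FPNDecoder(nn.Module):
    """1x1 convs project multi-scale features to a common width; maps are
    then fused iteratively from the lowest resolution upward."""
    def __init__(self, in_channels=(64, 128, 320, 512), width=256, n_classes=19):
        super().__init__()
        self.lateral = nn.ModuleList(nn.Conv2d(c, width, 1) for c in in_channels)
        self.head = nn.Conv2d(width, n_classes, 1)

    def forward(self, feats):               # feats: high -> low resolution
        laterals = [l(f) for l, f in zip(self.lateral, feats)]
        x = laterals[-1]                    # start from the lowest resolution
        for lat in reversed(laterals[:-1]):
            x = lat + F.interpolate(x, size=lat.shape[-2:], mode='bilinear',
                                    align_corners=False)
        return self.head(x)                 # logits at the finest feature scale

feats = [torch.randn(1, c, 64 // 2**i, 64 // 2**i)
         for i, c in enumerate((64, 128, 320, 512))]
print(FPNDecoder()(feats).shape)
```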
Lemma 1 {{cite:00660f0593499f8c4500ff912e4217ae24f48454}} Let {{formula:44e7de11-d983-4eab-a165-3721ab08dd46}} be an anticommutative algebra with {{formula:e802763f-bc03-47db-9f54-d3e54657d7f2}} . Then there exists, up to isomorphism, a unique anticommutative algebra {{formula:a337e940-5194-44d3-b054-4ecd4c1f767f}} , and {{formula:4e9d6dac-a3fe-4195-815e-1349db0e1824}} with {{formula:0d2f81f5-1f97-49eb-9281-4586d2535e7a}} such that {{formula:6f5b421c-d65c-4f32-951b-6ef1b8306aaf}} and {{formula:8dad95cb-ecc0-482d-95c7-da96f53a1a82}}
m
04d9ce71396abeaa2b97cf9fdc94b542
These lectures are meant to be self-contained, but we necessarily omit numerous details, while trying to make clear the basic ideas and results. More complete accounts of cosmology and its particle-physics aspects may be found in various books {{cite:04d250920936aad407eb184dd16885de3e74b54c}}, {{cite:1d955094b720a3c750cb4a58aaefd29013588f56}}, {{cite:bd069b268662dfef42ebea7eddc1fb4e873c6382}}, {{cite:5812fbeb3651836031e5df192e22066cd5fb5c8a}}, {{cite:bc710521035011dd4396388548dc8ddac69f5546}}, {{cite:f9008f3efc93ae5ee194c2f7baaf52a9812f0ee3}}. Dark matter candidates we consider in these lectures are reviewed in Refs. {{cite:d60b05f2477ec991bef9022bf4ad9e8218e6ce7f}}, {{cite:b38704282baf49a3e7e0d5bce15b399951b4f46d}}, {{cite:74162aaffd3a33a7449db815165e830c88588f57}}, {{cite:125b512a3782bae1dd20f05301c395706bed5f0b}}. Electroweak baryogenesis is presented in detail in reviews {{cite:f58958b6d6eb9408c2746b055db99aeb51605cb2}}, {{cite:057bd45b055d7c3124ea21a2f372489fc65f3240}}, {{cite:46c8da76901b3a5e59900f7facb89eb98a841997}}; for reference, a plausible alternative scenario, leptogenesis, is discussed in reviews {{cite:4ef37ff598b36506b2e7dedfd8dd02b8f05bf8a6}}, {{cite:a3f17167d403c8e6d952451e36905ca7672e61d7}}. Aspects of inflation and its alternatives are reviewed in Refs. {{cite:9068d70055a0fc015ea7629efb0b1fa66b52359a}}, {{cite:ab9b4e765f9422b90d81cac963df47c2e556967c}}, {{cite:9f73022b6ab8fd1ebe59c2a53426b61569d26981}}, {{cite:dfc0239b5e79a993b3bce9e8853ca7dd135a22e1}}, {{cite:947df52cb1dc26b065bbc279ac59e4914841a8a4}}.
i
90286490e1c83f5f7aaaf79e6cb0c5a7
Despite the incredible success of neural network function approximation in reinforcement learning environments {{cite:c5e7fe0ef20ff9ef9703bf53b51f56092122f815}}, {{cite:77c39099ad783b09f1e626edf81a3d0ae294a0af}}, {{cite:32e0e31b72949ab98f9a0049cbcee292924836fe}}, search algorithms have largely remained tabular, as can be seen for example in AI algorithms for Go {{cite:5141a1c5469fce7eec1817254a8ea77ac00b5dda}}, {{cite:2dc72b9721d0284cd5ed9db090bf9b164fad03db}}, {{cite:57c7b35ac1f542fbc02ed833e2d34b9278b95399}}, poker {{cite:b31dabb5749c0196ab2bfa28e7d48b1b77e8d255}}, {{cite:a8d727cfe4d7e3f58e91be36e3fb1c05ab62d3e9}}, {{cite:f5ab5cfc9659f6f783f35937f9b563bbdd5b2115}}, {{cite:e7a63b92a1be6bb695ab4a998c325064b9c3ec78}}, and Atari {{cite:86939dc275c7593535b27b39aa9c0e1a507c41bd}}, {{cite:72e70cdc5b69a70da55e05672f5bb1a28676a8df}}, {{cite:fc1302343a76cc005eb69ee0d1b6077e0bdd3945}}. Tremendous research has gone into adding heuristics that enable tabular search to perform better in these domains, especially in perfect-information deterministic environments {{cite:8bea1adf20bd8b79b2cb2e8104afdf14140c5530}}, {{cite:132cdaf3733391b8e3dc53cf56a3fc7ebbcf4d63}}. However, we argue that in order to scale to larger, more realistic environments, especially those involving stochasticity and partial observability, it is necessary to move past purely tabular search algorithms.
d
ed30ec6d249147900fb54333da471e31
Retrieval-augmented models (RAM). We compared with a recent retrieval-augmented method named KFCNet {{cite:0763dd289018014d6334d1c9b1e094dc21a2acf2}} for constrained commonsense generation. In addition, we also compared with using a sparse retriever such as BM25 to retrieve knowledge from our constructed commonsense corpus and FiD {{cite:b9b7103819be72c454ee28becc3bea6bb2931775}} as the generator to produce outputs. {{table:3b9c1647-e234-44db-abc8-546c3d5f3678}}{{table:660398a2-217c-476c-bf38-4046704a409e}}
m
4f649ecebf0c893784b1a32bebd3c400
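A small sketch of the BM25 retrieval step using the rank_bm25 package; the toy corpus, whitespace tokenization, and top-k value are placeholders for the actual setup:

```python
from rank_bm25 import BM25Okapi

corpus = [
    "a dog wags its tail when it is happy",
    "people use umbrellas to stay dry in the rain",
    "a stove is used to cook food in the kitchen",
]
bm25 = BM25Okapi([doc.split() for doc in corpus])

query = "how do people cook food".split()
top_docs = bm25.get_top_n(query, corpus, n=2)   # passages fed to the generator
print(top_docs)
```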
We also show a comparison of the convergence speed when applying different pre-trained models to the method MGN {{cite:675f798fbfc8689413da89f5580fff1d5164cd45}} at the early stage of fine-tuning in Figure REF. As can be seen, UP-ReID converges faster than MoCo and INSUP on all three datasets, which further demonstrates that the proposed UP-ReID can better benefit downstream ReID tasks. {{figure:9ee249ab-e45d-4552-bb1a-616270c01f30}}
r
7e1c04b22f3851780c7f92189339eea4
Stable matching algorithms have applications in several real-world problems. For instance, stable matchings have been extensively used to match students to schools and colleges {{cite:c14c3d7e36fa373b600e4ccf50e3882a46c884f1}}, {{cite:30146fd10a2c436dfad293a7bf5a72a39c9690e7}}, and one of the oldest applications is matching medical residents to hospitals {{cite:f6b60afa29fbcd6842b53fa5bd3fc8d8691851f4}}, {{cite:6c9aa593c8b797c70f27bc97866fd722b51a276c}}. It is known that all stable matchings in {{formula:9ff896ee-95df-4add-9436-95f9997d7f12}} have the same size {{cite:ce27fad4a975a113081514055a2be6f1d2fdd297}}, and this may be only half the size of a maximum matching in {{formula:dc6402ea-0604-4a3e-9cb1-2733b0b13736}}. Consider the following instance on four vertices {{formula:56c2fc44-136d-4fb6-b313-223df7dd2a22}}. The preferences of these four vertices are as follows: {{formula:0062a3e5-2f70-4639-9a7d-6f0de52f9420}}
i
f774f931f202fe02be2721ad28428688
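For completeness, a sketch of the classical Gale-Shapley deferred-acceptance algorithm that produces such a stable matching, stated for complete strict preference lists:

```python
def gale_shapley(men_prefs, women_prefs):
    """Deferred acceptance: men propose, women tentatively accept.

    Preferences map each agent to a list ordered from most to least preferred.
    Returns a stable matching as a dict woman -> man.
    """
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
    free = list(men_prefs)                 # men who still need a partner
    next_proposal = {m: 0 for m in men_prefs}
    engaged = {}                           # woman -> man
    while free:
        m = free.pop()
        w = men_prefs[m][next_proposal[m]]
        next_proposal[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])        # w trades up; old partner is free
            engaged[w] = m
        else:
            free.append(m)                 # w rejects m; he proposes again
    return engaged

prefs_m = {"a": ["x", "y"], "b": ["x", "y"]}
prefs_w = {"x": ["a", "b"], "y": ["a", "b"]}
print(gale_shapley(prefs_m, prefs_w))      # {'x': 'a', 'y': 'b'}
```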
The gray region is excluded by neutrino observations of the galactic center by SK {{cite:53481265baa934fa45beaa5b2f87dca87c1cec02}}. The dashed gray line represents the future prospects of HK with gadolinium after 10 years of observation {{cite:2f4a939c6604644df969bdf84405d15473aa13c5}}. One can see that the sensitivity of HK can reach the parameter space favored for dark matter relic abundance and core formation if the dark matter mass is in the range of {{formula:8ac1cfda-e43d-4f7f-a69d-3d72805237c8}}. The lower dotted gray line is the future prospect with a boost factor of 10 for the semi-annihilation cross section. Such an enhancement of the annihilation cross section can be realized by mechanisms such as the Breit–Wigner effect {{cite:a5b219af94e708da2ba9d0f1da130933403bef1d}}, {{cite:e4bb8495f2424b27f4433d1baca2d93c906b031b}} and the Sommerfeld effect {{cite:238c01a273495e4b894bd7f920abff8eadc7bc61}}, {{cite:3d7306ea57b63f12fcc58ad772265c0051f046c8}}, {{cite:39b88f07aed181c85e016ed205d77c47e2d76ff7}}, {{cite:f0ca13104c23bdf083dcaf3546dee2de9cbc7008}}, {{cite:f06be8d89f25b9eff807e6e1ee309f4c6b03c6a8}}. For example, the simplest extension of the model introduces a singlet scalar {{formula:a6371d41-081a-4c74-b1bb-6c49fc623e3a}} with a global {{formula:2739dcb6-4912-441a-ac30-82ddfb88a6cd}} charge {{formula:955d200a-0bab-4ca5-84ee-8c77738654b7}}. Then, the semi-annihilation channel {{formula:5227503e-afa4-431e-b098-2bb9e92afc4d}} is enhanced at the resonance at {{formula:ee0dcf18-9759-4439-bf40-6b277a6236bb}}. If this kind of enhancement occurs, dark matter masses up to {{formula:5c0e6a37-7c40-4cd2-acb8-592c7e7179e6}} can be tested by HK with Gd, as shown in Fig. REF. {{figure:1201f815-91f7-4ada-869f-8aeb71aa2bb7}}
r
4768e51e341fbb8c1ec04957418f4486
where {{formula:7666ea8d-0fe0-4593-abe3-3cd1cd6f795f}} is a discount factor for multi-step rewards, and {{formula:ba8a8317-06c0-459e-bf2f-19dbe989f835}} is a learning rate. Equation REF shows that the state transition reward is passed back to the starting state. In activities such as maze traversal or board games, being aware of the reward associated with multiple steps or decisions and passing that reward back to the current position is of great value. However, in the context of trading, where the value function {{formula:9f5da72d-42fd-462f-b4a0-e45f3bf31ab9}} represents the value of a position {{formula:e6ab0e46-cc0c-4f07-ac19-c0889a316b4e}}, with the possibility of switching or remaining in the same position denoted by action {{formula:8322f0e3-178f-47c1-9030-35c1a1909403}}, we will on occasion find that a larger utility is assigned to the wrong state. For example, imagine the current state is {{formula:5319791c-e815-4b03-87c6-a2bc730af141}}, that is, the model has no position. Now we observe a large positive price jump leading to a large positive reward {{formula:c73ffb33-7649-4833-8755-0a86425616c5}}. Value function estimators such as equation REF would pass the state transition reward {{formula:0b43b9a5-c274-4700-84df-f109830db9a0}} to the initial state {{formula:fd6fe7be-8a6f-457d-82b7-139331373bcf}}. At the next iteration, with probability {{formula:e115e97e-8c9a-4587-9e75-454dd3053b7d}}, {{formula:9d616e84-f47a-480e-bb63-7da1658f4618}}, as {{formula:3a31c4d0-2ada-439b-b92d-553523d4d51d}} is the highest value function. However, if the position is zero, the model cannot hope to earn a profit. Even if one excludes the zero state, there is still the possibility of observing this problem for a reversal strategy with possible states {{formula:ed7172b5-59ae-45ad-93ec-4cc169586ab3}}. Direct reinforcement, as we describe it, does not incur these problems, and we have, through transfer learning, improved upon the earlier work in direct reinforcement {{cite:0d21526a1a6432f2910d10c5c6ece7c2cf8eb62d}}, {{cite:d454d0c4319e5179ace56a69501f57e734eff880}} and extended the work of {{cite:8637941d8ab5a0804f9350edac8007a6e3fa17d5}}.
d
f34ff4bb5d5b9fbb7574b1b7a442e7a6
Finally, the new/modified forces G3(M) and IOPB-I(M) were applied to neutron star matter to determine the mass {{formula:0e040a02-ecf9-4937-8429-563374b63253}} and radius {{formula:b37fbedb-7517-4555-af56-4c0d2fd30576}}. The {{formula:743d996c-1876-4573-8df1-b3a1f162fb89}} profiles with the old/original and the new/modified forces are shown in Fig. REF and in Table REF, as discussed earlier. The experimental observations for mass and possible radius {{cite:d0d9478a1856d28ecc2baba6fb17a98ab84344f2}}, {{cite:43fb9285bb6e95fb9dc7f857187b51a8ae6a1e1e}} are also shown in the figure. It is interesting to note that the overall results for both new and old parameter sets are unchanged. The masses and radii obtained with both the old/original and new/modified parameter sets are compiled in Table REF. The {{formula:161f7b4b-ea79-4147-a921-ed605b8081c6}} and {{formula:b5bcc8e6-aed5-4794-ad36-ac7e793ebc31}} with all the parameter sets are well within the recent measurements {{cite:d0d9478a1856d28ecc2baba6fb17a98ab84344f2}}, {{cite:43fb9285bb6e95fb9dc7f857187b51a8ae6a1e1e}}, {{cite:4c18f1000c10e4b5ecdfd00e6c1e96d40449db53}}, {{cite:daa228f8a6a0331a2b93eed5232f72f8d292345e}}. Moreover, we have calculated the highly discussed binary neutron star merger quantities {{cite:c8b927a245aae4585275660aba557e1f2d8a66ca}}, {{cite:ece45b3f0c5f317494977e9f4cfcc97d3d7615ae}}, {{cite:0422b655f34e2a3c1d2fbc30e47732778f1321bd}}, such as the Love number {{formula:fc4d8dd4-1592-4ce6-bb74-7d01064b211f}}, the quadrupole tidal deformability {{formula:be68475f-08e9-4021-a1e1-43b8ccbc7da2}} and the dimensionless tidal deformability {{formula:4e391fec-3220-4b95-a9c3-9415be53607c}}, for the new/modified G3(M) and IOPB-I(M) parameter sets. In the above analysis, we find that the new/modified versions of G3 and IOPB-I reproduce all the nuclear matter quantities and neutron star properties, including those of neutron star mergers, on par with the old/original versions, while also being consistent with the current PREX-2 data for neutron skin thickness. A more systematic analysis over various regions of the nuclear chart, with a systematic study of nuclear and star matter quantities, will be communicated soon.
r
643ad591bf1ea76a8fdafbb445a54ee5
We use the CosmoMC {{cite:80bccb3f43569b4efe5df53a8ba73a0267661845}} and CAMB {{cite:536a52bdefdfa67ea4cb867968e17955c03873ba}} packages to study the constraints on cosmological parameters at the background level of our specific model in this section. The CosmoMC package is an MCMC engine, which can be used to explore the parameter space based on the maximum likelihood method.
r
02532d1f9522bd49b94c3d1a23b4f911
Sampling methods, often also called Monte Carlo methods, are another family of Bayesian inference algorithms that represent uncertainty without a parametric model. Specifically, sampling methods use a set of hypotheses (or samples) drawn from the distribution and offer the advantage that the representation itself is not restricted by the type of distribution (e.g. it can be multi-modal or non-Gaussian); hence probability distributions are obtained non-parametrically. Popular algorithms within this domain are particle filtering, rejection sampling, importance sampling and Markov Chain Monte Carlo (MCMC) sampling {{cite:42c10cb8495086018e3e079100cc8de593749c2c}}.
m
93e64b464ced72ebe6b0a0a0d394902d
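A minimal sketch of one such algorithm, random-walk Metropolis-Hastings; the proposal width, chain length, and bimodal toy target are arbitrary illustrative choices:

```python
import numpy as np

def metropolis_hastings(log_p, x0, n_samples=5000, step=0.5,
                        rng=np.random.default_rng(0)):
    """Draw samples from an unnormalized target given by log_p.

    The resulting sample set represents the distribution directly,
    with no parametric assumptions on its shape.
    """
    x, samples = x0, []
    for _ in range(n_samples):
        proposal = x + step * rng.standard_normal()
        if np.log(rng.uniform()) < log_p(proposal) - log_p(x):
            x = proposal                    # accept the move
        samples.append(x)
    return np.array(samples)

# A bimodal target, which no single Gaussian could represent.
log_p = lambda x: np.logaddexp(-0.5 * (x - 2) ** 2, -0.5 * (x + 2) ** 2)
print(metropolis_hastings(log_p, x0=0.0).mean())
```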
We assume that the stirring planet is located interior to the inner edge of the disc. The stirring timescale decreases with planet mass {{formula:1ec1bc41-3110-4f08-a845-5789860bdb66}} and increases with the outer radius of the disc; a lower bound on the stirring planet's mass is therefore the planet mass that is just sufficient to stir the disc's outer edge within the stellar age. {{cite:c387b331a5a7d8504fcefa1d77246e90c52c7ea8}} give timescales for an eccentric planet to stir debris at a given radius; rewriting their equation 15 in our units yields the time for a planet to stir the outer edge of an external disc as {{formula:12e5dd10-6d24-440c-8895-668bcc35bd27}}
m
e23f12a66a17e1327b2eff34efe31691
At NNLO and beyond, several subtraction schemes have been proposed in the past years. These schemes mostly fit into two categories: local methods and slicing methods. The latter are based on partitions of the phase space into hard regions and infrared-sensitive regions, where the cancellation of divergences is performed with non-local subtraction terms. In order to apply these methods, one has to introduce a resolution parameter to identify the phase-space regions where the non-local subtraction acts. Slicing methods that have been successfully applied at NNLO and N{{formula:60ac169f-e992-4169-9670-b686c0a04a29}} LO are the transverse-momentum ({{formula:7f3190d1-be8a-4100-b30e-180c75827a2c}} ) subtraction method {{cite:ae849db31627161ff616a694aa0c94a3a28417b3}}, {{cite:9ee30e9b3a9a3ecba0734236df095f859e077a41}}, {{cite:0cc28c8fd4a3c25dbec02f9198dca7f78a4d86af}}, {{cite:afb41c52e5433776f4410ef4afa1ddc9b4d679bb}}, {{cite:b6e1871c981e79226beb2872c289ce949496d3a8}} and {{formula:c1f7b5cd-5754-44c8-a158-41103fa99e15}} -jettiness subtraction {{cite:a3c42dd1e02550ea727d9d72b0eaaf56ef7afccc}}, {{cite:98c24fa71d9875d8620613e67a17260461e7d854}}.
i
a24082f17d8019c8974fd6a035c9d89d
Dataset. We use the Voxceleb2 {{cite:2dc14abfba389aef8258ff1d5255bebcf3cf2efd}} dataset, collected in the wild, for training and testing; it is widely used in previous works {{cite:4de3c783e0938ea04c284b5f3cc905a0f6c8a8b2}}, {{cite:071dbcdb41ad69036ca92806ec5e8a3f7a1ba159}}, {{cite:200b089c67d692797f28947890c9f276e60b36de}}. Voxceleb2 consists of YouTube videos of talking people with a large variety of identities and motions. In detail, it contains 215,000 videos of 6,112 identities over 1 million utterances. We subsample the videos into frames at 25 fps. Following Siarohin et al. {{cite:2941a9e4c1d36b90aa094d211d0bf5e672ed1184}}, we crop and resize frames into face-centered frames of size {{formula:26714257-b178-4b67-acbc-57946b2e5393}}. Audio is downsampled to 16 kHz, from which we extract mel-spectrograms with an FFT size of 512, a hop size of 160, a window size of 400 samples, and 80 frequency bins.
r
09d74a7ba6d45cc3b05e9ee66a3bc913
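The described mel-spectrogram extraction corresponds to settings like the following librosa call; this is a sketch, and the actual implementation plus any log scaling or normalization steps are not specified in the excerpt:

```python
import librosa

wav, sr = librosa.load("utterance.wav", sr=16000)   # placeholder path; resample to 16 kHz
mel = librosa.feature.melspectrogram(
    y=wav, sr=sr,
    n_fft=512,        # FFT size
    hop_length=160,   # 10 ms hop at 16 kHz
    win_length=400,   # 25 ms window
    n_mels=80,        # frequency bins
)
log_mel = librosa.power_to_db(mel)
print(log_mel.shape)  # (80, n_frames)
```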
To conclude, neuroscience research is at an exciting junction. On the one hand, new technologies are being built, promising bigger and more complex datasets to help understand brain function {{cite:7cd634ef3dc49b60f8da0496c1a0126dfe29543a}}, {{cite:1cbcd3e9e296d234bbb259336cfd6e01b7338ec4}}, {{cite:3a441617ab5aa5f56945e42d0c1d213003ccf399}}. On the other, we have rising concerns over the incorrect use of statistical tests {{cite:4a96b304fca073857809d708e84a3a5d94806779}}, {{cite:2a7d839b8568c215254c9422d9890c53e72a2a86}}, {{cite:b42c48751066282f30d5df1637a73517316a82bf}} and the lack of reproducibility of a number of past findings {{cite:fd8d337915df8b1b1fbc7eee90e49f9f10f3fb9b}}, {{cite:e464441ca7fd8402217838c4dd34c43bdcbd8e2b}}, {{cite:5fcb0ee913400f917e026a750a5505ebc151683c}}. We propose the hierarchical bootstrap as a powerful but easy-to-implement method that can be scaled to large and complicated datasets, that returns a direct probability in support of a tested hypothesis, reducing the potential for misinterpretation of p-values, and that can be checked for correct implementation through sharing of analysis code. As we have shown throughout this paper, we believe that widespread use of the bootstrap will reduce the rate of false-positive results and improve the use of appropriate statistical tests across many types of neuroscience data.
d
060635b62a1755697a0e1fd609f127e8
{{formula:b9d38e9b-6c13-4d80-b393-a272800e79fb}} and {{formula:cb407eda-a1aa-4eec-a3df-08bba1460dff}} are important quantities for characterizing a network. Given a network, several inequalities can be constructed from Eq. (REF) using different {{formula:53606959-1c72-4bce-b5a6-03ade49d000a}} and {{formula:53d83da0-19ba-41e6-a7b7-423b73e85b9c}}, which follow from different subsets of hidden states satisfying Eq. (REF). {{formula:dc63555f-ac86-42ee-a0d0-db583450e1f0}} is another important quantity for characterizing networks. When {{formula:47f52236-dd16-403b-bc13-0e7ce78fa6b1}}, inequality (REF) reduces to a linear Bell inequality {{cite:e585d1620efed7178d8599e8e1d454dc8e78baed}}. Generally, a larger {{formula:95a05621-0f70-42f6-9e77-2add747437cf}} implies that more multipartite correlations are involved in inequality (REF). So it is reasonable to find the maximum {{formula:0bb08eba-96fa-4e55-9b0d-bab3f9d9c71a}} (i.e., {{formula:7bd6ca3f-60fe-4255-914e-adf689e7b358}}) and the corresponding independent parties. Unfortunately, {{formula:81cc8a6f-dc9b-4638-a3e0-07716760dcb4}} depends on the network configuration. Intuitively, it requires checking the independence of all (exponentially many) subsets of the {{formula:4b4df8a0-36de-45c2-80a0-d6a1a47a5a14}} parties {{formula:24c73603-d8b9-41e8-b47c-9c53b1da6808}}s. Hence, it may be hard to obtain {{formula:95145ba2-3c96-4ab8-8765-a1b8e69b714f}} for a general network; see Appendix B {{cite:fe2796e70314d67278bdd4c16dea1cf6900e274f}} for two explanations of this problem. In spite of that, analytical methods exist for some complex networks (Fig. 3) beyond chain-shaped or star-shaped networks {{cite:260d841f4c13f22f6f90f6fe8adb9f5b042e3d35}}, {{cite:6635edd8e5ac244c014f12d98e0f9afea152b56e}}, {{cite:2b98ac46f991e2b57bf51f894ca1e0d86bf6faae}}, {{cite:c16d0b4d9b9e114ca9cf7cd4a610398ed43cb0bf}}, {{cite:c312cd25d7256b245ec83b8332fd4265ef1f9557}}, {{cite:840a4d801c83672214dc6c9782e302652201d091}}. Additionally, from a suboptimal {{formula:a224290c-d3da-46a6-95ee-6230c3f858a1}} we can construct a useful inequality (REF) if {{formula:96b54572-4c01-4342-a968-2a3575a3a0d2}}. Notably, finding the suboptimal {{formula:626ecb06-0706-497c-ab3d-c9ec3e1d121b}} is equivalent to the maximal matching of an unweighted bipartite graph {{cite:fe2796e70314d67278bdd4c16dea1cf6900e274f}}, which can be solved using a polynomial algorithm {{cite:c6a86db473d8bdb688a4ef505c010ed03fa0e755}}, {{cite:ce3fd22b335d3fb7cdcc5448f48515eada0a8bf2}}, {{cite:937b06a3c9d9842b9bbf115b4193aaa6b6a13585}}. Therefore, inequalities (REF) can be efficiently constructed for any network with multiple independent parties {{cite:fe2796e70314d67278bdd4c16dea1cf6900e274f}}.
r
058c4ae9c519e180883f450d5348798b
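A sketch of the polynomial-time matching computation on a toy party-source bipartite graph, using Hopcroft-Karp as implemented in networkx; the graph and the reading of the matching size as the number of independent parties are illustrative assumptions about the excerpt's construction:

```python
import networkx as nx

G = nx.Graph()
parties = ["A1", "A2", "A3"]          # one side: parties
sources = ["s1", "s2"]                # other side: hidden sources
G.add_edges_from([("A1", "s1"), ("A2", "s1"), ("A2", "s2"), ("A3", "s2")])

matching = nx.bipartite.hopcroft_karp_matching(G, top_nodes=parties)
# The returned dict contains both directions of each matched pair, so the
# matching size is len(matching) // 2.
print(len(matching) // 2, matching)
```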
Trading and portfolio management systems require prior decisions as input to properly account for the effects of transaction costs, market impact, and taxes. This temporal dependence on the system state requires the use of reinforcement versions of standard recurrent learning algorithms. {{cite:3db310bc334b7bb7221d25d2bc90e6190e559a36}} present empirical results from controlled experiments that demonstrate the efficacy of some of their methods for optimising trading systems and portfolios. For a long/short trader, they find that maximising the differential Sharpe ratio yields more consistent results than maximising profits. Both methods outperform a trading system based on forecasts that minimise mean-squared error. They find that portfolio trading agents trained to maximise the differential Sharpe ratio achieve better risk-adjusted returns than those trained to maximise profit. However, an undesirable property of the Sharpe ratio is that it penalises a model that produces returns larger than {{formula:7bab8a1f-1f6c-49bb-be87-ae75e01eec50}}, that is, the ratio of the expectation of squared returns to the expectation of returns, which is counter-intuitive to investors' notion of risk and reward.
m
58e5514a598b9c03153d96ea86459657
Kernel methods generalize classical statistical methods to discover non-linear patterns in data. They have been demonstrated to achieve state-of-the-art results in many application domains, and it is straightforward to apply them to non-numeric data such as graphs or text. Through a near-arbitrary non-linear mapping of data points into a Hilbert space, they offer remarkable flexibility whilst providing a precise mathematical framework for statistical analyses. A host of linear statistical methods have been adapted to be used with kernels, including Fisher discriminant analysis (FDA), independent component analysis (ICA) {{cite:f159715c8d8694a3e35f14ca06927ae18b5e7cd3}}, instrumental variable (IV) regression, and many more. Kernel PCA is a non-linear version of principal component analysis (PCA), a ubiquitous method to discover the most important directions of variation in data. PCA may be used for dimensionality reduction, exploratory data analysis, anomaly detection, discriminant analysis, clustering, or as a general preprocessing step for regression or classification {{cite:a780b6867aecc2e97dd8596359443d5fdb89620d}}.
i
b7c3eafc44c5c8ef6c76a8260b0b8618
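A short sketch of kernel PCA in scikit-learn on a toy dataset where linear PCA fails; the RBF kernel and its bandwidth are arbitrary illustrative choices:

```python
from sklearn.datasets import make_circles
from sklearn.decomposition import KernelPCA

# Concentric circles: no linear direction separates the two classes,
# but an RBF kernel map makes the structure linearly visible.
X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=10.0)
Z = kpca.fit_transform(X)

# The first kernel principal component now separates the two circles.
print("class means along KPC1:", Z[y == 0, 0].mean(), Z[y == 1, 0].mean())
```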