Columns: text (string, lengths 54 to 548k), label (string, 4 classes), id_ (string, length 32)
The uniqueness of such positive solutions was subsequently proved by Chen, Li, and Ou in {{cite:47dc9ddeaed6a1be8c6e9c2ae2d7c503dc69b3b1}}. A different way of looking at (REF ), which also underscores its geometric invariance, goes back to the ideas in {{cite:1f48512d1a283215e1acff18dea797fc16156498}}, {{cite:bdce89de01121bdfe3ba43e4c36a3dae47028b07}}. For a function {{formula:03bc8d06-c193-4f46-b934-a4716eb019d8}} on {{formula:7529da03-0ec8-465d-a54e-6d478208a05a}} denote by {{formula:4fce354f-e8b3-47b9-bf47-cc55ad172000}} its push forward to {{formula:b053882b-217a-495b-af39-069dca4ed0de}} through the stereographic projection. Branson characterized the conformal pseudodifferential operators {{formula:ecab8b96-a4c9-4fc9-91c3-c01b03b41c20}} of order {{formula:5ad66019-e2bb-4529-befa-041cc9f68a94}} on {{formula:eef0fb89-746f-48dd-becf-99d79ceb6ea7}} satisfying the intertwining formula {{formula:3016dd0b-714a-464b-85ad-30aafffcb6f7}}
i
d4aa3fa63aeae404420d1a39e9deb34e
Here {{formula:5ff6d75f-6b15-4d22-87e0-fa31ba7a5ab8}} is the angle between the bond vectors {{formula:5378242a-8e7f-40fb-a300-cf05caad9b8e}} and {{formula:8a60c8aa-0d33-4628-aeeb-03d0e20c2d41}} , as shown in Fig. REF . The strength of the interaction is characterized by the bending rigidity {{formula:1b0314c9-1bdb-4d5c-a200-254c56e76d91}} associated with the {{formula:e24f82f2-f427-499c-93f5-09cd697840f6}} angle {{formula:784ab1b0-4a41-4dbf-841d-6983ecb38c97}} . For a homopolymer chain, the bulk persistence length {{formula:7b101c29-312f-4ec1-9021-e983afbe220c}} of the chain in 2D and 3D is given by {{cite:ca430154a543784e31b7f098638bb0149d13c355}} {{formula:204c54eb-ac82-40da-aead-8decb040d36b}} {{formula:d0c8460d-def0-422b-9089-59f18aba76dd}}
m
a163181dfa1d212ebf210e383a559815
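In the stiff (worm-like-chain) limit such expressions reduce to the standard results l_p/b ≈ 2κ/(k_B T) in 2D and κ/(k_B T) in 3D. A minimal numerical check of those limits, assuming the common discrete bending potential U(θ) = κ(1 − cos θ) and unit bond length (the exact model in the cited work may differ):

```python
import numpy as np

def persistence_length(kappa_kT, dim):
    """l_p/b = -1/ln<cos(theta)> for bending energy U = kappa*(1 - cos(theta)),
    with kappa_kT = kappa/(k_B*T) and unit bond length b = 1."""
    theta = np.linspace(1e-6, np.pi, 200001)
    weight = np.exp(kappa_kT * (np.cos(theta) - 1.0))  # shifted to avoid overflow
    if dim == 3:
        weight = weight * np.sin(theta)  # solid-angle measure in 3D
    mean_cos = (np.cos(theta) * weight).sum() / weight.sum()
    return -1.0 / np.log(mean_cos)

print(persistence_length(20.0, dim=2))  # ~ 2*kappa/(k_B*T) = 40 in 2D
print(persistence_length(20.0, dim=3))  # ~   kappa/(k_B*T) = 20 in 3D
```

The factor of two between 2D and 3D comes purely from the integration measure: in 3D the angle is averaged over the sphere, in 2D over the circle.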
For temporal clues, most existing methods model only either short-term {{cite:52f79bb599be56d37c44a43175ed1a57527d0cec}}, {{cite:7069c71674fa6e64ed8ede3027d64b857934c231}}, {{cite:32be6cb1b4cd34e3c7abec0072ba5af10d6740c9}} or long-term temporal relations {{cite:016d0bb32a07b683127898968403af3be92f7127}}, {{cite:35f09061372762b9326c2e5d4d898f68a8f1f316}}, {{cite:e533becb48176970577b6d0c5d7e940196c7aa45}}. To enhance the temporal modeling ability, a few works {{cite:929d511dadd6bd928fe5431a663c065834c36d8a}}, {{cite:a6b29fb2c5abed7c746295b04d63880d6503b2e3}} attempt to jointly capture short- and long-term temporal relations and fuse the two with equal weights. However, the two temporal relations have varying importance for different sequences. For example, as shown in Fig. REF , for a sequence with partial occlusion, the long-term temporal relations are more important for alleviating the occlusion, whereas for a fast-moving pedestrian sequence, the short-term temporal relations play a greater role in modeling the detailed motion patterns. It is therefore necessary to adaptively capture short- and long-term temporal relations of videos.
i
99e4141e783e5d397a1aca05f557d5e5
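The adaptive weighting argued for above can be sketched as a scalar gate predicted from both features, so that each sequence chooses its own mix of short- and long-term relations. This is a schematic illustration, not the cited models' architecture, and all names are hypothetical:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def adaptive_fuse(f_short, f_long, w_gate, b_gate):
    """Predict a scalar gate g in (0, 1) from both features and blend them,
    instead of the fixed equal-weight fusion g = 0.5."""
    g = sigmoid(np.concatenate([f_short, f_long]) @ w_gate + b_gate)
    return g * f_short + (1.0 - g) * f_long

rng = np.random.default_rng(0)
d = 8                        # toy feature dimension
f_s = rng.normal(size=d)     # short-term temporal feature (hypothetical)
f_l = rng.normal(size=d)     # long-term temporal feature (hypothetical)
w = rng.normal(size=2 * d)   # gate parameters, normally learned end-to-end
fused = adaptive_fuse(f_s, f_l, w, 0.0)
```

A partially occluded sequence would ideally learn a gate near 0 (favoring long-term relations), while a fast-moving sequence would push it toward 1.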
The outline of this paper is as follows. Section describes the conditional linear factor models with sparse time-varying coefficients, and how to implement the no-arbitrage restrictions in the specification of the random coefficient panel model. Section  develops our penalized two-pass regression with time-varying factor loadings. The penalization in the first-pass (time-series) regressions of Section REF enforces sparsity for the time-variation drivers while also maintaining compatibility ex ante with the no-arbitrage restrictions through building appropriate groups of coefficients. We explain in detail in Section REF why we prefer the aOGL method over the original Group-LASSO of {{cite:2a849becd24f31198269abe298d26c100da9867f}} for the first-pass regression. The second-pass (cross-sectional) regression of Section REF delivers risk premia estimates to predict equity excess returns. In Section REF , we show asymptotic consistency of our penalized two-pass regression estimates under an adaptive estimation for the first-pass regression coefficients. Section reports our simulation results. Section  gathers our empirical results. After describing our data on US single stocks in Section REF , we present our empirical results on in-sample and out-of-sample prediction performance in Sections REF and REF . We investigate 13 characteristics and 6 common instruments for the dynamics of factor loadings, and use the four-factor model of {{cite:174d2343fcb7053f38c4932fada2b39a2f6f2ff2}} and the five-factor model of {{cite:e987801254947beb57cce30d8f68bd5426ad2288}}. Section  concludes. We list regularity conditions in Appendix , the proofs of our theoretical results in Appendices and , and a description of how to construct groups for the numerical optimization in Appendix .
i
b908368b8c95777897c7d25a58bcaf5c
All these algorithms are implemented in existing computer algebra systems, such as Magma {{cite:046e26b86e16d4e1a47014c4ef6b2db993ee05d5}} (see also {{cite:c31162abac1974e8fddc4028cbf5dbb53df33d71}}).
i
339c75cb04381bd56ba639d4f4158d57
As shown in the table, our method makes a trade-off between speed and accuracy. Although our MSCVNet is slightly less accurate than top-performing methods such as GCNet {{cite:c724e24dbb5016419ce613c276d6487cb5e82016}}, PSMNet {{cite:b45bc90f8bd643da82c7a5a47185ee8213f1ab9d}}, and GANet {{cite:02a7336b1d548c73673b3248904961ba09927bf7}}, it has the advantage of running in real time at 41 ms. Meanwhile, compared with the traditional method (SPS-St {{cite:e02903d14eee271eedfb0656fe923abbe92d5e81}}) and other fast networks, we achieve a significant performance improvement. In particular, compared with Fast DS-CS {{cite:a1b6ce97783583673516e7a4977f3e6d2b708fce}} based on ADCensus and DispNetC {{cite:f8cf85bb596b3648332850c7411e676a5efa9bf7}} based on 1D Correlation, our MSCVNet is clearly better in all evaluation metrics on KITTI2012, and only slightly worse than Fast DS-CS in D1-fg on KITTI2015, demonstrating that integrating ADCensus and 1D Correlation is effective. We also visualize the results in Fig. REF to further demonstrate our method's effectiveness.
r
b80143c8a95efe7b2065a18a1d994280
Hyper-reduction refers to a broad class of methods that aim to reduce the online assembly cost for projection-based ROMs; hyper-reduction techniques may be applied either at the continuous or at the discrete level — continuous-based and discrete-based hyper-reduction, respectively. Continuous-based hyper-reduction methods (e.g., {{cite:9633f0820f71b3764077b8b8e4400ec36a1c5798}}, {{cite:d61e2463bdf5bf158010be6b699f7c274b44a14d}}, {{cite:ff1dff5ed4c60df8514011af6baca8909e222dd9}}) aim to devise an affine approximation of the integrand to ultimately obtain a parametrically-affine (cf. {{cite:e58f82591b5bf07f61fa8250c3c8c06af51aa997}}) approximation of the problem: {{formula:32a09353-45ee-4d58-8859-88279431a78a}}
m
1fdeb7ff25bc50d51ce24aed282da9c1
There are several limitations to our analysis. On the one hand, based on current evidence, we use biological parameters characterizing the SARS-CoV-2 virus that are not yet fully confirmed in practice. For instance, we have explored the unknown susceptibility rate (relapse probability) and found that its effect on our conclusions is not major; however, one may update these values as more comprehensive data surface. On the other hand, the entire contact networks of large regions around the world affected by COVID-19 (e.g., China, Europe, or the USA) are impossible to reconstruct entirely due to their size, dynamics, and lack of individual-level information. Similar to other approaches {{cite:dd144296aa03071f66b69825815dcc13cfb0d3de}}, {{cite:f9c21c354b69abea8e521b5b63063a5c0bb27b56}}, {{cite:78d7fda843dd6d756812617bf8958a73a69525b6}}, we base our study on several synthetic and real-world datasets as reliable proxies of the contact network of individuals. By fitting the SICARS model to multiple complex network topologies, we aim to make the best possible assessments of the SARS-CoV-2 transmission dynamics.
d
d656e57ffd7d4a938d5c242dff38c18f
where {{formula:d9df2827-017c-4d74-9f72-36f349e63c85}} denotes the set of safe policies, and {{formula:52cc56dc-6ccb-4284-a9d1-7875d896a758}} in some sense measures the “sub-optimality” of the policy class {{formula:04706e02-a7fe-4f8f-97e4-838f484df4cf}} . When a fixed sub-optimality gap {{formula:49255ea7-3670-4263-9c7a-d1af0006be5a}} is given, the same difficulty with the unknown {{formula:878aab1f-803d-4e19-b701-ca666b4fbe7b}} also appears in the guarantees provided in previous works {{cite:e1b5e7f56616df100a23f8063025b30e7eeedc72}}, {{cite:b5422ee48c163bd91ef29867bdaac68f0af2d022}}, {{cite:102ed378f97cdf3d4b1359ad7319dffed1e7b16b}}, {{cite:031d511142354b5256690e741258d28a01a28e6c}}, {{cite:0f42c81b2df488cbbaccb41ae73c94bd9624b731}}, {{cite:3c5ac972b15b90f7eb5e4031f41c86245b4e6366}}.
r
a4d36ab50641b1d339bf650d5ded196e
The limits derived in this work are compatible with other limits and sensitivities of future ALPs experiments, as shown in Fig. REF . Together with the SN 1987A {{formula:05566539-8ae2-4614-89f9-ebf17ccf87fe}} burst experiment {{cite:7eddaebb93e500e211f51df467b0025d369b3eee}} and with previous Fermi-LAT {{cite:e34c8d763c300d45522f2609291aefb9fcae63a9}} results, our limits strongly constrain part of the parameter space in which ALPs can modify the opacity of the Universe to {{formula:134efbe0-696d-4a16-8281-9a00999e790f}} rays. Our limits also constrain a part of the unexplored parameter space that the Fermi-LAT work using observations of the central AGN in the Perseus galaxy cluster could not cover before, i.e., the hole around {{formula:4152b4ec-ea96-4134-aca4-3813534df0e2}} and {{formula:af7fade8-6c66-4d8a-b5b8-31ed0a2197af}} neV. They are also within the planned sensitivities of ALPS II {{cite:4020d96ac26afea490d5c3b5835da4a6623fd420}} and IAXO {{cite:f9e7d137531e0047500989026d107b7b08866308}}. None of these limits constrain the region where ALPs could compose the entirety of dark matter content of the Universe {{cite:7b69faf463804cce25cdb711b214ae0ceff1a2d5}} which is below the dashed black line in Fig. REF .
r
3a34a9f5d59c81e364eb50e7cd921c79
The HE gamma-ray light curves with weekly binning obtained from the analysis of Fermi-LAT data are shown in Fig. REF . Flux points are displayed only if the significance of the source is above a TS value of 10 (corresponding to a {{formula:4d902531-29f4-43aa-9c18-e865521ae8a2}} detection); otherwise a 2{{formula:dd2eaf75-99e4-46ac-83d0-9c5da9c0fbed}} upper limit is placed. This analysis yields results compatible with what was reported previously {{cite:0ad9e10a4ccafe54a6f7ce7a35026394882b5739}}, {{cite:af2f561df190532ebb21f393a2bb7aee6e023cde}}, {{cite:39f28120338cfedbf18099f9e599aa5d942d2770}}. The most striking features of these light curves are the bright HE flares starting about 30 after periastron in 2010/11 and 2014, and {{formula:e46741a1-92a6-4f76-9e80-f82d382740c2}} after periastron in 2017. The flares last about a month in all cases, with substructures on timescales from days to minutes {{cite:af2f561df190532ebb21f393a2bb7aee6e023cde}}. {{figure:e5d5ae95-a7fb-48a2-b225-e64762403007}}
r
5aacdd44ccc14aa6c039aa6ce2873720
Projections from IPCC AR4 and AR5 and from regional climate models {{cite:01f253e45be73aa5b33ebe3f99397997476dcbd7}}, {{cite:e25e8126d14bf588711256f8fa726fdd2425e9d7}}, {{cite:5b6214acc42908a2df352f05a00d6493ab667b13}} suggest that the eastern Amazon may become drier in the future, and that this drying could be exacerbated by positive feedbacks with the vegetation. At the broadest temporal and spatial scales, most global circulation models predict that greenhouse gas accumulation and the associated increase in the radiative forcing of the atmosphere will cause a substantial (more than 20%) decline in rainfall in eastern Amazonia by the end of the century, with the strongest decline occurring at the end of the rainy season and in the dry season {{cite:40ce61cf7f541b0f3da3d0f515828d8b4cfa9b22}}, {{cite:d4ba672cc70d5259288b10aa8e67aac9fadce800}}, {{cite:32f830e668928b7ee98acb41d3af3184796b6a35}}. If severe droughts like those of 2005 and 2010 do become more frequent in the future, adaptation measures will be needed to avoid impacts on the population, particularly those living on the riverbanks. The impacts felt during the droughts of 2005 and 2010 show how vulnerable the local population is to climate extremes {{cite:118cd634b0d1319a43ea011eb0348345e06f8a27}}: local farmers suffer from high temperatures and drought conditions, and river levels fall so low that transport along the main channels, which in many cases is the only way for people to move around, becomes impossible, leaving communities isolated. Two record droughts in less than five years have made plain the negative impacts of climate variability and climate change in the region. Therefore, a proper assessment of the intensity, spatial coverage, and climatic impacts of future droughts is of fundamental importance to society {{cite:fe6d73fb83afab9115d414831d953a8156cb8fe8}}, {{cite:aadfe3fab1ad46e55983c7774ed7f19636247e2b}}.
d
f0c5268114f451f1c5dec506ec2cc4ff
In the last section, we restrict our discussion to the case of HYM connections over general complex manifolds. If we assume {{formula:60f4870b-1782-443c-aa38-af1c5775ac75}} is the limit of a sequence of Hermitian-Yang-Mills connections over a compact Hermitian manifold, then by using the argument in {{cite:6454c0dfd32a8e3e666f58a541bc5b076234832a}} for Hermitian-Yang-Mills connections over Kähler manifolds and the extension theorem in {{cite:5e2c98d44af3c0fb09c31bd255ff5b06eca62222}}, we can show that {{formula:c7636242-4a97-4269-8afb-3a58711ec1b9}} are all holomorphic and {{formula:233e805d-a765-42ce-9150-32b2731ca1d2}} is a complex subvariety of codimension at least 2. In particular, we can now take the quotient of {{formula:896d8b3e-bc6c-41a7-8e70-8cf35b97548a}} mod gauge to get {{formula:5587339a-93e2-48de-ae6e-ecc1d931c762}} . There exists a way to give it a topology that coincides with the four dimensional case (see {{cite:593260de6587a4afc010f2b0e3668376ee10671a}}) so that
r
7a10463af581e2ad66cdf083c03aa49b
One limitation of our study is that it considered only planar objects. We expect our approach to extend to 3D objects, as in recent work that has successfully implemented tactile servoing to slide over complex 3D surfaces (3 pose components) and edges (5 pose components) {{cite:078c181f53f76f3dc492a69f4c2b0998db83d1d1}}, {{cite:09ad0ee7699173cde56166ddcfc9fb3c888437ff}}. An extension would be to sliding shape exploration and full 3D object reconstruction, which are open problems under active investigation {{cite:8d41fd74de85b44c942cf2197ea1daad5c7a10e0}}, {{cite:545a4e27ca59f367f2a0f6ad20f16eadcc63a03f}}. This would open interesting avenues for future research, such as: 1) the fusion of visual and tactile sensory modalities for robust object representations and 2) tactile object recognition by exploiting prior visual experience of the object. An additional limitation is the requirement of annotated data, although much less than for the fully supervised counterpart {{cite:739c85dfdc7e8cc46bd610eca1398429b4b4aa4e}}. An unsolved problem for future work is to completely eliminate external supervision. Finally, our training was limited to translational shear, even though servo control introduces rotational shear from the angular motion while sliding over objects. That the methods worked well for both sliding exploration and contact geometry reconstruction without compensating for rotational shear shows that some shear removal carries over from translational to rotational shear. We expect that bringing rotational shear into the training will become important when extending the methods to 3D objects.
d
6b4be5b55faa5d2487180350e24d3119
The blockade radius for an ultra-cold atomic sample is defined as {{formula:e939faa4-e382-4275-a895-67c08dc041ae}} , where {{formula:aedc96dd-1f3a-46ee-ab2e-5067753d4aaa}} is the effective Rabi frequency of the Rydberg excitation and {{formula:7b2224e2-c76f-48ed-936a-ef7ac302ec93}} is the strength of the van der Waals type Rydberg-Rydberg interaction. For a thermal ensemble of atoms, the blockade radius is affected by the Doppler width ({{formula:f3d20369-9f18-4685-be29-c0e9df5c38c9}} ) due to the thermal motion of the atoms. Since the van der Waals interaction scales as the sixth power of the inter-atomic separation, the blockade radius is scaled down as {{formula:482cbdd7-d308-44d8-bf09-774163a3a534}}  {{cite:79a60fcfcf85cbcc5e1fffe496e0bd041c80d898}}. For thermal rubidium atoms at room temperature, the blockade radius is decreased only by a factor in the range of 2 {{formula:10aca070-8c3a-430d-a218-927bcc51e2c7}} 3. If one works with a Rydberg state {{formula:7478379f-860e-4c70-9c5c-4b9fcfe1d064}} , the blockade radius in thermal vapor is of the order of a few microns. Electromagnetically induced transparency involving a Rydberg state (Rydberg EIT) has been demonstrated in a thermal vapor cell {{cite:2a8364825864bf2ca39c1b23b08011f3831ac008}} and in a micron-sized vapor cell {{cite:79a60fcfcf85cbcc5e1fffe496e0bd041c80d898}}. Van der Waals interactions of Rydberg atoms in thermal vapor have been observed recently {{cite:d9de6bb77922129e402888aa896832e5312b7038}}. In addition, four-wave mixing involving a Rydberg state {{cite:9aec2069be978769d7871c32d16856aa75e45f04}} and a large dc Kerr non-linearity of a Rydberg EIT medium have also been demonstrated in thermal vapor {{cite:700fadbafb7d07ca93bf5533a3b7031b8cdb5b52}}.
i
87c7bfa14e6a34e1593cc606d4b14dd0
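The sixth-root scaling discussed above can be made concrete: from V(r) = C6/r^6, the blockade condition ħΩ = C6/r_b^6 gives r_b = (C6/ħΩ)^(1/6), so replacing Ω by the much larger Doppler width shrinks r_b only by the sixth root of their ratio. A sketch with illustrative order-of-magnitude numbers (not the paper's values):

```python
import math

hbar = 1.054571817e-34  # J s

def blockade_radius(C6, omega):
    """r_b from the blockade condition C6 / r_b**6 = hbar * omega."""
    return (C6 / (hbar * omega)) ** (1.0 / 6.0)

# Illustrative numbers only (rough orders of magnitude for a Rb Rydberg state):
C6 = 1e-57                           # J m^6, van der Waals coefficient
omega = 2 * math.pi * 1e6            # rad/s, effective Rabi frequency
delta_doppler = 2 * math.pi * 300e6  # rad/s, room-temperature Doppler width

r_cold = blockade_radius(C6, omega)            # ultra-cold sample
r_thermal = blockade_radius(C6, delta_doppler)  # thermal ensemble
print(r_cold * 1e6, "um ->", r_thermal * 1e6, "um")
print("reduction factor:", r_cold / r_thermal)  # 300**(1/6) ~ 2.6
```

A 300-fold increase in the effective linewidth shrinks the radius only by 300^(1/6) ≈ 2.6, consistent with the factor of 2 to 3 quoted above.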
After considering these possible avenues for improving BENDR, we still do not fully discount the validity of some of the transfer learning paths we appeared to exclude in our introduction. We will reconsider these paths in future work. Particularly, given the success we had in crossing hardware boundaries in this work and in prior work {{cite:bfcd709f6b9408b328358a0075775ac012140ac6}}, it may be possible to construct an aggregate dataset featuring a variety of EEG classification tasks, towards better ImageNet-like pre-training. The construction of a more coherent label set that crosses several BCI paradigms would no doubt be a significant effort (e.g., is a rest period before one task paradigm the same as rest before another? What about wakeful periods in sleep?). Such a label set would also be imbalanced: the labels would follow a long-tailed or Zipfian distribution that would likely require well-thought-out adjustment {{cite:29ff00c401ff6b6ec3e4c0266f5d13346f3e3be1}}, {{cite:148634755287c002bcdecd79852acc1d5e7a7385}}. Furthermore, the value of ImageNet pre-training seems to be localized to very early layers and the internalization of domain-relevant data statistics {{cite:90bf1ca6219fd0b548559cb5ff6fd513c7b7b9ef}}, {{cite:bb897adc8ebaa24bcad0c75887248cef17416ef0}}. Future work could examine which of these may be leveraged with a new aggregate (multiple subjects and tasks) pre-training, or with the common subject-specific fine-tuning. This may provide insight into better weight initialization, or into the integration of explicit early layers similar to {{cite:bb897adc8ebaa24bcad0c75887248cef17416ef0}} (one could also argue that SincNet layers {{cite:40cd29a026d81360e7ee9450756122af50c690d0}} are some such layers that could factor here).
Additionally, as temporally-minded reconstruction losses continue to develop {{cite:1848fd05ebafb16e06df7e08a1fe028073709a3d}}, reconsidering the effectiveness of signal reconstruction as a pre-training objective (and/or regularization) is warranted, whether this is within an MLM-like scheme similar to BENDR, or a seq2seq model {{cite:3902b426903ccc37d39eba943e895f2aa118f122}}.
d
14d68c9387e259c96a91728866b80e29
Our two contributions for GLL follow the same simple idea. We leverage the GLL structure, namely the fact that these losses are effectively “one-dimensional,” to make a fast approximation of the Moreau envelope of the loss {{cite:b44dcd8a226fcc2665e25da0c5d8348fb60fa2b9}}. We can then exploit the smoothness of the envelope to improve algorithmic performance. A similar approach was taken by {{cite:b125e01503d90f21ca2fbcc551e734f9f20fdada}}, but their approach suffered from an increase in the running time by a factor of {{formula:25620a3e-20fc-4955-bff3-53f305e3d688}} due to the high cost of approximating the gradient of the envelope, which involves solving a high dimensional strongly convex optimization problem at each iteration. In the case of {{formula:00f3e90d-3c5d-48ba-aeb5-e4690d1e6c5d}} , we use an existing linear-time algorithm for smooth DP-SCO with optimal excess risk {{cite:df3ecea73f30819c69bb5b0b860ca26695cb5f65}} combined with our smoothing approach, which results in an {{formula:42c39a1b-b23b-4a6e-8975-b636fb8295b7}} -time algorithm. In the case of {{formula:a3288079-702c-441e-8989-d872013c2051}} , we use an existing noisy Frank-Wolfe algorithm that attains optimal empirical risk for smooth losses {{cite:b124b381c64e13fd4b2faeee107eba58da249b21}}, together with generalization bounds for GLLs based on Rademacher complexity {{cite:d1f333afe0aafe4905e5299b15cffa300416abdc}}. This algorithm is not linear time, and hence it is tempting to instead use a variant of one pass stochastic Frank-Wolfe algorithms, as in {{cite:cd19ccaaa6b515b32f0d3a65e1d413097345ccf8}}, {{cite:e9ee51e279a76a6108e1dccd34151562da417d70}}. However, the excess risk of these algorithms has a linear dependence on the smoothness constant, which prevents us from obtaining the optimal risk via smoothing. Hence, it is an interesting future direction to improve the running time in the {{formula:4529ed58-026d-4b2e-98c7-b9007f64ea77}} -setting.
r
4e9af0ffcdb657909841fab531db0bc3
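Because a generalized linear loss acts on the data only through the scalar ⟨w, x⟩, its Moreau envelope reduces to a one-dimensional computation. A minimal sketch for the hinge loss, whose 1-D proximal map has a closed form (an illustration of the smoothing idea only, not the paper's algorithm):

```python
def hinge(t):
    """One-dimensional hinge loss."""
    return max(0.0, 1.0 - t)

def prox_hinge(t, beta):
    """argmin_s hinge(s) + (s - t)**2 / (2*beta), in closed form."""
    if t < 1.0 - beta:
        return t + beta
    if t <= 1.0:
        return 1.0
    return t

def moreau_env_hinge(t, beta):
    """Moreau envelope: a (1/beta)-smooth lower approximation of the hinge."""
    s = prox_hinge(t, beta)
    return hinge(s) + (s - t) ** 2 / (2.0 * beta)

# The envelope never exceeds the loss it smooths:
for t in [-1.0, 0.5, 0.9, 1.5]:
    assert moreau_env_hinge(t, 0.2) <= hinge(t) + 1e-12
```

The smoothed loss can then be minimized with any algorithm for smooth objectives, which is the point of the construction: the kink of the hinge at t = 1 is replaced by a quadratic of curvature 1/β.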
In this appendix we introduce several classes of heavy-tailed distributions that are considered for the job size in this paper, see also {{cite:598d2bbb4af72cf3f29e21dc9dff5881fa1ffac4}}, {{cite:e4985621594e55c466adc6bee13b247f1ac7af8c}}. Let the complementary cumulative distribution function be defined as {{formula:8a586e72-701f-46f8-9966-9c96c2b64715}} .
r
0ccc227853d277639f0c92e47471d5fc
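As a concrete instance of such a complementary CDF, the Pareto distribution is the textbook heavy-tailed example (an illustration; the distribution classes discussed in the text are more general):

```python
def pareto_ccdf(x, alpha, x_min=1.0):
    """Complementary CDF P(X > x) = (x_min / x)**alpha for x >= x_min:
    a polynomially (rather than exponentially) decaying tail."""
    return 1.0 if x < x_min else (x_min / x) ** alpha

print(pareto_ccdf(10.0, alpha=2.0))  # P(X > 10) = 0.01 for alpha = 2
```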
Here {{formula:4d64a58f-57d5-421b-af2c-63c228a5e62e}} is the density of mass, {{formula:8306ef0f-9a81-4b83-a400-76153f0e9260}} is the current, and {{formula:5fea1a7c-f4f8-48d3-9563-5c7e1dab0295}} is the constant diffusivity coefficient. Equations (REF ) and (REF ) can be obtained as the hydrodynamical limit of diffusive interacting particle systems of “gradient type” {{cite:d612ecb24657fa84dca2364d00a653998006a253}}, such as the simple symmetric exclusion process or the Kipnis-Marchioro-Presutti model {{cite:b711cad507b3932ec930b3708ced937ccb5c86a4}}. Fick's law (REF ) tells us that the total flow is opposite to the density gradient.
r
66663e5951e423f085bd62ff35f5fbb1
SA-RNN beamformer vs. Conv-TasNet: The MIMO Conv-TasNet (i) with a fixed STFT encoder {{cite:fb02e8de344ad43840bba59f4566d6a5ca403ef7}} is a variant of the original Conv-TasNet {{cite:a39fb7dee9f9f59ed69e9e6dca78bbe5a15da53b}}. It is the cRF estimator without the beamforming module, as shown in Fig. REF . The input to the Conv-TasNet is the same multi-channel information as for the other systems. The proposed SA-RNN beamformer (vii) achieves higher PESQ (3.78 vs. 3.00), higher Si-SNR (16.46 dB vs. 11.25 dB) and lower WER (10.13% vs. 24.21%) compared to the MIMO Conv-TasNet with STFT {{cite:fb02e8de344ad43840bba59f4566d6a5ca403ef7}} (i) baseline. The MIMO Conv-TasNet with STFT (i) performs worst on the WER metric (24.21%) among all systems due to a large amount of non-linear distortion; the non-linear distortion in the separated speech is not ASR-friendly. This non-linear distortion problem always exists in purely “black box” neural network based methods {{cite:8dbcdbafa89a42981f4d4b68d84e96812d358788}}, {{cite:f9f8a3afb8b20288adfd512cebd19bf607f12ee8}}. The difference between the Conv-TasNet (i) and the SA-RNN beamformer (vii) is also visible in the sample spectrograms in Fig. REF .
r
d59265f8eef9414a98abcad4b974171c
{{cite:21d359380f029646bd43f08ede09b03e966d7dc3}} considered a special case of DG in which the target domain is a convex combination of the training domains. However, the number of target domains that this technique can cover is limited if the convex region of the training domains is small to begin with. Extending this idea, we include target distributions that can be represented as a convex combination of the training and the augmented training domains (Figure REF ) {{figure:ab03703e-f872-436f-ac1a-78d551ee3750}}
m
e3019151cc0506e5f6cb96b343e8d725
One of our goals is to explore the nature of nonlocality in open systems, in hopes that this can shed light on whether nonlocality can also arise near horizons for black holes. Nonlocality in this context traditionally means the extent to which the effective action (or Hamiltonian) is not simply the integral of a Hamiltonian density that depends only on fields and their derivatives at a single point (a property normally held by Wilsonian actions as a consequence of cluster decomposition and microcausality {{cite:515cfa4b99f7f9df095500fd7fc87db954cfdca8}}).
m
20af1a4dd78ccf4a4c3eb15e833933ea
Hyper-parameters settings. For all simulations, the learning rate in Eqn. (REF ) is set as {{formula:f8cda1de-81be-4795-bf5c-a15efc74ccfc}} . The implementation of the quantum encoder employs the hardware-efficient ansatz {{cite:43292d0d2db30313181703ee74ff5bab3dc98111}}, {{cite:fd4750272954f243f97f1047e6049d2aaf47d974}}, {{cite:a79ebb9f0b3131e3ab388b2fef1b60f09c0f6f6b}}, i.e., {{formula:fd8d139a-423c-4f24-bddc-51d6567e735c}}
r
d51a03c9f25aaf59f772414667c4449e
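"Hardware-efficient ansatz" typically means alternating layers of parameterized single-qubit rotations and a fixed entangling gate; the exact ansatz used here is given in the cited references. A two-qubit toy sketch of that RY-plus-CNOT pattern with illustrative (untrained) angles, not the paper's circuit:

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation."""
    c, s = np.cos(theta / 2.0), np.sin(theta / 2.0)
    return np.array([[c, -s], [s, c]])

# CNOT with qubit 0 as control and qubit 1 as target.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def layer(state, theta0, theta1):
    """One hardware-efficient layer: RY on each qubit, then an entangling CNOT."""
    return CNOT @ (np.kron(ry(theta0), ry(theta1)) @ state)

state = np.zeros(4)
state[0] = 1.0                            # start in |00>
for t0, t1 in [(0.3, 1.2), (0.7, 0.1)]:   # illustrative angles only
    state = layer(state, t0, t1)
print(np.linalg.norm(state))  # 1.0: unitary evolution preserves the norm
```

In training, the rotation angles play the role of the variational parameters updated by the learning rate mentioned above.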
We construct an ensemble of EoSs based on the piecewise-linear speed-of-sound parametrization introduced in {{cite:aa8bb512c022f956d17942b7248b644afd21523f}}. The model has already been used in several other works {{cite:d3d465affeceaadde3cefa6cbe63ca083010cc95}}, {{cite:515c82b41dd3833c4623a11b8dece9077efe51eb}}, {{cite:1ce9127ad409acfa716a2b09ce4e29bc1d3f7ac5}}, {{cite:cc696f9b33de1cf40b99161e793c5fc92bcab4ea}}, {{cite:3ac4c45fcce7369f1a4e61d9b905400528e9a12d}}. For details, see {{cite:3483fb2357f90f393b90b360571fb5b476922a28}}. The EoSs are required to be consistent with the pQCD results down to {{formula:31c89553-2bd1-4e40-8662-cfc93434ba29}}  {{cite:52a6c589b8d550d7d48797b35b17ae19040bc0c3}}. In addition, we impose the observational astrophysical constraints: the lower bound of the maximum-mass constraint, {{formula:2e5b93c6-7c82-4506-9144-2151579d04e5}} , from the measurement of J0740+6620 {{cite:eb3f9d3030315d754f94f3bd6981fca925dd97b9}}, and the constraint on the tidal deformability of a {{formula:dd928312-1ac2-4160-8fc9-4cb26ea9f91a}} NS, {{formula:0296879a-3e4e-44f8-92f5-3506d570ea9c}}  {{cite:e9df4f6149b06005af3c2443b92fef2dd6c89984}}, from the GW170817 event measured by the LIGO/Virgo Collaboration (LVC). In total, we analyzed a sample of {{formula:cac64e49-c374-48b7-ab2a-82ffe64789a7}} EoSs that fulfill the imposed observational and pQCD constraints.
m
9a7260f1e2116cb6b18046deb5c7cd01
Artificial neural networks often give good results, but it is difficult to understand what they learned, or on which basis they generate their output. In the following, we will try to dissect the proposed model, understand its workings, and see what it pays attention to. To this end, we compute saliency maps using guided back-propagation {{cite:cc56a0d2e15b0313589e063a20711b962caa204a}}, adapting freely available code (https://github.com/Lasagne/Recipes/) for the Lasagne library {{cite:0fd11891883c212db7fc1c20f659a6b9f2e55ac5}}. Leaving out the technical details, a saliency map can be interpreted as an attention map of the same size as the input. The higher the absolute saliency at a specific input dimension, the stronger its influence on the output, where positive values indicate a direct relationship and negative values an indirect one.
r
ffe14adbca72c44bca6572fcf1488c76
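The idea can be illustrated on a toy network: guided back-propagation is ordinary back-propagation, except that at each ReLU the backward signal is additionally zeroed where it is negative, keeping only positively contributing paths. A numpy sketch on a hypothetical two-layer model (not the paper's Lasagne implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(16, 8))   # toy hidden layer
W2 = rng.normal(size=(1, 16))   # toy output layer

def saliency(x, guided=True):
    # Forward pass: y = W2 @ relu(W1 @ x), a scalar "score".
    z1 = W1 @ x
    a1 = np.maximum(z1, 0.0)
    # Backward pass to the input.
    g = W2.T @ np.ones(1)       # dy/da1
    g = g * (z1 > 0)            # ordinary ReLU backward rule
    if guided:
        g = g * (g > 0)         # guided backprop: also zero negative gradients
    return W1.T @ g             # dy/dx: the saliency map over input dimensions

x = rng.normal(size=8)
s = saliency(x)
# Higher |s[i]| means input dimension i influences the output more strongly.
```

With `guided=False` this is exactly the input gradient; the extra masking is what makes guided back-propagation maps sharper in practice.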
In order to estimate a standard error of the mean for the number of remembered items across participants, for each list length, we performed a bootstrap procedure ({{cite:12222992f843c5ff6e43524856844f6177a1b249}}). We generated multiple bootstrap samples by randomly sampling a list of N participants with replacement N times. Each bootstrap sample differs from the original list in that some participants are included several times while others are missing. For each bootstrap sample {{formula:7cdccb6f-0620-48a2-9588-49a534d0c3e6}} out of total number {{formula:76e0b7b3-4135-4d51-bdd9-92dd2ff891ed}} , with {{formula:9a9503f9-a810-4d90-a51d-61981455f51e}} , we compute the estimate for the average number of remembered items, {{formula:8457d24b-cd8f-4a8b-837d-b9b39aa10dbc}} , according to Eq. (REF ). The standard error of {{formula:7dde07d7-c2a7-4df6-9422-9d47f9d1df0d}} is then calculated as a sample standard deviation of {{formula:74bff1ac-121b-4e52-bb21-371e9a445ebb}} values of {{formula:0df71691-9f64-4a6d-93b0-da3f5ea26f5e}} : {{formula:1c5e7b7a-6f00-4332-bda9-5b013dfb74eb}}
r
3cda99896c246166fa09f58efe9ec4f7
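The procedure described above is straightforward to implement; a sketch with synthetic per-participant scores, since the study's actual data are not shown here:

```python
import numpy as np

def bootstrap_sem(scores, n_boot=1000, seed=0):
    """SE of the mean: resample the N participants with replacement n_boot
    times, average each bootstrap sample, and take the sample standard
    deviation of those n_boot averages."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores, dtype=float)
    n = len(scores)
    means = np.array([scores[rng.integers(0, n, size=n)].mean()
                      for _ in range(n_boot)])
    return means.std(ddof=1)

# Synthetic scores: 40 participants remembering about 7 items each
# (illustrative data for one list length).
rng = np.random.default_rng(1)
scores = rng.poisson(7, size=40)
print(bootstrap_sem(scores))  # close to the analytic SEM, std/sqrt(N)
```

For a simple mean the bootstrap estimate agrees with the analytic standard error; its value is that the same recipe works for estimators with no closed-form error, such as the model-based estimate in Eq. (REF ).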
In this work we focused on NAS for medical image segmentation. For reasons of computational cost, we used a 2D segmentation paradigm and quite compact architectures. However, it was shown {{cite:c86f5831a12b01bc779299174aa70c8e505585d4}} that a 3D segmentation approach (i.e., training on 3D volumetric patches instead of 2D patches) might be beneficial for performance. Moreover, increasing the resolution of the patches, removing the image downsampling in the stem convolution, and training for more epochs could potentially substantially increase the performance of the found networks. The computational cost of our experiments and the available computing capacity did not allow us to make such modifications, but we argue that the conducted experiments are well suited for this study.
d
a60163dee71b7a27c3cafa77d11a0b10
This was confirmed for {{formula:82bc2968-fc26-4df5-9f0f-fe5a7e7b75ab}} by Singer himself {{cite:5903bda55c6391d79fd614648c33e85e0f5083bc}} and by Boardman {{cite:52babac0648788f7544a39f218bc76d81a1618e5}}. Our recent work {{cite:eb27426a734873ad328d92c31a5f4384c6191262}} shows that the conjecture is also true for {{formula:ae7c32fd-3bf7-4d49-ae34-6b626bdef59d}} . So far it remains open in general, and very little is known when {{formula:8b35d4d7-768d-406c-956c-8536ef09d53c}} . Now, based upon an admissible monomial basis for {{formula:0f871963-683d-480f-b27c-110bcd8498b6}} in degree {{formula:796eb8d2-93fc-4192-adb6-9251e9d81233}} (see Theorem REF ), we verify Conjecture REF for {{formula:45dd5728-0b21-454d-b4a0-5f0c3218c804}} and the respective degree. The following theorem is our second main result.
r
b8191114f76339ba915dccb11fcd9b30
Quantum computing and machine learning represent two of the most significant fields of computational science to have emerged over the last half century. Over the last decade, attention has in particular turned to the ethical implications of machine learning technology, resulting in the emergence of a burgeoning multidisciplinary field dedicated to the ethics of emergent technologies. For example, the ethics of automation and artificial intelligence covers diverse fields, including fair machine learning {{cite:dede9d68ca0f37d1bf831a010fa421f87cf19199}}, {{cite:d4146990f7a5a9c1a3f9ef5dacee8c96834b7ef3}}, distributive justice, representational justice and jurisprudence. While ethical analyses are advanced or advancing rapidly in many such disciplines, one glaring omission is a dedicated consideration of the ethics of quantum computing and quantum technologies generally.
i
163839b129952a71af87a19d0794cdf5
Case 3. Next, in Figures REF and REF , we present spectral properties for Case 3 in Table REF , where a spike detaches from the bulk after large-step-size training. Notice that Figures REF (b) and REF (a) imply that the bulk spectra of the weights and the CK remain unchanged over training despite the emergence of spikes. This is not true for the NTK, as Figures REF (b) and (c) show. The Frobenius norm of the NTK changes significantly during training and is no longer {{formula:15f04b93-c2a8-45e1-8624-bac600a9abd6}} ; the spectrum of the first component of the NTK shrinks after training (Figure REF (b)), which indicates that NNs converge to flatter minima. This resembles the catapult phase in {{cite:dbad567f8702ccf945e6cb15883202eaffacb6c8}} for extremely large learning rates. Figure REF (a) shows the convergence rate of SGD in Case 3. Empirically, we observe that the training loss does not decrease monotonically when using a larger learning rate than in Case 3, which may be analogous to the catapult phase in {{cite:dbad567f8702ccf945e6cb15883202eaffacb6c8}}. {{figure:dfda77a3-b2d5-4dd4-b055-e70385825001}}{{figure:e4e47044-0b08-41c2-ab3d-fb851568be91}}
r
26d4d37a74c6cf69d94aaa504c18fcc8
where {{formula:e09f0a6c-691b-45be-b126-8dde5813687b}} and {{formula:b4264c16-fbdb-4ccd-a10f-c3f8d7593f85}} are BCPTP channels, and {{formula:fdf20d9f-e17c-44f4-86d3-18addd66956e}} and {{formula:5b47eaef-3a38-4dca-a044-6b1946b9bd83}} are probability distributions. Eq. (REF ) implies a universal two-depth circuit for building any state from a fully product state, as shown in Fig. REF . The present model is stronger than the standard quantum circuit model with unfixed depth {{cite:728dd1d259b45954847b915da1610e07c2ea180e}}, {{cite:6349aaf66bde782a0e23deaae5935618b5a0bd47}}. The present biseparable quantum channels may activate entanglement from biseparable states. This kind of entanglement swapping is at the core of quantum networks {{cite:650c06c702dccf008ef2b60de801ec2af00a0e03}}, {{cite:9884b9868cb188daef74453dc1d5403d27c43641}}.
r
2cbcdadebd6879d5145e13f0052bdae2
Referring Expression (RE) is a widely studied cross-modal task at the intersection of computer vision and natural language processing. In a RE task, the agent needs to localize a specific target object in an image in response to a given natural language referring expression. Most current studies on referring expressions focus on passive image datasets (e.g., RefCOCO, RefCOCO+ {{cite:040de333045b117faf8ff27b142c9634df0dc05e}}, RefCOCOg {{cite:847efcd990866ad6bbb4ca8694839ec2b565f7d9}}), whose samples do not change with the agent's decisions. Recently, referring expression tasks in embodied scenarios have emerged. In an Embodied Referring Expression (ERE) task, the agent is required to navigate to the position mentioned in the given expression in a 3D environment and complete the referring expression comprehension (REC) task on the final scene. However, in most of the above tasks, the process of navigating to the target object scene merely consists of spatial movements, without interaction with the surrounding environment, such as opening closed objects or moving occlusions.
i
ef08cc64f96dc28b0089208627f9421d
For our numerical calculations, we discretize the Hamiltonian to a tight-binding model on a square lattice of spacing {{formula:05368fb8-eb07-485b-8c41-cd44afc606fa}}  nm. Simulations are performed with the following parameters: {{formula:af56039c-a966-44e9-9a42-cdb0c58e7fba}} , {{formula:59dd77b3-1521-4532-8298-6aceafebfd4b}} eV, {{formula:8865d1a1-e0d5-4607-b418-9882e7173273}}  meV, {{formula:f6e5de86-2819-4ceb-9b7a-ef8a952b82ba}} m, {{formula:05724ca6-3074-4c6d-a349-f92e4b30c95e}}  nm, {{formula:01797e89-c0ad-46ce-b4de-985807a55950}}  nm, {{formula:88c88ab4-de36-4ad3-8a42-c3b9736f15e1}}  nm, {{formula:82a095bc-4f9c-4280-aa81-9283eb57fbac}}  meV, {{formula:8a26cdb1-4197-4304-b613-9de0658c7b8c}}  meV. The g-factors are taken to be {{formula:89f232f8-9f55-45dd-95c0-e681bd9f89fe}} and {{formula:7be7a764-6fd1-45ea-99d3-a7e96688d97e}} . We use the leads spectroscopy measurements [Fig. SREF ] to match {{formula:45ff64be-24f0-4740-927c-4cb0da342862}} (the effective cross section for the field-induced superconducting phase gradient), {{formula:9b9ed20d-784c-4faf-ae63-b5e6b137f3e5}} , and the difference in spin-orbit coupling between the two layers. We obtained {{formula:6d02d29d-261f-4482-ac61-4b68b87d2ce4}} , {{formula:f1383db7-cfa2-4813-aa81-d60c6c36292b}}  meV, and the spin-orbit coupling constants {{formula:97bb02e5-25dd-42ec-9ca7-e71341a1d042}}  meV nm, {{formula:1931c45d-f3c2-46f3-b325-4a9b7364dfdf}} . Pfaffians were computed using the pfpack software package {{cite:30f081138677bf2b423895a6e8888b80894c0cae}}. Some of the preliminary simulations were performed using the Kwant software package {{cite:582339d8519212e808109514634eabe0850ae8db}}.
m
bce9a348b9626ed58f3b7370eb6b90c6
In the light of our results, there are several directions to pursue. Although the extra part in the single copy equation (REF ) was shown to vanish for all the examples in the literature, a general proof or, at least, the conditions under which it holds are still lacking. Resolving this might lead to a better understanding of the classical double copy. The study of wave-type solutions with a curved background, as done in {{cite:21edeacff241113caed8b97e0a61b8b335add983}} for constant curvature backgrounds, might also be interesting. In addition to the simplifying assumptions about the background metric, the assumption of minimal matter coupling can also be relaxed for certain types of theories. We will return to this elsewhere.
d
7ec80c14a566145b14124015d88104ad
Figure REF (right) shows the temperature {{formula:03824a63-8917-49f3-963b-79f24c54445f}} and density {{formula:406f62fc-a01b-427b-8791-efb5c1f7f299}} as a function of radius {{formula:e9c1b1ad-055d-44cf-b416-6884b32d3591}} predicted by our maihem simulation for an adiabatic example. It has four different regions described in {{cite:265cf3b2aa99f2d0ea57cfdf480a5646ec9eb034}}: (a) a freely expanding wind, (b) a hot bubble, (c) a narrow dense shell, and (d) the ambient medium. Ref. {{cite:0949c4ff1656a34a3236205d939f9ed37873323d}} classified the temperature profiles of freely expanding winds into the adiabatic bubble (AB), catastrophic cooling bubble (CB), and catastrophic cooling (CC) wind modes. The AB mode is the classic bubble model described by {{cite:265cf3b2aa99f2d0ea57cfdf480a5646ec9eb034}} and shown in Figure REF (right). In the CB mode, the temperature profile in region (a) shows radiative cooling, but a hot bubble is still present. In the CC mode, the free wind is also radiatively cooled, but no hot bubble forms. In Figure REF , we plot the temperature profiles for three example models in the AB, CB, and CC modes, with the expected adiabatic temperature profiles shown as dotted red lines. {{figure:2fdf596a-4e54-462a-a854-94948f33e905}}
r
5f692517189326c52bee640590316e9f
Here {{formula:b9f33a14-2af8-48e7-b0e5-78090ef48c64}} depends on {{formula:83c8f577-3267-49b8-b3c9-eeb4ad1e805e}} and the historical search direction, and {{formula:a882bee0-f838-4574-bed4-aa230a36e702}} is a diagonal matrix. The diagonal form of {{formula:495d10db-521e-4d23-8cd4-75ea7fbd3533}} allows for different effective learning rates for different coordinates. The use of the inverse {{formula:48e7e620-83c9-4bef-82aa-083b3347a310}} here is consistent with the form of the natural gradient method {{cite:affe2638c59d8c43a21acd7237274a1a1e902426}}, {{formula:fb91506e-41f0-45cc-b5f1-82b1268d15e0}} , in which {{formula:a66cd544-612c-4f86-96cf-ef131bd6f9de}} is typically a positive definite matrix playing the role of a metric on some Riemannian manifold. Table REF compares different choices of {{formula:411d1d4f-e77d-401c-835c-12439fae07d4}} and/or {{formula:263391ff-1ae9-4516-8a36-bd7090e297bb}} . {{table:a52f91bd-2c29-4461-a3ef-9b8762aea30f}}
m
5483fa96aff5369c7dd579de466c94f5
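As a minimal illustration of the kind of update described above (a generic sketch, not the paper's actual method), a diagonally preconditioned gradient step gives each coordinate its own effective learning rate:

```python
def preconditioned_step(params, grads, diag, lr):
    """One update x <- x - lr * D^{-1} g with a diagonal 'metric' D:
    coordinate i moves with effective learning rate lr / d_i."""
    return [x - lr * g / d for x, g, d in zip(params, grads, diag)]

# identical gradients, but the second coordinate moves 4x slower
print(preconditioned_step([1.0, 1.0], [2.0, 2.0], [1.0, 4.0], 0.5))  # [0.0, 0.75]
```

A full-matrix positive definite metric, as in the natural gradient method, generalizes this by coupling the coordinates through a matrix solve instead of an elementwise division.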
It also comes with a collection of pre-trained models whose performance has already been reported throughout the paper in Table REF for voice activity detection, Table REF for speaker change detection, Table REF for overlapped speech detection, and Table REF for speaker embedding. While speaker embeddings were trained and tested on VoxCeleb {{cite:50a8d50db9361aec89498e7512690f494690d997}}, all other models (including the full diarization pipeline) were trained, tuned, and tested on three different datasets, covering a wide range of domains: meetings for AMI {{cite:95ede0f7d51ee2ff2593f21102b3b701aaa0cd79}}, broadcast news for ETAPE {{cite:ee944bd6b1160b4fc93c3919970fbeb2f43f0ee8}}, and up to 11 different domains for DIHARD {{cite:f1e61e283a6297de80db6d3b48d75a81a04da161}}.
r
ba479dad39f370da62b594ddcf8d5cce
Although there exists an extensive literature on these kinds of image-to-image translations {{cite:a33725aa1341cec1aa99afa021f1dfd52178657f}}, {{cite:17aa2f70f1b87ba5564506787fbec09c20d114b2}}, {{cite:1dee070415a23149be59cf53e8ca1f5b758ae2e6}}, recent works also focus on the multi-modal domain translation such as synthesizing images from raw 3D point sets {{cite:4af7feaf792b68effec4043a74f3fdda21d07d7c}}, {{cite:2beb4039019de80b8ae2a21fae3492f178847a75}}, {{cite:e0da037ac9fa36c8c63bd6731aabf166ceadc52c}}. The latter remains, unlike the former, relatively underexplored since point clouds, e.g. LiDAR scans, are sparse, unstructured, and nonuniformly sampled, which makes the mapping to the structured image space non-trivial.
i
f524e91e3c8fa0b8beef444e1141f5ca
Table REF shows the recognition rates for multiview DAIN, which outperforms three other multi-view classification methods: FV+CNN {{cite:d64b949980559c4df50f8c86a785e84a342a818b}}, FV-N+CNN+N3D {{cite:ea6ab6f71fcfbe7b9193aa931cc7293c75857e0d}}, and MVCNN {{cite:1c3d9039226d7a3ada044f1685f5be8bab9b2ef4}}. The table shows recognition rates for a single split of the GTOS database with images resized to 240 {{formula:192ea4e1-3cef-4107-9d28-46c3b0c277af}} 240. All experiments are based on the same pre-trained VGG-M model. We use the same fine-tuning and training procedure as in the MVCNN {{cite:1c3d9039226d7a3ada044f1685f5be8bab9b2ef4}} experiment. For FV-N+CNN+N3D applied to GTOS, 10 samples (out of 606) failed to yield geometry information via the method provided in {{cite:ea6ab6f71fcfbe7b9193aa931cc7293c75857e0d}}, and we removed these samples from the experiment. The patch size in {{cite:ea6ab6f71fcfbe7b9193aa931cc7293c75857e0d}} is 100 {{formula:9656f49e-e592-4e17-9a1d-0d9081dbf332}} 100, but the accuracy for this patch size on GTOS was only 43%, so we use 240 {{formula:17044db6-0868-465d-b606-b68454a4d7c7}} 240. We implement FV-N+CNN+N3D with a linear mapping instead of the homogeneous kernel map {{cite:cbb2304fa8a650c93801007c60ec2b6e4b361955}} for SVM training to save memory with this larger patch size.
r
0135318dbfc879631c17bcbdae9264f1
Due to the relevance of the topic for applications, it would be important to understand stability properties of such inequalities. Isoperimetric inequalities in quantitative form have a long history, see {{cite:d16d16de7c8da0b8261bc2cd080f7b576ceac81a}}, {{cite:7744c174386c0d512cdf9dbfcb535b45972fdc56}}, {{cite:129b9555c508c0d79a2098df9ddb7cf151e1f5c3}}, and {{cite:fc91d4fbe3d8c5b4403f892fcb5fa54e19346b1f}} for a complete overview on the subject.
i
c8bc73cbba3a4b53511953a852f7033f
It has been well known since the works of I. Newton {{cite:1c2383ce954f8762a3f531ef6284d9ccc215064e}} and L. Kantorovich {{cite:56f0f7c5f82403a3c1a7c00c2e2f69fedb20c1eb}} that the second-order derivative of the objective function can be used in numerical algorithms for solving optimization problems and nonlinear equations, and that such algorithms have better convergence guarantees. Higher-order derivatives can be efficiently used for solving nonlinear equations, as was shown by P. Chebyshev {{cite:25cf6746738a166df4458df29025e3eb71a8af98}}, {{cite:a587f1ec873834df9902b6ac86b56d27f23e7d66}}. For optimization problems, the basic idea, known at least since the 1970s {{cite:fc8edd34085208cd494fd76de529c34bc3ba0023}} and developed in later works {{cite:375b1d3082d91bd9747e04e8aa3319f7de959558}}, {{cite:d4d667e9312e84b5c75492bc1a4fa9259e97eef7}}, {{cite:d196300aaddca5c807044bb2c0573e3a4e7b3645}}, is to approximate, at each iteration of an algorithm, the objective by its Taylor polynomial at the current iterate, optionally add a regularization, and minimize this Taylor approximation to obtain the next iterate. In this way, the first Taylor polynomial leads to first-order methods that are very well understood, see, e.g., {{cite:7ccb188a6319381a6a22ab678d05b6a45b8df1c8}}, {{cite:c04a8a229a3e878d06996c3c74add000c1c80a4d}}, with optimal methods existing since the 1980s {{cite:7ccb188a6319381a6a22ab678d05b6a45b8df1c8}}, {{cite:19632a2df80bf1abe125676daf5fcecc711fbe96}}. If the second Taylor polynomial is used, we are in the world of second-order methods, whose most famous representative is Newton's method, which minimizes at each iteration the second-order quadratic approximation of the objective function.
If the second derivative is Lipschitz continuous, the objective is strongly convex, and the starting point is sufficiently close to the solution, this algorithm has very fast quadratic convergence and requires {{formula:3f1763bf-23ed-43a8-a69b-4ce74c0b81e9}} iterations to reach an {{formula:cab71dea-d9ab-422d-a9ca-20d39897bbee}} -solution in terms of the objective value {{cite:56f0f7c5f82403a3c1a7c00c2e2f69fedb20c1eb}}. This bound is optimal {{cite:7ccb188a6319381a6a22ab678d05b6a45b8df1c8}} even for univariate optimization problems, even when derivatives of any order may be used in the algorithm. Modifications of the basic algorithm, such as the Damped Newton's method or the Levenberg–Marquardt algorithm, achieve global convergence but have slower, i.e., linear, convergence {{cite:673bb5ee7327f8de56d147bca03d3ba73f7056e1}}, {{cite:a4ec6beecdddab89012caaf8ee1bb32705e2a5d8}}. Second-order methods also played a central role in the development by Yu. Nesterov and A. Nemirovskii of interior-point methods, which have a global linear convergence rate and allow proving polynomial solvability of a large class of convex optimization problems {{cite:3cafb5edf2f49bc852318be89b4f474504601c6e}}. This theory was based on the analysis of the Damped Newton's method for the class of self-concordant functions, which, in particular, includes functions without Lipschitz derivatives.
i
a4cc8c861b59eae1048405fcb43af650
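A one-dimensional sketch of the basic Newton iteration described above, where each step minimizes the local second-order Taylor model (the test function is illustrative):

```python
import math

def newton_minimize(grad, hess, x0, steps=20):
    """Plain Newton's method in 1-D: minimize the local second-order
    Taylor approximation at each iterate, i.e. x <- x - f'(x) / f''(x)."""
    x = x0
    for _ in range(steps):
        x = x - grad(x) / hess(x)
    return x

# f(x) = exp(x) - x is strongly convex with minimizer x* = 0;
# from x0 = 1 the iterates converge quadratically
x_star = newton_minimize(lambda x: math.exp(x) - 1, lambda x: math.exp(x), 1.0)
print(abs(x_star) < 1e-8)  # True
```

Near the solution the error roughly squares at each step, which is the quadratic convergence regime discussed above; far from it, damping or regularization is needed for global convergence.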
To the best of our knowledge, this is the first instance of non-supersymmetric but perturbatively stable AdS{{formula:dbb942ec-3889-46ff-b6af-0eb5b53e155b}} vacua continuously connected to supersymmetric solutions. This makes them particularly interesting objects of study in the context of the AdS swampland conjecture {{cite:365edf163ede93572f1870a11cd2f738d1c970ac}}, which states the instability of all non-supersymmetric AdS vacua within string theory. Any possible decay of our vacua must therefore necessarily proceed through a non-perturbative channel, such as brane-jet instabilities {{cite:eba646212590ab9b9384653e87e60a6098a0c4f2}}, {{cite:228dd12755f1bba03091ae7a8dc3192b651fd45e}} or bubble nucleation (see e.g. {{cite:78d17844a7e6b95c5fd636d52b5b2d53861675c0}}, {{cite:92ad3aff43c8ba4589a71c70802d4de3c907bbcb}}, {{cite:6d02e9fbded36e77d82a41fb77ce44fc9e0bef14}}, {{cite:6cfcb3b7fd708fecfc93a8ccd3a735fa79e53473}}). These possible decay mechanisms will have to contend with the presence of the {{formula:44214c0e-1732-44e2-9c53-61fd1d55ea8b}} vacua continuously connected to the non-supersymmetric family (REF ).
d
1fd0b4a429028605fd5179802fbb6464
We proposed a generalized CNN+LSTM model that incorporates an environment-specific LSTM cell, potentially enabling learning across different environments. Compared to a baseline that uses a single-cell LSTM, this seemingly simple yet more general architectural change proved more accurate and robust, while also potentially enabling extension of our approach to a multi-environment architecture for training and deployment on different environments with a single model, as in related reinforcement-learning-based research for navigation {{cite:199d0abc2c36f1fa184292c615097f1df71413c8}}.
d
f89c74ab0070dcfb8671007ba3976856
Apoptosis is an attractive model system to study fate decisions, with a neatly defined outcome and a solid background understanding of its regulatory network {{cite:24262078443119b0cf154df2d8afafb303b7ed60}}, {{cite:afff2ee2ff3de0147f366f9283c4db8a6fcdae95}}. However, some key questions remain open {{cite:3b2803b2ab001b7e112343b8b389912bcf11f15e}}, and it has been suggested that the field is poised to integrate theory and experiment to bring insight into the mechanisms that regulate and trigger apoptosis {{cite:41d50cab148de1aae5645e29a14ac3622e64dd26}}. The effective description we propose here has the advantage to provide analytic insights into these mechanisms, and makes some distinct predictions. One of the outstanding questions in the field is how commitment to cell death is regulated and to what extent a cell may refrain from death once engaged in apoptotic events {{cite:3b2803b2ab001b7e112343b8b389912bcf11f15e}}. Here we showed that in the framework of the theory there is a critical feedback strength necessary for a cell to irreversibly adopt a death fate. This critical feedback strength may be regulated both by the relative concentration of inhibitory proteins and by total caspase-3 concentration. In the theory, feedback strength is an effective quantity resulting from intermediate network components, such as caspase-6. Thus, interfering with caspase-6 function could be one way to bring the cell below the critical feedback strength required for irreversibility. This could be tested using transient stimulation and observing the nature of cell response. For transient stimuli, we showed that there is a critical stimulus intensity to trigger apoptosis. The model predicts the shape of the critical curve between the duration and intensity of transient stimuli that trigger apoptosis. For pulsed stimulation, the model predicts a characteristic relation between the number of pulses and the time interval between them to trigger apoptosis. 
These predictions may be tested by imaging single cells during apoptosis {{cite:987cacf7ea2b736a53c2342c49d38ffd98efb8ac}}, {{cite:9a71f433410cace86b22a944eddef4d3f37bcb68}}, {{cite:efa94d5e2f2c2ef0cb4b3cf0840a0522c533a651}}. We expect that our model may be a useful tool for interrogating the dynamics of apoptosis and its underpinnings in constant and variable experimental conditions.
d
8fc5adc5a3a7db9af0ae173c2a5e8e83
We next focus on the problem of Robust Principal Component Analysis (PCA) in Section . Though this problem is not of the form (REF ), we will see that flat solutions (appropriately defined) exactly recover the ground truth under reasonable assumptions. Specifically, following {{cite:e694f8449072550bcd7d64b4becd8f25198124ab}}, {{cite:b90f7c91a263ea06bc596f7a3023c5889738ec3d}}, the robust PCA problem asks to find a low-rank matrix {{formula:90a6b70c-5280-4b3c-9ff5-dbdb236d8fcb}} that has been corrupted by sparse noise {{formula:85b449bd-6b04-4686-b156-b3a82b90a0d7}} . Thus, we observe a matrix {{formula:0e7e2478-7564-49fa-a411-806519f350d9}} of the form {{formula:776d7323-e9b0-4640-a996-21a39bfc6741}}
r
0d8e2bb675719e57eb6bec456f11e8eb
A classic projection method with BDF2 time discretization is used to solve the velocity and pressure. Denote the numerical solution at time {{formula:82f0cf43-4d09-4241-99fc-358eee16a188}} as {{formula:68831ed3-edc8-42cb-8b0e-3ff8b6262e38}} . To obtain {{formula:36c42f3a-267a-4ac2-b0bb-feace7d8e03c}} , the so-called “rotational incremental pressure-correction scheme” in {{cite:9d5df5205cd7a7c0f7d6037b67a2bd96bfe8ba66}} is adopted, that is, {{formula:eee19b1c-d0d2-464f-b032-5f1cd452b6d6}}
m
70d909b2c8f756d93650ce128361686b
In Figs. REF a and  REF b we compare the model predictions for {{formula:8428cad2-34ee-44af-86ba-a4a499516fd5}} to the recent JAM'20 parametrization {{cite:d24a2348e487d318a912a691eecdb0c1e1eda14e}}. The CPM describes the sign and magnitude of the transversity quark distributions well. For larger {{formula:78006313-12dc-47ec-8c1b-f69ac2f8d091}} the quantitative agreement is very good and the model results are close to or within the 1-{{formula:116fa185-0aba-411f-8e9f-d82c7ee1dc37}} region of the extraction {{cite:d24a2348e487d318a912a691eecdb0c1e1eda14e}}. At smaller {{formula:a775b40c-b9c9-4c05-a14b-ccb38dd0ee30}} the model has a tendency to overestimate the JAM'20 parametrization for {{formula:1f563ab8-a514-4686-ac9f-6483a76aa53a}} and {{formula:9323fcba-d0be-4e86-b9e3-ba7a646cc233}} flavors. The uncertainty of the JAM'20 parametrization {{cite:d24a2348e487d318a912a691eecdb0c1e1eda14e}} is still very large. For instance the {{formula:00070e61-2c29-4dc4-a047-0845fd83f28b}} -quark transversity is compatible with zero within the 1-{{formula:72b5228b-2914-426d-97f5-ce2884d79f21}} uncertainty of the extraction. Future data will constrain more strongly the extractions and allow us to test the model predictions for {{formula:f24f8644-45d9-4410-a371-0e573e3f2b4c}} more rigorously. 
The CPM is in good qualitative agreement with other model calculations {{cite:d0f40b6e3dd76b7a272383e62aef7241daa2c66f}}, {{cite:9c8208fea8f97cb82c8659a4f2888f75b4df20da}}, {{cite:af42cfac77cbc933547ffd4c579b293ee0d7bc6e}}, {{cite:82a71cc89fc5e01a32f765cb2cab4bdcb399bbb5}}, {{cite:95c0cbf183e2c42724f58b0790e2e52852da3717}}, {{cite:23ac3814e024d0a8044fba9ed4d46eed30d2a652}}, {{cite:c4531181e70ebb9c1926ea9b18a10d238c862d4f}}, {{cite:eadbd3a71ad6565e7b256df1e18b96fc801390bc}}, {{cite:72161fafd2ac889eff9f750ea24875238ea26cac}}, {{cite:959f83b5e60d5d991b1dd085d58442abdd455efe}}, {{cite:6484cbe951d6d6fd2598f003b84a0e71ef9b8578}}, {{cite:b10638ed162c746b6995b0fb82edb28ee35f1111}}, {{cite:63116904c92b6298d7705cd23da0b5511469458d}}, {{cite:1db4dc8053fd9119e3d4691d5f040d33ed20b885}}, {{cite:f1dda3e3292c6843c0b92694c18fa51edcae323d}}, {{cite:93b57d84118f8392c2b01274369360a91925f8e7}} and lattice QCD {{cite:3247ff2605c6008a39e04973c119985460ebda28}}, {{cite:45c3004b1e52ad885ca87f6ef42c7a2d4a66452c}}.
r
25bf2c283e1ca8b2c9d1a607a458ee26
As discussed in Section REF , traditional forecasting-based anomaly detection methods are primarily based on auto-regression-based models such as AutoRegressive Integrated Moving Average (ARIMA) {{cite:4ced2419a7243801b78aed54e61194db5c3b0359}}. With the recent advances in deep learning, LSTM has been used to replace auto-regression models {{cite:660b6da7aa6fede84a3ccd2b272fb8c74d357bca}}. This architecture allows modeling short-term as well as long-term temporal dependencies. Deng et al. {{cite:7695abdf0a5fa75d0535fbb27e1ef054fa6fc382}} have recently proposed a graph-based deep learning model with an attention mechanism for capturing multivariate correlations.
m
f254ebdecdcb7f84aa421bc9840b6cfc
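The forecasting-based detection idea discussed above (predict the next value, flag large residuals) can be sketched with a toy AR(1) least-squares fit as a stand-in for the ARIMA or LSTM forecasters cited there; the model and threshold are purely illustrative:

```python
def fit_ar1(series):
    """Least-squares fit of x_t ~ a * x_{t-1}: a toy stand-in for the
    auto-regressive / LSTM forecasters used in the literature."""
    num = sum(x1 * x0 for x0, x1 in zip(series, series[1:]))
    den = sum(x0 * x0 for x0 in series[:-1]) or 1.0
    return num / den

def detect_anomalies(series, threshold):
    """Flag time steps whose one-step forecast residual exceeds the threshold."""
    a = fit_ar1(series)
    return [t for t in range(1, len(series))
            if abs(series[t] - a * series[t - 1]) > threshold]

print(detect_anomalies([1, 1, 1, 1, 10, 1, 1], threshold=5))  # [4]
```

Multivariate methods such as the graph-based model mentioned above extend this by forecasting each series from its correlated neighbors rather than from its own history alone.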
The masses and widths of the isovector-scalar resonances {{formula:149cd817-8ed6-4c86-aa18-fcf397a15fd9}} and {{formula:2d60d8da-af74-4230-8d03-d1cc150bac34}} are presented in Table REF . They have been fixed during the minimization of the {{formula:b5a8212b-6aca-4f6b-8897-a836fc119925}} function. The parameters of the {{formula:db98991a-730c-450e-82c3-27f76d973f8f}} on sheet {{formula:9726feaa-0475-4bd1-9679-6d1ce93fdf73}} were taken from Ref. {{cite:17e2dcc0f9f5e4afe9ecca3a4a71d8f54a561054}}. However, we have studied the influence of the position of the {{formula:b5f90388-4e71-494c-a2ec-f87d5fb72424}} pole on sheet {{formula:39bd2c5f-5b99-4985-a02a-ded00278aa8d}} in the complex energy domain on the {{formula:8c8be0aa-f8db-4170-9608-e81eb46881ec}} minimum curve. In this way the {{formula:83f11574-2a08-43aa-95b1-a74a60724b15}} mass and width on sheet {{formula:968dce7a-69ea-421f-9900-f8d3d11adb3a}} have been determined together with an estimation of their errors. The masses and widths of other two associated {{formula:afe58921-a3c5-4ae7-95e3-b15defc2c20c}} poles are also given in Table REF . {{table:74869496-3b6e-45e2-9f8c-9ac820900662}}{{table:a3312841-5ab8-4698-96da-5dd6adc983fd}}
r
e0205d0e7c2718fdc10cc1e27287a4e5
possibly paired with a regularization/physics-enforcing term {{cite:f818bb1a55b4910e8ff400014ae7c99987609ac8}}, {{cite:244ccbb4931a5d5ecd18eaac5ff4df8de838036e}}. Regularization aside, the differences among objective functions typically come from how the estimated output {{formula:3465a15b-f753-42d8-b4e0-148323057c77}} is evaluated. For expositional purposes, let us define a function {{formula:f126a056-cfeb-47e8-a078-e86f657a679b}} parameterized by {{formula:c1c4381d-1f6f-4588-89a4-5a53f86e6226}} that maps an initial state {{formula:eac280f5-c298-49bc-a7ec-e2b3ad37a728}} to the output at time {{formula:c0d5fee8-aa15-47d3-81fe-6258db081400}} with inputs from time {{formula:88a06e20-0fcf-415c-ae4b-f2f54626262f}} to time {{formula:309dd9f8-5148-45a9-be13-ce96e8488838}} denoted as {{formula:ffbfe8db-7924-4349-9daf-d9d0ea1d5e56}} , where {{formula:abe1079b-6c83-4d6c-b9e6-4a88a410991a}} . One fundamental objective uses a single initial condition {{formula:07eb5191-c05c-44e0-8108-728077c9cba5}} and evaluates {{formula:a4587702-ee97-4560-9278-3207e97a9b95}} as {{formula:23867041-6ac9-40ea-b548-f2d26b87727f}}
d
b0634e772469d0769503ac8139ae3b04
Both methods employ an encoder-decoder network {{formula:e91a6706-036a-402c-b6e0-e1f3b5f047c6}} that maps the input image {{formula:dae9dd36-ddb1-48ca-b008-17bfbbe07273}} to a pixel-wise weight mask {{formula:38350ee6-764f-4151-ad48-da8a1b9f9d00}} . Such networks are typically used in supervised segmentation {{cite:d12d138e44990f6e8935d01b2324f87140f82532}}, {{cite:a595f92b103d8ee3a4e585db165b8db317a459ca}} and object detection {{cite:efe4b4a82e3945236dc3dac0ad43373f51bd0487}} tasks, where the ground truth map is given and the network weights are updated accordingly. Under the weakly supervised object localization (WSOL) setting, the ground truth is not given locally (per pixel); instead, only a global label of the image is given. {{figure:6cefe35b-8f5a-47f0-a589-24258763d97c}}
m
298ae287488b82e87abf61420b86e8af
First assume {{formula:9aace4cb-5138-4174-9a6b-fcd375d3ec24}} being holomorphic. Then {{formula:63e23aee-c00c-4db5-990c-a3b51d8b8d93}} must be zero, because otherwise {{formula:992d7540-81ef-4438-83cc-e6dc8d6777f7}} . If {{formula:a8886837-9718-43cd-9911-419672702281}} vanishes identically in {{formula:d340768a-9a24-4310-a867-97ef979ce29c}} , the desired result is trivial. If {{formula:a911fb43-6d29-4538-bc5c-62c168493505}} does not vanish identically in {{formula:e35e6b7d-8429-4385-8296-1d9447c19710}} , according to Theorem 1.1 in Chapter 3 of {{cite:0c06d8afe342f2ef60e59d2c3fb26178e6a43868}}, there exists a neighborhood {{formula:b03ea615-3889-4a30-aec9-d85f66498598}} of {{formula:299608d3-0e3e-4be3-bf8f-7f59609d15b5}} , and a unique positive integer {{formula:1dce1f89-da5a-4cdf-94d6-2def1124260b}} such that {{formula:2c33d983-3146-472f-8eb6-6f5d40a767b6}} for {{formula:03c3af05-e9b3-442e-9cdb-11462309d772}} with {{formula:0e3f4384-d61c-40ca-9d96-4bb4f31312c5}} being a non-vanishing holomorphic function on {{formula:57a4f4eb-63dc-49f2-8f77-55c9c3ffa146}} . Clearly it must hold that {{formula:33f139db-0ec0-4c85-9dfd-a81bf5621242}} , because otherwise we have {{formula:0b789c8b-a99b-4185-a341-e97f89df0ddf}} again. Then it is easily checked that {{formula:ce19f28f-6387-4af4-854d-58826f08f21f}} for {{formula:a95e1c6a-fc51-4606-8cb3-7da064ae949a}} .
r
cb86d1bd5d375e8cb7488804f841bcf9
The goal of our algorithm is to take a series of {{formula:904bc4d9-49b3-4782-8942-a8b431b62189}} synchronized stereo image pairs {{formula:fc97195b-97ab-41e6-b98a-3eece977a4c5}} , thermal images {{formula:0db95152-464e-468b-8227-42a38d12021d}} , and laser point clouds {{formula:387dc1d9-b8af-4034-ada2-18dba90005c7}} , captured in arbitrary scenes, and automatically optimize the initial guess of the 6-DoF rigid-body transformations to get accurate calibrations. The transformation is defined by six parameters {{formula:dd16692e-62e7-4ba0-af82-7e6bce590a95}} , where {{formula:dc0c793c-62e2-4a33-84fd-f1636e70b1bd}} are translations, and {{formula:551aeeec-d402-454a-ad9a-175993f6e0b0}} are Euler angle rotations. We take the stereo left camera's coordinate system {{formula:d8a2ffed-8550-49ea-b9c0-8d7d18314fe4}} as the base coordinate system, and calibrate other sensors to {{formula:680d7bed-d657-4c12-92bc-9211097c8ba6}} . As the stereo right to left {{formula:7b00cd5b-0e05-4811-bb4a-173aa2c68b04}} transformation matrix {{formula:08ee9cd6-8496-4f97-9584-218b35a21a46}} can be easily obtained with tools such as OpenCV {{cite:f2b545f33194c3662c7598c2bf4259147ec4b677}}, there are two transformations remaining to be estimated: laser to stereo transformation matrix {{formula:42aebf6e-04cc-4e78-85dc-200d60067a88}} , and thermal to stereo transformation matrix {{formula:4d32681e-766b-4b76-9010-7c58ce8b5d40}} . We assume that the stereo and thermal cameras' lens distortions have been calibrated such that the pinhole camera model is applicable, and let {{formula:c543bf92-f099-4977-8de3-678ad512a188}} and {{formula:bd884d2c-102f-4fb7-9a78-3e07236c8dc9}} be the intrinsic matrices of these cameras, respectively.
m
c900d89c4b6d06806344d27b2d1d09d1
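The six-parameter rigid-body transform described above can be assembled into a homogeneous 4x4 matrix; a sketch assuming a Z-Y-X Euler convention (the paper's actual convention may differ):

```python
import math

def transform_matrix(tx, ty, tz, rx, ry, rz):
    """Build a 4x4 rigid-body transform from translations (tx, ty, tz)
    and Euler angles (rx, ry, rz), composing R = Rz @ Ry @ Rx."""
    cx, sx = math.cos(rx), math.sin(rx)
    cy, sy = math.cos(ry), math.sin(ry)
    cz, sz = math.cos(rz), math.sin(rz)
    R = [
        [cz * cy, cz * sy * sx - sz * cx, cz * sy * cx + sz * sx],
        [sz * cy, sz * sy * sx + cz * cx, sz * sy * cx - cz * sx],
        [-sy,     cy * sx,                cy * cx],
    ]
    return [R[0] + [tx], R[1] + [ty], R[2] + [tz], [0.0, 0.0, 0.0, 1.0]]

# zero rotation leaves the rotation block as the identity
M = transform_matrix(1.0, 2.0, 3.0, 0.0, 0.0, 0.0)
```

Calibration then amounts to optimizing these six parameters per sensor pair so that reprojected laser points agree with the stereo and thermal images.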
Questions Q1 to Q3 can be answered directly by parsing the original and augmented sentences with a pre-trained dependency parser and computing descriptive statistics. To answer Q4 and Q5, we propose to calculate the NCPTK for each pair of trees in an augmented batch. To perform the calculations, we transform each dependency tree into a GRCT, replacing the FORMs (which will differ by experimental design) with the FEATS. We can then construct an undirected graph, where each node is a dependency tree in the batch and two nodes are connected if their NCPTK is exactly 1 (i.e., their dependency trees are identical). The problem of finding error clusters in Q4 then boils down to finding all maximal cliques in the induced undirected graph, for which we use the Bron–Kerbosch algorithm {{cite:eb50c1ee86e938d9f0a92be21cb1e2eae22b63c5}}. The similarity of dependency trees within the resulting clusters can be assessed using the already computed NCPTKs, which provides the answer to Q5.
m
dea07892f29abc56f2fa28d23b198e8a
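A minimal sketch of the clique-based clustering step described above, using basic Bron–Kerbosch without pivoting (the toy kernel matrix and function names are illustrative, not from the paper):

```python
def bron_kerbosch(R, P, X, adj, cliques):
    """Recursively enumerate all maximal cliques (basic Bron-Kerbosch)."""
    if not P and not X:
        cliques.append(R)
        return
    for v in list(P):
        bron_kerbosch(R | {v}, P & adj[v], X & adj[v], adj, cliques)
        P.remove(v)
        X.add(v)

def error_clusters(pairwise_kernel):
    """Connect trees whose normalized kernel equals 1 (identical trees),
    then return all maximal cliques, i.e. the clusters."""
    n = len(pairwise_kernel)
    adj = {i: {j for j in range(n) if j != i and pairwise_kernel[i][j] == 1}
           for i in range(n)}
    cliques = []
    bron_kerbosch(set(), set(range(n)), set(), adj, cliques)
    return cliques

# toy batch: trees 0, 1, 2 are identical; tree 3 is distinct
K = [[1, 1, 1, 0],
     [1, 1, 1, 0],
     [1, 1, 1, 0],
     [0, 0, 0, 1]]
print(sorted(map(sorted, error_clusters(K))))  # [[0, 1, 2], [3]]
```

Because "NCPTK == 1" is an equivalence relation here, the maximal cliques coincide with the connected components; the clique formulation simply keeps the graph framing used in the text.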
In particular, it has been suggested recently that the soft limits of the {{formula:5040dffa-b1a6-437c-9a6b-dec749a78722}} -point correlators can be the discovery channels for heavy particles and new interactions with masses up to {{formula:b0987e68-3025-4e95-ad65-264c3a0999c1}} . These studies are based on earlier works on primordial non-Gaussianities and are dubbed “cosmological collider physics.” {{cite:4009cd53911065c4f5e25eac36fff3ce81b8b917}}, {{cite:f44345980c3948c3f8ad480432ddc87a0a5cb990}}, {{cite:c5d221d4777c9c7e08a3633f93b438fbbea77164}}, {{cite:4cff204b977ceddcb5f42980e54d13e5bee230cc}}, {{cite:6d2d37d6729942190009a64d2bde1918e94433bc}}, {{cite:5c555b87e74ac1a2b792fdbfed79dd9ea3b98d54}} The general idea is that a heavy particle with {{formula:eaea3837-fa29-45c6-bb02-f0d478086d25}} can be created from the vacuum quantum fluctuation during inflation and its physical momentum then quickly redshifts to essentially zero. Being a non-relativistic state, its wave function would oscillate with a fixed physical frequency {{formula:12e6d9b5-e650-47f4-8ba2-036dbb65c504}} , and this oscillation can interfere with the mode function of the curvature perturbation {{formula:7bd11a5d-13e2-4cf4-8752-1d90f8b66adf}} , producing a characteristic oscillatory signal in various soft limits of {{formula:8c6a3b81-6f95-485b-9d5c-44388a6201f1}} -correlators, including the “squeezed limit” ({{formula:67ffc873-b8f1-483a-8a8b-085a383f823a}} where {{formula:fb7f1cb3-6b13-4470-9d2f-b95af32e954d}} ) of the 3-point function (bispectrum), and the “collapsed limit” ({{formula:9275c3c1-f739-4ad4-a8a8-9928de8d2690}} ,  {{formula:79baadc5-fa21-4bf1-9f05-bda1fc3b9852}} ) of the 4-point function (trispectrum). 
Later on, this program is generalized to include new mechanisms of generating primordial fluctuations and new ways of producing heavy particles during inflation, alleviating several constraints in the vanilla slow-roll inflations and also producing particles with masses much higher than {{formula:bcf6d829-99c1-44ec-922b-d869fcec3a5e}} {{cite:4cd528e1ec40436c3c652a0eeeb5e59be2827d79}}, {{cite:074e1bf21a77d84957d24817509c1734e63e80ec}}, {{cite:148b99fc742e362d7473d64301f27accd8448aec}}, {{cite:4614196abf973349624abfb8f408354a8b2742a6}}. At present we have already quite a few particle physics models that are capable of generating visibly large cosmological collider (CC) signals {{cite:74c9655e127754a1fefed721254cd61a0b6ae0a0}}, {{cite:b995c567160f9d63c6d480a8f08c102669a9e3dd}}, {{cite:3ecb6f53c58029135f942810e9f77c99908bce15}}, {{cite:7e6aaa1e90bef9cf99988ccf0f26b0e1fe2e5f08}}, {{cite:a7b53a17973fdaf1cd519eea19ac63dd9663f1fb}}, {{cite:a34c2bf2c3db0e304788276fdf1c94c3605689e7}}, {{cite:2c257731a29156706de5ded4e393cf56e459bc58}}, {{cite:4cd528e1ec40436c3c652a0eeeb5e59be2827d79}}, {{cite:204df6185d59bb12974ed9e5c03bca38c9d6d337}}, {{cite:8dc71b4ce97b25e08a8b50b6b6046fa5fa325ebc}}, {{cite:5b9e4e78eeb3b8cb6cf0e30eb9b0b7949b7c7e0b}}, {{cite:d1cf833c8b5eeb79be571c2b9b068c0b84eb6ca0}}, {{cite:e8f0b453c71d80cefc30b4a07d25b813d61928c6}}, {{cite:dccd9d599eba99938f21e6ff8a3bf9bd2680f531}}, {{cite:dac56242ba9d736bf3a18ed2b8f62b2235a5f4a2}}, {{cite:82a6ac10ca9556c7876e4c19e1b4a922f3581da7}}, {{cite:074e1bf21a77d84957d24817509c1734e63e80ec}}, {{cite:2d5c17c7c374d5ce11c5857d99bb74645ecd9439}}, {{cite:7309b4337ef226db2ca07cf9f1474605e84662af}}, {{cite:567953a7e6ecd74c12757dfbb5c037495fb91b5b}}, {{cite:148b99fc742e362d7473d64301f27accd8448aec}}, {{cite:c866ab6474274a37c91c30702f43ad0f63a760e9}}, {{cite:c0f09225001ec132ae253b7355bb7746e4c54df0}}, {{cite:4614196abf973349624abfb8f408354a8b2742a6}}. 
These signals could be searched for in the large-scale structure surveys in the near future or more futuristic 21 cm tomography from the dark ages {{cite:032f3e1dcc2f5ea5127b0ec640945748d5e1c2da}}, {{cite:77fe60c554750451fce9219be1e77f84bca2540a}}, {{cite:19cf4f2541c22a077eca2d3332a12571982e8f2d}}.
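The oscillatory interference described above has a characteristic squeezed-limit shape. As a toy illustration (our own sketch, not a formula from the cited works), a heavy principal-series field with mass parameter `mu` imprints an oscillation in the logarithm of the momentum ratio, damped by a power law:

```python
import numpy as np

def cc_template(ratio, mu, phase=0.0):
    """Toy squeezed-limit template: an oscillation in ln(k_long/k_short)
    with frequency mu (roughly sqrt(m^2/H^2 - 9/4) for a heavy scalar),
    damped by the power-law factor ratio**(3/2). The overall amplitude
    and the phase are left unspecified here; they are model-dependent."""
    ratio = np.asarray(ratio, dtype=float)
    return ratio ** 1.5 * np.cos(mu * np.log(ratio) + phase)

ratios = np.logspace(-3, 0, 400)   # squeezed limit: ratio -> 0
signal = cc_template(ratios, mu=5.0)
```

The zero crossings are evenly spaced in the logarithm of the momentum ratio, which is the signature searched for in the squeezed bispectrum and collapsed trispectrum.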
i
9e5084a137b357cc883b4ba2f1761987
We propose an alternative utility model based on facility location optimization methods {{cite:ecd4fe70c7593a3c64b0d94dc59f3c729d708a4b}}. In the facility location problem, a utility can be constructed that uses a greedy algorithm to minimize the cost, or maximize the reward, of building a series of new facilities in a supply chain, while also minimizing the distance from each client to its nearest facility [{{cite:7a6ac43c8ffa830dffac9f4dccdc9e02840f1813}}; {{cite:b5c87fae56ef2388e11c1f19cc992f8bf6324049}}]. In the UU query setting, we can draw an analogy between selecting a point to query and establishing a facility at that location in the feature space: we evaluate the reward for selecting the point and the distance it stands from the surrounding unobserved points. We propose a facility location utility function as: {{formula:99831592-f2d6-470c-b0c3-fbcecf20d838}}
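The greedy selection behind such a utility can be sketched in a few lines (an illustrative sketch of the general facility-location idea, not the paper's exact objective; similarity is taken to be negative Euclidean distance, and `reward` is an optional per-point bonus):

```python
import numpy as np

def greedy_facility_selection(X, k, reward=None):
    """Greedy maximization of a facility-location utility: repeatedly add
    the candidate point whose selection most increases the total similarity
    of every point to its nearest selected 'facility', plus an optional
    per-point reward. Illustrative only."""
    X = np.asarray(X, dtype=float)
    n = len(X)
    reward = np.zeros(n) if reward is None else np.asarray(reward, dtype=float)
    sim = -np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # (n, n)
    selected = []
    best_cover = np.full(n, -np.inf)  # best similarity to any chosen facility
    for _ in range(k):
        totals = np.array([np.maximum(best_cover, sim[:, j]).sum() + reward[j]
                           for j in range(n)])
        totals[selected] = -np.inf  # never reselect a facility
        j = int(np.argmax(totals))
        selected.append(j)
        best_cover = np.maximum(best_cover, sim[:, j])
    return selected

# two tight clusters: greedy coverage picks one facility per cluster
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.0, 5.1]])
chosen = greedy_facility_selection(X, 2)
```

A greedy loop of this form is the standard heuristic for facility-location objectives, whose coverage term is monotone and submodular.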
m
83934eddbcc0b336d58cfc5a70548223
We evaluate our method on the test dataset derived from the KITTI dataset and compare it quantitatively to state-of-the-art video frame interpolation methods. The same approaches for 3D-2D projection and 2D-3D reconstruction using Equation (REF ) are applied to the video frame interpolation methods under comparison, where we set {{formula:136822f6-1005-40df-9f38-1952d2b5ff85}} and {{formula:b0106521-e3a8-4b94-a6a1-52860faa1a96}} , as used in {{cite:b21c683d46bf000221de8cf439ede576bc28b848}}, {{cite:565afb732536746b97a62e0cbf13389a209aa45d}}. All experiments are implemented in PyTorch and run on the same machine with an Intel Core i7-7700K CPU and a GTX 1080Ti GPU with 11 GB of memory.
r
ccc185315d0db84d3f6b150c5dad6607
In this paper we constructed and studied a three-dimensional model that exhibits global continuous symmetry breaking at arbitrarily large temperatures. To our knowledge, this is the first example of a UV-complete unitary 3d model exhibiting persistent breaking of a continuous global symmetry. It bypasses the Coleman-Hohenberg-Mermin-Wagner no-go theorem {{cite:b0466d34f07db086e5d6b77f4e2906c9335a4ab3}}, {{cite:ab2dc87dd1dc1f95617435848a57de7261a2e186}}, {{cite:0f727cc8d736257cdbda3d9bf0e93a9f51e22e20}} by incorporating non-local interactions. (Placing a theory on a curved spacetime is another way to bypass the CHMW theorem; for instance, the O(N) model in AdS evades it {{cite:404e7348ff094deaf9b873f89fbb8374cc7f0f54}}, but at high temperatures the symmetry is restored in this model.)
d
8847d3b7ab659ed260fea800401e61a2
We consider a uniform linear array (ULA) and a uniform planar array (UPA) for the configuration of the BS and IRS, respectively. In particular, we have {{formula:3f2f0f7c-b7fb-4b99-a870-06f728933f11}} while {{formula:27d6dea9-dfbd-462b-bf01-adfa8fddda2b}} , {{formula:214b3f16-5f07-499e-acdc-60169bc9bb5b}} are uniformly distributed between 0 and {{formula:f105d188-cb1f-4abe-bbd8-ea7548254579}} and between 0 and {{formula:3713d844-b0d4-4d63-a0ce-ac9b5107d68b}} , respectively. Also, {{formula:7d1a13cd-aa01-4e4a-abb7-0e7deab8ea59}} , {{formula:3e275925-f35c-4cc3-b170-253d38f26d03}} . Moreover, we employ the 3GPP Urban Micro (UMi) scenario from TR36.814 for a carrier frequency of {{formula:d14fda9c-e412-4ce0-b223-914b7f30f4a3}} GHz and noise level {{formula:667a88d4-2843-4694-ae5e-9a4a6695a743}} dBm, where the path losses for {{formula:f01f9a45-25e6-41be-aab9-117ecb837c0c}} and {{formula:346bcf0c-0601-4869-bbba-00df9d99d56f}} are generated based on the NLOS and LOS versions, respectively {{cite:99aed0355a9407590782881b83b7b71e91a0908a}}. Specifically, therein, the overall path loss for the IRS-assisted link is {{formula:a2e1a308-6f7c-44c7-a101-ad8e658620e8}} , where {{formula:6d49040d-89da-4369-bcb1-e5d3096e3e40}}
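The ULA and UPA array responses used in such setups can be sketched as follows (half-wavelength element spacing and the Kronecker-product UPA construction are common conventions we assume here; the exact parameterization in the text may differ):

```python
import numpy as np

def ula_response(n, theta, spacing=0.5):
    """Uniform linear array steering vector for angle theta (radians);
    element spacing is in wavelengths (half-wavelength by default)."""
    return np.exp(1j * 2 * np.pi * spacing * np.arange(n) * np.sin(theta))

def upa_response(nx, ny, azimuth, elevation, spacing=0.5):
    """Uniform planar array steering vector as the Kronecker product of
    the horizontal and vertical phase progressions (one common convention)."""
    ax = np.exp(1j * 2 * np.pi * spacing * np.arange(nx)
                * np.sin(elevation) * np.cos(azimuth))
    ay = np.exp(1j * 2 * np.pi * spacing * np.arange(ny)
                * np.sin(elevation) * np.sin(azimuth))
    return np.kron(ax, ay)

a = ula_response(8, np.pi / 6)
b = upa_response(4, 4, np.pi / 4, np.pi / 3)
```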
r
4b6598784b640da39907ed95f03c2d1e
We used the DCASE 2020 SELD dataset {{cite:ca78da32ae36baff79464d91463244cc2d5aaf8f}} for our experiments. This dataset provides both the FOA format and the mic-array format with 4 microphones. The dataset consists of 400, 100, and 100 one-minute audio clips for training, validation, and testing, respectively. There are 14 sound classes. The azimuth and elevation ranges are {{formula:1706f7a9-f754-4f7b-9d4a-dc366eb0833a}} and {{formula:61024ead-460e-4bf2-9679-cdff2cf2fde5}} , respectively. We used an angular resolution of {{formula:22ffe3c6-0cbc-46db-9024-00ebb3a525b3}} . As a result, the numbers of discrete azimuths and elevations were {{formula:62bd6d1f-e4ad-4154-ae46-a11d1ebae804}} and {{formula:018bdf67-4e25-45a4-90f5-f127295041ca}} , respectively. The validation set was used for model selection, while the test set was used for evaluation. {{table:91b9d056-3b31-43a2-a3fe-9dba8573469a}}
r
1f428be11ee413d8b2f5925c41bec625
Multi-scale training. We summarize the multi-scale training as PyTorch pseudo-code in Alg. . The training scales are randomly selected from 25 patterns, whose heights range from 512 to 1024 and whose widths range from 640 to 1280. We also randomly crop images to [0.83, 1.0] of the original scale. The global batch size is 8, while the relation between sub-batch size and resolution is shown in Tab. REF . Thanks to mixed-precision training, it takes only about 22 and 15 hours to train our proposed MVSFormer for 10 epochs on DTU {{cite:499962984a964642b5f9eb0f80a385043b5141b0}} and BlendedMVS {{cite:0b8ef1f03e3740c9a43483d08bc74be52d0369c3}} respectively, on two 32GB NVIDIA Tesla V100 GPUs. {{table:961b7527-effe-475e-abf9-610c515d2c04}}
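The scale sampling can be sketched as follows (the uniform 5x5 grid over the two ranges is our assumption, since the text only states the ranges and the pattern count; sizes are snapped to multiples of 32, a common divisibility requirement for multi-scale MVS inputs):

```python
import random

def snap32(v):
    """Round to the nearest multiple of 32."""
    return int(round(v / 32) * 32)

# 25 (height, width) training-scale patterns: a 5x5 grid over
# heights 512..1024 and widths 640..1280 (grid layout assumed).
HEIGHTS = [snap32(512 + i * 128) for i in range(5)]   # 512 .. 1024
WIDTHS = [snap32(640 + i * 160) for i in range(5)]    # 640 .. 1280
PATTERNS = [(h, w) for h in HEIGHTS for w in WIDTHS]

def sample_train_shape(rng=random):
    """Pick one of the 25 patterns plus a random-crop factor in
    [0.83, 1.0] of the original scale."""
    h, w = rng.choice(PATTERNS)
    return h, w, rng.uniform(0.83, 1.0)

h, w, crop = sample_train_shape()
```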
d
d4a14f86fa14aabc954fe9b8fe40168a
Over the last hundred years, there have been many attempts to modify or test GR {{cite:bee31a404760e80ecc57bc71074d1549d516b0db}}, {{cite:d5c74ad3673d818c5178c16aae6870c3d8b578ed}}, {{cite:f96bdc942c65116a838da41022cac5a1c87ef40b}}, {{cite:8898f95c04f55aca71028dcbcc21fa3b2800618b}}, {{cite:c60a9561b81fe895386f9b7d7b2cac4e84408e93}}, {{cite:17fb2f0bffdb6cbaec0849b675574644665fe118}}, and it has successfully passed most of them. There are different regimes in which to test theories of gravity {{cite:26dd760f42281fc52adc56dba414ca287b1a623c}}: the quasi-stationary weak-field regime (G1), the quasi-stationary strong-field regime (G2), the highly-dynamical strong-field regime (G3), and the radiation regime (GW). In this paper, we use the radiation regime to impose a constraint on the BD theory, whereas the Cassini mission (mentioned above) tested gravity in the G1 regime.
i
85c4e4afb147b5eeb07de4ac0a42e767
Equation (REF ) defines the quantum gravity state over the whole superspace of metric configurations on {{formula:77919da6-5b49-4386-aed1-5a0d830978f9}} via the deformed partition function defined in (REF ). However, the fact that the deforming operator (REF ) is irrelevant means that this theory has a cutoff beyond which correlation functions cannot be trusted. In order to probe smaller scales we need to UV complete this theory. Because the deformation is guaranteed to be along an RG flow line, when we go to the IR limit of this theory, we end up back at the original CFT we started with. From the gravity perspective, probing smaller scales corresponds to taking the volume of {{formula:d8263b84-8293-48e2-abaf-f04ef5fc3777}} to be smaller, which semiclassically is equivalent to going “back in time”. So this gives us a field-theoretic approach to the early Universe. Notice how the usual dS/CFT story {{cite:e549d34bb5dae66965b9d6e16486e69486d9dd23}} is recovered in the IR limit of Cauchy Slice Holography. In minisuperspace we had the advantage that the deforming operator was well-defined everywhere along the flow, so we could already obtain some quantum “gravity” information.
d
232e7b51c1cdef66cc989ccb163d1748
With the rapid growth of deep learning, handcrafted features and predefined filter banks in the BoW paradigm were replaced by `deep features' and have achieved state-of-the-art results {{cite:20f58d0a6a71b65f1d507621b4900aad37476433}}, {{cite:2a939667502641b95b68f97228eb90ed5f78e761}}. The main challenge faced by these methods is that each component is optimized in a separate step, and features and encoders are fixed once built. Therefore, feature learning (CNNs and dictionaries) does not benefit from labelled data. Recently, a deep dictionary learning approach has been proposed for texture recognition {{cite:3a13d7c4e549b8095a2c92e5b10b093ee7033103}}. It outperforms existing modular methods and achieves state-of-the-art results on material/texture datasets. It is also an end-to-end learning framework whereby the features, dictionary, encoding representation and classifier are all learned simultaneously, facilitated by a suitable loss function. {{figure:4ba89c7d-ae80-459e-87a8-ccf95c63acee}}
i
9f5a8b327be3a7bc8ce150b4e672d360
For visualization purposes, we consider a simplified example with {{formula:c750021e-8368-4e43-b317-bd8a635e5830}} nodes, as shown in Figure REF , and demonstrate the difference between the regularization functions. Figure REF shows the contour plots. Under lasso regularization ({{formula:6278078e-27f0-4efa-9091-781ce77cd5d3}} ), the three predictors are penalized equivalently. Under {{formula:570b0be5-d0ed-41b7-bd17-09cdae85c9db}} and {{formula:391975bf-0057-4700-b819-9b123a4eadd8}} , {{formula:29836a38-5142-40cc-9c52-43160888d07c}} is regularized differently from {{formula:26e34988-7ad3-4476-a1da-ac1429616848}} and {{formula:b4b9b1af-95cd-439f-bb95-717b67bd7bbb}} , given that {{formula:56ea453f-915c-4f67-a83f-60211a1cd8db}} is the parent of {{formula:059edc4e-a8e0-4277-8f6d-7f6049c213d7}} and {{formula:ccbae012-cfac-4bc3-b415-36f1f0d60d30}} . A major drawback of the original lasso is that when the covariates are dependent, model selection is not consistent {{cite:008ca067b8dc81939d4ee8c8a143a5db40c81d98}}. Taking the tree structure into consideration, {{formula:2acc5da6-18e2-4fb4-8d7a-2c3709a3adcc}} (or {{formula:2ee20de6-d4d4-4502-9ab2-6630dc6eeb61}} ) circumvents this issue. This can be shown by combining models (REF ) and (REF ), {{formula:c0710b82-3125-467a-aaa6-0a77471b2c57}}
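To make the contrast concrete, here is a minimal numpy sketch for the 3-node example, comparing the plain L1 penalty (which treats all three coefficients identically) with a toy tree-structured penalty that penalizes the parent on its own and the two children jointly through a group norm. The specific functional form is our illustrative assumption, not the paper's definition:

```python
import numpy as np

def lasso_penalty(beta):
    """Plain L1 penalty: every coordinate is penalized identically."""
    return float(np.sum(np.abs(beta)))

def tree_penalty(beta, lam=1.0):
    """Toy tree-structured penalty where beta[0] is the parent of beta[1]
    and beta[2]: the parent gets its own L1 term and the children share a
    group (L2) term -- one common way to encode parent-before-children
    selection. Illustrative only."""
    return float(abs(beta[0]) + lam * np.hypot(beta[1], beta[2]))

b = np.array([1.0, 1.0, 1.0])
```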
m
689b040478cd560336abae1096fbdbb5
CASA {{cite:7d897f88c3ba0f44df98c416e46199c2a935ba74}}, Astropy {{cite:ee247b82f0013ab8b51e7d19688bb9399a307170}}, SunPy {{cite:88263d681889b9ddbc923a2e4cf88708937e26b3}}.
d
4808afd70768d0f84275536614345eee
In this note we introduce a category of finite strings and establish some connections. First of all, we notice that the category of connected strings is equivalent to the augmented simplex category {{formula:b1627b18-096a-460b-b304-0537a10a1840}} (cf. {{cite:6bcd398cb102587e9a364d2b1b65a853d3e9eb16}}, {{cite:2fd2a5c87bae64664f50c12a3f57bac3f2e3964c}}), once the initial and terminal objects in {{formula:b22aa419-d2c7-478a-a3fd-362edfd15a64}} are identified. Then we show that the category of finite strings models categories of linear representations. More precisely, we provide an equivalence between finite strings and certain abelian categories (hereditary and uniserial length categories with only finitely many simple objects and split over a fixed field, cf. {{cite:41d11fb729ac76c48c9c88f4a1d5754319fe890b}}), where morphisms between strings correspond to certain exact functors. In this context it is appropriate to include cyclic strings which correspond to abelian categories of infinite representation type. This is somewhat parallel to the cyclic category of Connes and others {{cite:185c9718f3175758a89aef8cd9dc5f733cf0b165}}, {{cite:059de20a22cb03874fa1d41d84f1dca23a8d3871}}; however we add new objects (cyclic strings) while the cyclic category keeps the objects of {{formula:e535339d-3f4d-4c06-82db-67994ad941a9}} and only morphisms are added.
i
7d93a8c0a35eed4df32626341365a2db
Furthermore, {{formula:89e1a61b-083f-43a1-a7f8-4abceed5b036}} might itself be unknown. If so, {{formula:3d04bc1e-e6cb-4194-87a7-a453738cc802}} can be calculated simultaneously with {{formula:9e0ffd19-c3c9-4901-aee5-610723b0deb3}} using value approximation techniques like MC policy rollout, TD({{formula:82730946-f2d4-4f10-a750-a1e3829d07b2}} ) or TD({{formula:c0a02df7-4485-4ccf-b086-e540b0cea148}} ) ({{cite:cbf1bcf8e702c7f593eb026303f0a9249368b665}}; {{cite:2cf4a48699aa8d73161a42b6ae7878fbf101e0bb}}; {{cite:0f50115684525c5cb511b00f3d99478d6ddd11bf}}). Given some method of Q-value approximation, the non-zero variance of {{formula:a8997daf-98e8-4d2a-8b93-363751aca5cf}} can be reduced with an additive control variate {{formula:77397f0d-2ebc-479c-ac5d-0b65ca881c1d}} . Then, the term {{formula:5ab707d8-f7a6-4db2-8892-e23e6930d60a}} is referred to as advantage. Having calculated {{formula:8a328006-57e9-4a65-8c6c-8705251d160e}} for every state in batch {{formula:62f35b3c-3491-40b8-8e46-528f1baf976b}} , the scalar loss is calculated with the batch average {{formula:06c26848-8b3f-426b-97cf-efafe5dd8dfc}} .
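The advantage construction described above can be sketched in a few lines (illustrative; in practice this is paired with whichever Q-value approximator is chosen, e.g. MC rollouts or TD methods):

```python
import numpy as np

def advantage(q_values, baseline):
    """A(s, a) = Q(s, a) - b(s): subtracting a state-dependent baseline
    (an additive control variate) leaves the policy-gradient estimate
    unbiased while reducing its variance."""
    return np.asarray(q_values, dtype=float) - np.asarray(baseline, dtype=float)

def batch_loss(log_probs, adv):
    """Scalar surrogate loss: batch average of -log pi(a|s) * A(s, a)."""
    return float(np.mean(-np.asarray(log_probs, dtype=float) * adv))

adv = advantage([2.0, 3.0], [1.0, 1.0])
loss = batch_loss([0.5, 0.5], adv)
```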
m
e962c0c8fb3f1067d62552fd280f6144
Besides, the aforementioned combination of {{formula:b515c159-1460-47b0-bff4-c0d7cbc4a128}} plays a key role for the hadronic contribution. From our calculations, a steeper proton spectrum would require a higher density within the PWN. As for photon energy {{formula:4a776f39-a4b8-4457-93fd-2c7938878d9b}} TeV, all three combinations considered here can make significant contributions to {{formula:d0aaa259-b879-49d7-9660-373e9731b858}} -rays. It is worth mentioning that, in interpreting the Crab Nebula SED with a spatially-independent model, {{formula:0a1f6357-22de-4514-9e72-62e126bbc19c}} cm{{formula:5c0f162c-bae3-483e-9b8f-b504101632ea}} and {{formula:b5e6bcdf-f5b3-4a6a-9cd9-dfa82a66c2d4}} are commonly used {{cite:1527bc558a2dfc82b2d36d6972441364179d184d}}. Meanwhile, the relation between the proton fraction {{formula:08f8cd80-90c0-4930-b4a9-506b49224fd2}} and the medium density {{formula:ad5cf24e-c150-4030-9517-32d9781a1dab}} within the PWN is approximated as {{formula:55b99352-1d9a-4eb5-b353-6b0c8ef81d3b}} (see Model D ({{formula:19c3c6d9-b13c-413d-981d-c34ff442e853}} ) of {{cite:b7a143d10482c5e1e10272fb6c7fefc773dfa42a}}), which gives {{formula:013f5c6d-48d3-4950-99e1-82638682b3d2}} if {{formula:4cc6b003-a6f0-4ef7-afde-6ddc1b7258f0}} . In fact, similar to {{formula:95817e3b-d23e-429d-a086-170f971c26f2}} , {{formula:0aa62222-4ac1-4b00-8f68-e5c24ed04389}} could also affect the amplitude of SEDs. However, following the method commonly used in leptonic models, the SED from the synchrotron radiation is calculated by adjusting the values of both {{formula:ce3add25-7ade-4119-bd2c-07297f87632a}} and/or {{formula:5d026689-c71f-4a70-b99c-75aaf9dfe342}} to reproduce the observed data from radio to about 100 MeV band. When {{formula:de98f52f-5e5c-4ca0-a672-54df28738ffb}} and {{formula:5c30b36e-d6cb-48a0-aeea-9f57c0b1469a}} are given, the proton fraction is determined by {{formula:ec343bc6-c220-4718-8c17-176fc0fece6d}} . 
Since {{formula:185fc879-8ae3-48fc-b701-6c6cdab76ace}} is determined in the previous calculations, {{formula:ccad2e2b-a290-419b-a200-fc7e0e223c12}} becomes the primary factor affecting the amplitude of the SEDs. Therefore, we mainly discuss the effect of the {{formula:457e0a1e-20c5-4800-a94d-fca53859e246}} pair here.
d
a9fc6210c5b86f7f7fe0733f1b3cd89e
where {{formula:451e2b47-a230-48c5-90a0-6ebde6f97372}} is the probability of the system being in configuration {{formula:02a059df-1969-450e-87cf-0d1b916fcd9e}} and {{formula:65521d7d-34d2-4260-bdc2-f89f79f45b23}} is the probability of proposing a perturbation of the configuration from state {{formula:eb4ef810-a1c8-44fe-bba8-e67126d0af6d}} to state {{formula:5b9d2198-b61c-4d54-8721-7a51eb5679de}} {{cite:00814fbe069d9e9d6aa73f72e8338291f1271a2d}}.
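For concreteness, the standard Metropolis-Hastings acceptance rule built from these two probabilities reads as follows (a generic sketch; the specific model's states and proposal distribution are not reproduced here):

```python
def accept_prob(p_mu, p_nu, g_mu_to_nu, g_nu_to_mu):
    """Metropolis-Hastings acceptance for a proposed move mu -> nu:
        A(mu->nu) = min(1, p(nu) g(nu->mu) / (p(mu) g(mu->nu))).
    This choice enforces detailed balance:
        p(mu) g(mu->nu) A(mu->nu) = p(nu) g(nu->mu) A(nu->mu)."""
    return min(1.0, (p_nu * g_nu_to_mu) / (p_mu * g_mu_to_nu))

forward = accept_prob(1.0, 2.0, 0.5, 0.5)   # move toward higher probability
backward = accept_prob(2.0, 1.0, 0.5, 0.5)  # the reverse move
```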
m
99e53f82727d3b2897f7b45881d774e6
To illustrate how the derived expressions can be used, we take the example of an isotropic Heisenberg {{formula:d79f87f0-c361-4402-9f67-45f811af8b00}} model subjected to an external magnetic field. Here, we estimate the temperature and the magnetic field of this quantum Heisenberg model, whose Hamiltonian can be written as {{cite:b2fa90f720cca2da9d9ec4cf1d195abace1b744a}}, {{cite:756d32e1b3825919cc1936b05eb0ac2b07d52de1}}, {{cite:82641c04385471bec60f508caf8101dca9593141}}, {{cite:675219dfe00b74d3209201a8f1f678fda12f2a89}} {{formula:68fc0acb-0fa2-48a0-8478-09034db2c7ee}}
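As a deliberately tiny illustration of such a Hamiltonian, the sketch below builds a two-spin isotropic Heisenberg Hamiltonian with a z-field from Pauli matrices and forms its Gibbs state. The coupling and sign conventions are our assumptions, since the displayed Hamiltonian is not reproduced here:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def heisenberg_2spin(J, B):
    """H = J (sx sx + sy sy + sz sz) + B (sz I + I sz) on two spins."""
    H = J * (np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz))
    return H + B * (np.kron(sz, I2) + np.kron(I2, sz))

def gibbs_state(H, T):
    """rho = exp(-H/T) / Z with k_B = 1 (spectrum shifted for stability)."""
    w, v = np.linalg.eigh(H)
    p = np.exp(-(w - w.min()) / T)
    p /= p.sum()
    return (v * p) @ v.conj().T

H = heisenberg_2spin(1.0, 0.0)   # spectrum: singlet at -3, triplet at +1
rho = gibbs_state(H, 1.0)
```

Thermal expectation values of observables, from which temperature and field could be estimated, are then traces against `rho`.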
m
0f27223005c12533568fd36840147d0c
Definition with {{formula:97363084-c238-4476-a6f8-8c7b287d7c3a}} polynomials. The classical Hermite polynomials are defined {{cite:544415e7fbf81af2ceafecc7cfa9a70027e06200}} by putting {{formula:dfe33732-5b18-4db5-be8a-f7f211ff8e12}} , {{formula:4b8226ab-24d4-4d25-b04c-67f50d84e9bb}} and subsequently {{formula:210aff8c-5be1-4aaa-9226-980db9e75fd4}}
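Assuming the usual physicists' convention (H_0 = 1, H_1 = 2x, and the three-term recurrence H_{k+1} = 2x H_k - 2k H_{k-1}), the polynomials can be evaluated directly from the recurrence:

```python
def hermite(n, x):
    """Physicists' Hermite polynomial H_n(x) via the classical three-term
    recurrence: H_0 = 1, H_1 = 2x, H_{k+1} = 2x H_k - 2k H_{k-1}."""
    if n == 0:
        return 1.0
    h_prev, h = 1.0, 2.0 * x   # H_0, H_1
    for k in range(1, n):
        h_prev, h = h, 2.0 * x * h - 2.0 * k * h_prev
    return h
```

For instance, H_2(x) = 4x^2 - 2 and H_3(x) = 8x^3 - 12x.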
r
8616b20beed2acac1a236028357e6465
In this work, we further conduct data-free training on semantic segmentation tasks to show the effectiveness of our method. In segmentation, we only use the feature regularization loss and adversarial loss of REF for data synthesis. The mIoU of the student model, together with the data amount and synthesis time, is reported in Table REF . We compare our method with DFAD {{cite:1dac0b0c161522a593bab56956b71e4d40416e8d}}, DAFL {{cite:2b396c68c7f2b11843f448d5566bb84105b020ea}} and DFND {{cite:a510bce1ed0db5a0ea043836ecb99356168d1e0f}}. DFND is a data-driven method and assumes that a sufficient unlabeled set is available for in-domain data retrieval {{cite:a510bce1ed0db5a0ea043836ecb99356168d1e0f}}. DFAD and DAFL refer to data-free methods that train generative networks for knowledge distillation. In comparison, our method successfully synthesizes a training set in only 0.82 hours, which is much more efficient than DAFL (3.99 hours) and DFAD (6.0 hours). {{table:c23cf622-98c6-42a9-941d-bbc0fee6e12a}}{{table:7bbe21f2-a784-43a7-b7e2-133ffa84abf6}}{{table:1aadba1a-d76c-400a-b5c3-412d7e036038}}
r
bdfcc10113d5c5a031fd47f131976853
Respiratory motion during Magnetic Resonance Imaging (MRI) has been a long-standing problem that leads to considerable reductions in image quality {{cite:547ec3b22cdfc8a32ca4fc21e7ed26e86602bcfd}}. The solutions traditionally proposed in the radiology workflow are breathhold or respiratory-triggered scans, which are effective at reducing motion artefacts at the cost of limited spatial resolution, increased scan time and reduced patient comfort. The solutions traditionally proposed in the MR-guided radiotherapy workflow are respiratory-correlated 4D-MRI scans {{cite:bd400e81f1c36540fff9b44ecdd076380c5744bf}}, which sort the data into multiple motion states based on a respiratory surrogate signal. These 4D-MRI scans simultaneously reduce motion artefacts and quantify the respiratory motion, but require considerably longer scan times and often show reduced image quality compared to breathhold and gated scans. This reduction in image quality is one of the main reasons why 4D-MRI methods are not widely adopted in clinical exams.
Recent advances using motion robust sampling trajectories {{cite:05dd59c2cf15c89c190217937a7676218e8ac10b}}, {{cite:9f41575c7e1dbbe5d025025a1049eb4fc4262325}}, self-navigation {{cite:434a5a285e295002c082bdcd63be3f41cd8bc6e8}}, {{cite:b58d69e469aaaf509e6fbe3dfa91802c03b1f0ea}}, compressed sensing image reconstruction {{cite:341bf1dee9ed250d03f15e1a9f61d7e81d04bb1b}}, {{cite:fdf2f132e6e9947d02d034792da0c8faba434ff3}} and motion compensated image reconstruction {{cite:339d7c6eeb276048a8d82e0636c191ee07df25a4}}, {{cite:3fbd5233442cca7cb138e6752fd88d23d61914f2}} have improved image quality considerably, but are often limited to {{formula:edece0d7-1169-42a5-b8f4-db905cefd969}} -weighted ({{formula:fdb4922a-78f2-4b21-9a54-720cd79d290b}} -w) gradient echo (GRE), while many clinical applications require {{formula:9af24e8f-ea6a-4c0e-8778-a4b6538dc9d2}} -weighted ({{formula:56b0560f-e978-4e12-920b-46f1d8a5ee9f}} -w) turbo spin-echo (TSE) scans {{cite:0a7a91d8939c4c989c9dd55202183035b7859530}}, {{cite:dc4e92470293be38f80c06bb17eaf503f593410b}}.
i
fed85e78efb47a9fbab83527fd86df62
An important aspect of this connection has been the role that quantum error correction plays in the encoding of the bulk in the boundary {{cite:e8bb20f036fea7460c962e3ac53c1bb6e7676d3e}}, {{cite:c5a802b955b64a02cb1ddce4aee901601dce6a05}}, {{cite:54f30288dfce1fec5773d0e772fb0acb1a1ab490}}, {{cite:45a6636d226085663b5cc9068903ca46f7962fd1}}, {{cite:47d0a18c3ed16f1c7984d7d08b1fd87bdac55533}}, {{cite:f540fa8d8b8a556488a5bf79a38d619da3af8454}}, {{cite:c3ab0320f8b522f04fd1a6dabee2c32041e492d7}}, {{cite:ebcbae466aa2625c194deea049c5e8c43c798d7f}}, {{cite:a761bf62542190ca54d5ac8db4cb0e145fff810a}}. In particular, {{cite:e8bb20f036fea7460c962e3ac53c1bb6e7676d3e}} established a theorem demonstrating the equivalence between (i) subregion duality, (ii) the equality of bulk and boundary relative entropy, (iii) algebraic encoding, and (iv) the RT formula.
i
124656176d38f533a6aec2fe9853eeb2
Figure REF shows the increase in quality of community finding (SN-modularity) over iterations of the SNIC algorithm. Recall that the SNIC algorithm decreases the distance constraint at each iteration. As the geographic constraint decreases, such that community proximity becomes more important, the quality of community (number of connections within vs outside) increases. Here we introduce an axiom: as the geographic space of interaction for a social network shrinks, it is more likely that those left within the community are more connected. Spatial outliers, which are also social outliers, can be conceptualized as weak links {{cite:456101b4f2ba089abf1864867bb80b65d3ea0be6}} and are removed through community-proximity-limiting iterations. Through 100 iterations, the quality of community increases, and in most social networks this value may continue to increase given sufficiently high spatial resolution data. In other words, humans form communities and interact mostly with those they are geographically near, such that the strongest communities will be those shared within small geographic proximities. However, we note that more iterations of the SNIC algorithm will not result in singleton communities, as that is the initial partition considered by the algorithm. Also note that the improvement in SN-modularity as a function of the number of iterations of the SNIC heuristic is also likely dependent on the parameter {{formula:88ba08fd-ec84-48d9-9031-7c1f2eda3b31}} . {{figure:71841833-3ca9-429c-b954-84ef5ad0c54e}}
r
b4ce84859174b95774e7032e9896a9c4
Figure REF shows the quantitative results in LDP mode and RA mode. Without loss of generality, we take Figure REF (a) as an example. We choose VVenC as our baseline and test its performance from QP=35 to QP=62. A higher QP value means a higher compression ratio. As mentioned above, our method can realize a dynamically adjustable bitrate through two channels: adjusting the sampling interval of key frames or adjusting the compression ratio (that is, the QP value) of key frames. In Figure REF (a), the orange broken line shows the performance for different sampling intervals from 5 to 100 under QP=30, and the green broken line shows the performance under QP=34. In fact, the QP value and the sampling interval can be chosen freely, not limited to those listed. In addition, we also compare our method with Wang et al. {{cite:7a39bc656386204d86c749e0960cf31922cc55ce}}. Since no test code has been released, we used the data reported in their paper and plotted it in Figure REF . It can be seen that our method achieves significantly better performance than VVenC and Wang et al. {{cite:7a39bc656386204d86c749e0960cf31922cc55ce}} in both LDP mode and RA mode.
r
491bc94c617cbb49119b97ac4edb19b1
SEMPRE {{cite:4a34ba99abd5ceccd17d76aa9262feec9eea269b}} constructs queries that can be executed over the KG using semantic parsing of the input questions. Using a lexicon to map question surface terms to KB entities or relations, together with a set of composition rules, it recursively constructs more and more complex formal queries from simpler contiguous forms. The system uses a machine learning model to select the best possible derivation (partial formal query) from the possible ones. The model is trained using distant supervision, i.e., using only question-answer pairs rather than the true formal queries. Although distant supervision eliminates the requirement for manually curated semantic parses of questions, the true semantic parse of a question is then unknown, and only a likely approximation to it is used for training. This makes the learning step less reliable.
m
43e1ae2af52abf17f84e8c388d8ede73
We used the Gaussian mixture model {{cite:53105e83130a3d591497ed7a959fac831919cf39}} for clustering, where the goal is to find a set of K normal distributions with means {{formula:22371661-a326-4c53-a427-2ed7fc7d33b1}} and covariances {{formula:f3e456de-9e40-438f-9985-d67bbda31660}} (for k = 1 to K) that best describe the overall data. As the final output of the algorithm, a categorical cluster variable is also inferred. Gaussian mixture clustering is a probabilistic and more flexible version of the K-means clustering algorithm, in which the clusters can be unbalanced in terms of internal variance, each covariance can be anisotropic, and the decision boundary is soft. The negative likelihood of the data under the set of normal distributions is used as the clustering loss. The number of clusters is inferred by our procedure. For the Gaussian mixture algorithm, we initialize the number of clusters to K = 10 and train the model with the expectation-maximization (EM) strategy {{cite:53105e83130a3d591497ed7a959fac831919cf39}}. We made the clustering step optional once the clustering loss stagnated, after 6000 epochs. We employed batch processing, which randomly selects subsets of the whole dataset for faster training; this also prevents the model from getting stuck in local minima. We trained the model for 10000 epochs, over which the clustering loss decreased by a factor of 5.
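The EM update behind this clustering can be sketched in a few lines for the 1-D case (a toy illustration; the actual model is multivariate, with K up to 10 and batch training):

```python
import numpy as np

def em_step(x, means, variances, weights):
    """One EM iteration for a 1-D Gaussian mixture.
    E-step: responsibilities r[i, k] proportional to
            weights[k] * N(x[i]; mu_k, var_k).
    M-step: re-estimate weights, means, variances from r."""
    x = np.asarray(x, dtype=float)[:, None]          # (n, 1)
    mu = np.asarray(means)[None, :]                  # (1, K)
    var = np.asarray(variances)[None, :]
    log_pdf = -0.5 * ((x - mu) ** 2 / var + np.log(2 * np.pi * var))
    r = np.asarray(weights)[None, :] * np.exp(log_pdf)
    r /= r.sum(axis=1, keepdims=True)                # soft assignments
    nk = r.sum(axis=0)
    new_w = nk / len(x)
    new_mu = (r * x).sum(axis=0) / nk
    new_var = (r * (x - new_mu[None, :]) ** 2).sum(axis=0) / nk
    return new_mu, new_var, new_w

# two well-separated clumps around 0 and 10; EM recovers both centers
x = np.r_[np.linspace(-0.5, 0.5, 50), np.linspace(9.5, 10.5, 50)]
mu, var, w = np.array([1.0, 9.0]), np.array([1.0, 1.0]), np.array([0.5, 0.5])
for _ in range(20):
    mu, var, w = em_step(x, mu, var, w)
```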
m
39d3511c9d8fbf1e04c98faf99b2bbc7
Most DL methods for trajectory prediction do not uncover the underlying reward function, instead, they only rely on previously seen examples, which hinders generalizability and limits their scope. In {{cite:81f3b28c38cfd213be43c1b794581b58034a1469}}, inverse reinforcement learning is used to find the reward function so that the model can be said to have a tangible goal, allowing it to be deployed in any environment. Transformer-based motion prediction is performed in {{cite:b629604b15a2f91cbd58ce844975c0a830e7d157}} to achieve state-of-the-art multimodal trajectory prediction in the Agroverse dataset. The network models both the road geometry and interactions between the vehicles. Pedestrian intention in complex urban scenarios is predicted by graph convolution networks on spatio-temporal graphs in {{cite:977c622f78b39b93812547053f13406eff4cde91}}. The method considers the relationship between pedestrians waiting to cross and the movement of vehicles. While achieving 80% accuracy on multiple datasets, it predicts intent to cross one second in advance. On the other hand, pedestrians modeled as automatons, combined with SVM without the need for pose information, result in longer predictions but lack the consideration of contextual information {{cite:cc28b931eba6b0c86651f2700ac6c5f488960ff3}}.
m
31bc221521df8534831f6581651c75ea
Because our investigation is novel, we are unable to compare the obtained results for different values of the fractional parameter with previous work. Instead, we recover the classical solutions of the DFP and its shifted counterpart by setting {{formula:c10fa3c5-5495-48af-bb45-04edcf34038e}} and comparing our results to those found in the literature. In Tables (REF - REF ), we report the numerical results for the ro-vibrational energy spectra of the DFP for several DMs: LiH, ScH, HCl, CO, O{{formula:25cc1943-7ed5-4664-882a-65f9a67c5427}} and H{{formula:a5de1803-4b60-4de1-9e3e-28a728d5db02}} , in comparison with those found in Refs. {{cite:f36454dc86a387ff89a46a5226cecced8f743027}}, {{cite:b1067fde7987a84d67af4e88c23caca233f4a5d9}}, {{cite:d54c7a7f396b667e8171362183fc8df85ea5947a}}. Furthermore, the ro-vibrational energy spectra of the SDFP for numerous DMs: LiH, ScH, HCl, CO, H{{formula:b178925a-b91e-467f-9b2d-76e203f5acf9}} and I{{formula:2f2341d2-2881-4e42-ab03-59b251e0f185}} are displayed in Tables (REF - REF ), compared with the findings in Refs. {{cite:f8900f961495e1a82692a81aa7a72d14cec24eaa}}, {{cite:108638dc958176358c5cc5e33696b4a8ef99802f}}, {{cite:035d7362c8faede04bbd241212f85cde45519e3c}}. As shown in Tables (REF - REF ), the ro-vibrational energy spectra of all selected DMs rise as the vibrational and rotational quantum numbers increase. Importantly, our estimates are fully consistent with prior works that used other techniques. To our knowledge, such an investigation has never been done previously. We hope that the current findings will be helpful for future studies.
r
8c6a9382e8847d4990de4852edc82d57
Figure REF provides the block diagram of the proposed methodology. The proposed framework for explainable event recognition is composed of two main components. The first component consists of fine-tuning a pre-trained CNN model, while the second is based on Grad-CAM, which generates activation maps for the images processed by the CNN model. Our CNN model is based on a state-of-the-art architecture, namely Xception {{cite:b59f78ca0f26e287b9d8674b994318474413a3bd}}, pre-trained on a large-scale object recognition dataset known as ImageNet {{cite:ecb68547cb80239465914041814167967aa781ac}}. In the next subsection, we provide details of each component of the framework. {{figure:b7d89d0c-bc4a-4448-a825-5b425745582c}}
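The core Grad-CAM computation is compact. Below is a numpy sketch of that step, given the last convolutional layer's activations and the gradient of the class score with respect to them (in practice these would be obtained from the fine-tuned Xception model via framework hooks):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM map from a conv layer's activations A of shape (C, H, W)
    and the gradient dY_c/dA of the class score (same shape): channel
    weights are the spatially averaged gradients; the map is the ReLU of
    the weighted channel sum, normalized to [0, 1]."""
    weights = gradients.mean(axis=(1, 2))                 # (C,)
    cam = np.tensordot(weights, activations, axes=1)      # (H, W)
    cam = np.maximum(cam, 0.0)                            # keep positive evidence
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# toy check: class score depends only on channel 0
A = np.ones((2, 4, 4)); A[1] *= 2.0
g = np.zeros((2, 4, 4)); g[0] = 1.0
cam = grad_cam(A, g)
```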
m
deade7f491d174d809fba4c118fab521
The supervised baseline we consider refers to the ResNet50 architecture trained with cross-entropy loss and full access to labels using the same set of basic data augmentations for 1000 epochs as proposed by {{cite:91198b7f71ce6165aa85c0c9596406e915641c89}} and used throughout the representation learning literature (c.f. {{cite:9386512a961e342652cc81ee702bc200c5471e0f}}, {{cite:3fecbe3d7ab162b86b62ee7c572ecbc0fd57e6e2}}, {{cite:51cb672e418e44759d552aec7160871e7b805951}}).
r
b32fd494a1e7f121cdbd4b4fe42c125b
is always tight in the Gromov–Hausdorff–Prokhorov topology, whatever {{formula:523f3027-66ea-4f29-a5fd-fbd617b394a2}} and {{formula:1453ab8b-d7ab-4493-8b78-1b941fc6b09b}} , thus extending results due to Le Gall {{cite:78b915f49e2dc3c186ce70f9f4c855f0dc10201d}} for {{formula:663b1133-3133-4dd7-b713-7ddc79600efe}} -angulations for any {{formula:e51f326f-e640-42a2-831b-711825c157f6}} fixed, and to Bettinelli {{cite:df3b07e031e45e087f8a8d843d0115a18c6e284c}} for quadrangulations with a boundary. In the case of {{formula:b66372e4-c99e-494f-a22b-88de6e742ffb}} -angulations without a boundary, the problem of uniqueness of the subsequential limits was solved simultaneously by Le Gall {{cite:776bc19f1b2d8c8530abaa9c90787aa76f34bcf2}} and Miermont {{cite:65ef33d4dd8025a90337abff0c1a2c1b2204d9ad}} who proved that, suitably rescaled, they converge in distribution towards the same limit called the Brownian map, named after Marckert & Mokkadem {{cite:f31c82742380c91014cb86b7aaf36ed3bfb560a0}}, which is a compact metric measured space {{formula:b6f6900a-09eb-469f-9127-d180410b0d33}} almost surely homeomorphic to the sphere {{cite:0449efeccb4132fbb193cf4350f0bd18f9f276b0}}, {{cite:3680cd2c37910904a182b4d5bb53e22117b35052}} and with Hausdorff dimension 4 {{cite:78b915f49e2dc3c186ce70f9f4c855f0dc10201d}}. In the case of {{formula:198dae6c-f7ff-4977-8a49-13622a18efbc}} -angulations with a boundary, uniqueness was solved by Bettinelli & Miermont {{cite:e62db669ddf12b30d02c7ecc81d174e79c090927}}: when the boundary length behaves like {{formula:810aecc1-dce8-41ca-8392-3035b7e13d57}} with {{formula:dd65dd49-15ce-4f49-9939-be1b6c0f997a}} fixed, they converge to a (unit area) Brownian disk with perimeter {{formula:61598634-f972-40f8-87e4-8854d1d17995}} , denoted by {{formula:0a4168e1-db84-4f6e-b08f-010bec4a9e0e}}; the latter now has the topology of a disk, with Hausdorff dimension 4, and its boundary has Hausdorff dimension 2 {{cite:df3b07e031e45e087f8a8d843d0115a18c6e284c}}.
r
3a27388d20f89d2b4511ded6c1473fa1
In the plethora of opinion diffusion models, threshold-based ones are certainly the best known, cf. {{cite:383756a23248c66907b18888db69de1f4d4d2baf}}, {{cite:c3b3a2b578f359290c3b0e259a6ae8e8dc1f4fb2}}, {{cite:58c8900c9aadec52c04c0f2f3eb32b88c440b877}}, and {{cite:54e5ae19dc28e44949346f0048dea13c70d3273a}}. There, nodes (i.e., agents) adopt a color (i.e., opinion) if it is shared by a certain number or fraction of their connections. Particularly, the majority-based models, where each node chooses the most frequent color among its neighbors, have received a substantial amount of attention, cf. {{cite:b0e8310538bba2d02f566dec27fa194a8698146d}}. This imitating behavior can be explained in several ways: an agent that sees a majority agreeing on an opinion might think that her neighbors have access to some information unknown to her and hence they have made the better choice; also agents can directly benefit from adopting the same behavior as their friends (e.g., prices going down).
i
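To make the majority rule described above concrete, here is a minimal sketch of one synchronous update round. The graph, the colors, and the tie-breaking convention (keep the current opinion on a tie) are illustrative assumptions, not taken from any of the cited models.

```python
from collections import Counter

def majority_step(adj, colors):
    """One synchronous round of majority dynamics: each node adopts
    the most frequent color among its neighbors, keeping its own
    color when the top colors are tied (an assumed tie-break)."""
    new = {}
    for v, nbrs in adj.items():
        counts = Counter(colors[u] for u in nbrs).most_common()
        if len(counts) > 1 and counts[0][1] == counts[1][1]:
            new[v] = colors[v]  # tie: keep current opinion
        else:
            new[v] = counts[0][0]
    return new

# Complete graph on 4 agents with a 3-to-1 blue majority:
adj = {i: [j for j in range(4) if j != i] for i in range(4)}
colors = {0: "blue", 1: "blue", 2: "blue", 3: "red"}
print(majority_step(adj, colors))  # all agents turn blue
```

On a star graph the same rule can oscillate (center and leaves swap colors each round), which is why the convergence behavior of majority dynamics is a research topic in its own right.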
fe8b43bdfc4810af67636f72176f06c9
EL with GCN encoders  Graph convolution networks (GCNs) {{cite:d499a9cc8a86b08d83a83fa286c558d7912fa457}} generalize CNNs to general graphs. A GCN is an encoder where node representations are updated in parallel by transforming messages from adjacent nodes in a graph structure. A message is nothing but a parameterized (possibly non-linear) transformation of the representation of the neighbour. Stacking {{formula:97de5268-3e32-4231-935a-c034d85ff5f9}} GCN layers allows information to flow from nodes as far as {{formula:9605baf5-2f18-4cbe-8358-914df9d6b854}} hops away in graph structure. In particular we follow {{cite:8310c8b53fe4a992f3ea247473197847f6b5840a}}, where the encoding {{formula:2be47ef9-d87f-4561-9714-0242848d61f4}} of a node {{formula:fe528e5d-ab3d-4a44-ae16-839a99d995b3}} in the taxonomy is computed as {{formula:725986e0-ddf8-4ec1-8111-e730ee0fb1c3}} , where {{formula:4f2da578-e321-468e-896d-49fa1e97c114}} indexes the layer, {{formula:a2608c00-3c3e-44bf-8e9b-bdb346a5e29c}} are neighbors of {{formula:20351774-e54d-4a2b-be8b-004adf12491f}} in MeSH® (we again discard directionality), {{formula:aa8b1513-9725-40d8-9a21-fedded37fd9f}} and {{formula:52a2a912-0132-49e4-ab3b-d26adeab2075}} are the parameters of the {{formula:ad87dde2-e901-4835-aa60-58a4a58dd063}} th GCN layer, {{formula:40689b05-4826-4f0c-b09d-45f25b4629a6}} is a nonlinearity (we use ReLU), and {{formula:9d664303-6470-4c44-8be0-e73740b75e5d}} is the initial embedding of the disease node {{formula:8a3ca236-1449-4a8a-a577-6edd1d49a72b}} . For the 0th representation of nodes, we use the bioELMo encoding of a node's scope note. As a trainable encoder, a GCN layer is an integral part of the classifier and its parameters are updated by backpropagation from the downstream objective. Once again, our objective is to maximize the likelihood of observations under a probabilistic classifier, i.e. 
{{formula:f83b8b13-da40-4247-9c71-7123a850e186}} , where {{formula:bbc66f99-81c8-4133-bbc2-184f9fee3f78}} and {{formula:4d57fd6f-e842-4a3b-be34-6923d80e82bb}} are the trainable parameters. Note that unlike node2vec embeddings, GCN encoders are trained directly on EL supervision, which is arguably of limited availability.
m
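As an illustration of the layer update described above, here is a minimal pure-Python sketch of one GCN layer with a simple sum aggregator over a node and its neighbors; the exact aggregation, degree normalization, and parameter shapes of the cited model may differ, so treat this only as a sketch of the message-passing idea.

```python
def relu(x):
    return [max(0.0, v) for v in x]

def gcn_layer(h, adj, W, b):
    """One GCN layer: aggregate the representations of a node and its
    neighbors by summation, apply a linear map (W, b), then a ReLU.
    (A sketch: real GCNs typically normalize by node degrees.)"""
    d_out, d_in = len(W), len(W[0])
    out = {}
    for v, nbrs in adj.items():
        agg = [sum(h[u][i] for u in [v] + nbrs) for i in range(d_in)]
        out[v] = relu([sum(W[i][j] * agg[j] for j in range(d_in)) + b[i]
                       for i in range(d_out)])
    return out

# Two nodes connected by an edge, identity weights:
h0 = {0: [1.0], 1: [2.0]}
layer1 = gcn_layer(h0, {0: [1], 1: [0]}, W=[[1.0]], b=[0.0])
print(layer1)  # {0: [3.0], 1: [3.0]}
```

Stacking this function k times lets information reach nodes up to k hops away, which is the property the passage relies on.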
efa909e643e35b6fd5fd235f76e57075
Numerous techniques for smartphone-based indoor positioning have been developed, yet there is not a single solution that can guarantee a reliable and universal service {{cite:ee5b815b11d49193900ebba3cb36ec9f474a2401}} on its own. Most techniques exhibit their strengths and weaknesses under different conditions. In combination, they can complement each other and improve not only accuracy but also reliability of service.
i
30983eb15358414cda4246c398c3037d
{{formula:5a1f8aa0-b6c9-4a7f-bcd2-dfad14943bdd}} baryon : Tables REF and REF show our estimated results for natural and unnatural parity states, respectively, in the {{formula:3189486a-e307-45ef-b2b8-a921887c393e}} plane. Our calculated ground state {{formula:f0abada2-e835-4fbc-a49b-f2482b41d9a7}} mass is in good agreement with the predictions of Refs. {{cite:0535c218abedcdc21a0ef7cbe0b21b3b1ee4247c}}, {{cite:bbb8baeee38a2a7f850ec6371b891b834b13d1eb}}, {{cite:1d8bd422f8cd8fed3d4090d9c7eb333121e55b67}}, {{cite:507527a70ca9ce44abae77301cd4b2e6a7856748}}, {{cite:4ec7762ec0708dd3802480aa9d314a365e6f7098}}, {{cite:eacbc99589bb48cda613b262721efffe1d47dfc8}}, with a mass difference of 25-65 MeV, and for the {{formula:d00eb418-a0d0-4297-a5a0-54a4b704b8d6}} -{{formula:1a2efcac-08df-4b3b-aa5e-1a02d288d6e7}} states our results are in accordance with Refs. {{cite:b7a4a3189d23e771d773d93b749e90e6e939bc27}}, {{cite:5f8c14a8ffd7ebcc741e1429d19f89b3c94172d0}}, {{cite:0535c218abedcdc21a0ef7cbe0b21b3b1ee4247c}}, {{cite:507527a70ca9ce44abae77301cd4b2e6a7856748}}, with a mass difference of 20-70 MeV (see Table REF ). Similarly, for the {{formula:6de33aba-8759-4f46-b822-62d04fdbc040}} state our predicted mass is very close to the results of Refs. {{cite:b7a4a3189d23e771d773d93b749e90e6e939bc27}}, {{cite:5f8c14a8ffd7ebcc741e1429d19f89b3c94172d0}}, {{cite:c91f024557c780b0eb40d7cedb0ea0f5ff40b879}}, {{cite:4c874787a8a8eaaa34f7f0398a92b5027de92ee9}}, with a slight mass difference of 7-16 MeV, and for the {{formula:81b567b7-3b7d-45bd-ade1-6745e94adaf4}} -{{formula:9d0165ee-6586-4897-81c9-dca787a89ba7}} states our results are in accordance with other theoretical outcomes (see Table REF ). In the same manner, we compared our calculated radial and orbital excited state masses evaluated in the {{formula:5f2d64cc-5ee0-4fbc-be0e-6872abf38e6a}} plane for both natural and unnatural parity states, and our results are consistent with other theoretical and phenomenological studies (see Tables REF and REF ).
d
d5e012727a3302651c7185f1968abfd8
CVC-ClinicDB is another commonly used dataset for colonoscopy image analysis. The FANet architecture outperforms all the SOTA methods on this dataset by a large margin, with an F1 of 0.9355, mIoU of 0.8937, recall of 0.9339, and precision of 0.9401 (see Table REF ). FANet achieves the best trade-off between recall and precision compared to the ResUNet-based architectures {{cite:15fba9ab05d67bc85f4aaf51097cdeb4f87e3368}}, {{cite:2d90edf4697af1398edf2c125653d19e69cfa9c1}}. The strength of FANet can be observed in the large improvement of 23.17% in recall and 5.24% in precision over the SOTA ResUNet++ {{cite:2d90edf4697af1398edf2c125653d19e69cfa9c1}}. The higher recall suggests that our method is clinically preferable to the SOTA, as a higher recall is desired in systems used for clinical diagnosis {{cite:55f583c698d15104ef50e5c66e2a01bf1637bbec}}.
r
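The metrics quoted above are related deterministically; a small sketch of how precision, recall, F1 (Dice), and IoU all follow from pixel-level counts (the counts below are made up for illustration, not taken from the paper):

```python
def seg_metrics(tp, fp, fn):
    """Precision, recall, F1 (Dice), and IoU from true-positive,
    false-positive, and false-negative pixel counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)  # note: f1 == 2 * iou / (1 + iou)
    return precision, recall, f1, iou

p, r, f1, iou = seg_metrics(tp=80, fp=20, fn=20)
print(p, r, f1, iou)  # 0.8, 0.8, 0.8, ~0.667
```

The identity f1 = 2*iou/(1+iou) explains why F1 and mIoU rank methods almost identically, while the recall/precision split captures the trade-off the passage emphasizes.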
cf8d5d8ba287bb3d50dc224f5a85daef
The attention mechanisms mimic the way human vision works: the eye always concentrates on the most distinctive regions of a scene. For example, we can easily pay attention to the head and the wings of a bird, and ignore the other common regions, to identify its species. Based on this motivation, many methods have been proposed that utilize attention mechanisms to detect discriminative information in images, including channel attention {{cite:a31f474fadc051b2e8d74d0500374baa60b63fea}}, {{cite:1d718debb85560405d2f7d8fae41d2e368843dd0}}, spatial attention {{cite:9263e52b523898db2659e2c55da377b43f8d3439}}, {{cite:5514a5dc6f44bf4cbf746cfd9351be7a462f7d44}}, and channel-spatial attention {{cite:aab64266e68553c7323854f65acd5d4b9474860d}}. Specifically, SENet {{cite:1d718debb85560405d2f7d8fae41d2e368843dd0}} introduced “squeeze-and-excitation” (SE) blocks to adaptively recalibrate the feature maps channel-wise by modeling the interactions between channels. The trilinear attention sampling network {{cite:a31f474fadc051b2e8d74d0500374baa60b63fea}} generated attention maps by integrating feature channels with their relationship matrix and highlighted the attended parts at high resolution. The recurrent attention convolutional neural network (RA-CNN) {{cite:9263e52b523898db2659e2c55da377b43f8d3439}} introduced an attention proposal network (APN) to capture region relevance information from the extracted features, and then amplified the attended region crops so that the network gradually focuses on the key areas. The convolutional block attention module (CBAM) {{cite:aab64266e68553c7323854f65acd5d4b9474860d}} is a channel-spatial attention method that utilizes both channel-level and region-level information; it can effectively improve the representational power of the networks.
The existing methods {{cite:a31f474fadc051b2e8d74d0500374baa60b63fea}}, {{cite:1d718debb85560405d2f7d8fae41d2e368843dd0}}, {{cite:9263e52b523898db2659e2c55da377b43f8d3439}}, {{cite:aab64266e68553c7323854f65acd5d4b9474860d}} usually utilize different attention mechanisms to adjust the distributions of the attention weights, balancing the contributions of the feature maps extracted from each part. Although these methods obtain the weights in different ways, they are all constructed from the original feature maps only, without part-information supervision. Obviously, if the feature maps focus on non-significant parts such as backgrounds and distractions, the attention mechanism is meaningless under such unsupervised conditions.
i
9eb4ce8e8f48deb06ee8cf184e23e4ab
The feature selection methods used in this work were implemented using “scikit-learn” {{cite:059027e636f54c9cf9cf5acba92c330dfe66fc58}}. For Variance Threshold, features that do not change in more than 80, 85, 90, or 95 percent of the observations are removed. For Chi-squared and ANOVA, features are selected according to a percentile of the highest scores for each feature selection method. The percentiles considered were 80, 85, 90, and 95.
m
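As a concrete reading of the Variance Threshold step, here is a pure-Python sketch; for boolean features, "unchanged in more than a fraction p of the observations" corresponds to scikit-learn's `VarianceThreshold` with threshold p * (1 - p). The toy data and the exact keep/drop convention at the boundary are illustrative assumptions.

```python
def variance_threshold(X, p=0.8):
    """Keep only columns whose (population) variance reaches
    p * (1 - p), i.e. drop boolean features that take the same
    value in more than a fraction p of the rows."""
    n = len(X)
    keep = []
    for j in range(len(X[0])):
        col = [row[j] for row in X]
        mean = sum(col) / n
        var = sum((x - mean) ** 2 for x in col) / n
        if var >= p * (1 - p):
            keep.append(j)
    return [[row[j] for j in keep] for row in X]

X = [[0, 1], [0, 1], [0, 0], [0, 1], [0, 0]]
print(variance_threshold(X, p=0.8))  # the constant first column is dropped
```

The Chi-squared and ANOVA steps differ only in the per-feature score; they rank features by score and keep the top percentile instead of thresholding the variance.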
f17ed945d8a56b1ed96d7d1f2893585b
where {{formula:a0d8da61-f29b-4f6f-bc60-92c9a2188645}} is the raw pixel-colored monocular input image and {{formula:8ff04cf1-839c-44eb-94e0-53b2e3bcf35b}} is the velocity applied to the UAV. In Depth-CUPRL, the input of Equation REF is passed through {{formula:ee047736-a474-4c7b-b0bf-8e8130df8aee}} , which generates a depth image of {{formula:a0add36c-c02f-4ffb-a518-9b2a5bda6bc9}} ; the CURL-based network extracts information from this depth map, which is then fed to a SAC-based network that outputs the robot velocity. A schematic diagram of the Depth-CUPRL architecture is illustrated in Fig. REF and pseudo-code can be seen in Algorithm . The depth maps are generated with a dense-block-based encoding network {{cite:66c10a4202af4dcb4d014ed9b3b0af2514210946}}, which is trained on the KITTI dataset {{cite:5ca52408bcc2c69476a618a8d3b48940ec1388ec}}.
m
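CURL-style representation learning is trained with a contrastive (InfoNCE) objective over augmented pairs of observations; a minimal sketch on a precomputed similarity matrix follows. The temperature value and the use of a plain similarity matrix (rather than CURL's bilinear product of query and key encodings) are simplifying assumptions.

```python
import math

def info_nce(sim, temperature=0.1):
    """InfoNCE loss on an n x n similarity matrix where sim[i][i] is
    the positive pair (query i vs. its augmented key) and the
    off-diagonal entries act as negatives."""
    n = len(sim)
    loss = 0.0
    for i in range(n):
        logits = [s / temperature for s in sim[i]]
        m = max(logits)
        log_z = m + math.log(sum(math.exp(l - m) for l in logits))
        loss += log_z - logits[i]  # -log softmax at the positive
    return loss / n

aligned = [[1.0, 0.0], [0.0, 1.0]]   # positives most similar: low loss
confused = [[0.0, 1.0], [1.0, 0.0]]  # positives least similar: high loss
print(info_nce(aligned) < info_nce(confused))  # True
```

Minimizing this loss pulls each observation's two augmentations together in feature space, which is what lets the downstream SAC policy work from compact representations.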
32c3f7ae527b86f6f4ea55e3fb0d651b
One must note at this point that though there is a maximum in the distribution of fluxes near the value {{formula:3c4657ce-5530-42d1-891e-2b7356e7c11b}} , the histogram of Fig. 3 has a finite width. Thus there are bursts with {{formula:fb4810e9-d755-4fe8-8411-cc8f9869cdcc}} values as large as {{formula:7c00d6cc-9538-4aba-91d8-e0193c599f06}} and as low as {{formula:cf8fecb6-9b52-4ef7-9b95-f8c0a603bc23}} . Figure 2 shows a burst with a particularly large value of this ratio. As argued earlier, one could assign to this burst a value smaller than that given by our algorithm, given the peculiar form of its afterglow. On the other hand, if one takes into account that the observed luminosity has a dependence on the Lorentz factor of the flow {{formula:03499137-2a46-47a4-ba38-1b75b3eef004}} as strong as {{formula:dfb41e9f-7e80-4213-ac0c-997c7fac5060}} , even a small reduction in {{formula:d5756057-4744-4d55-8495-ccb66e11cb17}} could increase the pre-to-post prompt emission fluxes to values larger than {{formula:0eb2a397-f42b-4536-9994-9782db0264b4}} . Values of {{formula:c9ec24dc-5ee7-4bec-bf1e-1f76440b3b9a}} would appear to be more problematic. One account of these values, put forward in {{cite:25d45e5cbcbba6129452a3d696cfbcd7071ad607}}, is that not all protons "are burnt" in the prompt phase, thus reducing the flux of this stage. A different possibility is that in these cases the original angle of the jet to the observer's line of sight, {{formula:53ba955e-efe9-4758-9f78-6ae634f1575e}} , is slightly larger than {{formula:5396f73b-219a-4129-9e5e-1509dbb17c3f}} , yielding a reduced relativistic boosting for the prompt emission; after the RBW radiation-reaction slowdown, the smaller value of {{formula:105e15db-99e8-4159-b0d7-357bc4b5919f}} allows the observer's line of sight to "peer" directly into the (now wider) relativistic outflow, thereby reducing the ratio of the pre-to-post prompt emission fluxes.
If this is the case, then the prompt emission of bursts with {{formula:a1c2da28-c3a4-4115-a771-a7a836d948c5}} should be indicative of this situation; e.g., they should exhibit longer lags, smaller {{formula:089dfc43-51dd-4389-a5fb-4a3808f23a23}} , and smaller L{{formula:45efdd80-1950-45a2-8397-4e52b339ea64}} , properties that could in principle be tested with an appropriate choice of a GRB sample. However, such considerations are beyond the scope of the present note.
d
87ea15e2db9e0d818d8ed71c06fe77d9
Unless otherwise noted we adopt a set of “standard” parameters for our model, in which the stellar separation is {{formula:e6ac4c86-e691-432b-ab50-656a92c08c02}}  cm and the viewing angle {{formula:3430f785-2794-4b4c-a398-d063f1a722fc}} {{cite:4192cc2971a3d39f365e7c71e6f441a9a5d8cfce}}. Other parameters of our model are noted in Table REF . The model is not of any particular system, but its parameter values are chosen to be representative of a WR+O system with a reasonably wide stellar separation. For simplicity the DSA model assumes that the winds are pure hydrogen, but all other parts of the code use WC mass fractions ({{formula:4ef4c8fb-9cb8-482b-aac2-c9a955d02931}} , {{formula:180c343a-dcb8-4063-bda6-cff1967eb70a}} , {{formula:d19a02cc-6bb5-4918-aee9-b1f341d2dc5c}} ) for the WR-star and solar mass fractions {{cite:f82f484a8a7446f02b14df9b9d51ecae19d4a2f9}} for the O-star. The wind momentum ratio is 0.1 and the stagnation point is at a distance of {{formula:ec346f72-3867-4461-a80c-348756ddb63f}} from the O-star, where {{formula:fbab33f2-0fc3-4b20-a517-86766a725f12}} is the stellar separation. The WCR is largely adiabatic. Numerical values of some pre-shock quantities are given in Sec. 2.7 of {{cite:4192cc2971a3d39f365e7c71e6f441a9a5d8cfce}}. With a toroidal magnetic field in the wind of each star, the pre-shock magnetic flux density on the line of centres is 4 mG for the WR-shock and 20 mG for the O-shock. The shocks are almost perpendicular at this location.
r
68efe31204e1600c78542c16bdf24639
The lower bounds of Theorems REF and REF are satisfied with equality by a specific first order method that is an approximate message passing (AMP) algorithm, with Bayes updates. This can be regarded as a version of belief propagation (BP) for densely connected graphs {{cite:2216bb301aa7c5bb9e496d0673f2e087fc02a21a}}, or an iterative implementation of the TAP equations from spin glass theory {{cite:451fc86a0c264f66bb7990fec27ca292059ff1e2}}.
d
b539df69ef0eb9ee09af58081dadfef5
In Ref. {{cite:f1b279f735e8b41a3b616662d860180fc25557ca}}, Chen and Zhu study the hidden-charm tetraquark states with the symbolic quark structure {{formula:a279b9a2-7dbb-4f31-840e-e6d7be025533}} via the QCD sum rules, and obtain the ground state masses {{formula:8ebcdb81-bcb7-4591-a8f1-40d9337c74a7}} for the tetraquark states with {{formula:f587e58b-5f8a-430a-8888-89c4ca6172c5}} , and the masses {{formula:b2eb1955-2fff-4839-b52d-38c0c6f269b7}} , {{formula:725dc5e9-1c12-4d0f-81e1-f15877e33b83}} , {{formula:8179a30c-1b1d-4f6c-8c92-f616fb98bd53}} for the tetraquark states with {{formula:941614aa-407f-4f7f-9367-005d1ae27d9c}} . The present predictions are consistent with their calculations; again, we should bear in mind that their interpolating currents and their scheme for treating the operator product expansion and the input parameters at the QCD side differ remarkably from those of the present work. Any current operator with the same quantum numbers and the same quark structure as a Fock state in a hadron couples potentially to this hadron, so we can construct several current operators to interpolate one hadron, or construct one current operator to interpolate several hadrons. The comparison between the present work and Ref. {{cite:f1b279f735e8b41a3b616662d860180fc25557ca}} is therefore meaningful.
r
db5b9d786a45c91d8b3bada2add93ae5
Our approach aims to pull the nearest class samples together with the calculated prototypes, which is similar in spirit to some clustering-based methods. We therefore use the intra-class similarity as an indicator to measure the quality of our learned features. We compare our method with two classic clustering-based learning methods, SwAV {{cite:0f1422a8b188ea9bde39281bb194f0fd98f1c204}} and DeepCluster v2 {{cite:7583a465ff5154ffc75d0eadd8119675e6709f9b}}, and the results are shown in Table REF . The intra-class similarity is calculated by averaging the cosine distance over all intra-class pairwise samples, and we report the average over the 1000 classes of the ImageNet validation set. Our method consistently outperforms these two methods with both 1% and 10% of labels available, which validates that discrimination improves when more class priors are available.
m
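The intra-class indicator described above can be computed as follows; this is a minimal sketch averaging pairwise cosine similarity over one class's samples (the feature vectors are toy values, and whether the paper reports similarity or its distance complement is a reading of context):

```python
import math
from itertools import combinations

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def intra_class_similarity(feats):
    """Average cosine similarity over all pairs of samples that
    belong to the same class (one class's features given here)."""
    pairs = list(combinations(feats, 2))
    return sum(cosine(u, v) for u, v in pairs) / len(pairs)

feats = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
print(intra_class_similarity(feats))  # (1 + 0 + 0) / 3 ~ 0.333
```

Averaging this quantity over all 1000 ImageNet classes yields a single scalar that is higher the more compact each class is in feature space.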
78a68f8feb2402aaefdd49be0db8cd6e
We demonstrate the framework experimentally for the case of localizing a ground vehicle in a reference map, using only cars as semantic objects. Only stereo camera images from the KITTI dataset {{cite:3a35dd45a43937f30d815b5b5c9f537c58682731}} are used to create the ground vehicle's object map. We consider reference maps constructed from cars seen a priori from ground viewpoints using stereo cameras and Lidar scans, and aerial maps captured on different dates to demonstrate the framework's robustness to viewpoints, sensing modalities, and environment changes (see Fig. REF ). The localization performance is analyzed in terms of pose accuracy and convergence time. With accuracy comparable to state-of-the-art methods, our framework is also view-invariant and robust to environment changes.
i
5e7295ee0eb96e4d271bfb9726857d5e
We observe that the accuracy of the model drops on the held-out LTL commands. This problem of zero-shot generalization (specifically, the ability to generalize to samples unseen during training) has been widely studied {{cite:6056656cffb34dabef98465ab965314ab3f88e2f}}, {{cite:2b871cafb58bc937e5082057cbc49eb20a56785e}}, {{cite:414b38630ea4b02bd70982cb5613e061adf319d9}} for neural sequence-to-sequence models, which cannot handle compositionality, as has the ability of models to learn meaning representations for given natural language sentences {{cite:4ec0d05341987b79e6beccab2678439b81c89551}}. We also observe cases where changes in word order affect the translated LTL output of the model. Consider, for example, the command “avoid the blue room until you go to landmark 1”, ({{formula:4a54a3d7-c883-484c-8008-1fc5318e878d}}blue_room {{formula:631271fd-3445-47d4-9f1c-0e36fa82ef11}} landmark_1). Variations in our collected data include sentences like “until you go to landmark 1, always avoid the blue room” that change the ordering of the referent words (blue_room and landmark_1), which are occasionally confused and mapped to incorrect expressions such as ({{formula:109c5c73-25c8-4222-b069-985dcbfeb5ba}}landmark_1 {{formula:210f9d9d-2c3f-46d4-9700-45ded287f1f9}} blue_room). However, in the drone demonstrations, the sequence-to-sequence model correctly translates the given language commands (converted from speech) into LTL task specifications, which are then solved using our proposed method.
r
071c3501945f91bffcb3c6965078e609
Here VE is the anticipated efficacy and {{formula:3f04aab2-d5ee-4ea7-ab90-5c16e0b7885b}} is the expected difference in VE in absolute terms. We showed, however, that at low prevalence rate, equation REF significantly underestimates the variance. Using an inadequately small variance could lead to underestimation of the type I and type II errors, potentially resulting in winner's curse in underpowered studies {{cite:0812ba5b0f2442c19d5f1f2a6f86bda6c40125e8}}, {{cite:24cfce70e9e015d70271f8684bd7df8c53e71733}}. If instead we were to use the proposed compound binomial model, one could simply substitute the variance in equation REF . As in {{cite:a4dd9dada5a1c09a65449b849da19d378643477e}}, under the assumption of normality and assuming {{formula:aa181bf0-36b7-4918-8bcf-c7d3d63b63be}} is the difference between the upper and lower limits of the confidence interval, substituting the margin of error as {{formula:21c12fd9-c176-4f51-bf48-e121671c0c4f}} in equation REF gives {{formula:25f5be40-d144-4804-a445-f9f775ec3f7e}}
d
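The variance-underestimation point can be illustrated with a generic overdispersed count model; the paper's compound binomial model is not reproduced here, so the beta-binomial form below (with intra-cluster correlation rho) is only an assumed stand-in showing how a plain binomial variance can fall short when cases are clustered:

```python
def binomial_var(n, p):
    """Variance of a binomial count: n * p * (1 - p)."""
    return n * p * (1 - p)

def beta_binomial_var(n, p, rho):
    """Variance of a beta-binomial count with intra-cluster
    correlation rho; the binomial variance is inflated by the
    overdispersion factor 1 + (n - 1) * rho."""
    return n * p * (1 - p) * (1 + (n - 1) * rho)

# At a low prevalence p, even mild clustering inflates the variance:
n, p, rho = 10_000, 0.005, 0.02
print(beta_binomial_var(n, p, rho) / binomial_var(n, p))
```

Plugging the inflated variance into a standard normal-approximation sample-size formula (as the passage suggests for the compound binomial model) would therefore yield larger, more conservative trial sizes than the binomial calculation.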
b32be19493e3e62c22a537951f35b470