The univariate stationary time series models, namely the autoregressive (AR), moving average (MA), and general autoregressive moving average (ARMA) models, are popular tools in the statistical analysis of univariate time series data (see {{cite:8ed566a8c7b3a1a6960b19bf6ce1bd0d1a5a69e8}}-{{cite:3343b28d7aa5365318bb9c7595497a5fd93367d9}}). For the analysis of multivariate time series data, on the other hand, one of the most successful, flexible, and easy-to-use models is the vector autoregressive (VAR) model, which is especially useful for describing the dynamic behaviour of several interrelated series. In the classical definition, the above-mentioned models are assumed to be second-order because the noise term has a finite second moment. However, such models fail to capture the heavy tails of the data, which motivates us to use the family of stable distributions instead. Apart from stability itself, significant and attractive features of stable distributions include heavy tails, a leptokurtic shape, domains of attraction, an infinite second moment (with the exception of the Gaussian case), and skewness. For more details on stable distributions, see {{cite:fc75dfed638fb78033141f5e143ceaec4bde1506}}. Hence, there is a need to explore the behaviour of the above-mentioned models with stable noise for effective modelling of time series data, which also involves estimating the parameters of these models. However, because of the infinite variance, only a handful of estimation techniques are available for models based on stable noise (see {{cite:5e7282f7e92203b4f1a06fcd894f74818053a3fe}}-{{cite:5bef9cac82440c56c4ac7c9e29d7d28f64d2f57f}}).
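As a concrete illustration of such a model, the following minimal Python sketch (our own, not from the cited works; the parameter values are arbitrary) simulates an AR(1) process driven by alpha-stable innovations via scipy.stats.levy_stable:

```python
import numpy as np
from scipy.stats import levy_stable

n, phi = 1000, 0.6       # series length and AR(1) coefficient (illustrative)
alpha, beta = 1.5, 0.0   # stability index and skewness; alpha < 2 => infinite variance

# Heavy-tailed innovations drawn from an alpha-stable law.
eps = levy_stable.rvs(alpha, beta, size=n, random_state=0)

x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]   # X_t = phi * X_{t-1} + eps_t
```

The occasional extreme excursions of the simulated path illustrate why variance-based estimators break down for such models.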
Table REF presents the inference time of GRCNN-110 and several popular networks of similar depth listed in Table REF on the CIFAR-10 test set (tested on a single GeForce GTX TITAN X GPU with PyTorch {{cite:1a423ec84c1be9cb850c29f92d08b6492ef79e03}}). The models were run with batch size 64, and the total time on the test set is reported. From Tables REF and REF , it is seen that although GRCNN and DenseNet achieve lower error rates, they are not as efficient as ResNet and its variants. This is a shortcoming of both GRCNN and DenseNet. {{table:c31f58e0-2c54-4fa9-a438-9814b465c057}}
Merge-based sorting networks: Classical results, such as those by Batcher {{cite:7317cce59717593e62e8d67ef6775f8d6ab7b467}}, Cole {{cite:3f369f14e8c745fd63ac2424b5848374d1a853fd}} and AKS {{cite:7335d762ef7317bd7c66e873f0cb541507f7f443}}, use sorting networks and optimize the depth of the network. Our paper deals with large-scale parallel sorting algorithms that are partitioning-based and move data only once, while typical sorting networks naively imply many rounds of interprocessor communication. Leighton {{cite:167ffc69180bb11ab019712dbb2d978d9a9780c2}} presented tight lower bounds for sorting networks, but no such bound is known for partitioning-based algorithms.
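For concreteness, here is a short Python sketch of Batcher's odd-even mergesort network (the standard construction, assuming the input length is a power of two); each yielded pair is a data-independent compare-exchange between two wire indices:

```python
def oddeven_merge(lo, n, r):
    """Comparators merging two sorted halves of length n/2 starting at lo."""
    step = r * 2
    if step < n:
        yield from oddeven_merge(lo, n, step)
        yield from oddeven_merge(lo + r, n, step)
        for i in range(lo + r, lo + n - r, step):
            yield (i, i + r)
    else:
        yield (lo, lo + r)

def oddeven_merge_sort(lo, n):
    """All comparators of Batcher's network for a power-of-two length n."""
    if n > 1:
        m = n // 2
        yield from oddeven_merge_sort(lo, m)
        yield from oddeven_merge_sort(lo + m, m)
        yield from oddeven_merge(lo, n, 1)

def sort_with_network(a):
    for i, j in oddeven_merge_sort(0, len(a)):
        if a[i] > a[j]:
            a[i], a[j] = a[j], a[i]   # compare-exchange
    return a
```

Because the comparator sequence is fixed in advance, every comparator crossing a processor boundary translates into a communication round, which is exactly the overhead partitioning-based algorithms avoid.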
Evaluation metrics. We follow two evaluation protocols to compare the performance of different active learning methods. {{formula:8fff6f8c-8f7e-48f7-af29-f6b3a3c830ef}} ) Linear classification. We train a linear classifier on top of the frozen backbone features (without back-propagation through the backbone weights) on the pool of labeled data for 100 epochs and report its top-1 accuracy on the test set. We apply mean and standard deviation normalization to each dimension of the backbone outputs to reduce the computational overhead of tuning the hyper-parameters per experiment. We use the Adam optimizer and lr={{formula:0cb7d9d1-8641-40d7-b8df-1d52c096d9e7}} , which is multiplied by {{formula:41c2c466-5204-475a-a6d8-b816ac842ad2}} at epochs 50 and 75. The batch size is 128 on ImageNet/ImageNet-LT. For the CIFAR-10/100 experiments, the initial pools contain only 10/100 examples, so we set the batch size to 4. {{formula:9a23d4de-cdf7-4fa7-93b1-285dff3a889f}} ) Nearest neighbor classification. This uses cosine similarity as a distance metric to search for the most semantically similar neighbors of test set data in the pool of labeled images. When the pool of labeled data is small, this metric is faster than linear evaluation since nearest neighbor classification needs no hyper-parameter tuning. To implement this metric, we use the FAISS GPU library {{cite:e6b80bc0ebe2e94780bc4ad6deae8214fa660165}}. Unless otherwise specified, all experiments are averaged over 3 runs with 3 constant random seeds. {{figure:7fa34a05-e9f8-4c26-aaa3-6cb6c00ddb9b}}
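A minimal sketch of the nearest-neighbor protocol (our own illustration; the function names are assumptions, and the exact CPU index is shown for brevity while the text above uses the GPU variant of FAISS):

```python
import faiss
import numpy as np

def knn_classify(train_feats, train_labels, test_feats, k=20):
    # Cosine similarity equals inner product on L2-normalized vectors.
    train = np.ascontiguousarray(train_feats, dtype=np.float32)
    test = np.ascontiguousarray(test_feats, dtype=np.float32)
    faiss.normalize_L2(train)
    faiss.normalize_L2(test)
    index = faiss.IndexFlatIP(train.shape[1])   # exact inner-product search
    index.add(train)
    _, nbrs = index.search(test, k)             # indices of the k most similar examples
    # Majority vote over the labels (integer class ids) of the retrieved neighbors.
    return np.array([np.bincount(train_labels[row]).argmax() for row in nbrs])
```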
Secondly, the phonon thermal conductivity is {{formula:11fad7ef-0fa7-4df9-92ed-891865c383d8}} , due to phonon-glasson scattering. The thermal conductivity was shown experimentally to be completely dominated by phonons {{cite:9634ca2111b4b379d78e91143b3513d3d73d9b62}} for {{formula:0477b222-428c-4947-8e78-562dec9050f0}} and to have a roughly quadratic temperature dependence. From kinetic theory {{cite:a946c8d6587b82f8d950270d90299cc625056f81}}, we have for the thermal conductivity of phonons {{formula:52cc41b8-c8e1-44b5-9ccd-990b297b6b9b}}
While machine learning-based approaches yield stronger predictions than conventional statistical models (e.g., LR), they are less transparent, which can lead to a lack of trust from clinicians. To address this drawback, various methods have been developed to explain the decisions of "black-box" systems {{cite:16165d59c3b6cf30d9c91948fd7c15972d24cb35}}, {{cite:6db1b382e73006231e6266b2e686dc4312633156}}, {{cite:aa0fa425f13ad106213077e18c511239374040b3}}. As such, we utilized the GradCAM approach {{cite:16165d59c3b6cf30d9c91948fd7c15972d24cb35}}, which allowed us to generate an attention map highlighting the zones to which the CNN paid attention. While attractive, this approach can also lead to wrong interpretations, i.e., there is no theoretical guarantee that the neural network identifies causal relationships between image features and the output variable. Therefore, a thorough analysis of the attention maps is required to assess the significance of certain features and anatomical zones picked up by the model. Such analysis, however, could enable new possibilities for investigating the visual features. For example, we observed interesting associations in the GradCAM-generated attention maps (Figure REF ), some of which are not captured by KL grading. As such, the tibial spines (previously associated with OA progression {{cite:b260edb9b5e02b613b4ee6757f0f04e9e0683d82}}) were highlighted in multiple attention maps. These associations, however, do not hold for all the progressors.
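For reference, a minimal Grad-CAM sketch in PyTorch (a generic re-implementation of the cited idea, not the exact pipeline used here; `layer` denotes the convolutional block whose attention map is visualized):

```python
import torch

def grad_cam(model, layer, x, class_idx):
    acts, grads = [], []
    h1 = layer.register_forward_hook(lambda m, i, o: acts.append(o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    model.zero_grad()
    model(x)[0, class_idx].backward()            # gradient of the target class score
    h1.remove(); h2.remove()
    w = grads[0].mean(dim=(2, 3), keepdim=True)  # global-average-pooled gradients
    cam = torch.relu((w * acts[0]).sum(dim=1))   # gradient-weighted feature maps
    return cam / cam.max().clamp(min=1e-8)       # normalized attention map
```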
Next, we will build an indicator function for the sampling method based on the operator {{formula:4552b521-7d31-4661-92e7-ce18793c5f07}} instead of {{formula:c4f9319e-77a0-412d-8fda-0e09d9a3bb96}} . Since {{formula:6d88c361-7a4a-433d-8d58-87ff6c26d0c9}} is known, one can compute {{formula:632569eb-9fca-4d8b-8d87-610fc4d3446e}} independently in order to construct the operator for solving the inverse problem. Note also that {{formula:3a52e59e-77a1-40ac-8ea7-a9803124f8c7}} can be computed in a multitude of ways. One can use boundary integral operators for the Helmholtz equation to derive the solution operator for (REF ) (see, e.g., Chapter 3 of {{cite:5ab8226101aecccdc528a3c228b78002fabcafc2}}), and then, by appealing to the asymptotic formula for the fundamental solution, derive a formula for the operator {{formula:959f793c-ad3f-445a-8e87-089fe56b710e}} . Later in this section we derive a formula for {{formula:261f83bf-ee66-4397-98f1-787ee16c0778}} when {{formula:cda44447-381c-4d59-b225-de3ac975ce09}} is the boundary of a ball centered at the origin. Note also that {{formula:bac8d0c4-cd5c-4caa-9f77-ed8f05d77d6e}} is independent of the underlying scattering problem, which implies that this operator can be used to study other inverse scattering problems with near-field data sets.
In the twentieth century, problems of independence attracted considerable attention. The great mathematicians K. Gödel and P. Cohen proved very hard theorems about the independence of the axiom of choice and the continuum hypothesis from the other axioms of set theory ({{cite:5aa6f6b6d0d7458c2f700ea98deb02ae33f1eb4f}}, {{cite:7e5b1ad2f2c2223e9033e378e3eef51f752faf69}}; see also {{cite:1b8f24d72e7e17a2dd24eb07b9cb8ab12bc4c22f}}, {{cite:9ac6ba16c66a791489ac3aa920a40a22a6971f43}}). Results of another type, namely consistency results, were also proven (see, for example, {{cite:ad3b400758c4f2b9aa413e01441a6266e6f400a5}}). Nevertheless, some seemingly simple problems remain unsolved. Among them is the (in)dependence of the conditions in the definition of a linear mapping.
This is a representation of the potential which, up to a negligible correction, coincides with the corresponding expansion in {{formula:2c931eb8-d17d-40a2-a2a3-aabeee19690c}} of the potential {{formula:ce5db3cd-2f0f-4abd-8b1b-aa967b189e0b}} for n=2 found in {{cite:9357c68b0c8bb8a4790ca033fe372249b66073e7}}, where it was named "the T-model". Thus, we can state that in the model we are studying, all inflationary predictions coincide with the corresponding predictions of the T-model and, therefore, are consistent with the Planck data to the same extent.
Not all feature weighting methods are based on conditional probabilities, though. We now describe some methods based on information theory {{cite:7d10b5dcc8e52165095594df7b5a22c75efd3504}}, {{cite:276c174fc3d9bae6cb53c4d9a5d21d18802e145a}}.
A second possibility might be the realization of composite hopping in the context of ultracold atoms in an optical lattice. Quantum simulation is an exciting area of research {{cite:5d611b019f9938c3da5a64f919ae0e3ea4e6aa8c}}, and given the many unusual physical properties displayed by our model, such as negative compressibility (discussed in our earlier work {{cite:cb2a57bb4bedaad6d4eed95e89bb2633b9679682}}, {{cite:6e830b40df602b4b31a27da80739512f1b489784}}), negative entropy {{cite:da5fb2780a4a011ea4ff942cbd9d93e788194fa8}}, {{cite:13a4c6eb177cdc11cc1d3d8f231d31c3a8c88564}}, {{cite:d9cdaedb899c0e1ed8bbf0f27ad6cb1eb9fbcc09}}, a zeroth-order transition {{cite:ce92ed1df6384cb573b817256802874d033669a3}}, {{cite:44441aee803e05352df8b8b15fab64d2e73633d9}}, {{cite:1d4e9a18214a4188b131eb2921b50135ca5a762c}}, {{cite:ee6bd493d8edf38fabb67bd98040af1469977946}}, {{cite:6fe832d2f51dbb2d3ac93494ce7eeea802240b6c}}, {{cite:54dcb16ab35c095bc8dbaf5dc0ce282a03470938}}, {{cite:223808162ff66d0ecd418ef88c5fbb3e1571f854}}, {{cite:fc759befb4c4fa270d23eeb0bb6f3353375a8316}}, and the remarkable agreement of the calculated ratio of {{formula:155bb127-4054-4e18-87aa-e8df58c2b6a3}} /{{formula:ad240731-3bd8-4b56-9c60-fa4b9a3e7ecc}} with a wide range of unconventional superfluids and superconductors {{cite:6e830b40df602b4b31a27da80739512f1b489784}}, it would be interesting to explore the physics of composite hopping from this perspective.
1. Stereo Disparity Estimation for Supervision. As reported by Chen et al. {{cite:4c4970893753ef180bd9327814c2c0073e401d4f}}, disparity supervision substantially improves detection performance. Without such supervision, the network may not be guided to fully exploit the geometric potential of binocular images, and its understanding of the scene could be limited to that of a monocular detection network. Thus, we use the outputs of a traditional stereo disparity estimation model on the overlap region and apply them as weak supervision to improve detection accuracy over that region. We empirically observe that this supervision significantly improves the overall detection accuracy.
From the point of view of machine learning, improvements to the current model can be approached from several angles. First, the network design can be modified to allow for the quantification of uncertainty. In this way, its output would also include an estimate of the error associated with its prediction. This could be done, for example, by using the MC-Dropout technique as in {{cite:b5baa369386abccb4751929c97d9587ddcf515e1}} to model the so-called epistemic uncertainty. One can also add an additional output node to the network in order to model the covariance of predictions and capture the so-called aleatoric uncertainty, as in {{cite:def551c045b7f3c9bf85296d0d9f5105f3439c06}}. Another interesting direction would be to study ways to use better initialization schemes of the network weights, or use pre-training routines, followed by a fine-tuning phase for each new lensing potential. Finally, to the best of the authors' knowledge, this is one of the earliest applications of implicit representation learning to cosmology. In light of the extraordinary success achieved by modern methods designed within this framework, particularly for the synthesis of 3D images {{cite:5025c1b833e9a1d5e986df377c9f4f63ced1c478}}, we believe that the exploration of their application to cosmology represents an exciting avenue for future research.
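As an illustration of the first direction, a minimal MC-Dropout sketch in PyTorch (our own; it assumes the network already contains nn.Dropout layers):

```python
import torch

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=50):
    model.eval()
    for m in model.modules():
        if isinstance(m, torch.nn.Dropout):
            m.train()                          # keep dropout stochastic at test time
    preds = torch.stack([model(x) for _ in range(n_samples)])
    # Mean as the prediction; spread as an epistemic-uncertainty proxy.
    return preds.mean(dim=0), preds.std(dim=0)
```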
The proposed estimators {{formula:ad7ca92c-9b34-421c-b429-aecf5e8f4eb3}} are related to the hard singular value thresholding (HSVT) estimators, which were previously studied in matrix denoising {{cite:78c28a882f9ffc4cab0cf353b39ea32a4c308d73}}, {{cite:b3a49ca3882f685ec2671a1bce0074c206819afd}} and matrix completion {{cite:ad3907e2a467784e6771bc6bd8fb634efa000ff5}}. Our method and its subsequent analysis differ from those of HSVT in two respects. First, our estimation problem requires {{formula:954a2994-d7cd-4712-8d79-da4b62d740ed}} , {{formula:1e37a2c7-f215-4f43-9f91-c76b76046e33}} to be stochastic matrices that belong to particular simplexes (see Lemma REF in the supplement). This is achieved by normalizing the rows of the matrices and truncating negative values, which complicates the analysis of the estimation errors. Second, the analysis needs to account for the Markov dependency of the data, whereas the data are typically independent in matrix denoising and matrix completion.
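A schematic NumPy version of this estimator (illustrative only; the rank choice and the exact projection step follow the description above rather than any released code):

```python
import numpy as np

def hsvt_stochastic(Y, rank):
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    P = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # hard singular value thresholding
    P = np.maximum(P, 0.0)                     # truncate negative entries
    rows = P.sum(axis=1, keepdims=True)
    rows[rows == 0] = 1.0                      # guard against empty rows
    return P / rows                            # each row lies on the simplex
```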
There is recent news from the PTA groups on their search for a stochastic GW background. The NANOGrav collaboration found Bayesian evidence for a `common-spectrum stochastic process' in its 12.5-yr PTA dataset, and more recently the EPTA collaboration also reported the detection of a common red-noise signal when analyzing a timespan of up to 24 years {{cite:8527496ee5e0ed20700fff1ca89f8e41ba8f4d18}}. However, so far no group has found significant evidence of the quadrupolar Hellings {{formula:83496732-3ede-47a4-84b0-da1b01f96a89}} Downs inter-pulsar spatial correlations {{cite:350e42651659dd4367cea957ec427bf9f8709f57}}, and no individually resolvable GW sources in the PTA band have been detected. Moreover, {{cite:f26793d7fabadbe53e4bb2e814bb9369cd61da26}} showed that it is possible to spuriously detect a common red process in simulated timing-array datasets even when no such process is injected.
In this work, we incorporate two automated methods, dynamic time warping (DTW) with spectral clustering {{cite:7525e77b68167102c0a8dbf2b38e2a2cf420bcc3}}, {{cite:ba2986fc043d72897df885210976dfc673c57262}}, {{cite:e99cf34816fe9bab96e63500bfaec65e9e3a70ed}} and Granger causality tests {{cite:eb25e770cc8939854e20de9a0cac2893471c5773}}, to allow users to determine which references are appropriate for predictions and whether a model employs proper references for the predictions (R2-3) {{cite:1eae4ec6d75d6db89086ca714a6348a3fa5c6652}}.
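A textbook DTW recursion, for reference (an illustrative sketch; the cited works use more elaborate variants):

```python
import numpy as np

def dtw_distance(x, y):
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # Best of match, insertion, and deletion.
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return D[n, m]
```

Pairwise DTW distances can then be converted into an affinity matrix (e.g., exp(-D/sigma)) and fed to a spectral clustering routine.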
Despite the growing interest and use in both the research and industry communities, the creators of benchmarks for Data Management Solutions (DMS) {{cite:874bf773cac7d03ca84a25e87d695e2e9eb27c75}}, {{cite:b30a2ea0abaefaa26fde52f5da1832c5f6fd608a}} currently do not offer a common suite for performing cross-domain benchmarks (i.e., one-to-one comparisons of RDF, graph, wide-column, relational, etc. stores). In addition, there is no significant baseline for comparing these cross-domain DMSs against one another. Moreover, reproducing benchmarks is a non-trivial problem owing to reasons such as non-standardised setup configurations, the lack of publicly available resources (such as scripts, libraries, and packages), and the lack of transparent evaluation policies. Results in areas such as named entity recognition and linking {{cite:2a93062e023b9a1215b1a060c49e0730cdeecec1}} as well as question answering {{cite:a0c98a532decc352349fb878a864c186a7bdbd29}}, {{cite:8c0ad49ae8db8477c9d7111f882de380da1bf502}} have, however, shown that the provision of standardised interfaces and measures can contribute to improving the performance of software solutions.
TotalText. The proposed FANet achieves SOTA performance on TotalText. Compared with methods without pre-training, FANet outperforms the previous SOTA method PAN by 1.3% (84.8% {{formula:5c73314b-5995-4914-bd3e-017d5b267f1f}} 83.5%) in F-measure. Compared with algorithms with pre-training, FANet still outperforms the previous SOTA method {{cite:b1e3564d3dbb017cd3f487402e1b0d25ae299844}} by 0.2% (85.9% {{formula:d05c3849-a00d-4dbd-8140-41e97aa0dc37}} 85.7%).
We study the effects of a chemical potential, related to a baryon reservoir, and of a magnetic field on the chaotic behaviour of a strongly coupled {{formula:a7c173e1-7211-421a-a834-d829933e84d3}} pair in a finite-temperature background {{cite:dbb2b59494478a9e1ca53788d990b5733112aad8}}, {{cite:4b141d1d90dc2d0cd9b774c90c582906776b79ac}}. Such systems can be investigated through holographic methods by analyzing the dynamics of an open string hanging in the bulk in the presence of a black hole (BH). The BH properties, such as the Hawking temperature and charge, are related to properties of the boundary gauge theory. In {{cite:b0f419b9c2fa2a1e2370f9bfb7661dc61a9ba6f9}}, {{cite:19bd680be48682da459a36d9dfd24fdf5dd3585b}} it is shown that BHs belong to a set of systems called fast “scramblers”; in particular, BHs are the fastest scramblers in nature: the time needed for a system near a BH horizon to lose information depends logarithmically on the number of degrees of freedom of the system. As shown in {{cite:e4448612a42c9ff207143a0d5a4df63c8f91a0f7}}, these systems present bounded chaotic dynamics, with an upper bound (the MSS bound) on the largest Lyapunov exponent {{formula:222ed8bc-6d40-4260-846a-82347fd8b320}} characterizing the chaotic behaviour of a thermodynamic quantum system with temperature {{formula:3b0bb019-cc4a-4868-90f1-a20728932e2c}} : {{formula:1c3d0a97-4cfd-416b-be5f-c0914b682780}}
We first evaluate DAP on the transfer segmentation task from GTAv to Cityscapes. The comparison against recent approaches is shown in Tab REF . To ensure reliability, we run DACS (the baseline) and DAP three times and report the average accuracy. DAP achieves a mean IOU of {{formula:86337d73-c24e-4231-8f3e-c3a51e248bae}} over 19 classes, a {{formula:e1120cf4-cb00-4169-9291-6b2e2884ebdf}} gain over the baseline, and outperforms all other competitors except Chao et al. {{cite:b67d638918078f26cc82c108f0be831f6f27cd59}} and ProDA {{cite:64cd9aa617153b6db2b32aca1d6af993bea73063}}. Specifically, Chao et al. {{cite:b67d638918078f26cc82c108f0be831f6f27cd59}} used ensemble learning to integrate the predictions of four complementarily-trained models, including DACS, whereas DAP uses a single model; ProDA {{cite:64cd9aa617153b6db2b32aca1d6af993bea73063}} improved the segmentation accuracy significantly via multi-stage training, yet its first stage reported a {{formula:8a99c623-6f90-4e13-b6ea-0a2fa4458aec}} mIOU. What is more, the results on transferring SYNTHIA to Cityscapes, shown in Tab REF , demonstrate a similar trend: DAP outperforms all the competitors, except ProDA and RED, in terms of either 13-class or 16-class mIOU. To show that DAP offers complementary benefits, we feed the output of DAP as pseudo labels to the first stage of ProDA, leaving the second and third stages unchanged. As shown in Tables REF and REF , the segmentation mIOUs of ProDA in GTAv{{formula:7014fe7a-df90-4ea1-9460-7aad66d2287a}} Cityscapes and SYNTHIA{{formula:546537bb-6ce6-4b1a-972c-b52f1f3deb4d}} Cityscapes are improved by {{formula:d710bc7a-9267-48c9-b47a-7d7df0eb055d}} and {{formula:7a6f887b-873d-4a80-84eb-1658e473ac78}} , respectively, setting new records in these two scenarios.
We, therefore, propose MemStream, which uses a denoising autoencoder {{cite:a78af7db732e2c1b23485a14b52e75548a79581f}} to extract features, and a memory module to learn the dynamically changing trend, thereby avoiding the over-generalization of autoencoders (i.e. the problem of autoencoders reconstructing anomalous samples well). Our streaming framework is resilient to concept drift and we prove a theoretical bound on the size of memory for effective drift handling. Moreover, we allow quick retraining when the arriving stream becomes sufficiently different from the training data.
We preprocess all of our corpora to eliminate a list of stopwords specific to newspaper publishing (e.g., “op-ed”, “sportsmonday”, “business review”). We identify this list by iteratively training LR and examining the top coefficients. We train four text classifiers: Logistic Regression with BOW thresholds (min_df=.01, max_df=.5, vocab_size=13,000) (LR) {{cite:d805d1308b0124268019d0db04d58148c3b05143}}, FastText {{cite:28a43839225eeec8beae7aec24c0ae213871f232}}, pre-trained BERT-Base (BT) {{cite:21fc27da6f7d2055a3c461f8b29a9a68def556d1}}, and pre-trained RoBERTa (RT) {{cite:aa3f49166b5f24f3c00ae258c554100988e5c0b3}}, on a balanced training set of {{formula:ba059f0d-445a-43ca-9e3b-327a92c0d723}} articles published in 1987-2001 ({{formula:b087b7b2-1ded-44bc-a333-e219ce247239}} articles). We use AUC as a metric because we are most interested in the rank order of the documents that our classifiers generate. {{table:27f9d542-eaff-46be-8eee-8fe7411a6b06}}
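A sketch of the LR pipeline with the quoted BOW thresholds (our own reconstruction with scikit-learn; the variable names are assumptions):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def train_bow_lr(train_texts, y_train, test_texts, y_test, stopwords):
    vec = CountVectorizer(min_df=0.01, max_df=0.5, max_features=13000,
                          stop_words=stopwords)
    X_train = vec.fit_transform(train_texts)
    X_test = vec.transform(test_texts)
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
    # Top coefficients: candidates for the iteratively grown stopword list.
    top_terms = vec.get_feature_names_out()[clf.coef_[0].argsort()[-10:]]
    return auc, top_terms
```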
Methods based on unsupervised pre-training and supervised fine-tuning for NLP have achieved phenomenal successes in the last two years. Most of the proposed methods in the literature choose language modeling or its variant as the pre-training task. After the pre-training stage, ELMo  {{cite:ce79b5235c8a542cc272b9871a8937babf8bb63e}} and CoVe  {{cite:c376c8377e2d4320eba46f54df9cc655023ae57e}} directly use the learned representations as additional features for downstream tasks, while BERT  {{cite:5c5fc61d70f2735fff96a0c5be96e6b7964f4b23}}, ULMFiT  {{cite:8561b5e86421b97fc61522909b7da3ba50f0053b}}, XLM  {{cite:2a858dbcc18fe0fc34b93b116e7537790cde19ae}}, and OpenAI GPT  {{cite:729976fe5adedc9d141b8c6a0149e61ecf6a91c7}}, {{cite:c6e89283f095434313637752ac783be836354fa2}} require fine-tuning both pre-trained parameters and task-specific parameters on labeled data. The state-of-the-art performances have been significantly advanced for classification and sequence labeling tasks, such as natural language inference  {{cite:ba862f795ba2437d827b32086c6e226b6edd2600}}, named-entity recognition, SQuAD question answering  {{cite:d610526874095467bd4429e40763831db15191f6}} etc.
Previous work has resolved this problem by using a spectral regularisation scheme, in combination with a residual network structure, motivated by the following argument. If we parameterise our feature mapping as {{formula:c185ce11-c11e-4d58-9526-d12482923927}} , and then apply spectral regularisation {{cite:83db235fe3385de7f448a01efd37ce9d856de357}}, {{cite:850eb7ccd9ab6ac19673c04387aee9c638e2d826}} to the function {{formula:b4e8f6e5-d997-4467-9e8e-a1594b26fff6}} such that its Lipschitz constant is less than 1, then {{formula:ee36e49e-02f9-46bc-a0cf-5e0ccf3f6b2c}} is guaranteed to be bi-Lipschitz, that is, to satisfy: {{formula:607d2c7d-9a85-4e7f-bf5f-25a4c44c7053}}
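A minimal PyTorch sketch of this construction (illustrative; spectral normalization constrains each linear map to unit spectral norm, so a scaling constant below one is included to make the Lipschitz constant of the residual branch strictly less than 1):

```python
import torch.nn as nn

class BiLipschitzBlock(nn.Module):
    def __init__(self, dim, c=0.9):
        super().__init__()
        self.c = c   # Lip(g) <= c < 1  =>  f(x) = x + c*g(x) is bi-Lipschitz
        self.g = nn.Sequential(
            nn.utils.spectral_norm(nn.Linear(dim, dim)),
            nn.ReLU(),   # 1-Lipschitz activation
            nn.utils.spectral_norm(nn.Linear(dim, dim)),
        )

    def forward(self, x):
        return x + self.c * self.g(x)
```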
Grasping is one of the most important abilities of modern intelligent robots, especially industrial robots, and it promises great benefits to society {{cite:9436da2575b6b1e027273e10715889bdd0474559}}. As the most common basic action of robots at work, autonomous robotic grasping has great application prospects, and because of its significance it has been studied for a long time. Recently, robot grasping has made rapid progress thanks to the rapid development of deep learning. Robot grasping involves many tasks, including object localization, pose estimation, grasp detection, and motion planning. Among these tasks, grasp detection is a key task in the computer vision and robotics disciplines and has been the subject of considerable research.
Local polynomial regression.  Perhaps the problem most similar to our setup is the rich body of work on local polynomial regression, which dates back to the pioneering works of {{cite:ffcb2e34d56340da18045c9e7e0bb06fa8ddc1c3}}, {{cite:4affbc4a5e0a14329f09e1255a5d2be43ba65b9f}}. This line of work aims to fit a low-degree polynomial at each point in the data set based on a subset of data points. Such approaches gained a lot of attention as parametric regression was not adequate in various practical applications of the time. The performance of this approach critically depends on the subset selected to locally fit the data. Toward this, various selection approaches have been considered: fixed bandwidth {{cite:06a4bda065175bcd3dce5f3eb7ce7da0e6dfe629}}, nearest neighbors {{cite:7a89808905313d577e9bc2000d6c3df0b96c684c}}, kernel weighted {{cite:a3dfe9d101ca9dce4bd8ceaacd89161e057ef229}}, and adaptive methods {{cite:ba66971be5bfad3a10c9b37299c4867aeddf4d4a}}. So far, the analysis of local polynomial regression has mainly been restricted to classical techniques like minimax estimation, on which the literature is vast for various settings. The first results on asymptotic minimax risks were established by {{cite:7071d059366e9b90c939de0f0ac026c51f723be2}} over Sobolev spaces. Minimax risks over more general classes were studied by {{cite:0f9f7f10aedaf88014e93b61d05d00a514d7b52e}}, {{cite:057c850b2088b0077a0989a02dc448e8f2070767}}, among others, for estimating an entire function. But none of these works provide finite-sample generalization bounds, which we obtain in this work.
({{cite:2be94424e209cbdc71a5e2f8f68db03494debb2c}}). From the definition, {{formula:56daee87-7868-4f4a-a8dc-c89b3702b008}}
3DPW {{cite:e621aefb2656fe8a9510b99c0d07a126f882762e}} was chosen as the dataset to augment since it has a reasonable size, provides ground truth body models, and there are state-of-the-art DL HPE systems with open source code that used it for training and testing.
The cumulative energy flux in the focal plane can be treated as a single-channel image, see Fig. REF . Neural networks are among the most versatile methods for analyzing data with this type of continuous features. For image analysis, the best results at the moment are shown by convolutional neural networks {{cite:2c8a05219ea1349d1e9c0b9429d8e8b4fba65ac6}}, {{cite:b2b73d21d0655d10b0ad2f27649d13e2f7c8b43b}} and transformers {{cite:7a23e6443ecc834c635680f7517c763d69ec9d80}}, {{cite:b9aca14bfc77e5ade486a4bb7bc76c1bd1c1617f}}. Transformers, however, are known to be difficult to train: they are very demanding in the amount of data and also require pre-training. At the moment, there are few successful applications of this method to computer vision problems without model pre-training {{cite:54414b5020b7e1b68ca5cdf484f4a7593ba9552f}}, {{cite:e034766af692c5a30fbd2c19d6e85356009cf92f}}. Therefore, convolutional neural networks were chosen for the ML-based analysis of the flux.
We found that although the CFD results qualitatively agree with in-vivo clinical measurements, they tend to have substantially larger deviations in the distribution of the hemodynamic variables compared to the PINN results. Conversely, the PINN approach trained on the clinical data, despite the physics deficiencies of the same underlying axisymmetric model, recovers the brain hemodynamics with higher accuracy. Furthermore, the deep learning model does not suffer from the major limitations imposed by purely computational models, such as the precise prescription of boundary conditions, laborious mesh generation, or even constitutive laws {{cite:811633352ebca5f9ba8ebcc3de736032590f567c}}, {{cite:48c6b86282406486d292592894f9bc511d25ad94}}, {{cite:20d3729ea622244fb1213b315be9d35b57d53e1e}}. In hemodynamics simulations, in particular, inaccurate prescription of inflow/outflow boundary conditions can severely degrade the simulation predictions {{cite:c4450ee8c52bb3268813e2d5c28fa9c3e551620c}}, {{cite:849ed784e73010146b5cf66aa86d57533f21b554}}. Although blood velocity measurements via TCD are available, the spatially sparse measurement points usually lie in the interior of the domain of interest and not at the boundaries. Incorporating these measurements into a CFD code is challenging because: (i) one has to solve an inverse problem to infer the boundary conditions from these measurements, which can be cast as an optimization problem that is in general costly to solve and often becomes ill-conditioned or under-determined in the absence of sufficient measurements; (ii) the TCD measurements are noisy, precluding their utility in numerical techniques that demand smooth profiles for differentiation. On the other hand, our PINN approach does not require the exact prescription of the boundary conditions, and it computes derivatives and solves inverse problems using noisy data. The results from our PINN model will further improve the axisymmetric CFD simulations of pulsatile blood flow in deformable walls. Indeed, it appears that the purely physics-based CFD models can predict the general trend of hemodynamic variables rapidly. However, they fail to capture the velocity peak and mean values and induce a time delay in a cardiac cycle. This is due to the combined effects of physics deficiencies and uncertainties in the prescription of the outlet boundary condition: the former is the result of model simplifications, and the latter is due to the lack of knowledge of the internal blood pressure distribution in efferent vessels. Conversely, our PINN model, which is trained to both fit the clinical data and satisfy the physical laws of unsteady blood flow, can successfully capture blood velocity peak and mean values as well as their spatiotemporal distribution. Therefore, our findings in this paper may be used as a benchmark for refining the CFD models to better capture brain hemodynamics.
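To make the last point concrete, here is a toy PINN loss in PyTorch (a deliberately simplified illustration on a 1D advection equation, not the axisymmetric blood-flow model used in this work): the network is trained to jointly fit sparse, possibly noisy measurements and satisfy the PDE residual at collocation points.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))                     # u(x, t)

def pinn_loss(xt_data, u_data, xt_coll, c=1.0):
    data_loss = ((net(xt_data) - u_data) ** 2).mean()     # fit the measurements
    xt = xt_coll.clone().requires_grad_(True)
    u = net(xt)
    g = torch.autograd.grad(u, xt, torch.ones_like(u), create_graph=True)[0]
    u_x, u_t = g[:, 0:1], g[:, 1:2]
    physics_loss = ((u_t + c * u_x) ** 2).mean()          # residual of u_t + c*u_x = 0
    return data_loss + physics_loss
```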
To validate the registration performance of the proposed framework, a set of typical multi-source remote sensing images is carefully selected, and six effective registration algorithms are used for comparison: SIFT {{cite:38d3e88f5066d19a400e9da9cbc97f30b0fd9c14}}, SAR-SIFT {{cite:c74e4cdb04120fd5762d1120e77149fec5e216d8}}, PSO-SIFT {{cite:c74e4cdb04120fd5762d1120e77149fec5e216d8}}, PIIFD {{cite:8c2e8e8ee640874371f1aec686a01bb9b8e2e9ed}}, MS-PIIFD {{cite:c6e471758b7f4393355827fb04c00659b00a7e50}}, and RIFT {{cite:4e47244190f80cefe58d1a86c262a679235d9a08}}, all of which contain processes for dealing with multi-modal properties. The experiments are implemented in MATLAB2021a on a platform with an AMD-Ryzen-7-5800X 3.80GHz CPU and 32GB RAM.
The sample complexity bound presented in (REF ) reveals that the increase is exponential in {{formula:1c870f7c-33b7-4b05-8ea9-13261e3e79b7}} . This rate is clearly undesirable, and it turns out that it can be avoided by using learning rates {{formula:9bb1fd35-3ff0-43b8-b72e-e4870569b6b6}} other than linear ones. For example, using polynomial learning rates, {{formula:b33cece1-1179-4739-a1f9-9afd910b3420}} for some {{formula:5a055022-aba4-4bdc-88cd-5dcb507b289f}} , one can achieve the following sample complexity for {{formula:89cb7ea2-0c60-467f-8d06-77c38eba148c}} -near estimates for small enough {{formula:9b92373e-92b1-4d0c-8af3-04ae62af14bc}} (see {{cite:49d086fc5ba9b590414c6d4ce2576f706f5fd178}}): {{formula:bd6c2f07-3cab-4b29-92f8-df25bf7dd15d}}
Another way to solve Eq. (REF ) is based on reparameterization tricks that reduce the variance of the gradients {{cite:2cc730e3f9a98918f3d572c878cac3ec018f6b61}}, {{cite:847e70ab08d49eaf667107ffe7c49eda5ef27b75}}. Specifically, we set the entries of the output {{formula:effccb7a-d0f0-4e93-bb65-72d8f19eda99}} as the parameters of Bernoulli distributions to generate {{formula:fdce0510-e014-4482-8c2d-9364d4b906c9}} , i.e., {{formula:a16fff0f-a52c-44aa-b532-6e8f101670a6}} , for {{formula:05484ad9-93ce-4431-b997-a978d75b3b60}} . To make {{formula:74bf93e1-2af5-44c4-8e7f-63a0a1d4a8de}} computable, we may use the Gumbel-softmax trick {{cite:4d277b27c4cbd8d08f885c7b251c41adb1e6f1f1}}, {{cite:40ae97d4b2924f4bf0705673cbae82aa615f5922}}, {{cite:8ea7982836fbd3663aaf2314a6a275847130cd82}}. However, this approach suffers from two issues. First, the estimation of the gradient is biased. Second, as {{formula:ca6b6e29-1860-4539-a30c-7314ebc6f4cb}} is essentially a randomized algorithm, sampling sufficiently many {{formula:cdd7f4f4-e42b-43a9-8117-0c531e788978}} is needed to guarantee a valid and low-cost solution, and such evaluation is costly as discussed in Sec. REF . We compare different aspects of RL, Gumbel-softmax tricks, and our relaxation approach in Table REF . Alternatively, we can empirically compare {{formula:0e4d84f3-57cf-4ced-bd81-aa18efae336c}} with a threshold to determine {{formula:96bb290f-71d9-490f-b8d2-1f1bd98f54e5}} , although this has no performance guarantee.
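For reference, a minimal version of the binary Gumbel-softmax relaxation in PyTorch (illustrative; as noted above, the resulting gradient estimator is biased):

```python
import torch
import torch.nn.functional as F

def sample_binary(probs, tau=0.5, hard=True):
    p = probs.clamp(1e-6, 1 - 1e-6)                    # Bernoulli parameters
    logits = torch.stack([(1 - p).log(), p.log()], dim=-1)
    y = F.gumbel_softmax(logits, tau=tau, hard=hard)   # straight-through if hard=True
    return y[..., 1]                                   # relaxed/binary "one" decision
```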
An interesting observation from Table REF is that anomaly detection based methods such as IForest and LOF achieved their best performance on {{formula:7ea1a8ae-d7c9-4d28-bb8c-cd62c95fbe97}} . A possible reason for this is that both of these methods rely on distance metrics: LOF computes Euclidean distances, and IForest scores are related to the L1 distances between points {{cite:b522701bcadca5984c6bf5d58fd8cf6e484ef8e2}}. This suggests that it may be important to match the properties of the representation to the requirements of the anomaly measure. Recent work such as {{cite:75e61d07307710577d6291061edef49859a95435}} and {{cite:1f52262e46e61d6b81c0985dd883e4f95de69979}} has demonstrated a significant boost in performance on open category detection, and this boost could partly be because they train a representation to maximize the effectiveness of their anomaly measure, which is based on a density estimator.
More complex event-based algorithms have also been developed, which have not demonstrated real-time performance, but show promising results. In {{cite:926295fe2330d02ab466de09d98b52fa35aabd6f}} a phase-based optical flow method is discussed, which is developed for high-frequency textures. The algorithm is compared to other event-based methods {{cite:150e673f8a227875d35ff684df2d71ca0ad256fd}}, {{cite:e4156209d39a0c4dffc009fc29c3190d6b01099c}}, {{cite:588477ed85b4cae8061f3feb08ee58616fd5829e}}, indeed showing significant accuracy improvements. Also, an approach was presented for simultaneous estimation of dense optical flow and absolute intensity {{cite:aeccb6b761331f1a8f04a350d72cb1538c6947a3}}. This is the only available approach aimed towards dense optical flow estimation. Visual results of this method are encouraging, yet a quantitative evaluation is not performed.
The inverse scattering transform method is an effective tool for studying integrable nonlinear equations with sufficiently decaying boundary conditions or non-zero boundary conditions. Moreover, the inverse scattering transform reveals some novel and interesting properties of the solutions. Many works address these problems; we refer here only to some of them {{cite:8dc08a78ecc2ecd509f3d3129dba72c3e4ca9962}}, {{cite:b6cff7298d2ec86347f36e24783e28653a9d0a28}}, {{cite:02d952144d8c332b298e77e72fe202dbd6fdd742}}, {{cite:267762dd771edba577d8035cee01624dc4fa6157}}, {{cite:fffcc5e1e99c09ae6eca89f12ddc51cbf436eca8}}, {{cite:b3cfb51a018e8f6a2c24f493939d4bec7cb3d068}}, {{cite:ed2c06c35898b3d5e3e07fbcfad15f8c75bef712}}, {{cite:56a34762c6b4ee99c284601a2bece7cf19d1b17d}}, {{cite:f119e876e09e63c4177ae0656f8d2b19fde6070b}}, {{cite:c35e39f842500751752d673eecd9ce45b2418569}}. In contrast to {{formula:97c35bb2-e3e4-4537-b150-bfa0c8b5f4dd}} matrix linear spectral problems, {{formula:b5937a62-0561-46fa-8c76-03369c284a26}} or higher-order matrix problems are more difficult to investigate by the inverse scattering transform method {{cite:b3961a8475eb2ce38424fd82c39901541b334e20}}, {{cite:63f163bf85437520b8c91cf95da9e425ff673af9}}, {{cite:083c3914139005bc94181a924a53c35726a139b2}}, {{cite:f9ef29fe9de1236f84dbe11970af3d18eb02d1ad}}.
After connecting orbits to QNMs and considering the QNM expansion of the heavy-light four-point correlator, we noted that there is a natural expansion in which QNMs look like energy eigenstates or primary operators. First, we take the large {{formula:e2222a61-6a45-48b0-96f4-6b83766878d1}} limit. Second, we consider the large {{formula:4c9edc73-ce21-43c7-bfc3-731ce825c7e6}} expansion. In this setting, orbits become stable: tunneling is nonperturbative in {{formula:c450c19b-8da5-46e8-910f-89c8e57e1a98}} ; gravitational radiation is {{formula:07bf713d-39d6-4200-96a8-af15f587dc09}} suppressed. In fact, this is precisely the setup studied in the light-cone bootstrap {{cite:e1425a29fae3d5f613e69cbfc4df67e0d63a555a}}, {{cite:39ba05b19868ba147093458f2ca89006571f09f4}}, which has recently been applied to the heavy-light correlators {{cite:5ad8a1358123656ed6546e01f7a9659cc143c6f0}}, {{cite:9ad1c5f4c9fc117d218c39ab3034aee72748f242}}, {{cite:de55ca1b03191cd251322a3e6f73d71885f00266}}, {{cite:d1d6778d1ffcd04582fc841df5774e71f25a31ae}}, {{cite:c7c5fd40d9bcab06b8e45b222e2315dfe280c7d3}}, {{cite:bd431a95f4cca6964839b789fa2bac467c37445d}}, {{cite:726b691ec9598815d50c780bd58887a036af344f}}, {{cite:07399f20c5f38c5626989d8b073a3b92a3153824}}, {{cite:20c62425a00a3648cd6e5d58ead227d6aa3abb59}}, {{cite:56fc45c185d7be6430a088f9b3c91054e076b827}}, {{cite:4df1c84d1fca576cdb3a9cbe575a401bc8462733}}, {{cite:b10355d38abe3a8ef9ef67a49088cb9854f9c4de}}, {{cite:790b7f71079e168b4580f1e126d79a2bf998806d}}. The orbital quasi-normal modes then become nothing but the double-twist operators (the relation between the double-twist operators and stable orbits has previously been discussed in {{cite:e091c67fe90529e21a5ae6e10b0dc4c0ca17db1a}}): {{formula:b747bcf0-2011-4564-a1eb-d442699e5586}}
It is possible to define the eigenscheme of a tensor which is not partially symmetric, as in {{cite:c58286be2f267608432002181ca35fd8c9541a64}} or in {{cite:0451cd3618591e46bc35035434031d7569977a11}}. However, with the choice of a basis described in Remark REF , it is apparent from the definition that for every tensor {{formula:d8a04317-1ed3-42be-b509-c09dc9fc11dc}} there exists a partially symmetric tensor {{formula:42c5ef4b-9bfb-4630-bb44-204f1a4d1a53}} such that {{formula:c3aa0571-bd70-4750-bca7-e01eb4b0c884}} . For this reason, it is not restrictive to consider partially symmetric tensors.
{{cite:775d42ab0c289f95e683b50828d539eca5ca8817}} extensively studied gender bias in word embeddings and proposed two debiasing strategies: `hard debias' and `soft debias'. The hard debias algorithm first determines the direction that captures the gender information in the word embedding space using difference vectors (e.g., {{formula:f8ec269f-18eb-402e-963d-e3ce35eb71f6}} ). It then transforms each word vector {{formula:ecfc2960-d35e-4a55-92a2-69dd75d526d3}} to be debiased such that it becomes perpendicular to the gender direction (neutralization). Further, for a given set of word pairs (the equalization set), it modifies each pair such that {{formula:14359056-d709-4736-ade0-8b9ed6b8ef0e}} becomes equidistant to each word in the pair (equalization). The soft debias algorithm, on the other hand, applies a linear transformation to the word vectors that preserves pairwise inner products amongst all the word vectors while limiting the projection of gender-neutral words onto the gender direction. The authors showed that the former performs better for debiasing than the latter. However, to determine the set of words for debiasing, a support vector machine (SVM) classifier is used, trained on a small set of seed words. This makes the accuracy of the approach highly dependent on the generalization of the classifier to all remaining words in the vocabulary.
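The two core operations of hard debias can be sketched in a few lines of NumPy (simplified from the cited description; the equalization step below assumes unit-norm embeddings):

```python
import numpy as np

def neutralize(w, g):
    g = g / np.linalg.norm(g)
    return w - (w @ g) * g            # remove the gender component of w

def equalize(w1, w2, g):
    g = g / np.linalg.norm(g)
    mu = (w1 + w2) / 2
    nu = mu - (mu @ g) * g            # gender-neutral part of the pair's mean
    z = np.sqrt(max(1 - nu @ nu, 0.0))
    e1 = nu + z * np.sign((w1 - mu) @ g) * g
    e2 = nu + z * np.sign((w2 - mu) @ g) * g
    return e1, e2                     # now equidistant from any neutralized vector
```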
With the explosive growth of available data and computing resources, deep neural networks, i.e., deep learning {{cite:53aaf6c92356278d4779131bc84dfb0675d71bf9}}, are applied in many areas including image recognition, video surveillance, natural language processing, medical diagnostics, bioinformatics, financial data analysis, and so on {{cite:3dcfc48ee8643bda9e1bd2659ded4ce6e2b25567}}, {{cite:bdba9539300372b66612340371e3bbe005784345}}, {{cite:76de3223783e994f43aa91b9b18778fedba592ff}}, {{cite:999d26c4368bbdc78f91abe61cd182af4f90d888}}, {{cite:458062a045f51979282d0d47d8d865f0416c4e38}}, {{cite:3fe6f73a86162f15cc53d9e27bf07a5a583aa341}}. In scientific computing, especially, the neural network method {{cite:cfd6cb56347deb7164113919f38e2ef6f95cd30c}}, {{cite:e96520e901328df07d2a6f4e209cffc96bbdf1e0}}, {{cite:4883f3fd7b0c411cc7edde65e96f04bc8d64fe5f}} provides an ideal representation for the solution of differential equations {{cite:d8e2edeabe288a04d6feb5f1457aa2c75e4d2934}} due to its universal approximation properties {{cite:0ff0048a1bcbea28fcd5a7a1b1ebfac471c2bc67}}. Recently, a physically constrained deep learning method called the physics-informed neural network (PINN) {{cite:405d362c1899e6f03e0c6e029d3a89b38a9341ef}} and its improvement {{cite:cabbf33646f9d38195e84254c6bfe21371d9d277}} have been proposed, which are particularly suitable for solving differential equations and the corresponding inverse problems. It is found that the PINN architecture can obtain remarkably accurate solutions with extraordinarily little data. Meanwhile, this method also provides a better physical explanation for the predicted solutions because of the underlying physical constraints, which are usually described explicitly by the differential equations. In this paper, a computationally efficient physics-informed data-driven algorithm for inferring solutions to more general nonlinear partial differential equations, such as the integrable nonlinear Schrödinger equation, is studied.
In quantum mechanics, one of the fundamental problems is to characterize the quantum and classical features of a physical system of interest. One generic characteristic of a quantum system is the presence of non-commutative observables in the operator algebra of the system, i.e., the set of all operators forming a vector space {{cite:520a33ed2f5188ae42c5f1f2e7c4eca12023f0cc}}, {{cite:15019f1fdf402a6a5f6238af226a8721684874e9}}, {{cite:ce4538a145d1b305dbba2da5d42635ea6acf346d}}, {{cite:b61aa9837437f7daba31e9676edee637c087b2a9}}. The non-commutative structure is the basic ingredient for deriving the Heisenberg uncertainty principle {{cite:5c27448d3d90117e27325986cd9ef416daab14ca}}, {{cite:4a4175fcf684a5fabba0017d51764b738f094a13}}, the violation of Bell's inequality {{cite:e82e079add7d0c5fb2f012866faa09197587c070}}, and classical-quantum discord and related measures {{cite:490d9380ecee0ba7831f4515bc25e8439fa850ae}}, all of which are used to attest to the quantumness of the underlying system. In this sense, a verification procedure for the non-commutativity of the operator algebra can be considered a witness of the quantumness of the system {{cite:b314aacd358b0426f5ee17f0296f07ec48180fb4}}, {{cite:15019f1fdf402a6a5f6238af226a8721684874e9}}, {{cite:4f6e51241be3acbf2f3a31d7134793ef08f65c74}}, {{cite:905b3a4951c13535e88489081c8a58389a45ebc6}}.
Since 2012, deep learning has developed rapidly and achieved remarkable results in various fields. Inspired by this, researchers have brought deep learning methods to the problem of HSI classification and obtained impressive performance. Traditional HSI classification methods primarily focus on jointly utilizing spatial and spectral features. In 2013, Lin et al. first introduced deep learning to the HSI classification task. Specifically, this work utilizes PCA to reduce the dimensionality of the HSI from hundreds of spectral dimensions to dozens. After that, a neighborhood of the pixel to be classified is cropped from the compressed HSI data and stretched into a feature vector. Finally, this feature vector is fed to an SAE to produce the deep spatial-spectral feature {{cite:b51f79059768b3e54ce209f18e32524537f6295f}}. From 2014 to 2015, Chen et al. introduced another spectral-dimension channel based on {{cite:b51f79059768b3e54ce209f18e32524537f6295f}}. This additional channel directly takes the spectral features extracted from the pixel to be classified as input, and its output is integrated with the spatial-spectrum channels to form a dual-channel structure. Then SAE and DBN are used for feature extraction respectively, and the extracted features are fused at the end of the dual-channel network structure {{cite:6600fc40f22bf15e5bb79dd20ad8e23851d4641a}}, {{cite:8fb72af751929f6ecc7b4aca132beead5d820c79}}. In the same period, some other methods tried to apply 1D- and 2D-CNNs to HSI classification. Specifically, 1D-CNNs are used to extract deep spectral features {{cite:83582464b37f5820af5d185a0d98880b52a71e08}}, {{cite:cafeefb05e4c9e63df53ec096d44090ba255a3f1}}, and 2D-CNNs are employed to extract deep spatial features from HSI blocks that have been compressed along the spectral dimension {{cite:57a105d23665ac454c3118626f3f5f2aebe1d01f}}, {{cite:e31d705135faceb45252d50e27b5e0ea70507659}}. After 2017, deep HSI classification methods primarily focused on extracting spatial-spectral features. Some works construct a dual-channel network structure to obtain spectral features and spatial features separately and then merge them to form spatial-spectral features {{cite:0a7e64dcf1d548f61b7d8cca3e84a7d4b231158f}}. Additionally, 3D-CNNs are also a popular choice to capture the space-spectrum joint features directly {{cite:195dd0e796e826dac08df09dc5fc33afe297d1b8}}, {{cite:020a7f33aafc17a393a8bbf4baae7e9b7c4a0225}}. Since 2017, various optimized 3D-CNNs have been applied to the HSI classification task {{cite:d775a1c5ee2b909b955105536df28db720574e44}}, {{cite:6a078d7da0e2d458e7ba82d9d9b32afc11b1eef4}}, and some transfer learning methods have also been drawn into the classification of HSI images {{cite:6a078d7da0e2d458e7ba82d9d9b32afc11b1eef4}}, {{cite:191511e56677f59a28ba3cd3272091aee9ed755c}}.
Motivated by these promising prospects, both empirical and theoretical studies have been carried out to understand the foundation of QAOAs and improve their performance. One critical line of research is unveiling the connection between adiabatic quantum computation {{cite:734f2ad8445aed1307c5bbe3d43f94cfb45fec83}}, {{cite:adae6c6e6d155ff30f03f1806ea0f3317f0758ff}} and QAOAs, showing that QAOAs can be seen as a parameterized Trotterization of adiabatic evolution {{cite:04f5ade2b7cf9ce78bbffdb9288fde8edb0576bb}}, {{cite:386ddaf66330e479fe0fa29714bb68acbc6f36e8}}, {{cite:f6c54c16e48cf13d98bfb34b5be1f8f943fd9795}}. Making use of this relation, the parameter initialization of QAOAs can be simplified, with an improved performance {{cite:682c902c992e795b532095ef92317780f6368166}}, {{cite:e7fc9bb698d216a2f800270704b86063476b03c7}}, {{cite:f6c54c16e48cf13d98bfb34b5be1f8f943fd9795}}. In parallel to exploring initialization strategies, another crucial topic is designing advanced training strategies for QAOAs to avoid local optima and accelerate optimization. Concrete examples include modifying the objective functions {{cite:413027fc7688a63dc4a22134c6957914ece47a32}}, applying an iterative training strategy {{cite:e8021e5a1eeded99f85bf77a50abd77bcba7947a}}, {{cite:ba5e7dd380967e4fed8cdaa2036c5eca3927ab1c}}, and using adaptive mixing operators {{cite:e8215cebe6747b93563c750e7a4349a835e3bcab}}, {{cite:e7fc9bb698d216a2f800270704b86063476b03c7}}, {{cite:f131b6ee08fb556a094958f5768461762f9555eb}}, {{cite:4ad6f5ba15aab82501a8133c294b084f5460fa85}}. Despite these remarkable achievements, little progress has been made in overcoming the scalability issue of QAOAs, whereas the ultimate goal of the most advanced QAOAs is to solve problems with hundreds of vertices {{cite:878e9cac530037634ce3cd9ab40f8762c45ea30c}}. The main challenge comes from the fact that manipulating a graph with {{formula:95786541-b7b9-4d0b-a56a-2d862dc363c2}} nodes requires {{formula:f9c1f570-91ea-41f9-be82-e56b6eb05b89}} qubits, but the most advanced quantum machines nowadays can only provide a very limited number of qubits, with {{formula:0122b3fa-885f-440b-8003-b50b0a33c244}} . Moreover, due to high levels of noise and barren plateaus, QAOAs may suffer from trainability issues for large {{formula:b33b6220-b0b9-481b-b0e4-d722c48f421b}} {{cite:5b2dcc1ee6405580f3ff84b241848cceb8003907}}, {{cite:8296a95e750059d377d8cf42c31b07f587dde7c7}}, {{cite:166dcba8ccea45a45e50bdac1d3d0a2aecef6bd5}}, {{cite:320302cb233eec5abb3ff94092cecd345c3e699b}}, {{cite:9ebdaf96b59389b136fc6ad4213306aff0699bc7}}, which degrades their performance {{cite:844db4bf558fedb904618aee3201833f901049ec}}, {{cite:5a8a11813aeac27d9f171b139be1940ca1b686a7}}. Although an initial attempt at scalable QAOAs has been made by {{cite:c10f8d61c105ff57e3d4828458c2168e37380e38}}, {{cite:3ee19060ddf26c400a476137e046c8f5c1f07682}}, their approach encounters a sample complexity issue: the approach proposed by {{cite:c10f8d61c105ff57e3d4828458c2168e37380e38}} breaks one graph into two subgraphs sharing common nodes, and to sample a good candidate solution, the local solutions of these common nodes should exactly overlap; the sample complexity of their approach therefore grows with the number of common nodes, which makes it harder to sample a good candidate solution. To this end, it still remains obscure whether QAOAs can outperform classical approaches on large-scale combinatorial problems. {{figure:cae37cc9-6696-45d5-855e-fafae7f11401}}
The advent of quantum information theory initiated the second quantum revolution, where non-classical aspects of the theory are utilized to design novel information protocols. While quantum random number generators and quantum communication models are already available at the commercial level {{cite:3d80e5f9d8afd0a76036777f54630b221f500dd9}}, {{cite:5ae0486fa202e4f0cabe898e9cdc2bc9e6724dea}}, {{cite:4d61cbb7176070ca008f260fa3dfb32ce6f5fdd8}}, {{cite:a5a7e80e90dbecac5ad7599423db4d76cc7433f7}}, {{cite:de042041204692d5cf809d4a508bc74172aeeee5}}, we are also on the verge of developing advanced quantum internet networks and quantum computer prototypes {{cite:3fffd44d828e82f5d94c7b2b36a784db1c69d49b}}, {{cite:1e15df15e138b7bff62d80c9db7387b6ba8570fe}}, {{cite:e81361ee92c4542e9eb1c1d19f3851bb9a0bae41}}. From a mathematical point of view, however, there exist several other generalized probabilistic models that incorporate quantum-like non-classical features. This has motivated a huge effort to identify quantum theory uniquely from physical/information-theoretic principles {{cite:99b374802c5fd160ed13ae9f14f00cc6312d8bb9}}, {{cite:a74b923f3e2fe7dead63500a78b1ee46f19f08f5}}, {{cite:71c20323a5da00981247a004a0ab54e5457f4e05}}, {{cite:5fcd2768418f8a5e3a2b8e2f69f37ee788de4f09}}, {{cite:89ddc79a8f99a53e5fada64f60bd87180a390179}}, {{cite:a7a3397b3e6b29355ccaa8e5d18257cc56872ab5}}, {{cite:8d8513e03f8856def1a236ede7d308bfbb3444d5}}, {{cite:afcd700029673b57b5dd9d8df29b882c5d7cf870}}, {{cite:a6643d0da32e0af94c66baacbe3b2e04273925dd}}, {{cite:a1a7d05b4081378d91f0a9b6a2c03a1a11952a55}}, {{cite:aa61435a8a6f41a4620565292041f6cc0c5e490e}}, {{cite:093528996a2066ea3feeb9c1f5b35c77eaa1cedc}}, {{cite:11106162d708b9617966c7bf17a4c5ecfbfc6eb8}}, {{cite:6fd0973bc56a0b515ec9f75770340abbf5e033cc}}, {{cite:1ffd7fdf457835bc3e069b2eb2863d7af0cd6ca6}}. Our distributed computation scenario is a novel proposal in this regard, as it identifies computations in the DCLC({{formula:5a018fad-2d4c-44d3-a282-8635913e5d6f}} ) scenario that can be perfectly accomplished in quantum theory but not in other GPTs. For {{formula:aefc51fe-e3ca-488e-a51b-e1b7e515c341}} we have, in fact, fully characterized the nontrivial computations that can be perfectly performed only in quantum theory.
The Connected Component (CC) based methods {{cite:c105157a7c0488948447bc5eaeecb3597be2e29e}}, {{cite:09b3acff0911620fb91c60d7c5ec50e83161594e}}, {{cite:0f7404be4808147a17ce6e0aaa700ccaad6973f7}}, {{cite:174288f14cf6ecd1f9d048e70493688bbef1b6e9}} generally link or group the detected individual text parts or characters into final text instances by a post-processing procedure. CTPN {{cite:0f7404be4808147a17ce6e0aaa700ccaad6973f7}} modifies Faster R-CNN {{cite:69b6bb0fa500137be028de18166bc7042a9976a0}} to extract horizontal text components with a fixed-size width for easily connecting dense text components and generating horizontal text lines. SegLink {{cite:174288f14cf6ecd1f9d048e70493688bbef1b6e9}} decomposes each text into two detectable elements, namely segments and links, where a link connects a pair of adjacent segments that belong to the same word. CRAFT {{cite:c105157a7c0488948447bc5eaeecb3597be2e29e}} detects text regions by exploring affinities between characters. Zhang et al. {{cite:09b3acff0911620fb91c60d7c5ec50e83161594e}} use a graph convolutional network (GCN) to learn and infer the linkage relationships of text components and group them. CC-based methods have a more flexible representation and can adapt well to irregularly shaped text. Therefore, CC-based methods are popular in arbitrary-shaped text detection, even though the clustering of components can be unsatisfactory. {{figure:8c68ae25-2bb9-4d12-8efe-1044c5854c55}}
Quantization-Aware Training. Quantization-aware training {{cite:0614a961ad197bb3e852f4cfdb023b547b4ca302}}, {{cite:577e5fa933edfc4e211572454e40fbcf65f5d7fe}}, {{cite:a3502f4467855262aca59b0c7d38cab452cc8a0b}}, {{cite:64d6dcc4c609409de84813adeb4e2cb87f7dec83}} generally focuses on minimizing the gap between the quantized parameters and the corresponding full-precision ones. In {{cite:b5cb8ac3741889a8d0569da35ea9fe28186256a1}}, {{cite:9e5f4047141cd83d9df16c38cb3362efd235bbaf}}, {{cite:004130f6c12d0a294735c34b321f712c3423c681}}, {{cite:ea11a41ee9d484758301e20d2798c224300e15ff}}, scaling factors for the quantized parameters make the approximation of the full-precision parameters more accurate. In {{cite:36b686b2a89623d449f1f4da1b708988c4782033}}, the weights and activations are quantized separately in a two-step strategy. Mixed precision is widely employed to achieve smaller quantization errors, as in LQ-Net {{cite:577e5fa933edfc4e211572454e40fbcf65f5d7fe}}, DJPQ {{cite:712ab44439038b9dbbd3852a4b32923bab196b0e}} and HMQ {{cite:87cc00da679dd3bd3f81cb9d6453094d004144c7}}. In HAQ {{cite:d8ca130b4c7bb059758d26d9cbd1df2d32572e08}}, the training policy is learned by reinforcement learning. Given a bitwidth, adaptive non-uniform quantization intervals {{cite:d556f28b7c3b9153bc2d8d54ebd75aa235699fc0}} can also reduce the quantization errors.
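A bare-bones example of the fake-quantization forward/backward pattern underlying most of these schemes (a generic straight-through-estimator sketch, not the method of any particular cited work):

```python
import torch

class FakeQuant(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, scale, bits=8):
        qmax = 2 ** (bits - 1) - 1
        q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax)
        return q * scale                # quantize-dequantize in the forward pass

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out, None, None     # straight-through: identity gradient

# e.g., inside a quantized layer: w_q = FakeQuant.apply(w, w.abs().max() / 127)
```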
Second-order methods are known to have superior performance compared to their first-order counterparts both in theory and practice, especially when the problem at hand is highly nonlinear and/or ill-conditioned. However, such methods often have a very high computational cost per iteration, due to the need to form and solve a {{formula:8dee433f-3e74-47fc-bc1b-73ffe7759072}} linear system of equations. Recently, there has been an intense effort to develop algorithms which use second-order information with a more reasonable computational burden (see, e.g., {{cite:952d4a5d969d2bcb4809e331f85809ffd07367fa}}, {{cite:af184229c52a1ae38a40887a816e3e65b9aa94a0}}, {{cite:0fec3948a2f391abae8e19421dc4be4dbdd26fff}}, {{cite:5cd7b8fcf7a815cd19317496dd06dd4e09254db9}}, {{cite:3f3ca178b078e5cb0a7753d1a39f51efe8354d9f}}, {{cite:e52b95217b35c3a9ea8a3f9e566279de7be39c59}}, {{cite:55eddb637174c0b51a1d84d2391f1a527ec84b94}} and references therein). Those methods use techniques such as random sketching, matrix sampling, and iterative estimation to construct an approximate Hessian matrix. Local and global convergence guarantees have been derived under various assumptions. Although many experimental results have shown excellent performance of those methods on many machine learning tasks, current second-order methods for finite-sum optimization tend to have much higher time-complexities than their first-order counterparts (see {{cite:e52b95217b35c3a9ea8a3f9e566279de7be39c59}} for a detailed comparison).
m
4fb82e651b2fb03112865207498fa731
The classical data structure for approximate membership is the Bloom filter {{cite:f0117fa4e4d3e307a98cba58fa080840103afa38}}. It may be the best-known probabilistic data structure. A Bloom filter is akin to a set data structure in that we can add keys, and check whether a given key is present in the set. There is a small probability that a key is incorrectly reported as being present, an event we call a false positive. However, Bloom filters can use less memory than the original set. Thus, Bloom filters accept a small probability of error for a reduced memory usage.
i
9ac663654cf80eecc7132e3df3016a49
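A minimal Python sketch of the idea described above; the array size m and number of hash functions k are arbitrary choices here, with the k indices derived from one SHA-256 digest via double hashing.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions over a bit array of size m."""
    def __init__(self, m=1 << 20, k=7):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8 + 1)

    def _indices(self, key):
        # Derive k indices from a single digest (double hashing).
        h = hashlib.sha256(key.encode()).digest()
        h1 = int.from_bytes(h[:8], "little")
        h2 = int.from_bytes(h[8:16], "little") | 1
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, key):
        for i in self._indices(key):
            self.bits[i // 8] |= 1 << (i % 8)

    def might_contain(self, key):
        # True may be a false positive; False is always correct.
        return all(self.bits[i // 8] >> (i % 8) & 1 for i in self._indices(key))

bf = BloomFilter()
bf.add("alice")
assert bf.might_contain("alice")   # inserted keys are always reported present
print(bf.might_contain("bob"))     # False with high probability
```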
At the start of a time-frame of length {{formula:24b468c0-1207-4101-8070-a72644059f79}}, the user device generates {{formula:40858582-2709-4ab4-98ed-0b2d33040ad7}} Kbits of data per unit of time, where {{formula:09f3c4a0-4818-4629-a69a-cb2bbe5dfd6c}} is distributed uniformly between 40 Kbits and 50 Kbits. If the device wants to offload its task, it sends a message with the relevant information to the PTV-AP, which decides between offloading and local computing. In the case of offloading, {{formula:26215c67-600a-4d3a-987a-325b525ef9da}} Kbits of data are offloaded to the MEC server and {{formula:007ba090-f004-4dd9-99fe-b29a5c627e9f}} Kbits are computed locally by the device, where {{formula:d4c01694-981f-4aec-9250-13c2376ac8a5}}. The device's CPU takes {{formula:a9256065-9e34-451d-8702-350deab90284}} cycles to compute one bit of raw data, and the energy consumed by the CPU to compute {{formula:2044c584-688b-4e6e-8cf2-53ad2d2f2c94}} bits is {{formula:3f0c33b1-daa3-4716-84d3-c219a6c82fb2}} mJoule, where {{formula:4e37d868-c5ad-47c5-9b21-667088df1805}} is the effective CPU capacitance {{cite:66246ca1a54dba8b817e443f1574fd9ffcf2a231}}. After offloading {{formula:33775fd2-eac7-4fee-9cb2-62bf81fd4342}} Kbits, the device saves {{formula:2e0237b3-e905-4aef-9f88-b126445a4a6f}} mJoule of energy. The PTV-AP transmit power is set to {{formula:60c2b19a-68cd-4a54-b282-e29434286459}} Watt. For the IRS, the amplitude reflection coefficient is set to 1 {{cite:59b0c060145da4835903d9dcd017a967080fe7d7}} and the phase-shift coefficient ({{formula:efbde1c7-203e-4d86-ae3a-92161e9dc474}}) is set to {{formula:e144fbbd-b3e1-406c-b896-998bf73ffac1}}. Each simulation is iterated 1000 times, and the averaged results are reported in the following.
r
54e0f74ae442b4aba2a31a01c5a5add7
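Since the energy expression itself sits behind a placeholder above, the following sketch assumes the common model in which computing d bits locally costs kappa * L * d * f^2 Joules (L cycles per bit, CPU frequency f); all numeric values below are illustrative placeholders, not the paper's settings.

```python
import random

kappa, L, f = 1e-28, 1000, 1e9         # assumed capacitance, cycles/bit, CPU Hz

def local_energy_mj(d_bits):
    """Energy (mJ) the device would spend computing d_bits locally."""
    return kappa * L * d_bits * f ** 2 * 1e3   # Joule -> mJoule

A_bits = random.uniform(40e3, 50e3)    # generated data: 40-50 Kbits, in bits
ell = 0.6 * A_bits                     # bits offloaded to the MEC server
print(f"offloading {ell:.0f} bits saves about {local_energy_mj(ell):.2f} mJ")
```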
It can be observed that {{formula:f6cc6105-1ecb-4e7b-844e-7aab245927d7}} surpasses MatchDG {{cite:26b1ca793136c8d2914bb293dce34a02d87de0fa}}, a state-of-the-art (SOTA) causality-based method that aims at learning domain-invariant representations, by about 5.0% and 1.3% in Art-Painting and Sketch on PACS, respectively. We also compare our methods with CICF {{cite:47733cbdb52c5c615d47e4273484f32391bf0aa3}}, a recent method that interprets the causal front-door formula (Eq. REF ) from the meta-learning perspective. In some challenging domains (e.g., Clipart of Office-Home or SYN of Digits-DG), where testing images appear with diverse domain-specific confounders such as colorful backgrounds, our methods significantly outperform CICF. This suggests that the style-based information of our style transfer algorithms is more crucial to mitigating the spurious effect of confounders than the gradient-based meta-learning information of CICF. On PACS, FAGT beats CIRL in terms of the average accuracy over all domains. It is worth noting that CIRL is a current SOTA method in OOD generalisation, and an improvement of 1.4% over it in Art-Painting of PACS is considerable. However, on Office-Home and Digits-DG, CIRL outperforms our methods in some domains (e.g., Art of Office-Home, MNIST-M and SVHN of Digits-DG). We hypothesize that the improvement of CIRL could be due to its Factorization and Adversarial Mask modules, which make the method more complicated than ours. As with FACT, we tried but could not reproduce the reported results of CIRL: we reran their source code ten times and, for fairness, report the best results among our reruns as CIRL{{formula:e474711c-afa0-4df8-97a3-3e3727b08cad}}. Compared to CIRL{{formula:ddd44365-53d6-4379-bbda-2671c8feaa13}}, FAGT surpasses it by 1.1%, 0.7%, and 3.2% on PACS, Office-Home, and Digits-DG respectively. {{figure:d22e2681-d836-40f9-bdba-96a8be0b956c}}
r
b3cac95eb157dcf5adb3e099b9ec95d9
We use the 3D U-Net model for brain tumor segmentation proposed by Isensee et al. {{cite:ef4f16b44d61d0ada53bdb0cad87836be6183156}}, a simple model that ranked highest in BraTS 2017. Like the U-Net {{cite:0a0fd2adaa3e6e7c05b4bf374a99f8e769ba46c8}}, this model {{cite:ef4f16b44d61d0ada53bdb0cad87836be6183156}} comprises a contracting path to extract more feature information with increasing network depth, an expansion path to generate a segmentation mask with precise localization information, and skip connections for better feature reconstruction at every stage of the expansion path. In our work, the preprocessing consists of bias field correction, normalization, clipping of maximum/minimum intensities to remove outliers, rescaling to {{formula:4675a613-8a0b-407d-9666-e00f13e35090}}, and setting non-brain pixels to 0. The model was trained on patches of size 128×128×128, randomly generated from all the input MRI modalities. The obtained Dice scores on the BraTS 2020 validation dataset are 0.880 (WT), 0.858 (TC), and 0.759 (ET). The segmentation of tumor tissue for a validation sample is shown in REF , which gives a visual comparison of an input FLAIR image and the predicted segmentation. The segmented regions are then used for survival prediction with 1) image-based features and 2) radiomics-based features, combined with the following four predictors. {{figure:dfad0c5a-b6eb-49e5-b4bd-d899ee67831a}}
m
b243dfff2a882551f4a5a385921d4b09
The physical interpretations of 1 and 3 are quite intuitive. In the case of 1, the physical interpretation is immediately obvious, since this feature satisfies the typical definition of a splashing drop, namely, the ejection of secondary droplets from the main body of the impacting drop {{cite:bf4294f2d21987c1964465ca6372ff137711237e}}, {{cite:015e46006ae469ca1596b4e50e2a72f2bcea9277}}. In the case of 3, which is characteristic of a nonsplashing drop, it can be seen that the lamellae of a nonsplashing drop are shorter and thicker when {{formula:d6f9a408-7ceb-4687-8e77-3ae966b7593a}} compared with those of a splashing drop. The lamellae are shorter because the ejection velocity of a lamella of a nonsplashing drop is lower owing to the lower impact velocity {{formula:96a007b3-0c4d-4d76-828f-3a100beb025c}} (smaller Weber number {{formula:4466bbed-b017-4f9a-a6b8-00d8c02df5b0}} ) {{cite:f2499f057b0f7f1e1464d37dc6f0a0514fbaff0e}}, {{cite:3162241f113523047acee65f82c25dab11bc32ca}}. They are thicker because secondary droplets are not ejected from the lamellae of a nonsplashing drop.
d
a956db1556906adf9d624000d00d743a
Previous studies have shown that natural image priors, such as the patch recurrence property, are useful for degradation modeling. In {{cite:01e2c5698e0b1fcc33404d1c11d800dc1391ca72}}, Michaeli et al. point out that the Point Spread Function (PSF) is not the optimal blur kernel, and they propose to obtain a principled MAP estimate of the blur kernel by maximizing the similarity of recurring image patches across scales of the LR input. The project homepage of NBSR {{cite:01e2c5698e0b1fcc33404d1c11d800dc1391ca72}} is at http://www.wisdom.weizmann.ac.il/~vision/BlindSR.html The estimated blur kernel can then be used to artificially degrade the LR input or natural HR images. In this way, the blur kernel estimation approach plugs smoothly into both self-example-based and external-example-based SR approaches {{cite:de52e668211b91c8ea88f10af0e36f507eb7cece}}, {{cite:ceadfc03502805b07514e96d0256da5d43a53602}}. The results in {{cite:01e2c5698e0b1fcc33404d1c11d800dc1391ca72}} show that more accurate blur kernel estimation leads to a clear improvement of SR performance on synthetic as well as real-world images.
m
944147b1619501d0816a50aee84c861e
Gravitational waves (GWs) {{cite:f137c0348df29fd8c19fd5e4fbe4924a13273023}}, {{cite:05d1569bfbdad72838414d7715bfcdbdca35bce3}} from compact binary mergers are often referred to as “standard sirens”, in analogy with the term “standard candles” coined for SNIa, thus underlining their role for cosmology. From the GW signal it is possible to directly estimate the source luminosity distance {{formula:f308c4d9-60b8-4d07-af6f-517bec4ae2c1}} {{cite:723ae86304640d69a7cbc9a850b91ae5cbd63171}}, {{cite:11e9bbbcee2e0038de155cded69f0a916be9a773}}. When combined with the redshift of the host galaxy, this estimate can be used to measure cosmological parameters and thus probe the expansion history of the universe.
i
c52136612c2c4cde8cbf725b6a6f14aa
Recently, there has been success in learning multiagent communication frameworks to solve partially observable tasks. In particular, these successes have been achieved using neural network architectures in conjunction with a reinforcement learning framework. Examples of recent work that has been successful in multiagent communication include CommNet {{cite:8f38a35080b54e252c7ddc8f72e09e7c510798a8}} and IC3Net {{cite:59060456d2791e6556fba6bb5eee633d84b8bf90}}, which benefit from using individualized rewards for agents. However, their best results rely on continuous communication, which is impractical given the limited bandwidth of real-world settings.
i
16cd59c565f87e9f41ff2ec3087817d5
While powerful, the resolvent method makes heavy use of the matrix structure and of the independence between different entries; although {{formula:918593d2-5ef5-4db4-ae12-a622ae18f7ff}} is associated with a matrix process, the correlation structure between its entries is complicated and renders this approach intractable. Instead, the process {{formula:b6217f16-598d-4a74-b890-1ac7b33daede}} somewhat resembles what is known as (Hermitian) Dyson Brownian motion, whose definition we now recall. In the work {{cite:921b6560acce8a7b1ae4643afe667282f4f6c958}}, Dyson showed that the eigenvalues of {{formula:e430a745-4ee4-464c-a973-943d9783621f}} for any Hermitian {{formula:4f1f3ff7-dc15-4785-b17e-ac7d9ce85943}} (and {{formula:1e34f20e-c57f-48eb-b724-1aa8fccc9904}} as in (REF )) obey the system of SDEs, {{formula:f232e802-4d85-4fe5-90d8-8411069e843a}}
m
b32c1296355b7adbc6e7b2a16ea04fae
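The cited SDE sits behind a placeholder, so purely as an illustration, here is an Euler discretization of the textbook form of Dyson Brownian motion, d(lambda_i) = dB_i/sqrt(N) + (1/N) * sum over j != i of dt/(lambda_i - lambda_j); this is one common normalization among several, and not necessarily the one used in the text.

```python
import numpy as np

def dyson_bm(N=50, T=1.0, steps=2000, rng=np.random.default_rng(0)):
    """Simulate the eigenvalue flow: Brownian noise plus pairwise repulsion."""
    dt = T / steps
    lam = np.sort(rng.standard_normal(N))       # initial eigenvalues
    for _ in range(steps):
        diff = lam[:, None] - lam[None, :]
        np.fill_diagonal(diff, np.inf)          # exclude the j == i term
        drift = (1.0 / diff).sum(axis=1) / N    # eigenvalue repulsion
        lam = lam + drift * dt + rng.standard_normal(N) * np.sqrt(dt / N)
        lam.sort()
    return lam
```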
Following {{cite:f632a5dd25a8279c733ca1d96ae81b89514133c1}}, we benchmark five mitigation methods: oversampling, corpus constraints (RBA) {{cite:391361a3ec7a4543ceb7f3930fd535fa293d74c6}}, adversarial de-biasing (Adv) {{cite:0870676de9d9c59c5d5c3a780fa7cea04468a80b}}, domain-independent training (DomInd) {{cite:f632a5dd25a8279c733ca1d96ae81b89514133c1}}, and data repair {{cite:6c47e97f607c191b4285c47bda66666db6ece9b8}}. (For Data Repair, the models are trained on a smaller number of instances, since the method involves subsampling the dataset.) Oversampling refers to the method from Sec. , which greedily samples to balance w.r.t. single attributes. For each mitigation method, we train the model using the hyperparameters proposed in the respective papers. Finally, as a baseline, we train a ResNet-50 {{cite:1764192c878c8c43056a43c802e4230690bb2237}} without any mitigation techniques using the same training protocol from Sec.  (Original). See Appendix  for more details.
m
a85a5a88a1f5881ed591b535c2c702c2
i) Computing the Cost Volume Using Multi-Scale Census Transform. Most current stereo matching methods use DNN-based features to form the 4D cost volume. In terms of matching, DNNs can increase the uniqueness of the feature for each pixel, but they also suffer from the inherent adversarial vulnerability. In contrast, traditional methods often use simple window-based similarity functions to initialize the costs, then rely on the optimization or cost aggregation stage to integrate all local cost information {{cite:7fbbb133336a3fd4d6f5f71a13be4cfeb8c18018}}. Following the same philosophy, we propose to use hand-crafted feature descriptors and similarity functions that are less sensitive to adversarial perturbations to initialize the costs, then rely on DNNs to integrate the local cost information. Specifically, we want the feature descriptor to change as little as possible when local intensities are perturbed. This specific requirement led us to the Census Transform, a traditional feature descriptor developed to eliminate the issue of radiometric differences caused by different exposure timing or non-Lambertian surfaces. Previous studies find that the Census Transform is the most robust and well-rounded cost function for global or semi-global methods {{cite:1428bc3487fe3c1a240795840fafe620ae7d788d}}, {{cite:2497bd75d97136dcb60d95c53bda265f50896d89}}.
m
3349ebbc8d776dcc437a0061cb636fd3
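For concreteness, here is a small NumPy sketch of a census-transform cost initialization; the 5x5 window and uint32 bit packing are our own illustrative choices, not the paper's multi-scale configuration.

```python
import numpy as np

def census_transform(img, w=2):
    """(2w+1)x(2w+1) census transform: each pixel becomes a bit string
    recording which neighbours are darker than the window centre."""
    H, W = img.shape
    pad = np.pad(img, w, mode="edge")
    desc = np.zeros((H, W), dtype=np.uint32)
    for dy in range(-w, w + 1):
        for dx in range(-w, w + 1):
            if dy == 0 and dx == 0:
                continue
            neigh = pad[w + dy : w + dy + H, w + dx : w + dx + W]
            desc = (desc << 1) | (neigh < img).astype(np.uint32)
    return desc

def hamming_cost(desc_left, desc_right_shifted):
    """Matching cost = Hamming distance between census descriptors."""
    x = desc_left ^ desc_right_shifted
    cost = np.zeros_like(x, dtype=np.uint8)
    while x.any():
        cost += (x & 1).astype(np.uint8)
        x >>= 1
    return cost
```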
On the other hand, singularity analysis is an alternative way to study the integrability of nonlinear differential equations. The main requirement for singularity analysis is the existence of movable singularities in the differential equation. Singularity analysis is associated with the French school led by Painlevé {{cite:b8dd0c925caeda8be72eda5660c46c4c57ff910d}}, {{cite:5fc8ae8f424152ece76741bda654d34b14c9697d}}, {{cite:399ee4a7e711facb92ec03c3c6741134827a0185}}, whose approach was inspired by Kowalevskaya's successful application to the determination of the third integrable case of Euler's equations for a spinning top {{cite:e9979cf9ecfcb7bf99866ad31f9fb0e1dae8d5b5}}. For modern approaches to singularity analysis we refer the reader to {{cite:902e4d57b53aee29e3fc4944ce43f1265cfa75e1}}, {{cite:365c03880732469dd7d379c39b0d9ee96fd20b45}}, {{cite:b5245611921437a369961689868642c6614da4be}}, {{cite:74dbafa01d767c87e9706b86745daa5c41254a6f}} and references therein, while some applications to partial differential equations can be found in {{cite:75c1e645709beb8859528d68c3461d3eec064bca}}, {{cite:ad590d2fc6b3a608113b79a9ebb744cf5f06d211}}, {{cite:bf20d5983df201782d0284d85ff116a87c0d5b61}}.
i
e9cefd753b348d4bc7b0ab395485e240
For {{formula:c4c8fc64-5c65-41ed-a657-994a96d1ad3b}} we denote by {{formula:cfa51b1b-635a-43c6-8600-027ae903cc87}} the standard map on {{formula:a84f8927-4af7-40d5-b13a-3750f3019dee}}. For every {{formula:c417a780-a940-41ba-beba-837b7bb04117}} the map {{formula:6c2b9a38-bc61-48da-b3f8-2b3a41245bce}} preserves the Lebesgue measure induced by the usual metric of the 2-torus. This map is related to several physical problems; see for instance {{cite:c7c99294b84c1acb66023251a5869a91f3298206}}, {{cite:365ef0d336eb49ec35780d8d2e20f35aebcf9ea0}} and {{cite:10346bad72cf48f2a0562c80286009d95145d650}}.
r
4a54b249a56342bc08a01cfb71909686
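The map itself is behind a placeholder; assuming it is the Chirikov standard map on the 2-torus (the usual object by that name), an area-preserving iteration looks like the following sketch, with an arbitrary choice of parameter and initial point.

```python
import numpy as np

def standard_map(theta, p, K, n_steps=1000):
    """Iterate the Chirikov standard map; each step is area-preserving,
    so Lebesgue measure on the 2-torus is invariant."""
    traj = np.empty((n_steps, 2))
    for i in range(n_steps):
        p = (p + K * np.sin(theta)) % (2 * np.pi)
        theta = (theta + p) % (2 * np.pi)
        traj[i] = theta, p
    return traj

orbit = standard_map(theta=0.5, p=0.5, K=0.971635)  # K near the critical value
```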
In the Review of Particle Physics {{cite:9c47e4f76e0fe71784eaa6778d50746e5bb7d9bd}}, one has {{formula:c10f5fc2-13f1-4745-b9ac-0a15cad9e142}}. The PQCD-predicted branching ratios give {{formula:889daffd-f544-405c-b297-695b89fb884d}}
r
e2ca015cbad5c9ede999f21eb89c1dd9
The second overhead, which arises because more memory is involved, is memory access time and energy. An increasing memory footprint may force the DNN accelerator architecture to use a hierarchical memory structure to balance the memory footprint and the platform cost. For example, external DRAM memory is two orders of magnitude cheaper than local SRAM memory, but at the same time it consumes two orders of magnitude more energy {{cite:9d18377292325be1c2568ea5c90391a3bf44a657}}. To have a clear picture, imagine a DNN accelerator with only 1MB of on-chip memory. For normal inference with ResNet-50, we could use the local memory for the activation state (read and write operations) and external memory for the weights (read operations). However, for delta-based inference, we are forced to use external memory for both weights and neuron activations. This results in three times more external memory accesses (reading weights and states, writing states back), which translates into roughly three to five times higher energy consumption (since external memory accesses consume much more energy than any internal operation, and since neuron states normally have a higher bit-width than weights). As a result, although we have observed that temporal sparsity typically reduces the number of operations, on average, by a factor of five or more when compared to spatial sparsification, in practice the amount of energy saving is less than 5x and very dependent on the memory hierarchy of the hardware. On the more optimistic side, however, our comparison against video compression rates in Fig. REF suggests that there is plenty of room for achieving even higher sparsity, perhaps exploitable with more advanced algorithms or future ramifications of the Delta Activation Layer. In addition, new memory technologies (like resistive RAM, embedded DRAM, and embedded Flash memory) may change the aforementioned cost calculations for on-chip memory in the near future. {{figure:58fa974f-1192-479b-9017-d04b7c274952}}
d
7afb83026aacba0c40048f2eb2df65d6
Similarly, for a given input, local representations obtained from the model are compared with the prototypes to determine which prototypes are present in the image. Based on the activations of the prototypes, the model recognizes the image. Hence, to recognize an image like the teacher does, it is important for the student to produce similar prototype activations for a given input. The Patch-Prototype Correspondence loss helps to achieve this objective: it mimics the local representations of the teacher for which prototypes become active. Unlike {{cite:a69a76662c05d2a64fd73f588de95c24f8bf5cd9}}, which mimics the entire feature map of the teacher for knowledge transfer, we propose to mimic only the local representations of the teacher that activate prototypes.
i
1d56558d4fce59ba5c2ee8e56b5510f1
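As a hedged sketch (not the exact loss of the paper), one way to realize a patch-prototype correspondence objective is to match the student's prototype-activation maps to the teacher's, rather than the raw feature maps; shapes and the cosine/MSE choices below are our assumptions.

```python
import torch
import torch.nn.functional as F

def patch_prototype_loss(student_feats, teacher_feats, prototypes):
    """student/teacher feats: (B, HW, D) local representations;
    prototypes: (P, D). Compare prototype activations, not raw features."""
    s_act = F.cosine_similarity(student_feats.unsqueeze(2),
                                prototypes[None, None], dim=-1)   # (B, HW, P)
    t_act = F.cosine_similarity(teacher_feats.unsqueeze(2),
                                prototypes[None, None], dim=-1)
    return F.mse_loss(s_act, t_act.detach())   # teacher is kept fixed
```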
In upcoming work we propose to analyse the cosmological constant within the framework of causal fermion systems {{cite:b5072640cbfb8ee8a17f6fd24bc0a82dc32f3862}}, along the lines of the already existing mechanism for baryogenesis {{cite:7c564b0d0444c95f8bda7f4c469e68fbf45969f5}}. Our approach shares the geometrical point of view of refs. {{cite:f0116498b4dac774f796da75d5ffdcdd73741bd2}}, {{cite:c338b68aad1fc94d757701d8d9a11e15f0115a65}} in that we describe the cosmological fluid as a simultaneous eigenstate of a Schroedinger operator and of the FLRW Laplacian operator. Our results are also in line with thermodynamical approaches such as those of refs. {{cite:ea431e6f4eaa6a8bcb6493f8127b24349c2178eb}}, {{cite:293fa3cdde9e0794a52c9eab9382567717e3727e}}, {{cite:bbf4c8defc949b3fe8be946bc8a7ca5a1ba3c5f6}}, {{cite:0e3104eeef26c6213d5d8cd8a17bb6246f4e2637}}, {{cite:3fb0a7b96e811fabab8a5f433d31be7af329cfb4}}, where spacetime is argued to be a derived concept and the Einstein field equations arise as a thermodynamical equation of state. We hope to report on these issues in the future.
d
076be888fb479f2b077778c90f790fb1
In this context it is worth pointing out that during the last four decades explicit kink solutions have been obtained in a number of lower order field theory models like {{formula:23938f52-be4e-4c08-a872-5fbe264ce52c}} , {{formula:b71a5004-3eb3-4778-8c94-047e8b004284}} {{cite:9400ef55460691a2110e716f74e0e51b7de4679a}} and also several non-polynomial hyperbolic {{cite:f1efe02714e5bcdacea09ed66707bc689abf4344}} and periodic potentials. Beyond sine-Gordon {{cite:9400ef55460691a2110e716f74e0e51b7de4679a}}, the latter include double sine-Gordon {{cite:6ced25b77810562cb400d4ddfe3a1d0ab035af58}} and Lamé {{cite:57811cc4613d218b514fec09da654d7a6b3a54ad}} solitons as examples. Further, recently explicit kink solutions with super-exponential {{cite:9af8c4488f371892898d36ed04152c8b16cffb55}}, {{cite:ee1e7209302aca9b10c804bb3db059987a3fa3ad}} and super-super-exponential {{cite:a086e9516230272d5b2eead5773b103d87d2733a}}, power-tower {{cite:2c81fe7d01c550ccc3227538bf2ccf652e0db227}} as well as power-law tail {{cite:250326620f805826067d5bc60ea864d04b5ba8bf}}, {{cite:affce3dc7dc0d54e7ed1b62c66cd8342408a9033}}, {{cite:040a8f3f2a3858837a6505d36267dbb61e839f4f}}, {{cite:dea41b61aae0b7d5d7717e4f9884aa33cdd8c752}}, {{cite:2d731fdfd246a9795d817f94746201c19cf99031}} have also been obtained. Thus, it is of immense interest to obtain explicit kink solutions in as many higher order field theory models as possible.
i
76a8ac97f235e876aea04bb687bb6f97
It is well known (see e.g. {{cite:df36956d9a8153bc0ac5bfeac0aab01d1bfedf50}}, {{cite:da0efb43daa9ec9732fefe1ab2f4388e1e670e72}} and {{cite:97d6233c38810cedf9920d3f5f6a488184f383df}}) that Vilenkin systems do not form bases in the Lebesgue space {{formula:69392e94-3c8f-4a33-b907-61af1e34c115}}. Moreover, there exists a function in the Hardy space {{formula:5fc1585d-b85f-4c63-b6db-13206ab28807}} such that the partial sums of {{formula:edb81f41-6b43-4c6d-bf3b-34986bb8755f}} are not bounded in the {{formula:0b367c05-54ff-4a2d-9ffc-f693be7be5bb}} -norm.
i
2db65536303706a52a7aea7c02dbd3b5
On the other hand, a 6-DOF pose is defined as a transformation of a template model. Pose estimators typically rely on geometric matching between the observation (image, point cloud) and the template, where the matching is used either directly or as ground truth for network training. Thus, a 6-DOF pose with a geometric template is a global and task-agnostic representation of the object geometry. However, for a category of objects with different instances, the geometric matching can be ambiguous, as shown in {{cite:46b2de23ccc97274e6747367c91d3468bb0cb668}}.
d
5a79a48ff57a66bf88766ff3e06cf6d8
Comparison with Baselines. We first compare our approaches with several representative CNN-based methods, including I3D {{cite:b216403480dbb8529051bafb5260f1854a687d80}}, TSM {{cite:d602532c3fa48a8c51436f50c6928c9056fec712}} and TAM {{cite:3e8997108d5e6f415b72eced70f0e489201ff8e8}}. We also include two other models based on TimeSformer {{cite:5a12732aee892b9c6dd4f9f3960bd32eaae7e7e7}} but using the same backbone Swin-B {{cite:c664c98ed087e117dcada1d0723fdc93148ee5b3}} as our models. All the models in comparison take 8 frames as input. As can be seen from Table REF , our approach substantially outperforms the CNN baselines on Kinetics400 while achieving comparable results on SSV2. Our approach is also better than TimeSformer on both datasets. These results clearly demonstrate that an image classifier can effectively learn expressive spatio-temporal patterns from super images for action recognition. In other words, an image classifier can suffice for video understanding without explicit temporal modeling.
r
b1169df0c36e5f3dbc63a641648a650a
The DP constraints are defined on any neighboring datasets that differ in data from one individual. These constraints are local, relative and dataset-independent contributing to the success of DP as a privacy preserving framework. However, an undesirable byproduct has been that many DP implementations are agnostic to the actual dataset at hand. Indeed, a vast majority of output perturbation DP mechanisms take the worst-case query sensitivity between any two neighboring datasets to determine the scale of noise {{cite:88ebc665252f4a8d058bd7b9a32dc5f960e1cbae}}. This is a pessimistic approach and can adversely affect query utility {{cite:2b7cb76896e44cfb46e706f2317f7f8d52a42945}}. Several fixes have been proposed to improve utility. In one direction, noise calibration to smooth sensitivity was proposed in {{cite:4946c9f79982283fb92558944baf838b8e33ee20}}, for which a chosen utility level is not guaranteed and the mechanism suffers from a heavy tail leading to outliers. Another direction is relaxation of the DP constraints {{cite:2b7cb76896e44cfb46e706f2317f7f8d52a42945}}, {{cite:ccac9ed960fc7072b7386cb7185ca1012919e877}}, {{cite:5fd9c6a4b38b52b05dc293e249f448a174024348}}. For example, {{cite:2b7cb76896e44cfb46e706f2317f7f8d52a42945}} proposed individual-DP, which defines DP constraints only between the given realization of a dataset and its neighbors. This, however, destroys group DP, i.e., implied DP constraints between non-neighboring datasets no longer remain. Recently, {{cite:aaee28dfaf8400c9b22e450a67914eb2a43a5ed0}} proposed designing dataset-dependent DP mechanisms for binary-valued queries that guarantee optimal utility and yet, do not weaken the original DP constraints in any way; see also {{cite:21ddd946adfcf383332d1bbae9b40d36414aaa98}}. Each dataset has a true query value (e.g., blue or red) and is represented as a node on a graph with edges representing neighboring datasets. Let mechanism randomness be homogeneously pre-specified only at the boundary datasets. {{cite:aaee28dfaf8400c9b22e450a67914eb2a43a5ed0}} showed how these initial constraints can be optimally extended in closed-form for all other datasets, where the probability of giving the truthful query response is maximized by taking into account the distance to the boundary while tightly satisfying all {{formula:4b33246d-9ce6-4165-9f1b-40fa08aaa6fd}} -DP constraints.
i
94a2850f8b1ef8e250835d2095b1f8f6
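For reference, the pessimistic worst-case approach mentioned above is exemplified by the classical Laplace mechanism, where the noise scale depends only on the global sensitivity and epsilon, never on the realized dataset; the counting-query example below is a standard illustration, not taken from the cited works.

```python
import numpy as np

def laplace_mechanism(true_value, global_sensitivity, epsilon, rng=None):
    """Output perturbation calibrated to worst-case sensitivity: the noise
    scale is dataset-independent, which is exactly the pessimism at issue."""
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(loc=0.0, scale=global_sensitivity / epsilon)

# Counting query: adding/removing one record changes the count by at most 1.
noisy_count = laplace_mechanism(true_value=1234, global_sensitivity=1.0, epsilon=0.5)
```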
Early studies that applied the Hawkes model in the field of finance include {{cite:153b3671c6059ba5dbd8e47fc3c5c34d2c077ca6}}, {{cite:f5c6c4a6435eded9b90fec4afa635d6c9ec6228e}}, and {{cite:36df7929f4eb1fe939a3cd62ca7432116cd425e7}}. The Hawkes point process or similar intensity-based models have been studied in terms of price dynamics, especially for microstructure, {{cite:f711e5f942c09f44be97a2f5b117d464473f7795}}, {{cite:606d0e40e9b80a3cd21756fd5066e7946e88e3b9}}, {{cite:6825acbe8d522504fc4cff904dfb9534710b9324}}, bid-ask price {{cite:16d996586cd944146d16b0617951b592fc9fd667}} and limit order book modeling {{cite:b0a668ba800a694a8b1317f1fd0349520cbaa05f}}, {{cite:35ce5921ab9d0fe79b0f4cbd4a7c21d640c3f420}}, {{cite:c6618b8c18d4366fec90a7f942b15062533c77fc}}, with various applications such as optimal execution {{cite:c245722998a4d9c90275428f1dc81519a7457e32}}, {{cite:545483adeca1bfca9e6343c558a10848d2539b6d}}. It has also been used in other financial studies on credit risk {{cite:f21357e9a51ca059da5bf7bc60f8304858fa1fc4}}, {{cite:e3ea53e12e1269a9e78050410fd0edb4705de4b2}}, {{cite:3552d574d9d4c713009ae11302650bd6a5e082a3}}, extreme risk {{cite:871abe4c0f410e76e82ab161c9f5e6c7b1be67fc}}, and systemic risk {{cite:e3cffce462b75500729ce49fe7a53a6e1edd3d43}}. The theoretical and applied studies on the Hawkes process in finance were reviewed by {{cite:2f3a7ac26f11ae5066d0762c7c2b6f571a40627e}}, {{cite:123b1d4b2331a08b007b4ee04d7cce46ae9ba2ee}}, and {{cite:9171adb07d80b7c9aa65d971f984cb90b8d3fbe4}}.
i
5fb0272c3aaa5af66433633c762d526a
Using {{formula:1b3660ad-c01a-4451-b816-ccf73ada0dd2}}, we can relate the two sorts of parameters. The solutions in Eqs. (REF , REF ) demonstrate that the two {{formula:4bb8a5d2-21bf-4fe3-9c99-0adc80e1ab4c}} approaches can be equivalent. Consequently, the {{formula:2cb2bd05-b9b2-45cc-8caa-1c3bf34c8df3}} amplitudes are no longer merely complex numbers fitted to the data, but are realized as mixtures of different topologies. Since {{formula:575ada23-3284-4f48-b9c0-73224a9fc77a}} can be associated with sizeable topologies, it is unlikely that neglecting the QCD-disfavored parameters is justified, although such neglect has been commonly used to reduce the number of parameters in IRA {{cite:b0d88ff6e470e43e7c85aa3dde917f84a144411f}}, {{cite:f8ef2a8fc4ccaef61c17afd795d4d6458b8436de}}, {{cite:44e6df37702cbf595f0d2189ca2b77be7cb32ba9}}, {{cite:659340e23f0b59155fdcc05b2e0d190707e2a005}}, {{cite:79389e5c3cc49ac98c9a47587de1f5ac18773aa6}}.
d
558dd5168a51d42515cfd3b4796b2c9c
Our lifetime for the 3{{formula:8b5dc774-7d63-4ceb-8f78-378532f9f725}} 4{{formula:62c30ef4-59cf-41f3-8a3f-f01ac0342757}} {{formula:ccf9b098-7157-4e7f-8c62-b385583be2a8}} level of P IV is {{formula:6e1e8696-b6e9-4d2a-a147-2b6e4e22042d}} ns, a value that agrees with the theoretical prediction of {{cite:0d7cc6dca42353653a5d3ec766e7b970d4559b2b}} (0.2205 ns), who used the CIV3 code of {{cite:b139b0a6fafe0002453f2df031658910b28a2bd2}}. The isolated measurement confirms our supposition that the short-lived decay found for the {{formula:ae9d1b06-5fba-4fad-88c6-828785ed68a2}} {{formula:69edb240-35dd-45d0-8a01-6a52738210c5}} 2 level of P II 3{{formula:7abb2ca6-a775-4dfe-a921-d3d012ac97fd}} 3{{formula:77502b54-ec07-42f8-94f7-676541dbb1cc}} {{formula:c462e85c-991a-4966-ba25-2d86071dfd0e}} (see Figure REF ) arises from P IV. The result for P IV leads us to the following suggestions regarding earlier experimental determinations. First, the shorter lifetime obtained by {{cite:869ef7bb9977e7ce777607595af62c3c96c0d24e}}, also based on beam-foil spectroscopy, could be the result of contamination from the P IV decay. Second, {{cite:ab4cd5d97ef99ebabd26656c3eb5272ffae8bfdb}} noted the presence of a second decay besides the one associated with P II with a lifetime of {{formula:e982ffd9-1081-4ebb-b37c-74af22298779}} ns that they ascribed to a blend with O I. Within the precision of their measurements, the shorter lifetime could instead arise from P IV.
r
26dc484093f2b81abe01a2ad3329c4e9
This section describes the conventional incremental TTS method {{cite:878023b66ddb83bbd88040420d3b9f82908f9cb5}} that uses GPT2 {{cite:5153bf84801dd16f014532164f0bbbe53a50386b}} and Tacotron2 {{cite:849297d7b2b79f265d93b8b66c3da2b5c89c170d}}'s encoder-decoder architecture, as shown on the left side of Fig. REF . Let {{formula:f7d25047-a005-4ca7-b06a-69178a7527d6}} w{{formula:7d937bcf-a07f-4a6e-9d53-005f6eee655b}} be the {{formula:d799683a-3d84-4842-88d7-b9b26e9afa6d}} -th word of the input sentence, where each input segment consists of {{formula:8571ac65-7ccc-4094-a05d-1e64a180df66}} words. {{formula:987ebbd2-f27e-40a9-8b96-13e04ee1341e}} w{{formula:3c18d524-e09f-469e-8974-737876d4da1d}} w{{formula:fa6526fb-10ff-4a34-9ae1-436d4e0d223b}} w{{formula:61626494-6a63-4879-a5e4-988cdc93d63c}} w{{formula:e3826c69-c047-46dd-bd21-156a34e5d061}} represents “the observed segment” and the last {{formula:6a3629c1-ef10-435f-a173-ccb5214f3eba}} -word sequence {{formula:e1b585ae-9c89-4782-8220-463f147fed43}} w{{formula:ab87b35c-ef35-4c6e-8acc-ce47c5f9e1d2}} w{{formula:32b6a77e-84f1-4b05-b49e-8915aaeece13}} w{{formula:805a4198-c641-4f9b-a8d9-f1f3fd02c3c6}} is “the current segment” in the time step {{formula:b4a3a16d-32ba-4a5d-a9c6-0764f63b0d49}} . GPT2 assumes that the probability distribution of an {{formula:c483e9ec-cee8-49ca-9504-2f4fe06ffed6}} -word sequence {{formula:8a6069c9-b72e-4af7-a58c-cbd7b098028d}} w{{formula:427735f0-e41d-4941-a4e1-ba242555f5c2}} can be decomposed into the product of conditional probabilities, as {{formula:ba6cc9ff-4faf-4bf2-a910-de8494569a31}}
m
89c4a462b2f1dc3070ad0bcfdbdba7b4
Nevertheless, the global features used in joint embedding methods might not be able to conduct fine-level matching between words and local regions of an image/video. In many cases, only a few words in the query sentence are relevant to some small local regions in the image/video. Therefore, to conduct local matching more effectively, some methods {{cite:00c6a15ec4fce8a2d1f4d66636d49f52d4a9ef71}} rely on local features. Basically, they represent an image by a set of bounding-box features and the query sentence as a set of word features; the relevance score is then determined by set-to-set matching. Recently, inspired by the breakthrough achieved by BERT in NLP, many vision-based BERT methods {{cite:128c1381d68ec36e2df0949f5b4765a0e1e0ef8e}}, {{cite:f44cf52e7ea735b2ee3b30afe0dfafe7736e299d}}, {{cite:9e91a4926213b826f8e222aba6ed41ca3a5a0e73}}, {{cite:54109e6f6afc0061e8de65629616a9d1e28464fb}} have been proposed and achieve excellent performance in cross-modal tasks such as visual question answering (VQA), image captioning and cross-modal retrieval. In parallel, Baidu has launched a combo-attention network (CAN) {{cite:43f645044e3dbc99d4659466195b699d86103773}} for effective query-to-video retrieval in its dynamic video advertising (DVA) platform. Nevertheless, since the cross-attention mechanism used in CAN incurs an expensive computation cost, it is impossible to use CAN to compute the similarity between the query and all video ads with limited computing resources. Thus, previously, we first conducted coarse-level retrieval through title-based retrieval and deployed CAN in the re-ranking phase for search efficiency. In this case, however, some relevant videos might be filtered out by the title-based retrieval due to their low-quality titles. A more reasonable way is to incorporate the cross-modal attention in the early stage.
i
9b87c883db69d6334434c7e716435a14
The quantitative results of our approach are reported in Table REF . The competing SOTA approach in deep-learning-based visual odometry is DeepVO {{cite:74bc163e9a508b8a5ee478e76859d07394ebe712}}, while VISO2-M {{cite:929671952c1264206aef6a4a03f8427fe8372910}}, a SOTA feature-based algorithm, is chosen as the classical approach against which we compare our results. Since the code for DeepVO is not open-sourced, we implemented this method as closely as possible based on {{cite:74bc163e9a508b8a5ee478e76859d07394ebe712}}. It should be noted that the structure of DeepVO closely resembles our network, with the exception of the attention layers. As can be seen in Table REF , our average and per-sequence results consistently surpass DeepVO. In particular, the averaged accuracy of our method surpasses that of DeepVO by 32.4% on translation and 37.7% on rotation, proving the advantage of attention for VO in terms of performance. Additionally, the performance of our method is consistently better than the monocular-vision-based VISO2 in terms of translation accuracy, with the exception of sequence 6. In terms of rotation, our method outperforms VISO2-M on sequences 5, 7 and 10. Overall, the average results on all test sequences show that our method is able to surpass the performance of both classical and SOTA deep-learning-based odometry baselines. {{table:3549e7f1-bd4b-4a4e-9c0a-3b3e0b3e0b3e}}
r
5bdf837a1332122e80fc3f41e14edb4f
For computational reasons, simple models are usually preferred in the MPC scheme. Hence, the MPC model often does not have the structure required to correctly capture the real system dynamics and stochasticity. As a result, while MPC can deliver a reasonable approximation of the optimal policy, it is usually suboptimal {{cite:5e4d5ae3d6b2adcb5f35c61ac1bc003c2526daa7}}. Choosing the MPC model parameters that maximise the closed-loop performance of the MPC scheme is a difficult problem, and the parameters that best fit the MPC model to the real system are not guaranteed to yield the best MPC policy {{cite:ba2209ffd570c95b2f14faed134dfc8a695ff235}}. In {{cite:891578b8ca5923a57476560401bc13ca1e361fd9}}, {{cite:ba2209ffd570c95b2f14faed134dfc8a695ff235}}, it is shown that adjusting not only the MPC model, but also the cost and constraints can be beneficial to achieve the best closed-loop performances, and RL is proposed as a possible approach to perform that adjustment in practice. In the presence of uncertainties and stochasticity, if constraints satisfaction is critical, Robust Model Predictive Control (RMPC) provides tools to ensure that the constraints are satisfied, and can be used in the RL context {{cite:85af594a6f8e635712255f33dd9e74e8c411a897}}.
i
6136c27795f08e8edfa39e770f3c33ed
Given the highly non-Gaussian nature of the process of reionization {{cite:5b88b0a1db1d4dd278a71efa57eb8453dd96a155}}, {{cite:321a31a842426c7e360e8bb342b2e1f034f41ec5}}, {{cite:63e1d47381e18694e9b09353f0e9dc49a51c570d}}, {{cite:28edee0a50568326ee21ad616e9cff4a82660547}}, one would need to employ statistical tools which encapsulate information of higher order correlations. For a Gaussian random field the power spectrum contains all the spatial information and no new information is obtained from higher order correlations. Therefore, to probe the EoR one needs to resort to higher order Fourier statistics of the 21cm brightness temperature, such as the bispectrum {{cite:2da9153c949dcb37b4ffd3cb8a1d94be17fff071}}, {{cite:e66e6c06c804d4fc3bdbeead6e07152f0894636d}}, {{cite:e7c59fd28f3065c61aedcfec88775ff892ecdc66}}, {{cite:a76bf3a04e9c6d8b16402a1d5306089551dda969}}, {{cite:efecfac45c631928ce32d15ff74d9cd0bbcce7f6}} or use the phase space statistics {{cite:9e0e17ee464827175e739defcce57bac7980895a}} to infer non-gaussian features of the 21cm signal.
i
ec6ef9d4d8510bf7210081021e6fb53e
Let us remark that explicit computations for the eta invariant of the odd signature operator {{formula:5ac89d3f-6856-4b8a-8182-0cb6b137f01f}} for hyperbolic three-manifolds have been implemented in Snap {{cite:3fc1853335329bfabf2733e42e5b8ff6562a6926}}, and are based on a Dehn filling approach {{cite:9a2c9516c51ea7bdd086cd8c4b9ac7935c34d34a}}. The key insight is that for a fixed oriented compact 4-manifold {{formula:174f5de5-ef05-4e4a-9456-a26ef2d67b8d}} bounding {{formula:72926cdf-56f6-4aa4-8f00-c06ef611e2f1}} , the general Atiyah-Patodi-Singer index theorem {{cite:24f85ebf787be6a5f8d21653e926506f5cae84f7}} relates {{formula:73fce532-c9e3-4108-86d0-44119e6d2543}} to the kernel of the odd signature operator on {{formula:200cf0f7-d6ea-48f4-b3e7-98842aab3a17}} and the signature of {{formula:f703db66-3692-4060-b7af-1d87e53ae973}} , both of which are topological invariants.
i
546da996535f2d81b89da25f5c7019bd
On the other hand, it has become increasingly evident that neural networks fall short in many aspects of human-level generalization, including those that symbolic approaches exhibit by design. For example, it is difficult for neural language models to generalize syntactic rules such as verb tenses or embedded clauses in a systematic manner {{cite:9e4bf3ff07f5e9ca88178adeb4b0b0d115b0baf8}}, {{cite:87f3b050b6c9da769ef9e6ddf47e09edd4cc3e5e}}, {{cite:ad19b8742a55c329bfeef2e45b47729b5d48d4f4}}, {{cite:11b0a1c9b1c2bd1b966534ff3b6ee2cc299dbe0a}}. Similarly, in vision, neural approaches often learn overly specialized features that do not easily transfer to different datasets or held-out combinations of attributes {{cite:9937a7815968ab975a745a6adef3f4c2dba35870}}, {{cite:c284865e29b7d6243b8e36d513cf51cd7527339c}}, {{cite:ce004660b5f1bb04429c35a7356671218133ddae}}. In reinforcement learning, where the use of neural networks has led to superhuman performance in gameplay {{cite:d9cb71cbba3633b221e94e3e7169c69924c52682}}, {{cite:748347e0caad620c9d3181e0b7fecd4713766c0a}}, {{cite:a06b0eb1dfba62b82ec4305eab6e73494220c0e7}}, it is found that agents are fragile under distributional shift {{cite:00c7469b62c80e5fc944d89870b1657a94bc7b9e}}, {{cite:7addf98d2ad6388f0a53c2499a1a43a7787f4a59}}, {{cite:a98ff8fc17d8e5dc0bf4b566a163839ea9fba3b3}} and require substantially more training data than humans {{cite:90db131618fc254aa0b259665fb2378de76a2f94}}. These failures at systematically reusing knowledge suggest that neural networks do not learn a compositional knowledge representation (although some mitigation is possible {{cite:eb9dd3240fcb240ba9db630d379e95b53f5024a2}}, {{cite:09be64d4ac4dcc87cd954dc096ce923e18a3f59e}}). In some cases, such as in vision, it may appear that object-level abstractions can emerge naturally as a byproduct of learning {{cite:c56da186656f39f850742250ffb42673b661f570}}. However, it has repeatedly been shown that such features are best understood as “a texture detector highly correlated with an object” {{cite:1d148e97278d73d7907c361a57fa10ae831e1f16}}, {{cite:fdac84afccee6185f80f85a49e0e8d6fe34da928}}, {{cite:62c441309d0e55fa3f3c02b6c9e9bfb8b8cab9e1}}, {{cite:e05aadaac8fc480dcf983df3e32e8646eced9c05}}, {{cite:eaef6400e20e47a4068f7fd543497f55f3cdfddc}}. In general, evidence indicates that neural networks learn mostly about surface statistics (e.g. between textures and classifications in images) in place of the underlying concepts {{cite:8da08f2590141c1503ee504e3e9eb9a8074ef8bb}}, {{cite:6ea74f20878bb98c37400ccbacb55ac1a38aa8ec}}, {{cite:87f3b050b6c9da769ef9e6ddf47e09edd4cc3e5e}}.
m
10202f2d74d869de035912e061835a87
In this method, different data sources are used with the same algorithm, and the respective results are combined according to a chosen combination metric. In a variety of other machine learning applications, such an approach leads to improved results {{cite:c153c2ab2944c705ba6421be833a9d2f521a6982}}. Fig. REF illustrates how such an approach can be applied to phase identification: voltage and power data are collected separately and used by different methods to obtain phase identification results, which are then combined, e.g., by majority voting. A simple variant of a bagging ensemble is implemented in this work as a reference against which to compare the proposed boosting ensemble. The implemented bagging reference method uses the result from the voltage-based technique and, if voltage data are not available for a certain customer, completes the phase identification with the power-based technique. This approach, although simple, leads to good results in the analyzed numerical illustrations. {{figure:71f125fe-8b34-4f45-8924-7a7ce2452234}}
m
b0a2095428685c31ac2041bdc9534538
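A sketch of the reference combination rule described above, with hypothetical customer identifiers: trust the voltage-based phase when voltage data exist, otherwise fall back to the power-based phase.

```python
def combine_phase_results(voltage_phase, power_phase):
    """Inputs map customer id -> predicted phase (or None when the
    technique could not be applied for that customer)."""
    combined = {}
    for customer in set(voltage_phase) | set(power_phase):
        v = voltage_phase.get(customer)
        combined[customer] = v if v is not None else power_phase.get(customer)
    return combined

phases = combine_phase_results(
    voltage_phase={"c1": "A", "c2": None},   # no voltage data for c2
    power_phase={"c1": "A", "c2": "B"},
)   # -> {'c1': 'A', 'c2': 'B'}
```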
In experiments which continue the study of burst-like growth effects within the 0.1 – 0.2 K range, we have noticed a curious feature {{cite:5433b0508b23a5d8a4e7d2668d5c287e04428879}}. After the crystal stops growing, the mobility of the crystalline facets remains very high {{cite:09d249a96985173d9eddfa7e1b8bb0bb9eafeb4a}}. The crystal starts to remelt in the hydrostatic pressure gradient and then detaches from its nucleation site at the wall in the upper part of the container. The fall of the crystal is accompanied by a decreasing pressure in the container. Unexpectedly and intriguingly, the pressure changes non-monotonically: at first, it drops below the difference in hydrostatic pressures between the crystal nucleation site and the container bottom, and then it relaxes to this pressure difference. Such behavior of the pressure implies two essential facts. First, the pressure decrease in the course of the fall is possible only provided that the crystal is growing and its volume increases; in other words, after the crystal stops growing in the burst-like growth regime, the crystal facets keep their high mobility. Second, the extra pressure drop and the following pressure relaxation prove unambiguously that the crystal volume increases during the fall not only due to hydrostatic pressure but also due to an additional factor.
i
ae4f133b3d6131a4753459c9aade2685
The logarithmic scoring scale for P-values goes back to Fisher, who initially suggested it as a method of ranking success in card guessing games {{cite:1dffb31acd093433fbf4bc1406e30909317ef0b8}}. For global testing, Fisher suggested the statistic {{cite:1263e37f6fdcf5a244320b06bdc7d7584bc040e7}} {{formula:a251f4e6-9f84-4940-a8f3-d29158b13a52}}
i
529be54781efb8c81a40f8624a61c23b
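Assuming the statistic behind the placeholder is Fisher's classical combination T = -2 * sum(log p_i), which under the global null of n independent tests is chi-squared with 2n degrees of freedom, it can be computed as follows.

```python
import numpy as np
from scipy.stats import chi2

def fisher_combination(pvalues):
    """Fisher's method for global testing of independent P-values."""
    p = np.asarray(pvalues, dtype=float)
    T = -2.0 * np.log(p).sum()
    return T, chi2.sf(T, df=2 * len(p))   # statistic and global P-value

T, p_global = fisher_combination([0.04, 0.30, 0.11, 0.51])
```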
DL is currently one of the mainstream methods in ML {{cite:680d091bb0d2d76edabc3e0d8f90939e79ade777}}, {{cite:9b9c48e5dd4529b3e785c61390960bb5a9aae6ae}} and handles ML tasks with high accuracy. DL is based on neural network technology. Neural networks are inspired by the biological neural networks of animal brains and consist of nodes and edges. Nodes are connected to each other by edges, and each edge has a weight. Fig. REF shows an example of a neural network that predicts the weather for the next day. In the figure, {{formula:d46c813f-274b-4b62-9ee3-b1155d87a0aa}}, {{formula:3e1fce85-c4bb-4b25-8e22-9b340cba7ef9}}, and {{formula:e644a19d-700f-45a5-a953-805f88e8764d}} are nodes, and {{formula:f38215d6-fe73-4c00-97a2-1d7c00ca84f3}} represents a weight. The input data, {{formula:4bd38009-179c-4828-96fb-1767b5036f12}}, {{formula:2f008a57-ba5c-43f1-9dc5-b6a4fd3dd948}}, and {{formula:c8da6306-0088-4e4a-90e3-e0d6cdc8bbc5}}, correspond to today's air temperature, humidity, and weather, respectively. The output data correspond to the probabilities of the next day's weather: {{formula:69cc9365-9e5e-4e08-93cd-7ea7249866f1}} for sunny, {{formula:89453dc0-81f6-400f-85d8-4bb0510aad19}} for cloudy, and {{formula:c068bad2-89bb-46c9-9ce5-abcd625c8bda}} for rain. This model is called a three-layer neural network: {{formula:eb3d7f48-e528-4590-acda-c17b2e1c7fa2}} represents the input layer, {{formula:122de204-7426-4250-98ca-8bbe38120ef4}} the hidden layer, and {{formula:0585fd46-238f-4a07-affb-9878af858ee1}} the output layer. {{formula:4179918f-19da-44b7-8d37-a7a9adfa4db1}} is calculated from the inputs {{formula:45a4e78d-ab53-4eff-ad2d-fc3c559c6af6}}, {{formula:8a582a05-4dd5-4ce1-a3ec-03a4707ee938}}, and {{formula:2b13a5d7-951f-4363-985d-2948ecfea60c}} and the weights {{formula:68e0a66e-70c8-40da-a19a-3953055fbb79}}, {{formula:4e8202ec-d07e-47c6-a492-cde37d326b58}}, and {{formula:ad04d4d6-4b5d-427d-963e-f345fb1b1edb}} by using a function, e.g. a sigmoid function. The other outputs, {{formula:d56fb61a-4685-4fe6-a4f5-d72115d1e8d8}} and {{formula:01d3bd58-54ef-4711-9f9c-4765e732bc30}}, are calculated in a similar manner. Before using a neural network, it must be trained to estimate the appropriate weights, {{formula:19042bd8-ddc1-46c0-a8d3-15b4c72254b8}}, by using weather information from the past. This step is called training, and the weather information used for the training is called the training data/dataset.
m
0fef0369b9cf3e1a2ff7aa017bab2569
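A minimal NumPy sketch of the forward pass of such a three-layer network; the layer sizes and random weights below are placeholders, since in practice the weights come from training on past weather data.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))   # input (temp, humidity, weather) -> 4 hidden units
W2 = rng.standard_normal((3, 4))   # hidden -> (sunny, cloudy, rain)

x = np.array([18.0, 0.65, 1.0])    # today's temperature, humidity, weather code
h = sigmoid(W1 @ x)                # hidden-layer activations
y = sigmoid(W2 @ h)                # raw scores for sunny / cloudy / rain
print(y / y.sum())                 # normalized next-day probabilities
```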
The Kac normalization ensures the intensivity of the energy density in the {{formula:31d09ff2-8f14-4125-9b1a-62d3c7d999b5}} regime. The static and dynamical behavior of this model is strictly governed by the interaction strength {{formula:f5110237-7c72-4269-900f-00b54157080e}}. At {{formula:150fe63f-fd00-4843-b6cf-85eb41bff2c8}} the model reduces to the transverse field Ising model (TFIM), which is exactly solvable by mapping to a system of spinless fermions via Jordan-Wigner transformations {{cite:153f21df2f0ee09c0917812b14d113b7a88d6d90}}. The TFIM shows a quantum phase transition from the ferromagnetic to the paramagnetic phase at {{formula:79143b9c-d2f8-411d-9f29-2490495ad1ff}}; the quantum phase transition persists with decreasing {{formula:3c63110c-4f67-4c27-8bc1-29afde18e4d3}}, albeit with increasing {{formula:d408a6a0-25c5-4656-b886-76f8d4f840c2}} {{cite:9073fc8ef1e9ece1fe387b4a1efa72647192c540}}, {{cite:2514e2855b13bd1976a68abf0af4cb4bce348b59}}, {{cite:c52145dbef6c1ae033f5d310cdb1ecce8ab262e3}}, {{cite:9e9fd947fee64ccbfe32951966a528c72c4e3e15}}. The other extreme at {{formula:0b0ab411-ac2a-426d-957a-061033d02ef1}} is the fully connected regime, which is analytically tractable for both static and dynamical properties {{cite:c7d1d42c54bfb2bc9fcbfc631e8f06b46c7c9738}}, {{cite:8b15339b8883637e226f621fbbc50967b4101ad8}}, {{cite:755496c1d5b5ade4fe8d311455bc634ab3c55a33}}. The model exhibits long-range ferromagnetic order at low finite temperature for {{formula:e4be269e-6980-44f8-8ae6-716f801d6e67}} {{cite:b96b30a47817613f9d5d055cee71ed71256485ca}}, {{cite:4fc00baa687a27613cd04e0501071247fe9fb399}}. As there is no spontaneous symmetry breaking in finite systems and the model REF is {{formula:71030b90-e552-4a15-98b8-7505eb8f1507}} symmetric, the finite-temperature states with ferromagnetic order also have {{formula:0e7509f6-fd18-4654-b113-0578acd7ead3}} symmetry, see appendix . The region {{formula:0a302152-f489-4baa-91fb-90597e72289b}} is extremely rich and exhibits a plethora of exotic phenomena, such as prethermalization {{cite:8de2211ffabd83d31fa882a80ac97feb0d88f423}}, nonlinear propagation of light cones {{cite:910cde6b8cde1ba4aa330bcce351feb82ed93be6}}, {{cite:f78d159838face7aa37969a6620a46dc1c22d4ca}}, dynamical phase transitions {{cite:99e9dd78dba06955a9ed410a33025e4e785b74ee}}, {{cite:6252fc26128791d35c690191ef38496e76f8db63}}, {{cite:e94748cc170daa50d4dbc96d6a8357a5c9f13006}}, {{cite:4e1014c189d341654f5f619b7e1fccc0d5f886df}}, {{cite:4626a6080bad997814f6a91cb370e2a2e2bfc051}}, {{cite:755496c1d5b5ade4fe8d311455bc634ab3c55a33}}, and dynamical confinement {{cite:15d63600a700abb338b462b47fdb1fc3850c227f}}, {{cite:c7d8d8262c7cd3236207d41c520a1e46a727e19e}}, {{cite:d6874fff507fc46b5fed4c3e13dfbdf57b7b1ba9}}. Furthermore, the increased interest in this model is also due to its great experimental relevance, primarily with systems of trapped ions with controllable transverse-field strength and interaction range. Several static and dynamical properties of the LRIM have been studied in the laboratory so far {{cite:564177460bd2941270d104ef921179b43e0ffded}}, {{cite:4e1014c189d341654f5f619b7e1fccc0d5f886df}}, {{cite:4626a6080bad997814f6a91cb370e2a2e2bfc051}}, {{cite:d6874fff507fc46b5fed4c3e13dfbdf57b7b1ba9}}, {{cite:8de2211ffabd83d31fa882a80ac97feb0d88f423}}.
m
20686ba15fb035ac95490c1e252ea539
In the second experiment, the effectiveness of the local motion planner is verified. First, in order to demonstrate the superiority of the proposed planner (Planner A), we compare it with the planner proposed by Alonso {{cite:baa6931cf60d7eb570eb2894daf81b54507ac212}} (Planner B) in the simulation environment. The sizes of the obstacles in simulation are consistent with the experimental setup; however, the obstacles are considered insurmountable in Fig. REF (a). Both Planner A and Planner B are effective and perform identically in this case, and the path length of the object is 5.673 m. In Fig. REF (b), Planner B fails, while the obstacle-crossing ability of Planner A is activated. The path length of the object is 5 m, which is shorter than the path planned in Fig. REF (a). Therefore, the obstacle-crossing capability of Planner A can help to shorten the moving distance of the transported object. The yaw rotation angles of the formation in the two scenarios are shown in Fig. REF (c) and Fig. REF (d). In Planner A, the local path of the formation is obtained by Algorithm REF , and the maximum rotation angles for the formation to cross the two obstacles are {{formula:f67a3cbb-a8ae-47b1-b237-015d901e73ab}} and {{formula:a3497ee2-da95-49d6-a2d9-f854c3feb4f2}}, respectively. Through the purposeful rotation of the formation, the robot team can successfully cross the obstacles in Fig. REF (b). The changes in formation shape and in object height during the obstacle crossing by Planner A are further illustrated in Fig. REF (e) and Fig. REF (f). Due to the low height and small radius of obstacle 1, the robot team can cross it without changing the formation, which remains an equilateral triangle with a side length of 1044 mm; the object height is 90 mm. Since obstacle 2 has a greater height and a larger radius than obstacle 1, the robot team has to expand the formation before crossing to ensure safe passage. The optimal formation is an isosceles triangle with a base of 1250 mm and equal sides of 1304 mm, which is formed by the Optimal Formation Function and Algorithm REF . The object height is 240 mm in this formation.
r
35c8f73faf3f1d02b05e230b3396f8bb
There are two minor drawbacks in our experimental setup that rule out a quantitative comparison of the measured intensity with the model. First, the combined response of the speaker and microphone has a frequency dependence, leading to a broad peak centered close to 5.5 kHz; this can clearly be seen from the single-cavity red plot in Fig. REF . Second, some sound gets transmitted from speaker to microphone through media other than the intended cavity, including the cylinder walls. Although this transmission is very low in intensity, it affects the shape of some of the resonance peaks due to interference between the sounds reaching the microphone through the two pathways. The effect is more visible when the sound intensity through the cavity becomes comparable to that through the other media. As a result of destructive interference, just after some of the peaks one can see the sound intensity dipping below the background level and the peaks acquiring a Fano-like asymmetry {{cite:b2f6ad2686ee234ef498dcecaa97c18dc1a2625c}}. In fact, this effect was seen to be more drastic with another microphone, which was not so snugly fitted into the end cap, leading to more sound transmission through media other than the cavity.
r
29b31b24e1feeef78d7cd0be303fea2a
UC Merced Land Use Classification Dataset: Table REF shows a quantitative comparison of our method with the baseline for the UC Merced Land Use Classification Dataset {{cite:e90b335cb1ce1aa000ce25fa0e31aee8a5b2513f}}, {{cite:873d41fbb908a5109f61b290b63dc13a9cdb2fa6}}. We compare the performance of the entropy and margin sample selection strategies with the baseline and show significant and consistent performance improvements: both active learning strategies outperform the baseline by a significant margin. We report a maximum mIoU improvement of close to 15% with as little as 2% labeled data, and a maximum improvement of about 18% over the baseline when training with 5% and 12.5% labeled data, across the two active learning strategies. Figure REF shows how our proposed method qualitatively improves over the UC Merced Dataset baseline for different labeled ratios. Our method predicts a finer coastline with no false positives, even with as few as 34 labeled images, which is 2% of the labeled data (Row 1 of Figure REF ). Similarly, when using only 5% (85 images) of the labeled data, our method predicts the green river that blends into the background, while the baseline method completely misses it (Row 2 of Figure REF ). This shows the importance of having a representative pool of labeled data, especially in a low-data regime, as is our case. With 12.5% (211 images) of labeled data (Row 3 of Figure REF ), our method accurately predicts the complex shape of the airplane (Column d), as opposed to the baseline (Column c), which is confused between multiple unrelated classes.
r
b0db98cdaedae95c1c6a9749f4c71c06
Hence, to overcome the second source of non-i.i.d. updates, we take the standard approach of requiring the clients to share the same initialization before the training phase starts. Note that there is a line of work {{cite:dcfc5c65b4b65bff5f29b9b4457bc0ff010ca619}}, {{cite:9ac3eb38176a61de00457df1ab820e7fae49d2f8}}, {{cite:986a89aecce84d01bffd550cd757e57d542697a8}} focusing on automatically addressing this issue using matching algorithms and Bayesian non-parametric models. We deem it an interesting future direction to incorporate these works into F{{formula:82152cc8-bbb7-421a-9bfd-c94ea3e004d5}} ed-Learning.
d
24b1f511c5c40c2afc158ecb3ae67b79
In these ten inclination bins, we calculate the mean magnitudes of the galaxies, applying weights found using importance sampling (Appendix ). The calculated means of bins two through nine (avoiding possible detection biases in the first and last bins that could affect the results) are compared to the model value at the mean inclination per bin and per waveband using the MCMC Python package emcee {{cite:057c4eb5beecd6a52aee2ea40d54e3bd1d649558}} with the following likelihood function: {{formula:b11fb960-fddf-4e2a-b4cc-fce5064cdc82}}
m
9834c4aef5c4528766a48a95f4f97b72
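The likelihood itself is behind a placeholder, so the following emcee sketch uses a generic Gaussian likelihood between binned mean magnitudes and a toy inclination model; the model form and all numbers are assumptions for illustration only.

```python
import numpy as np
import emcee

def log_likelihood(params, inc, mag, mag_err):
    # Assumed toy model: magnitude offset plus an inclination-dependent term.
    model = params[0] + params[1] * (1.0 - np.cos(inc))
    return -0.5 * np.sum(((mag - model) / mag_err) ** 2)

inc = np.linspace(0.2, 1.4, 8)                       # mean inclination per bin
mag = 14.0 + 0.5 * (1 - np.cos(inc)) + 0.02 * np.random.randn(8)
err = np.full(8, 0.02)

ndim, nwalkers = 2, 32
p0 = np.random.randn(nwalkers, ndim) * 0.1 + [14.0, 0.5]
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_likelihood, args=(inc, mag, err))
sampler.run_mcmc(p0, 2000)
samples = sampler.get_chain(discard=500, flat=True)  # posterior samples
```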
Model Agnostic Counterfactual Compounds with STONED (MACCS) {{cite:df9b8c955e3f32323a23d35739476e7360addbe2}} works in the molecular domain. The method takes as input a molecular graph represented as SELFIES (SELF-referencIng Embedded Strings) {{cite:f61c53ec1a281e62812ef2f837e846e35d9e2c72}}. The approach employed for molecular counterfactual generation is built on the Superfast Traversal, Optimization, Novelty, Exploration and Discovery (STONED) method, which enables rapid exploration of chemical space without a pre-trained generative model or a set of reaction rules {{cite:79c6d63b6d2c0a253fb9ea005f28acb7b6b7d193}}. The STONED protocol consists of string insertion, deletion, and modification steps that can generate thousands of perturbations of a given molecule, yielding valid molecules that are close in chemical space. This works because the molecules are represented as SELFIES, whose modification/perturbation again yields a valid molecule {{cite:f61c53ec1a281e62812ef2f837e846e35d9e2c72}}. After expanding the chemical space around the original molecular graph, MACCS identifies similar counterfactual molecules with a changed prediction, selecting a small number of these using clustering and Tanimoto similarity. By clustering the counterfactual examples and selecting, for each cluster, the counterfactual closest to the original molecule, the explainer can return multiple counterfactuals that differ from each other.
m
0c122f5a3ffbbe4dbdebb04f8f88b226
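A minimal sketch of the STONED-style perturbation step using the selfies package; it shows substitution only, whereas the full protocol also uses insertions and deletions.

```python
import random
import selfies as sf

def mutate_selfies(smiles, n_mutations=1, rng=random.Random(0)):
    """Randomly substitute SELFIES tokens; any SELFIES string decodes to
    a valid molecule, which is what makes this perturbation safe."""
    tokens = list(sf.split_selfies(sf.encoder(smiles)))
    alphabet = list(sf.get_semantic_robust_alphabet())
    for _ in range(n_mutations):
        tokens[rng.randrange(len(tokens))] = rng.choice(alphabet)
    return sf.decoder("".join(tokens))

print(mutate_selfies("CCO"))   # a valid molecule close to ethanol
```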
Figure REF (right), taken from {{cite:754d9fa53c24049d337fca2272b89a5af0a54972}}, shows the impact of changing the ratio of electromagnetic to hadronic particles, {{formula:5b5388bd-d60f-4c93-afd6-50b8bbd9931c}} in the observables {{formula:e3db3c86-1a99-46c8-9352-91366c8d2f9b}} and {{formula:a30f1833-cc6a-42bb-818b-0d04247d65f4}} together with a measurement by Auger {{cite:7bf964829342f0e2c9d73e14114a6f1d4360c901}}, by following the methodology described in {{cite:7bf964829342f0e2c9d73e14114a6f1d4360c901}}. When {{formula:f74a3cd5-294d-4d93-9b02-ff93857c8e77}} ({{formula:1d63dd4b-64f3-474f-921c-725a5b335e41}} ) is modified the simulated line shifts parallel to itself: the multiplicity has a correlated effect on {{formula:b47a0d61-305a-40e1-a622-b6376e60a0fd}} and {{formula:a9b533d3-8765-494b-b430-b06c6abefe3c}} . On the other hand modifying {{formula:bfca6eaa-1115-4d9a-bac5-c795ab5c9a43}} changes the muon number and leaves {{formula:a7cc61d7-6c6a-4a86-ba74-8b6a18d4b9c4}} unchanged. A decrease of {{formula:0ae2109f-a799-408c-8ab2-bf8c65f821e9}} of 15% at the {{formula:ebbf2d65-17cf-435c-a8eb-c646f5fe89f9}} TeV would be enough to make simulations compatible with air shower data at {{formula:3bb821c4-9bb8-4e7d-878f-8ab4c73616d5}} eV. In {{cite:754d9fa53c24049d337fca2272b89a5af0a54972}}, {{formula:04808145-0502-4a8d-ab25-5529dcd4357e}} was proposed as an experimental observable to be measured in LHC calorimeters as a function of pseudorapidity and central charged particle multiplicity. It is claimed to be a new handle for the explanation of collective hadronisation in p-p collisions, and distinguish between quark-gluon-plasma-like (QGP-like) effects, from alternative more microscopic effects. Precision measurements of {{formula:9709c884-3bdd-4dc4-8ba1-f7694fd72d2d}} to 5% at the LHC could contribute to a better understanding of muon production in air showers {{cite:e367441f074ae399f40cbd89a1f978a1e885e622}}.
d
d835df8892372cdea9dfbf46548d6360
In checkpoint averaging, the final model is obtained by averaging over the most recent or best-performing checkpoints. Checkpoint averaging is commonly associated with Transformers {{cite:53dae22e9b364197c91b5987b5561436e7019bd9}}. {{cite:0cd9c4cf00ca2db956d61900d3495d05efc22ad9}} argues that averaging all the checkpoints along the trajectory of stochastic gradient descent leads to wider optima and, consequently, better generalization. Checkpoint averaging has been used by {{cite:1f98ced7a06c71ca0680cc48b49a437e60e24202}}, which combines averaging over checkpoints stored every 500 iterations with Elastic Weight Consolidation {{cite:dfc07d6af53dddf7b035923f4d2e50b7280429b4}} to overcome overfitting when fine-tuning a pre-trained model to a target domain. {{cite:f03342efdb6e65350f36db0b2a6313268a08cb86}} proposes averaging the most recent checkpoints after each epoch during training in order to reduce training time. A minimal averaging sketch is given below.
d
879ddd6725289968e65cdaa6ccb39cb3
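A minimal PyTorch sketch of uniform checkpoint averaging, assuming each checkpoint file stores a bare state_dict with identical keys; the file names are placeholders.

```python
import torch

def average_checkpoints(paths):
    """Uniformly average model weights across saved checkpoints.

    Non-floating-point entries (e.g. integer batch counters) are kept
    from the first checkpoint unchanged.
    """
    avg, n = None, len(paths)
    for p in paths:
        state = torch.load(p, map_location="cpu")
        if avg is None:
            avg = {k: v.clone() for k, v in state.items()}
        else:
            for k, v in state.items():
                if torch.is_floating_point(v):
                    avg[k] += v
    return {k: v / n if torch.is_floating_point(v) else v
            for k, v in avg.items()}

# Usage (hypothetical file names):
# model.load_state_dict(average_checkpoints(["ep98.pt", "ep99.pt", "ep100.pt"]))
```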
The parabolic Littlewood-Paley inequality (REF ) was first proved by Krylov ({{cite:ca0c07cb742fba749bdc350cda2586d522646cda}}, {{cite:c74d4808fd62fda57907dfa187edb889604d8b37}}) for the case {{formula:8be2000b-eecc-4e6f-9f4e-718cdd0ce7bc}} with {{formula:c2bef1b6-a030-443a-b9f4-e9678b9df3f4}} depending only on {{formula:9c8214c2-8a84-4dd2-be2f-cdd0a996dce3}} . In this case, if {{formula:e18ac08a-cd5a-4a4d-b76c-a5e966a6d885}} depends only on {{formula:570ec292-471c-4f78-8ef7-6d0f420af1d4}} and {{formula:a9417ef3-751f-41b6-8510-039d39b8faff}} then (REF ) leads to the classical (elliptic) Littlewood-Paley inequality (cf. {{cite:14c4b330135eb19cee53e1de49b9fe81cc9be6da}}): {{formula:696d7cb4-c58c-41c3-9a83-fd1ce94cbaa3}} A commonly cited form of this inequality is sketched below.
i
938dbff04dd32c5f60b2c6f0b8dcf2be
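For orientation only: the block below gives one commonly stated form of the heat-semigroup case in our own notation; the paper's exact displays sit behind the formula placeholders above, so this is an assumption-laden paraphrase rather than a reconstruction.

```latex
\left\| \Big( \int_0^\infty \big| \nabla_x\, e^{t\Delta} f \big|^2 \, dt \Big)^{1/2} \right\|_{L_p(\mathbb{R}^d)}
\le N(d,p)\, \| f \|_{L_p(\mathbb{R}^d)}, \qquad p \in [2, \infty).
```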
Since GAN training heavily depends on network architectures, for a fair comparison we list only results obtained with the same network architectures. For the unconditional generation task, we present our FID scores on the three datasets, averaged over 5 random runs, in Table REF . We compare with methods related to our work, including WGAN-GP {{cite:bf8ad0d99734d8e55210ade2bc565afd62f12504}}, MMD GAN-rq {{cite:b0d10ad30e0fbcb72eaa771ed6c1608247c8a603}}, SNGAN {{cite:d9f82cf60677bf40e185a7ebf6473dea70d68620}}, CTGAN {{cite:0363f5a0bd4baefdb8471c0523eff4ece67b1b5f}}, Sphere GAN {{cite:77a535d14228a3c2ce84ebcc8d7e0ff4d62e7cff}}, SWGAN {{cite:c576b2d90ec222f74368db994aa720a062b1fd4c}}, CRGAN {{cite:7d584ee134ecfa76476b7b6d9dd08d9da2a029d9}} and DGflow {{cite:1ebeff843c3b61c316aedd0ba35147b687ba446c}}. {{table:c0283d19-399a-4434-a91a-359c40de1bb9}}{{table:5fd4e820-1de7-45d2-8518-3b5f0db77169}}
r
735068fc487a6eb532e61cde118edba5
For multi-coil data processing, one could simply apply the PRoM algorithm with {{formula:502ad2bc-5a68-4e5b-97ee-84740247058c}} times the number of congruence equations, where {{formula:b8145089-c719-4bac-ba19-e1e214a2372e}} denotes the number of coils. However, we find that first combining the coils is not only faster but also yields lower RMSE (results not shown). Let {{formula:5e8644d1-5bf1-44b5-8214-62aeab4138a6}} denote the {{formula:68291314-3414-4624-8b6a-d2ce9bbad3ac}} acquisition observed in the {{formula:8dc773bd-ce22-4a4c-90f9-b94573ca8d6f}} coil; the coil combining is given by {{formula:dcab86c6-a246-4e30-8e02-c829333f43c3}} . Indeed, this is simply the phase from the off-diagonal entries of the “multi-snapshot covariance matrix" typically used in array processing {{cite:8756c49d19d67e063abaa557578f410a83b59083}}. A sketch of this combining step is given below.
d
6618fec441af519017fa90c9ac5ea400
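The exact combining rule is hidden behind the formula placeholders above; the NumPy sketch below shows one common phase-difference formulation, under the assumption that the combined phase is the angle of the conjugate product summed over coils, i.e. of the off-diagonal entry of the per-voxel covariance matrix.

```python
import numpy as np

def combine_coils(y1, y2):
    """Coil-combined phase between two acquisitions.

    y1, y2: complex arrays of shape (n_coils, ...) holding the first and
    second acquisitions per coil. Summing conj(y1) * y2 over coils builds
    the off-diagonal entry of the multi-snapshot covariance matrix; its
    angle is the combined phase estimate.
    """
    return np.angle(np.sum(np.conj(y1) * y2, axis=0))
```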
As discussed before, NSs are the final stage of the collapse of massive stars. During the dynamical process of collapse of these stars, nuclear processes occur, such as electron capture, which make the stellar matter richer in neutrons, in such a proportion that the proton fraction of the star is close to {{formula:438a1eca-3858-4579-a254-87e3d3d0b5ae}} or less {{cite:c3cb77a15ec48548963520bf428422acdde0872e}} and most particles are neutrons. Therefore, to a good approximation, a NS is composed solely of neutrons. However, it is known that in the NS we also have protons and electrons in beta-equilibrium with the neutrons, and in the inner core of the star more exotic states could appear, such as hyperons or quark matter {{cite:c3cb77a15ec48548963520bf428422acdde0872e}}. Since the purpose of this work is to analyse the modifications due to deformed fermionic kinematics, a single species is the simplest situation. Therefore, we continue our analysis by applying the DSR equations to pure neutron matter.
r
10a6e504f7a58a1475173d7a925c047e
In July 2012 the ATLAS {{cite:54d9c9f23a0f238c847b683f7a4eafc7c29d959c}} and CMS {{cite:ed8aea69212029e8b982192276983d173ca9c7bc}} experiments at the LHC announced the discovery of a new particle consistent with the long-sought Higgs boson {{cite:d31455b6914dd3f0b10926600335857c86f9da98}}, {{cite:771fef4901b02af97136083c8ffa97eebf08b18b}}. The results presented here constitute an update of the {{formula:dabb98fd-4af2-44f4-9099-df42650ac9ad}} analysis with a dataset of 13 {{formula:13f228e9-0157-46fa-a645-2fd2efb92ea6}} taken at a center-of-mass energy of 8 TeV {{cite:77846547a1b73c18853c1bc3a1551fa39bcc31f3}}. In particular, we summarize the methods of background estimation for this search channel, which focuses on the low-mass Higgs signal region.
i
21eccd67554e34b962e877b1393a6786
Here, we briefly review the standard Nyström approximation (Nys) {{cite:3895366300626607dedb4da928c2002ed6066599}}. For the kernel matrix {{formula:407a2627-18d8-4aa2-b654-1c0e1f16c0bc}} , we first select {{formula:f7602a60-c525-4884-86da-e70e86e9468f}} of its columns uniformly at random without replacement and denote the matrix consisting of the selected columns by {{formula:6b699ec6-97c2-4cb5-a91b-32fc189681c6}} . We then use {{formula:ecd0f82d-2cbc-4c77-98f8-59b91ebcb2c9}} to denote the intersection of the selected rows and columns of {{formula:25f8b0f0-a1ae-4891-b7ec-be4960dea0f5}} . The standard Nyström approximation matrix is then {{formula:c792be02-5154-46e2-8ea8-1441dc0bd871}} . Note that {{formula:39f28023-5312-4035-b8cf-1a2c075fe0f1}} may be non-positive definite, in which case the positive definiteness of {{formula:df74edb0-be28-472c-a3f6-0d2b1861c92c}} can be guaranteed by slight regularization. A short sketch of this construction is given below.
m
2ae70644f4ec9a41e87cf8e7eed3ea4f
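A minimal NumPy sketch of the construction just described; the uniform column sampling and the small ridge guarding positive definiteness follow the text, while the variable names and the use of a pseudo-inverse are our choices.

```python
import numpy as np

def nystrom(K, m, reg=1e-6, seed=0):
    """Rank-m standard Nystrom approximation of an n x n kernel matrix K."""
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    idx = rng.choice(n, size=m, replace=False)  # m columns, uniform, w/o replacement
    C = K[:, idx]                               # matrix of selected columns
    W = K[np.ix_(idx, idx)]                     # intersection of selected rows/columns
    # Slight regularization guards against a non-positive-definite W.
    return C @ np.linalg.pinv(W + reg * np.eye(m)) @ C.T
```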
The sweeping success of self-attention mechanisms shifted the focus of our community from Convolutional Neural Networks {{cite:df319d9e4a8f10100ff86faea21d8e305a445085}}, {{cite:79fe97baf9c1252a4ef8d66d2686e4db288c720b}}, {{cite:2e091469f37a9eb4de2fae41070e55bd3eafdb30}}, {{cite:95aafdbb446845afa199c73f49f7e8955f1fff26}} to seeking software {{cite:a0d2d8790493ca6e32df09ab40f120fccc8b8aa0}}, {{cite:4ed79976f62f01de590a94a7d0e923069803fa2b}}, {{cite:88bb3c8c192e514830d34f5e3e952efde7b3d0e6}}, {{cite:d4c4b9a756576ae745f34ebf6b0dd3c9ec8a355d}}, {{cite:9f8fae609f0001143acbd9a4f5419f677e6ee763}}, {{cite:ab47e23d7333a502e83f02fdba5288930a468897}} and hardware approaches {{cite:72ebc34e4b67caeb44e0e45fd67c8f2e1409f99d}}, {{cite:94b58a611ce4bce8aa1a2696f81ae7da1ac4f49d}}, {{cite:bab81b3cf4371dff3573a11e2d1353fa92688911}}, {{cite:e84bfadd4dbe516920167f2d708734e0f18fa2d3}} to improve the efficiency of the attention mechanism. At its crux, it creates and employs three abstractions of its inputs (e.g. words or pixel patches): query, key, and value embeddings. The core operation of self-attention is the computation of pairwise correlations between query and key embeddings, followed by a weighted sum of value vectors proportional to the measured correlations. Despite its compelling performance, the associated compute and memory cost of self-attention can readily become inordinate (the cost of pairwise correlations grows as {{formula:2afca537-cd76-4c33-a420-02e88218b905}} with input sequence length), especially as the input sequence length increases (e.g. > 2K), a prevailing trend in recent deep learning models {{cite:88bb3c8c192e514830d34f5e3e952efde7b3d0e6}}, {{cite:9af868f53ee71b4c09b66332a9e66b9b28ae5b6f}}, {{cite:22f0dbe0dc95515ca7e78cb68d71ec7d1803e0e0}}, {{cite:2f845413c0151ed7de854a2698821b047e348566}}, {{cite:a0d2d8790493ca6e32df09ab40f120fccc8b8aa0}}, {{cite:b8ae37ef7a329ce26e6394fdd080bb0470560ac9}}. A single-head sketch of this mechanism is given below.
i
d472b704799e8289740009f8ea1374fe
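A single-head NumPy sketch of the mechanism just described; the projection shapes and the usual softmax scaling are our simplifications.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head self-attention over a length-n sequence.

    X: (n, d) input embeddings; Wq, Wk, Wv: (d, d_k) projections.
    The (n, n) score matrix holds the pairwise query-key correlations
    whose quadratic growth in n drives the cost discussed above.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])        # (n, n) pairwise correlations
    scores -= scores.max(axis=1, keepdims=True)   # numerically stable softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V                            # correlation-weighted sum of values
```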
The existence of equilibrium states for {{formula:dcf423cf-bb5a-47e1-83a2-1f9c916977f7}} depends not only on the potential map but also on the dynamics of {{formula:fe5ac2ac-bdd6-4331-ac40-493301a46004}} . And, as {{formula:d94d131b-9366-4ce7-8a81-c3e970a65b67}} is an Anosov diffeomorphism, the ergodic properties of {{formula:c4bb66b6-8150-4fd8-8081-545367a88433}} rely strongly on the underlying dynamics {{formula:6d07ad6c-57ab-46ad-96ac-8fcfdfc9b385}} . For instance, if {{formula:df3b6c87-72e3-43ae-a546-1787c4e8c65c}} has no probability measure of maximal entropy, as in the examples described in {{cite:5a769b82d07eb18af86aac3dfec45a610e1c0b87}} or {{cite:92126b3f2644bf27b8ab3b1e6ed8c6fe3951224f}}, then the same happens with {{formula:9790ba42-d187-4d5e-8dfc-5e1aaf6324ad}} (cf. {{cite:f6d9b96eefade26cdd1e0beeb496a57c9efcc37d}}). Yet, if {{formula:ddabdbd3-8dfc-484d-ba19-203bb2e5e132}} is expansive, then there exists at least one equilibrium state for {{formula:96773d89-a8eb-4599-9b18-4f66c9c5fe15}} and every continuous potential. Indeed, in this case, the set of points that prevent expansiveness must be contained in a compact subset of the center laminations, inside curves whose lengths remain uniformly bounded under forward iterates of {{formula:7266936f-c719-496f-a5e2-8717c4d55cda}} (more details in {{cite:f6d9b96eefade26cdd1e0beeb496a57c9efcc37d}}), and so {{formula:dbe63f4d-ac45-4749-ad7a-e82574c129b4}} is entropy-expansive (cf. {{cite:a8b07be0793c0c3a995eb580ea917b10c91dbb39}}). Thus, if {{formula:ddb54590-914e-412b-b658-1c6a481df8a8}} is expansive, then the skew-product {{formula:53a68287-0d2a-453b-9e20-bf2282774c18}} is entropy-expansive.
r
8e23bdebbd26e5b187c39bcdf09f0fb2