| text | label | id_ |
|---|---|---|
One aspect in which the second-order Trotter formulas that we use perform well relative to popular methods, such as qubitization {{cite:e4e0611b79197a5367df58053c3b39830027b792}}, linear combinations of unitaries {{cite:cdf774f112b2a33e27b05f75961426d09a7ed77e}}, {{cite:e4e0611b79197a5367df58053c3b39830027b792}}, {{cite:dc2012c76cdeb20a46d054cdab531d08e7b56005}} or their classically-controlled analogue QDRIFT {{cite:031801b465f718882e51f20b12931b21782e1f00}}, {{cite:0a10299b9dc6c643da2565325bd381afdd369de9}}, is that the complexity of the Hamiltonian simulation scales better with the size of the electric cutoff, {{formula:621ff027-7368-45d0-8712-b6763e9a36ff}} .
The complexities of qubitization, LCU and QDRIFT all scale linearly with the sum of the coefficients of the Hamiltonian terms when the Hamiltonian is expressed as a sum of unitary matrices.
This leads to a scaling of the Trotter step number with {{formula:1af45998-28a1-4f1f-9fcb-5483af482749}} of {{formula:cc43fbc1-5666-4b9a-b666-2e45c59a2a7f}} .
Instead, it is straightforward to note that the norms of the commutators {{formula:3d2edbf9-bdba-4f8e-859c-32c40cc6324d}} scale as {{formula:7bf08d8d-7434-4abd-8346-ef93a4461568}} and that all other terms scale at most as {{formula:ccd3a865-06cb-4a06-8f81-21aea8bb9f63}} (see Lemma [lem:combound] for more details).
This leads to a number of Trotter steps needed for the second-order Trotter-Suzuki formula that is in {{formula:491c9579-7824-428b-a998-41d2ccf74777}} from (REF ).
Thus, accounting for the logarithmic-sized circuits that we will use to implement these terms, the total cost for the simulation is in {{formula:d6361479-2bd2-41f8-a918-a9bf851abf0a}} , which is (up to polylogarithmic factors) quadratically better than LCU or qubitization methods and without the additional spatial overheads they require {{cite:f6de9457295461c81102975392f158cdbce8cf8a}}.
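As a minimal numerical illustration of this second-order scaling (a sketch only, not the circuits used here; the dense two-term Hamiltonian and all sizes are hypothetical), the symmetric Trotter step below exhibits the per-step error of order t^3 whose prefactor is set by commutator norms:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def random_hermitian(n):
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (m + m.conj().T) / 2

A, B = random_hermitian(8), random_hermitian(8)
H = A + B

def trotter2_step(t):
    # Symmetric (second-order) Trotter-Suzuki step: error O(t^3) per step,
    # with a prefactor controlled by nested commutator norms.
    return expm(-0.5j * A * t) @ expm(-1j * B * t) @ expm(-0.5j * A * t)

for t in [0.1, 0.05, 0.025]:
    err = np.linalg.norm(trotter2_step(t) - expm(-1j * H * t), 2)
    print(f"t={t:6.3f}  step error={err:.3e}")  # shrinks ~8x per halving of t
```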
| m | bd694c1e9cbf2eac5835dfb951aae0e2 |
To evaluate our inferred spike train {{formula:6092d0c8-4205-466a-8976-3c7c15ddefcf}} , we employ the following standard practice {{cite:a11bde4a92e034c4cb8cc2ce88fe76303033f1d2}}: The GENIE dataset provides ground-truth times for neural spikes. Let {{formula:35bb3a8b-de93-43d7-a9d3-b0040c0785bd}} be a one-hot-encoded vector containing indicators of when spiking occurred. For a particular bin length {{formula:28550f82-2823-41f5-a8c4-846ee921e596}} , we respectively reduce {{formula:e38168e9-9608-48df-a406-63413fa51595}} and {{formula:4d7dbab9-5dd8-4c15-a23a-cedc6458ec3a}} to vectors {{formula:ab97148a-862a-4633-a3cc-d777a2a73133}} and {{formula:093491c1-acf5-4e34-9738-45848f1fb402}} of length {{formula:a5bb8c6a-5d90-4725-a775-4fe205e3bc76}} by summing across each set of {{formula:3e283706-a318-4005-afd0-926312cfa687}} consecutive components. We then compute the Pearson correlation coefficient {{formula:901b19ef-8624-4a38-a610-590f05be2e56}} between {{formula:4ffa2531-fbfa-4341-a1c5-a704647f7d15}} and {{formula:c416afa3-e358-4f28-84bb-16ea5812e13f}} . A high value for {{formula:628be936-3f18-4720-981e-171c676d114b}} indicates agreement between the inferred spikes {{formula:ecb8b945-4d43-492c-989c-d339d5f53f52}} and the ground-truth spikes {{formula:0fdf8e8a-4fe8-4a80-8336-6649d6708417}} . We generally expect larger bin lengths {{formula:918df6ca-f159-498a-be1a-c3446eb43ebe}} to yield higher {{formula:3835addf-daa7-4989-ade6-26c0b05ebbda}} .
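A minimal sketch of this evaluation pipeline (the synthetic spike trains and all variable names are our own stand-ins; the GENIE data itself is not reproduced here):

```python
import numpy as np
from scipy.stats import pearsonr

def bin_spikes(x, b):
    """Reduce a length-T vector to length floor(T/b) by summing
    each run of b consecutive components."""
    T = (len(x) // b) * b          # drop the incomplete trailing bin
    return x[:T].reshape(-1, b).sum(axis=1)

# s_hat: inferred spike train; s: one-hot ground-truth indicators (same length)
s_hat = np.random.default_rng(1).poisson(0.1, size=10_000).astype(float)
s = (np.random.default_rng(2).random(10_000) < 0.05).astype(float)

for b in [1, 10, 40, 100]:
    r, _ = pearsonr(bin_spikes(s_hat, b), bin_spikes(s, b))
    print(f"bin length {b:4d}: r = {r:+.3f}")   # r typically rises with b
```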
| r | 193b7988ecf4028fd07e3637b58e5d69 |
Both massive and massless particles capable of carrying nonzero orbital angular momentum (OAM) are of great interest for researchers from different fields of study. The first seminal description of the concept of such twisted particles was introduced by Allen et al. in the early 90s in Ref. {{cite:a85c997caa8d2e911fb83a25153234e11f81c04f}}, where it was discussed in the context of photons.
The wave front of the corresponding beams has a helical spatial structure allowing the beams to carry a nonzero OAM projection onto the propagation direction. Soon afterwards, the fact that twisted photons possess discrete OAM was confirmed experimentally in Ref. {{cite:2b734965274ff3c13e26ead039828c4da5d27ae2}}, and two years later He et al. managed to transfer OAM from light to matter {{cite:3a7f27ad011cdf5dc8b738ed4b0597cb0b56df2f}}. In fact, the opportunities related to the information capacity and other specificities of these optical vortices – twisted photons – have encouraged numerous studies concerning a huge variety of their applications {{cite:1bcb1427430b6e18ac2fc153dab706495fa2b15e}} (see also reviews {{cite:a5e4d7ed912734843d7cb3f79a8c8eed8bed74d1}}, {{cite:cfdb1dee2d29a9f585596c0a5c5ecfc48e9ca766}}, {{cite:52c1ed585345300407905f08fc9d787462d03166}}, {{cite:15f45447d8f037f2187b3ac4b65acaf744a7ff30}}, {{cite:65ba0a516932c9b368e6d01bfc255ae2f35452fa}}, {{cite:aa9d49c0c5293dd1025615f45f5a5fc31e69172f}}).
| i | 61068aed07e5fc242ee8b3166d204e72 |
This type of method relies on box-regression-based object detection frameworks with word-level and line-level prior knowledge {{cite:468bc3a08883e0e36d1fa34f8b483cbeb2313439}}, {{cite:e0623cb9658a7ee5daaa192c24bf3dddfbe0dd6f}}, {{cite:e38d13488f9aa295d92f9972a25554534ec3307a}}, {{cite:0c18bfafbad0489da6ab7b7e3b41b741b369332e}}, {{cite:d91d0f5fda86af15059193913f252f58a32c4071}}, {{cite:b31176f33c5d61515c1d2f408cb10fd0d572715b}}. Different from generic objects, texts
are often presented in irregular shapes with various aspect ratios. To deal with this problem, RRPN {{cite:e0623cb9658a7ee5daaa192c24bf3dddfbe0dd6f}} and Textboxes++ {{cite:0c18bfafbad0489da6ab7b7e3b41b741b369332e}} localize text boxes by predicting the offsets from anchors. Unlike these methods, which localize text regions by refining pre-defined anchors, EAST {{cite:d91d0f5fda86af15059193913f252f58a32c4071}} and DDR {{cite:e38d13488f9aa295d92f9972a25554534ec3307a}} propose a new approach for accurate and efficient text detection, which directly regresses the offsets from boundaries or vertices to the current point. Building on these direct regression methods, LOMO {{cite:1910b5e45b437ed30c07680be31a7bb6bfc1fd7c}} proposes
an iterative refinement module to iteratively refine bounding box proposals for extremely long texts and then predicts the center line, text region, and border offsets to rebuild each text instance. Although regression-based methods have achieved good performance in quadrilateral text detection, they often cannot adapt well to arbitrarily shaped text detection.
| m | 2f2ac53a0fff32fa5b95583135ff5b57 |
Deterministic symbolic regression constructs a large library of nonlinear candidate functions to regress data and identifies the relevant candidates by imposing a sparsity constraint. Two fundamental methods have been proposed: sparse identification of nonlinear dynamics (SINDy) {{cite:42176f77235fb00602616b68b1bfc4110caa3fd8}}, {{cite:678c23efd85694def539e187e2ab72edbb0b0a74}} and fast function extraction (FFX) {{cite:9bbe957f5b72ecc392a28e4eb600edc44f0fafe1}}. Both methods have been applied in several areas of physical modelling. In the following, we introduce the steps of the model discovery methodology SpaRTA based on FFX, for which a library is constructed from a set of raw input variables and mathematical operations. The model selection uses elastic net regression. Finally, the model coefficients are inferred subject to the stability requirements of a CFD solver. An overview of SpaRTA is given in Figure REF .
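Before detailing SpaRTA, a minimal sketch of the generic library-plus-sparsity loop may help. The snippet below uses sequentially thresholded least squares, the sparsity-promoting regression of SINDy; SpaRTA's actual selection step uses elastic net instead (e.g., scikit-learn's ElasticNet). The library terms and data are hypothetical:

```python
import numpy as np

def build_library(x):
    # Candidate nonlinear features built from a raw input x (shape: samples,)
    return np.column_stack([np.ones_like(x), x, x**2, x**3,
                            np.sin(x), np.exp(-x)])

def stlsq(Theta, y, threshold=0.1, n_iter=10):
    # Sequentially thresholded least squares: repeatedly zero out small
    # coefficients and refit on the surviving library columns.
    xi = np.linalg.lstsq(Theta, y, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        big = ~small
        if big.any():
            xi[big] = np.linalg.lstsq(Theta[:, big], y, rcond=None)[0]
    return xi

x = np.linspace(0, 2, 200)
y = 1.5 * x**2 - 0.7 * np.sin(x)          # hidden "true" model
Theta = build_library(x)
print(stlsq(Theta, y))  # recovers coefficients on the x**2 and sin(x) columns
```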
| m | 4bf583551bb1a395fd72db57b3e9a2bb |
The spatial domain {{formula:11807f36-5641-4296-84c2-f50826db9071}} is discretized by a staggered structured grid, in which the first and third order derivatives are defined at the cell-centers, and the second and fourth order ones at the grid points. Following the natural order from left to right, adjacent vertices are associated to the indices {{formula:5cf02cfe-3ff6-4aae-a8e7-bf737e1979c0}} , respectively. Thus, we let {{formula:05b15e26-c36e-4472-8d27-69f4802e40c4}} (where {{formula:00ef029b-f22b-4c96-acb1-d896cb15d6f3}} , and {{formula:ed15721d-2ef0-4a91-8365-521b8ca888ba}} is the fixed grid size), so that the endpoints of the physical domain, 0 and {{formula:8f6339e9-0ec4-455a-aff8-0364ff29073b}} , correspond to the {{formula:eec4bed1-24a9-4310-a04f-2592ff78518d}} and {{formula:361225a8-3f08-4369-90d1-faca4765b03b}} cell-centers, respectively. Similarly, we discretize the time domain and denote by {{formula:d9013e56-3e65-4643-94f0-6287506bf3e8}} the approximation to the solution at the point ({{formula:2072f748-607c-4575-9e31-83be8616666c}} ), where {{formula:ee6e86c0-1df3-42bd-8ecd-ed9177986758}} indicates the number of time steps, and {{formula:0f2b6d59-c9fa-492d-b747-3b09983371c3}} is the temporal step size, which can be chosen adaptively to speed up the marching algorithm when the solution does not exhibit fast temporal variations (see {{cite:2744e351a29dfabed8c5a73dfd098b12e4497771}}, {{cite:1b7d48a3aba059e05f6e2e5053462a4dd25e22e1}}, {{cite:344a4841c8a6025214a3174c046148cbd6b9273a}} for a detailed description). In addition to the numerical formulation provided in {{cite:2744e351a29dfabed8c5a73dfd098b12e4497771}}, the discrete versions of equations (REF ) and () are given by
{{formula:3b2b807e-0d3e-4ff7-80ee-0db2d76db50f}}
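A minimal sketch of one common staggered-grid convention (assumed here, since the formula placeholders hide the exact indexing used above): cell-centers offset by half a grid size from the grid points, with a first derivative of nodal values naturally landing on the staggered locations:

```python
import numpy as np

L, N = 1.0, 100                 # hypothetical domain length and cell count
h = L / N                       # fixed grid size
x_nodes = np.arange(N + 1) * h                # grid points 0, h, ..., L
x_centers = (np.arange(N) + 0.5) * h          # cell-centers, offset by h/2

# Example staggered operator: a first derivative evaluated at the
# cell-centers from nodal values (second-order accurate at the centers).
u = np.sin(2 * np.pi * x_nodes)
du_centers = (u[1:] - u[:-1]) / h
print(np.max(np.abs(du_centers - 2 * np.pi * np.cos(2 * np.pi * x_centers))))
```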
| m | 4b68d00015c9c89a129914d5eaecc933 |
It is worth noting that in contrast to the weights of a standard neuron, the weights of the compact support neuron exist in the same space as the neuron inputs and they can be regarded as templates. Thus they have more meaning, and one could easily visualize the type of responses that make them maximal, using standard neuron visualization techniques such as {{cite:2185f6f528046ed86b280c20720623320f6709df}}. Furthermore, one can also obtain samples from the compact support neurons, e.g. for generative or GAN models.
| d | 0b6485d0f4bd15f58ccbb32c30d2631e |
Considering the generality of the DJESCC method, there are multiple architecture choices for the encryption network, the DJSCC encoder, the DJSCC decoder, and the decryption network. To demonstrate the potential of our proposed method, the original DJSCC network architecture in {{cite:3da1999921f77eb034be25e11386e28ea8486929}} is chosen in the subsequent experiments. It is worth noting that our proposed method can be applied to other extensions of the original DJSCC network architecture.
| r | 027ba41ec9e4400d29e9919fbcef1d7f |
DeepLIFT. Shrikumar et al. {{cite:89a9a6a825be6ec7fc8d3b9d3fa25dcbc328e1f8}} take a different approach to attribution, introducing an efficient method for disentangling contributions of inputs in a neural network – deep learning important features (DeepLIFT). As opposed to LIME, DeepLIFT relies on comparisons to a reference (baseline) data point.
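A minimal sketch of the difference-from-reference idea (not DeepLIFT's full propagation rules): for a purely linear model, contributions relative to a baseline decompose the output change exactly, which is the property DeepLIFT's rescale rule generalizes layer by layer. All values below are illustrative:

```python
import numpy as np

# For a linear model f(x) = w @ x + b, contributions relative to a
# reference x0 decompose the output difference exactly:
#   f(x) - f(x0) = sum_i w_i * (x_i - x0_i)
w = np.array([0.5, -1.2, 2.0])
b = 0.3
f = lambda x: w @ x + b

x = np.array([1.0, 0.5, -0.2])
x0 = np.zeros(3)                 # reference (baseline) data point
contrib = w * (x - x0)
assert np.isclose(contrib.sum(), f(x) - f(x0))
print(contrib)                   # per-input contributions to the output change
```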
| m | 3050177678ccea4ecd85b28ec20df4ca |
Our focus in this paper is on a mathematical framework of transfer learning and corresponding theoretical results related to geometric structures, minimax bounds, and minimax optimality. To provide further insights and understanding with respect to our framework and results, we now present a collection of simulation results that investigate the quantitative performance of our model interpolation estimator under various conditions, environments, and parameter settings.
These simulation results showcase the ability of our proposed estimator to outperform the basic transfer learning approach discussed in {{cite:2760eabd124baddc54a78812cfa0a5ab25873bb6}} and the recent state-of-the-art transfer learning methods of {{cite:2affa823e0cf518294f0690618b6e756837e43df}} and {{cite:6186ba20afe1a6435dea577706bcd78adac2e534}}.
| r | fd5226e61c675b9d560a68764e6e1e82 |
We begin with the Riemann–Hurwitz formula {{cite:ce3f92f262123795eba574dcc85747325d0051cb}}:
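The formula itself appears to have been lost in extraction; in its standard form (a hedged reconstruction, with notation assumed rather than taken from this text: a degree-{{n}} branched covering of compact Riemann surfaces with genera g(X), g(Y) and ramification index e_P at P), it reads

```latex
2g(X) - 2 \;=\; n\bigl(2g(Y) - 2\bigr) + \sum_{P \in X} \bigl(e_P - 1\bigr).
```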
| r | 86efeddd3c980c1c22125089a757b0fe |
A number of possible approaches can be used to tackle the learning problem introduced above.
The most classical is standard empirical risk minimization ({{cite:e7ca93388af62f47ed7c01520b49b179a7f8ab4b}}),
in which a model, {{formula:ff9a3078-c943-4ac0-ad0d-aa5583d3f30c}} , from a class
of possible ones, {{formula:2769cd6e-304f-4ddd-90e5-3ee6ff52a4a2}} , is sought that minimizes the training error across all the data observed so far,
{{formula:9636f2d9-b54c-4e97-a1e0-f84d71cfbe4d}}
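A minimal sketch of this setup (a least-squares model class and plain gradient descent, both chosen purely for illustration):

```python
import numpy as np

# Standard empirical risk minimization: pick f in a parametric class F
# minimizing the average training loss (1/N) sum_i loss(f(x_i), y_i).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=200)

def empirical_risk(w):
    return np.mean((X @ w - y) ** 2)   # squared-error training loss

w = np.zeros(3)
for _ in range(500):                    # plain gradient descent on the risk
    grad = 2 * X.T @ (X @ w - y) / len(y)
    w -= 0.1 * grad
print(w, empirical_risk(w))
```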
| m | 4a4d0e8df35707002be22cafde3b4d31 |
Deep feature transfer learning is the idea of using a neural network that is trained on a massive dataset such as ImageNet {{cite:cb43a892230c6f282cd95fca2be317f1fdd566e4}} with hundreds of classes to predict labels on another dataset. This can be achieved by removing the fully connected layers, which classify images according to the features extracted by the convolutional layers, and adding another classifier to the network, thereby allowing us to reuse the features extracted by a network trained on a massive dataset with another classification method in different models. The classifier we add to the model can either be a fully connected layer or one based on other classification algorithms.
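A hedged sketch of this recipe using PyTorch/torchvision (assuming a recent torchvision; the frozen backbone, stand-in batch, and 10-class head are illustrative choices, not a specific setup from this text):

```python
import torch
import torchvision

# Load a network pre-trained on ImageNet, drop its fully connected head,
# and reuse the convolutional features with a new classifier.
backbone = torchvision.models.resnet18(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()      # remove the classification layer
backbone.eval()

for p in backbone.parameters():        # freeze the feature extractor
    p.requires_grad = False

images = torch.randn(4, 3, 224, 224)   # a stand-in batch of images
with torch.no_grad():
    features = backbone(images)        # 512-dim features per image
print(features.shape)                  # torch.Size([4, 512])

# The new head can be a linear layer trained on the target dataset ...
head = torch.nn.Linear(512, 10)        # e.g., 10 target classes
# ... or any other classifier (SVM, k-NN, ...) fitted on `features`.
```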
| m | 7a720ca749f7c7ac1fc1dfe5e38b18f6 |
We also note that resolving these conjectures, and better understanding entanglement cost in non-local computation generally, has important implications for position-verification {{cite:c3c7dc9fdd4c9ebbeb959f65482fa3978978e793}}, {{cite:c0a641ece1f731b2227a7acb49913b8de9b4b43d}}, {{cite:10f47b024571d402e2590238e729211090a18477}}, {{cite:1deaef31388afe1e2ff0367db42ee08f6f6ab1bf}}.
In that context, Bob wishes to verify that Alice is able to perform quantum operations within a specified spacetime region.
To do this, Bob arranges a quantum task of his choosing.
If Alice behaves honestly, she will complete the task locally by entering the given spacetime region and performing the needed operations.
If Alice behaves dishonestly, she will use a non-local computation to complete the task while remaining outside of the spacetime region.
| d | f1d7a0aa1863f40588ee44a9c30dd7bf |
We have described several dynamical quantities, including those that describe dynamical heterogeneity and the morphology of rearranging regions, that demonstrate a crossover in the dynamics when the mode coupling temperature is crossed, with relaxation times better approximated by an Arrhenius temperature dependence at lower temperatures. Although it is tempting to describe it as a fragile to strong crossover, whether the dynamical crossover we see is a fragile to strong crossover as originally proposed by Angell {{cite:60a11b1314d421154a08edcf9b8285dc8ecb877f}}, {{cite:f5155e005a033b8494e3e473c7d0b6c71ef2117e}}, {{cite:96c031e8decf79d18dcb6f6f03264afdde6d7e3b}} is open to question. Unlike liquids with energetically favorable tetrahedral structure (such as water, for which the fragile to strong crossover was originally proposed, and silica), the model we investigate does not display a thermodynamic signature of a change in regime in the form of a heat capacity maximum. On the other hand, several glass forming liquids typically described as fragile glass formers do display some form of a crossover at low temperatures {{cite:1dc1d153456dab96155ac348414ddb85c4893ad3}}, as also seen in computer simulations (e.g., for a model of ortho-terphenyl {{cite:0de08e11fdd3de6d246d9b84ce8edb3ff3519eff}}). A crossover has been predicted as a generic feature in {{cite:d8f7ce8eb24fe89ebcb35d9a68b59a79a25b3516}} within the RFOT, and in extended mode coupling theory {{cite:5b5b3d2c513679b48c277c57c7ced7eb93149a7a}}. Our results do indicate a signature in the changes in morphology of rearranging regions, although with some modifications as compared to those envisaged in {{cite:d8f7ce8eb24fe89ebcb35d9a68b59a79a25b3516}}. In seeking a structural explanation further, it will be interesting to also investigate the morphology of immobile particles, which we have not attempted in this work, in conjunction with investigations of locally preferred structures {{cite:4b934e97b627fa197fb0ab59ff85fe791e342a03}}. Investigating the Adam-Gibbs relation, we find deviations from the high temperature conformity to the Adam-Gibbs relation at temperatures lower than {{formula:e15c5c1f-77ff-4f68-baa4-e10d937f050c}}, when a harmonic approximation to the vibrational entropy is employed. However, inclusion of anharmonic contributions in estimating the vibrational entropy leads to the conclusion that the Adam-Gibbs relation is valid across the temperature range we study. A more rigorous estimation of the vibrational entropy than the one presented here should be attempted in light of these results. Another issue to consider in the present system is the possible role of finite size effects. Based on the available results, it has been argued in {{cite:12520018cad28d9aaa6f641f061305c5177e1083}} that the observed dynamical crossover is unlikely to be a result of finite size effects. We have not addressed this aspect further in the present work, but with present-day computational resources this question can now be addressed more satisfactorily. Our work, and the related work described above, illustrates that exploring the nature of dynamics below the mode coupling crossover is now computationally feasible. Exploration of such low temperature dynamics should help bridge the gap between the temperature range computer simulations have been able to access in the past and the temperature range relevant for several experimental and theoretical results.
| d | 0a8821966f5e4f35d86faf66c5fbb470 |
A smooth classification loss such as the one based on logistic or cross entropy functions is necessary for gradient-based training. Consider a network with weights {{formula:07e4f08a-f24d-40be-bc03-3fc2d0912739}} and corresponding fhsn weights {{formula:0b2b2164-d7cc-440e-bc50-9356c06211db}}. When we extend {{formula:a89416c9-6ab4-415d-882f-fff6ccb47cd7}} along a radial direction, i.e., {{formula:d453d9b8-25b7-4f61-9511-644e49a20089}}, {{formula:7889cbe0-92d2-4c64-ac1e-91324fcad869}}, while keeping the weights of the network outside fhsn unchanged, the classification error on a set of examples (e.g., the training error computed on a training set or the generalization error computed on a test set) remains unaffected, whereas the classification loss varies considerably with {{formula:703ffdd3-2bad-48ae-b3b7-f65e188de35e}}. In particular, if {{formula:dd4c790b-abf2-48a1-9749-a8cea3a292f4}} classifies a set of examples strictly correctly, then, as {{formula:4a6cf5d8-475d-4097-a823-a74b5dcfe6f9}} goes from 0 to {{formula:66bb4697-a962-4670-9848-7fbd86fcdbdf}}, the average loss on these examples goes from {{formula:688e884b-1bc6-484e-80b5-4e6d14c6ac63}} to zero asymptotically. During normal training, with weight directions having a good radial component, the training loss does become very small. Earlier, in § we saw how, when the training loss becomes very small, it causes loss flattening, which leads to a loss of adaptivity of the network. Therefore, it makes sense to suitably contain the radial movement of {{formula:a9b800cd-c96d-44a2-bb72-38e2f87dcfae}} by constraining {{formula:66baaeaa-ac4b-4632-9c46-a3eb230856cc}}. The essential spirit of LAWN follows this idea. (Weight norm bounding in neural nets is theoretically well-founded for improving generalization {{cite:4cb4aa31d44b1f6750c46ab3e8bed1be1dca79fb}}, {{cite:e3ec906fa371adc64ef7fdc8fd3b61377dd307d9}}; in LAWN, the bounding is done in a specific way to avoid loss of adaptivity.)
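The radial-scaling observation is easy to verify numerically. In this sketch (a stand-in linear layer, not the actual fhsn weights), scaling the weights by a positive factor alpha leaves the argmax decision, and hence the classification error, unchanged while the cross entropy loss decays toward zero:

```python
import numpy as np

def softmax_xent(logits, y):
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(y)), y].mean()

rng = np.random.default_rng(0)
W = rng.normal(size=(5, 3))            # stand-in for the fhsn weights
X = rng.normal(size=(100, 5))
y = (X @ W).argmax(axis=1)             # labels the network gets strictly right

for alpha in [0.5, 1.0, 4.0, 16.0]:
    logits = X @ (alpha * W)           # radial extension of the weights
    err = (logits.argmax(axis=1) != y).mean()
    print(f"alpha={alpha:5.1f}  error={err:.2f}  "
          f"loss={softmax_xent(logits, y):.4f}")
# the error stays 0 for every alpha while the loss decays toward zero
```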
| m | 38907beb3a10e20e9702769f9789fd7a |
Also, although large-scale chemical graph datasets are available {{cite:f0f9c673237be7dfc5c384e5ef4785b8bb5e562a}}, {{cite:a3171733c25ac30f2d4842bd477302ca08b89167}}, a benchmark dataset that contains many large graphs is still missing. We plan to create such a benchmark dataset for future use. In general, while not addressed in this paper, we note that understanding the power and limitations of various graph representations, as well as the types of datasets for which they are most effective, is crucial, yet challenging, and remains largely open.
| d | a373fec92102a33c3d680473872591d3 |
all solutions of which are meromorphic and of finite hyper-order (see {{cite:5393723e3f481af2c3ca858bacdd5e516cc11466}}). Below we use the method in the proof of theorem REF to study the growth of meromorphic solutions of the third Painlevé equation {{formula:244c09e6-9320-4ebe-abc5-e9b92f5bed9b}} and also {{formula:1345419f-6527-4718-9f24-c1c9cae72910}}. We prove the following
| d | 2c851ff130d6c856bb602ffc39afc55c |
No loss is perfect or improves performance on all datasets. There are two major scenarios in which recall loss does not improve performance: 1) when a dataset is very "difficult", compounding factors result in low performance and imbalance is not the most limiting factor among them; 2) when a dataset is too "easy", the class distribution may be imbalanced but the visual features are distinctive and easy to classify. For example, it is more difficult to achieve good performance on indoor semantic segmentation due to cluttered environments, different orientations of objects and intra-class similarities. We tested the same DeepLab Resnet18 network on the ADE20K dataset {{cite:0ea9d0cf55fa022f8ac015122df47c6d94fea70f}}, and no loss performs better than the others, with performance poor across all classes. We also tested the various losses on another self-driving dataset, virtual KITTI2 {{cite:a33a97f16dda55d325a959cce132cc72b01468f4}}. The dataset has much simpler scenes and less busy streets than Synthia. Cross-Entropy loss alone achieves over {{formula:e188a901-f15d-4485-a537-1e578b6c7e2e}} mean accuracy. Therefore, other losses, including the recall loss, perform similarly to the vanilla Cross-Entropy. A rule of thumb for using an imbalance-oriented loss function is when the overall accuracy is reasonably high but the mean accuracy is very low: this is a very good indicator that imbalance is the most limiting factor and needs to be carefully approached.
| d | d642f66f34c14722b18f219fe3b387e7 |
There are at least two main strategies for handling missing data: omission and imputation {{cite:e88331f9e3cff48891d47434196c1aa2312f1415}}, {{cite:4301639223786df71e93c521d9a6f0b6a3e986c8}}, {{cite:925d619272d35225fdedcab3da0745c2d4c86446}}.
Common omission approaches include listwise/pairwise omission and dropping features.
Although omission
is simple and easily used, it can lead to serious estimation bias, large efficiency loss, and dramatic reduction of statistical power.
There are two types of imputation methods: single imputation and multiple imputation.
Single imputation methods generate one imputed value for each missing observation, which leads to a single complete dataset, and treat the imputed values as the true values in downstream data analysis. Therefore, downstream analyses based on the single imputed complete dataset do not account for the imputation uncertainty.
The two main strategies of single imputation are imputation by statistical values (e.g., mean, median, or maximum) and imputation by predicted values generated from a statistical model.
Multiple imputation methods generate
several imputed values for each missing observation, which lead to several complete datasets, all of which are
analyzed in downstream data analyses. The use of multiple imputation allows us to explicitly account for imputation uncertainty.
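A minimal sketch contrasting the two imputation styles with scikit-learn (assuming scikit-learn is available; the tiny data matrix is purely illustrative):

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

X = np.array([[1.0, 2.0], [np.nan, 3.0], [7.0, np.nan], [4.0, 5.0]])

# Single imputation by a statistical value (here the column mean):
X_mean = SimpleImputer(strategy="mean").fit_transform(X)

# Model-based imputation; rerunning with different seeds yields several
# completed datasets, approximating multiple imputation:
completed = [
    IterativeImputer(sample_posterior=True, random_state=s).fit_transform(X)
    for s in range(5)
]
# Downstream analyses would be run on each completed dataset and pooled,
# which is what lets multiple imputation account for imputation uncertainty.
```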
| m | b32b18a52cf7c966d1a25c5cae5e9115 |
In this work, we propose an unsupervised LCO framework. Our framework applies to general CO problems while exhibiting extraordinary promise for PCO problems. Unsupervised LCO has recently attracted great attention {{cite:ad819e30a283dd35fc2335a4b2a0106dd5354e91}}, {{cite:5403e77dec5f569432eafd9879193a61b4cb20b5}}, {{cite:de2faaf48c00f7fc6b8cb2fd0aadcfcd0659a453}}, {{cite:3f366c0e62cba7ac2584747c1f8a7090f474f76c}}, {{cite:b6b15afa9af620015de3cb61a6261acf0c249142}} due to its several great advantages. Traditional supervised learning is often criticized for its dependence on huge amounts of labeled data {{cite:509467f3bcff1d66492f5e6e2f5dde05bed302fe}}. Reinforcement learning (RL), on the other hand, suffers from notoriously unstable training {{cite:25bb1beff81a1352388447bb153e54fea7de8b14}}. In contrast, unsupervised LCO is superior in its faster training, good generalization, and strong capability of dealing with large-scale problems {{cite:3f366c0e62cba7ac2584747c1f8a7090f474f76c}}. Moreover, unsupervised learning has never been systematically investigated for PCO problems. Previous works for PCO problems, e.g., hardware design {{cite:30698744adb1a7f46d34795bb73662c821bf30da}}, {{cite:fef2143422801ffcb463d4d0ff409edb4c5cb48b}}, were all based on RL. A systematic unsupervised learning framework for PCO problems is still under development.
| i | 0b43dd6f0a4200694d7bcfc16fd192fc |
By the Gershgorin circle theorem, every eigenvalue of {{formula:76aea285-4d70-4764-908b-426c774a2b5a}} lies within at least one of {{formula:8b81138b-ccab-4f72-9564-3d3a47ee516b}} for {{formula:dab4899b-7eaf-40d3-8dde-9fcfbf02fb03}} {{cite:c79af7b8f28f29d3029ef7a2214473c2c9d2206e}}.
Thus, it suffices to show
{{formula:6860c53f-2dc1-45f3-b1d6-07ba06e04a23}}
On the event {{formula:34446fdb-aeb9-48d6-a311-6525fc58b13d}} , we have
{{formula:48c00523-696f-4117-a60e-742a92733d5e}}
and
{{formula:439acd85-db17-4267-97da-2ac7c890d925}}
Therefore,
{{formula:3fc64620-2bc3-4c8d-99c3-e4dc3d6835b6}}
Note that
{{formula:538e7afb-aed4-4aa6-92d3-f507dffa3ee8}}
because {{formula:12b72c71-72dc-4013-b721-bd18d7c0f45a}} .
Furthermore, by Theorem 1 in {{cite:b0c22461c2a91a7ab77af89c16fd3391e27b1b49}} and the change of variables,
{{formula:607cf1fa-7c99-4865-a65d-3cad0b53c3a6}}
for any {{formula:4791862a-3aeb-44d7-bdb5-bea7a299172a}} , which implies
{{formula:2ca003cb-9770-4cc2-955f-122121817ad4}}
Thus, we have
{{formula:9f421084-1ae0-4829-aca4-13a7ca182ccc}}
for some constant {{formula:3ee85086-a0c4-4ab8-97f1-866488a88829}} , because {{formula:8ae10f4c-c89b-4dd4-9697-5a9d2f3474c3}} .
{{formula:75c6676f-7a83-49e3-a9c3-46cde7d719ef}}
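As a quick numerical illustration of the Gershgorin bound used at the start of this proof (the matrix entries are chosen arbitrarily):

```python
import numpy as np

A = np.array([[4.0, 0.5, 0.2],
              [0.1, -1.0, 0.3],
              [0.2, 0.2, 2.0]])

centers = np.diag(A)
radii = np.abs(A).sum(axis=1) - np.abs(centers)   # off-diagonal row sums

for lam in np.linalg.eigvals(A):
    in_disc = np.abs(lam - centers) <= radii
    print(f"eigenvalue {lam:.3f} lies in disc(s) {np.where(in_disc)[0]}")
# every eigenvalue falls inside at least one Gershgorin disc D(a_ii, R_i)
```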
| r | 193e2093fd816397604b71d772b64d81 |
However, when a neural network is adopted as the solution ansatz, the trained model is often found to satisfy the governing equations but with overfitted boundary conditions {{cite:3be5129605df9e8f7be892a41a1244d765373b3a}}, {{cite:770e218577b2f9520f21bb19a443861037b00e2f}}, which greatly differs from the traditional methods {{cite:39b967cea59eb4c0ef82203e4522e23995acd5a6}}, {{cite:f21a7c2ed8ea745129a80222c27501b2406de4c3}}. To this end, the classification of domain decomposition methods adopted in this paper is based on the information exchange between neighbouring subregions (see also fig-big-picture). More specifically, we summarize in what follows some representative decomposition-based approaches in the literature {{cite:39b967cea59eb4c0ef82203e4522e23995acd5a6}}. Here, we refer to the Schwarz alternating method and the Robin-Robin algorithm as SAM and RRA in algorithm REF , while the Dirichlet-Neumann, Neumann-Neumann, and Dirichlet-Dirichlet algorithms are abbreviated as DNA, NNA, and DDA in algorithm REF , respectively. In addition, the relaxation parameter {{formula:b25a98e8-b587-40d4-b597-b9db47e42a74}} must lie within {{formula:9ca1699d-2b21-422e-819c-25ccc453d9ec}} in order to achieve convergence {{cite:357ef202110b4654cab041d557f1cd4798314814}}. Notably, although the overlapping methods with small overlap are cheap and easy to implement, they come at the price of slower convergence. Besides, the non-overlapping methods are more efficient in handling elliptic problems with large jumps in the coefficients.
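A minimal sketch of the overlapping Schwarz alternating method (SAM) on a toy problem: a finite-difference 1-D Poisson equation with two overlapping subdomains exchanging Dirichlet data. The domain split and iteration count are arbitrary choices, and the neural-network subdomain solvers discussed here are replaced by direct linear solves:

```python
import numpy as np

# Overlapping Schwarz for -u'' = 1 on (0,1), u(0)=u(1)=0, with subdomains
# (0, 0.6) and (0.4, 1) alternately updated using each other's trace data.
n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.ones(n)
u = np.zeros(n)

def solve_sub(lo, hi, u):
    """Solve the tridiagonal Dirichlet problem on x[lo..hi], taking the
    boundary values from the current global iterate u."""
    m = hi - lo - 1
    A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    b = f[lo + 1:hi].copy()
    b[0] += u[lo] / h**2
    b[-1] += u[hi] / h**2
    u[lo + 1:hi] = np.linalg.solve(A, b)

i1, i2 = int(0.6 * (n - 1)), int(0.4 * (n - 1))   # overlap region [0.4, 0.6]
for _ in range(30):            # alternate until the interface values settle
    solve_sub(0, i1, u)        # left subdomain, right BC from the iterate
    solve_sub(i2, n - 1, u)    # right subdomain, left BC from the iterate

exact = 0.5 * x * (1.0 - x)
print(np.max(np.abs(u - exact)))   # small after a few sweeps
```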
| m | 9c9bc0118dff967f89e7a9478339d817 |
We extract the central value and the scale uncertainties for 136 processes (all {{formula:74a36f33-e15b-49a7-a7ad-75b261105c55}}-initiated processes other than {{formula:bea10091-3024-42a7-8c25-97d31e7b8cd5}}, for which the NLO cross section has a typo, confirmed by the authors of Ref. {{cite:dd9da7878c5e3bb3dae75d6fdaff2883a40ef6e8}} via private communication), and neglect the statistical uncertainties from Monte Carlo integration, which are typically small in comparison, and PDF uncertainties, which are due to measurements in auxiliary datasets and are therefore in a different category, with complicated correlations between processes. All cross sections are calculated by MadGraph5_aMC@NLO {{cite:781419396a84eed48de1c4c6c61019a66d7707fe}}. The Higgs boson is assumed to have {{formula:ac1ad585-590c-4fe8-89c5-7c2df5c8cfc6}} GeV, and the top quark {{formula:fba022b1-d77f-494a-87da-e230e017e22d}} GeV. The parton distribution functions MSTWnlo2008 {{cite:3d2b870193af780adec9f91eb8258cf4c280f1e2}} are used with {{formula:561ce4d0-628c-4824-b13e-562b242241ee}}. The central scale is chosen as
{{formula:4ba66ee1-32f2-417b-bc21-27a7443508fa}}
| r | acebdaec64f60c0aaf381d628f8f1ec9 |
We obtain a 4-approximation for private {{formula:38ae8a52-85d0-4327-8b2a-a84dd126547a}}-center with outliers (5 for the supplier version). This matches the best known bounds {{cite:f1c8edf2ce1148840e034d8d1b59795eb30c2ba6}} ({{cite:5825851707943bc541cc061ab237fd77d20adbc9}} for the supplier version; this also holds for non-uniform lower bounds).
We compute an 11-approximation for private capacitated {{formula:6d924c2c-b1ba-4592-b98f-985fa43e183a}}-center (i.e., centers have a lower bound and an upper bound), and an 8-approximation for private uniform capacitated {{formula:371a5f14-b9e2-4b14-ad1b-c3e4ff3216aa}}-center (where the upper bounds are uniform as well). The best known bounds for these two problems are 9 and 6 {{cite:0d95a9dbbc2da5f5ae1f613ce6b1f360b3e2d35b}}.
For the supplier version we obtain a 13-approximation, which matches the best known bound {{cite:0d95a9dbbc2da5f5ae1f613ce6b1f360b3e2d35b}} (for uniform upper bounds a 9-approximation algorithm is known {{cite:0d95a9dbbc2da5f5ae1f613ce6b1f360b3e2d35b}}).
We achieve constant factor approximations for private fair capacitated/uncapacitated {{formula:31397e08-04db-40f0-9e93-fa53a36209af}}-center/{{formula:c6684992-7758-4b85-bfae-ac0c8b285d31}}-supplier clustering. The approximation factor depends on the balance of the input point set and the type of upper bounds; it ranges from 10, in the uncapacitated case where for each color {{formula:9f26ab65-3a3e-45d7-9d7c-aea497f38a10}} the number of points with color {{formula:09ff4228-c739-4d04-ab8c-12c89bcc107c}} is an integer multiple of the number of points with the rarest color, to 325 in the general supplier version with non-uniform upper bounds. To the best of our knowledge, all these combinations have not been studied before.
Along the way, we propose constant factor algorithms for general cases of fair clustering. While {{cite:7b04d197b13e2be8c3521ef17d0e46bf9daa4c73}} introduces a fairly general model of fairness, it only derives approximation algorithms for inputs with two colors and a balance of {{formula:335cd29a-ffea-4a73-b81c-ce5f3ee62baa}} for an integer {{formula:07dfc835-8003-4096-a626-0134f251619b}}. We achieve ratios of 14 and 15 for the general fair {{formula:4118e89e-6922-4567-8387-d193d642d5e2}}-center and supplier problems, respectively.
Finally, we propose the strongly private {{formula:af7ab629-c95d-4c55-ad5a-fc51fe7ad881}} -center problem. As in the fair clustering problem, the input here has a protected feature like gender, modeled by colors. Now instead of a fair clustering, we aim for anonymity for each color, meaning that we have a lower bound for each color. Each open center needs to be assigned this minimum number of points for each color. To the best of our knowledge, this problem has not been studied before; we obtain a 4-approximation as well as a 5-approximation for the supplier version.
| r | dc4d2dc1028686fe81ad828885ccc504 |
The previous best bound was {{formula:bbb4b394-4ef0-4cd2-8b56-07554d21e9bd}} iterations {{cite:aa8f6b57e0558aa112e3471ad0aec6aee94daea4}}. This result is one of the key surprises of this paper. Although this problem has been studied extensively with specifically-designed algorithms and analysis, we show how to get a better result by a general ODE algorithm and a general analysis which works for any ODE. Furthermore, our algorithm is the first to achieve polylogarithmic depth dependence on the dimension, which seemed impossible in prior work.
| r | aba907af66a3d5cbbdc7bec05b4a526e |
The {{formula:aa3db965-e194-41b7-8e58-c9178f1c68f1}} -ray detector response functions were simulated with the geant4 framework {{cite:f43a8c71a26f43fa5041ad1f30c462f446847ec1}} for
fitting and comparison with the experimental data.
| m | c3c86c24f08a0b6a29c221830fb3727b |
The second case of interest corresponds to systems with {{formula:611b0ca3-9858-4b26-a687-d5557b74cfc6}} and {{formula:df4e3fc9-195a-445c-9656-e52ccd64be39}} .
The corresponding U(1) gauge-invariant model was studied in
Ref. {{cite:e5a3d999af56300d778cc4937799d828640ce51b}}, finding a charged transition line, where both
scalar and gauge degrees of freedom play a role. Gauge fields must be included
in the effective description of the critical behavior and indeed the observed
behavior is consistent with that predicted by the AH field
theory {{cite:e4cad0173bc2ff74e3b81b0988f94a1377c72525}}, {{cite:48d5fef67478d228a309c57ea31b732df153d41f}}, {{cite:05934c6cde01362d47ae4b72a7db51187e8dd999}}, {{cite:21cfdabc00f47a02d0c600eb65e4ad0e3a0d806f}}, {{cite:1638d47cd34ce592e8cc3dbe7615898337141d75}}. A natural question is whether
the charged transition survives if one replaces the continuous
gauge group U(1) with the discrete {{formula:fc36ff42-c3d5-44bc-bc3a-66c091d43270}} group.
In the case of a global
U(1) symmetry, it is well known that systems with microscopic {{formula:53ad56c3-384e-40a4-9f63-5a9b302827bc}}
symmetry may have transitions in the U(1)/O(2) universality class if {{formula:679968d2-9c9b-4442-bbab-463f19d642e3}}
(see Refs. {{cite:bb254a946341cf0f0b582f98f98f58785cd9b0b8}}, {{cite:10c6cdd986037b2f4c9324966533a7474d1dffc2}} and references therein).
At the transition one observes an enlargement of the global
symmetry: The large-distance
behavior is invariant under transformations that are not symmetries of the
microscopic theory. Here, we investigate whether a similar
phenomenon occurs for gauge symmetries, i.e., whether it is possible
to have a gauge symmetry enlargement. Simulations with {{formula:bc698403-9d67-40cb-923c-0573255c8884}} and 10
indicate that no such symmetry enlargement occurs. We identify a transition
line that separates two phases that have the same features as those
coexisting along the charged transition line in the U(1) gauge model, but
in {{formula:b3ce5ca0-1647-4879-88aa-44ee301009ac}} gauge compact models these transitions
turn out to be of first order for both values of {{formula:9c2007d2-93de-4855-8812-76335004d165}} . Apparently, the
microscopic model should be exactly U(1) gauge invariant to allow the system to
develop critical charged transitions.
| i | 2b974ab6784417191b2ad3c2c814c71e |
Time-preservation requirement {{cite:f2d50dc0c5d529f46640713d3ee18f26c7c87ede}} of the primary constraints {{cite:7a7f7712c8fc91afe3be1a2796f579796067da12}} for (REF )
{{formula:6bb87385-e954-411b-8ff4-0892ac4f2d0d}}
| i | b9438fe28631409840a7b25b3a769b41 |
Fig. REF shows the learned weights of the first block of the deep maxout neural-kernel networks applied on the Motor dataset. The learned weights of the second block for the same dataset are also depicted in Fig. REF . It can be noted that the magnitude and structure of the weight matrices corresponding to the first and second blocks differ.
In particular, the weights of the first block appear to have a sparser structure, with peaks in magnitude at only a few positions. This is due to the fact that the nonlinear transformation has not yet been applied. One can notice changes in the structure of the weights corresponding to the second block compared to the first block, which can be explained by the imposed nonlinearity.
Inspecting the magnitudes can also potentially reveal the amount of emphasis the network gives to each transformation. Fig. REF illustrates the t-SNE visualization {{cite:77bf5b10c0f4862d47f44b14afc527a87b31a46b}} of the hidden layer projections and the score variables corresponding to the employed deep maxout neural-kernel architecture for the Motor dataset.
Thanks to this visualization technique, one can observe the changes in the data representations as the data flows through the stacked hierarchical layers. Ideally, one may expect the learnt representations in the deeper layers to form more separable clusters corresponding to the existing classes {{cite:aa93b13a1c3c5e2f47bceed7c4559f0a26871f9c}}.
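A minimal sketch of producing such t-SNE embeddings with scikit-learn (random arrays stand in for the actual hidden-layer projections of the Motor dataset):

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Stand-ins for hidden-layer projections of two stacked blocks:
H1 = rng.normal(size=(300, 64))
H2 = np.tanh(H1 @ rng.normal(size=(64, 32)))   # deeper, nonlinear layer

for name, H in [("block 1", H1), ("block 2", H2)]:
    emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(H)
    print(name, emb.shape)   # 2-D coordinates to scatter-plot per class
```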
{{table:327af198-8205-4ebc-b325-a32697187c33}}{{table:1b5e1b26-7f49-4ce9-b6c5-c0691be9b481}}{{figure:5b7cb06b-102e-46e2-bf4d-fe3c0d6020cd}}{{figure:76162348-c761-4933-ba72-d5636951bde2}}{{figure:edb21d07-98b9-4e0b-ab44-ac4aa0601d17}}{{figure:3f353e04-cf62-4893-be83-f4ed9a77fcd6}}{{figure:3f52c9aa-b617-4643-be4e-86263039e504}}{{figure:fe48e765-d3cd-4eb1-8999-eb63d1b337b2}}
| r | f22b5e63140a0e8c67e348a0e3baead1 |
Over the last decade, technological advances have allowed the emergence of Artificial Intelligence (AI) solutions for many applications. This emergence has been accompanied by an amplification of AI research, in whose democratization major tech companies have largely participated. This democratization has enabled many technological advances (autonomous cars, translation, ...) and facilitates the implementation and deployment of AI solutions. New functionalities are regularly made possible thanks to the availability of new neural network structures that are pre-trained and rapidly integrated into high-level frameworks such as keras in Python (we can cite, for example, the language model BERT {{cite:2a90299d28993dec3fd2a124f692f770eddd68be}}, which has significantly improved performance in automatic language processing). However, this appearance of simplicity hides in some cases a relatively complex prerequisite: having a large volume of labelled training data. In order to understand this need, it may be useful to distinguish between generic and specific AI solutions:
| i | e2d30a6e061a40ae0a432ba94f0dc766 |
The equilibrium geometry was determined for the primitive unit cell with a
{{formula:59fc455d-8d1f-46fa-b49c-abd6853fe6bc}} -centered {{formula:71921f77-c07c-4a69-9875-cf78d418d479}} Monkhorst-Pack (MP) {{formula:b3821f9e-25b4-4f98-8863-136b9fecd254}} -point set
{{cite:443b2cc5ec35c767f86053d0db25573db58b7b3a}}, based on constant volume relaxations and fitting to
Murnaghan's equation of state {{cite:ad648c49db511d2ed5dcdd447f4397898d66fb0b}}. We describe bulk hBN using an
orthogonal supercell of 120 atoms (5a{{formula:5e4e6ade-f378-4d49-aca3-a3223ec205c5}} , 3a{{formula:9bd214f6-cfb0-47f9-9d19-1a2aa75dd5d4}} +
6a{{formula:48c993be-9869-431a-a365-7e5769142aa2}} , a{{formula:cd222b24-44fa-4720-83f3-27245a9c4d1e}} ) at the calculated lattice constants (see
our previous work {{cite:b6ef3ec191ba761d8906939f8e84773cd2248996}}), using the {{formula:c26232cc-a020-4b0e-a5ef-3c4aa0126d81}} -point
approximation. (Here {{formula:3d52a550-587b-4ed9-9f16-845fa8d6e063}} , {{formula:bbb98d31-b9a4-4478-8e31-a79224f459db}} , and {{formula:25813988-caa9-4d11-a39d-a3f4d19327da}}
are the primitive unit vectors). The geometries of the defects in the supercell
were relaxed at fixed lattice constants, using a force criterion of 0.01 eV/Å.
| m | fd6d509ea4686bd3bf60b64503046895 |
The experiment is designed to assess MixMatch's accuracy when using the proposed methods to filter the OOD data in the unlabeled dataset. Hence, MixMatch's accuracy is measured using the filtered datasets with an AlexNet model. Table REF shows the results of training the AlexNet model using MixMatch with the datasets filtered by the proposed methods. The filtering methods use the Mahalanobis distance to assign a score to each unlabeled sample. As previously mentioned, we assume that the OOD and IOD data tend to form a Gaussian mixture distribution, i.e., two clusters. Therefore, the OOD data is filtered using the threshold estimated by Otsu's thresholding method and by K-means clustering. Moreover, the proposed method evaluates whether the unlabeled dataset is contaminated using the coefficient of variation of each cluster. To evaluate the ideal setting where all the OOD data is removed from the unlabeled dataset, we also evaluate the accuracy yielded by using the real threshold (the percentage of contamination of the Costa Rican dataset). The MixMatch algorithm is used with the parameters recommended in {{cite:606fc8138edf816efbe290e79c7c48d5094bd544}}. The models were trained for 50 epochs with 10 random data partitions, each with its respective training and test datasets.
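A minimal sketch of the two thresholding routes on Mahalanobis scores (synthetic scores stand in for the real ones; the function and variable names are our own):

```python
import numpy as np
from sklearn.cluster import KMeans
from skimage.filters import threshold_otsu

rng = np.random.default_rng(0)
# Stand-in Mahalanobis scores: IOD data scores low, OOD data scores high.
scores = np.concatenate([rng.normal(2.0, 0.5, 800),    # in-distribution
                         rng.normal(6.0, 1.0, 200)])   # out-of-distribution

# Otsu's method picks the threshold separating the two score modes:
t_otsu = threshold_otsu(scores)

# K-means with k=2 gives an alternative cut between the two clusters:
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(scores.reshape(-1, 1))
hi = km.cluster_centers_.argmax()
keep = km.labels_ != hi                     # drop the high-score (OOD) cluster

# Coefficient of variation per cluster, used to judge whether the
# unlabeled set looks contaminated at all:
for c in range(2):
    s = scores[km.labels_ == c]
    print(f"cluster {c}: cv = {s.std() / s.mean():.3f}")
print("otsu threshold:", t_otsu, " kept:", keep.sum())
```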
| m | 2c7e22536dee6b272d79e0cf22652d10 |
However, the aforementioned studies are all limited by the homogeneity of the data that was used. They were either single/few centre studies or they used publicly available databases such as the UK Biobank which have uniform imaging protocols and few pathological cases. As such, the datasets do not match the variability seen in clinical practice and although the studies report good performance on their data, it is not guaranteed that they will generalise well to real-world settings. Images acquired using different scanners can have widely-varying levels of signal, noise, and contrast. Images acquired at different centres may be planned differently resulting in differing locations of the heart in the images, and distinct cardiovascular diseases can alter the shape of the heart. Some of these variations are shown in Fig. REF . These variations introduce a so-called domain shift, a change between the distribution of training data and the distribution of the data that the model is being applied to {{cite:b83ee1e5a6d882a2a270a63eb9cd27993d56606d}}.
{{figure:6048ae7b-8d63-40cc-9433-a65c84c485ad}} | i | 1f269c8b1c8b988289f7462fb622f3f0 |
Arbitrarily-shaped dataset.
We further compare our method with state-of-the-art approaches on benchmarks containing arbitrarily shaped texts, including Total-Text {{cite:beca25e35652fc12c07e07f9e6e616a9329194bb}} and SCUT-CTW1500 {{cite:d4f58f38680af9f26829f95438f4e7c263983883}}. As shown in Table REF , SPTS achieves performance competitive with state-of-the-art methods while only using an extremely low-cost point annotation. Additionally, Table REF shows that our method outperforms TextDragon {{cite:450a68ac563e6d8e441a0f1c65b8bea28927ece6}} and ABCNet {{cite:97669d27a119ef7b41c38140677a657ec3e978fa}} by a large margin on the challenging SCUT-CTW1500 dataset, which further demonstrates the potential of our method.
| r | b86bd4df3928aa1e09ffab6e8968f2c9 |
(1) Classification of AD and CN subjects: To train the proposed multi-stream CNN, we randomly select 70% of the MRI patches from the two classes of AD and CN as the training set, 20% as the test set, and 10% as the validation set. We also compare the trained model in this step with 10 additional approaches, including regions of interest (ROI) {{cite:e2ee26d2858628ba6d61df881152098a300e558f}}, Voxel-Based Morphometry (VBM) {{cite:529e8405643d752d27d308896b3cb05306d15bbf}}, and five deep learning-based methods. We used the FAST algorithm {{cite:22655be8919aa9e99efd31fba22337cd4faa160d}} to segment brain MRI images into three different tissues for the ROI and VBM comparison: White Matter (WM), Gray Matter (GM), and Cerebrospinal Fluid (CSF). We followed the implementations described by the researchers for the deep learning-based approaches.
{{table:0535609f-f787-4bcd-8479-2a84a4e8c54b}} | r | 94d46eb1c0cb95099e016d780bbd1d12 |
We have also compared the sensitivity of the Observatory to the neutrino flux
observed by IceCube between 19 October 2014 and 6 February
2015. The analysis of this
period resulted in constraints for the normalization and spectral index of the
observed fluence {{cite:d761a81326d2060570e00eb10cacdedfb11201de}}. This period of increased neutrino flux in IceCube was not coincident with a
VHE gamma-ray flare from the same source, although a hardening of the spectrum
in the GeV region was reported {{cite:95449819fe10195e72d211d2a19ba78b46337fc6}}. In
Fig. REF we display the {{formula:127f217b-2029-4d28-9bb2-b459edfd0bd2}} and {{formula:c817433e-4161-4bb5-88d6-3460603aa6c4}} bands of
the average flux obtained from the fluence reported assuming an activity
period of 110 days as obtained from the IceCube data analysis using a Gaussian
window. The bands are calculated using the whole parameter space allowed at
{{formula:7f67fffc-55b0-4e80-ab36-7119f6594113}} and {{formula:7b0d8adb-ddf1-4e7d-a16f-92e07dcbbc9c}} confidence levels in the IceCube analysis.
The extreme
values of the spectral index are {{formula:be5f8066-0422-480d-bcfd-7d64195ddb14}} and
{{formula:499a1545-50ef-48f0-95a8-c8fecf05ad90}} ({{formula:de37e0bd-97c0-4a13-bd8b-8db61b208a8c}} and {{formula:cf469c9f-ec5c-4fa0-aad7-923f2a132daa}} ) for the {{formula:299f5f58-f724-46b3-8be2-b6b6d89fac8d}} ({{formula:c6687f11-c7d8-4663-97f6-fdaba6fb45bb}} ) CL
contour plot {{cite:d761a81326d2060570e00eb10cacdedfb11201de}}. The figure also displays the average gamma-ray flux obtained for this period illustrating the reported hardening {{cite:95449819fe10195e72d211d2a19ba78b46337fc6}}.
The results obtained indicate that the
Pierre Auger Observatory could only be expected to have detected a signal if
the flux extrapolated to the EeV regime with spectral indices
harder than {{formula:e9056de7-750f-4a5c-954b-1ad380d6ceac}} .
| r | 0b704405a4d8b99ca0422a3b52e5f9ec |
Aside from designing specific models for different tasks, the advent of pre-trained language models (PLMs) such as BERT {{cite:3086b4481339186fcfc4d102f44ad7eab20b259d}} and RoBERTa {{cite:c7cb804ab5bc7bc22d87cd08d6116bd693dd3428}} has brought substantial improvements on a wide range of ABSA tasks in recent years. With PLMs as the backbone, the generalization capability and the robustness of ABSA models have been significantly improved. For example, {{cite:192727d02fc860ec548a94764c84074ed7acef7c}} show that using a simple linear classification layer stacked on top of BERT can achieve more competitive performance than previous specifically designed state-of-the-art models for the End-to-End ABSA task. Although constructing ABSA models based on PLMs has become ubiquitous nowadays, they are not discussed in the existing surveys {{cite:b351e16e1e6c944033ded96cb64cb99bd34aa612}}, {{cite:ab2a4e24df6c8f0c467dd5234de19c8b47a8532d}}, {{cite:a0ea8f9252bec3084bb15f2a5f31f236527ac139}} due to their recency of publication. Therefore, in this paper, we provide an in-depth analysis of existing PLM-based ABSA models by discussing both their advances and limitations.
| i | f76fbf5f6843752a7b113355e9d64f73 |
To reduce the computational costs of the SDA model, Chen et al. {{cite:1c2e41d154c1a960415729ffd7ff5663ef841554}} introduced a marginalized SDA (mSDA) model that denoises the marginal noise with a closed-form solution, without using a stochastic gradient descent strategy. The multi-task autoencoder (MTAE) {{cite:7efe35529cb28a73f4a23b46291e6dda572dfec0}} learned intra- and inter-domain reconstruction to represent domain invariances. Ghifary et al. {{cite:34a025b7dc63e92e772992bb76f356f3c42f52be}} proposed a deep reconstruction classification network (DRCN) to learn a shared encoding representation that aims to minimize domain discrepancy. Zhang et al. {{cite:66a88ec1686fc36f08f1670a9c018cafdb62aa95}} proposed transfer learning with deep auto-encoders, using the Kullback–Leibler divergence to reduce the discrepancy between the source and target distributions. Domain separation networks (DSN) {{cite:c0e5e066f37f33100cceba392b638b519e2ea2c3}} introduced the notion of a private subspace for each domain, which captures domain-specific properties such as background and low-level image statistics. The shared subspace is enforced through the use of autoencoders and explicit loss functions, which can capture common features between the two domains. The loss function is defined as follows:
{{formula:08c22a9b-5894-48b7-aa9e-4ed9d9fe9343}}
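Since the formula placeholder hides the full objective, the sketch below shows only the "difference" term of the DSN loss as we understand it from the original DSN paper: a soft-orthogonality penalty pushing shared and private encodings of the same batch apart. The batch size and dimensions are illustrative:

```python
import torch

def dsn_difference_loss(h_shared, h_private):
    # Soft subspace-orthogonality penalty, ||H_s^T H_p||_F^2: a minimal
    # sketch of one term of the DSN objective (the full loss also includes
    # task, reconstruction, and similarity terms).
    return (h_shared.t() @ h_private).pow(2).sum()

h_s = torch.randn(32, 16, requires_grad=True)   # shared-encoder outputs
h_p = torch.randn(32, 16, requires_grad=True)   # private-encoder outputs
loss = dsn_difference_loss(h_s, h_p)
loss.backward()                                  # differentiable end to end
```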
| m | 7d4133ef9e593a4b817829693a103f3b |
In this work, we use the microscopic Monte Carlo (MMC) method to perform model simulations.
The MMC method is a rigorous approach that can treat the finite size effect.
In this section, we only briefly introduce this method and refer to {{cite:fe573a1325079fde818504db2b0c0ee26752f946}} and {{cite:afd165337c9fb0c10e6b84114cd5c1b5406e5cb1}} for details.
| m | 3de713c4ac97e56466c2b83e83aff0e7 |
A large spectral weight in the superconducting state can cause an instability even in centrosymmetric systems, when the inversion-related pair creation of Bogoliubov quasiparticles is allowed by multi-orbital effects (i.e., when {{formula:52dce426-b873-4e8b-8bc9-7adf1931c2af}} ).
In this class of systems, Bogoliubov Fermi surfaces (namely, the Fermi surface of Bogoliubov quasiparticles) are topologically protected {{cite:9ec04952c82392a7b802ca920575331c4eb59124}}, {{cite:248a6db6958890da5b2caa23d791cfcbb93f4223}}.
Because of this topological protection, small Fermi surface pockets generically remain when the superconducting gap opens.
As the pairing strength becomes large enough, the increase of the optical spectral weight in the superconducting state at low energies causes the superfluid stiffness to become negative {{cite:08da7a44bf02e8230b3d0d674d444421af510fd6}}, {{cite:a92786a8a75fb2514fcadd80a69a7c59f141814d}}, {{cite:5682d0def94de38275a7f047341f40b573b58881}}, making the system unstable towards an inversion-broken state.
| d | 818b37c5f73673715a931bd70b02ca04 |
In order to further illustrate this effect, we also considered other
analytic profiles, which were proposed by {{cite:d9f3e262d2efda880152b95270b9614fc326cd77}} and {{cite:a7cada76d20a73bb20c51a5efcbd61f48791c6af}}.
Also in these cases, we note negligible effects on the Doppler
factor evaluation, obtaining relative absolute velocity
differences within a range of about 12% at the same
values of the Doppler factor. Conversely, non-negligible differences appear
in the Doppler factor when we take into account a flare profile.
In fact, in this case the resulting velocity variation is significantly larger,
increasing up to about {{formula:21da35b0-af20-43b7-9f11-270e6e320439}} and returning
relative velocity differences of about 21%
(see Fig. REF , panel (c)).
| d | ca597695cbaa891d6fb65909c7ac59ba |
Finally, an interesting open question is to provide perturbation results for other algorithms, such as the (quasi-)Newton-Grassmann method {{cite:1eb1a7a094b2a7d4d753522ee8e45afcb37c48bf}}, {{cite:d04a5a63e0b968e18067c6a16edcefc580cd9f85}}, the geometric Newton method {{cite:3a5dd1142033f758befef017c3a5f4053abb4650}}, and the Riemannian trust region scheme {{cite:8320e6831e33ecdcb8f7da6255c14d04d0c25dc9}}, since Riemannian trust region schemes and/or Newton-type methods can sometimes take far fewer iterations than HOOI to converge. In this paper, we mainly focus on the Tucker format of tensor decomposition. Although it has many advantages, it has the drawback that in ultra high-order tensor problems the storage cost of the core tensor in Tucker format scales exponentially with the tensor order; it is then more desirable to consider low-rank tensor approximation methods other than Tucker, such as the Hierarchical Tucker decomposition {{cite:689482f4865e17ea207282ddd8ec9d9df79d3be9}}, {{cite:02c6ee1367a8f99d35d0aaf72b78010b65c26a25}}, {{cite:a9f99da1b49830bad65f8e40741b541f65b4b5b5}} and the Tensor Train decomposition {{cite:ba2f6809c6dbb60df71d8e97c4d4f56c309068bd}}, {{cite:32a6f92ce208dfd6c9e4c6519b763b479bfa81e9}}. It would be interesting to develop perturbation bounds for algorithms on Hierarchical Tucker or Tensor Train tensor decompositions.
| d | 09f88426b9da750538507907b0455f01 |
The {{formula:e9aa390d-b5cf-4a90-912e-22e3e9047014}}-analogue of the Beta function for {{formula:93a0f77c-30a4-4c34-afc6-d845ff59617d}} (see {{cite:0e2dcd86b7c8fd5a9a24fe59cad9bdbb9300ca01}}) is defined as
{{formula:765f729a-9420-4919-a094-66f29db53055}}
| r | 1365520f37e6998089d6ff5226531595 |
Under the assumption that the {{formula:13130c00-5f43-444a-a231-0058b8de6f4b}} is a measure proportional
to the smallest causally-connected structure associated with a GRB
light curve, it is then possible to interpret the scaling trend in terms of the internal shock model in which the basic units of
emission are assumed to be pulses that are produced via the collision of relativistic shells emitted by the central engine.
Indeed, we note that {{cite:73f08a195a4703d24ac90293bb57d68e7e0c6abd}} in their study of the brightest BATSE bursts with {{formula:96855921-0b8c-4fa1-b0c7-cb6edaf5c54c}} {{formula:693aa847-1c92-4eba-bd16-81b3f562a544}} sec explicitly identified
and fitted distinct pulses and demonstrated a strong positive correlation between the number of pulses and the duration of the burst.
More recent studies {{cite:27b7c954c45322be592294d9a49a111e8f5a7a95}}, {{cite:eb89579b50e89c63b311e0586fdd81fadd5de02b}}, {{cite:ae79dad9d5667979a01a8d877dbc2abf9ef0b6c5}}, {{cite:e8bf58a2be474ce942660ed286f12083909b2639}} provide further evidence for the pulse paradigm view of the prompt emission in GRBs.
In our work we have not relied on identifying distinct pulses but instead have used the multi-resolution capacity of the wavelet
technique to resolve the smallest temporal scale present in the prompt emission. If the smallest temporal scale is made from pulse
emissions from the smallest structures, then we can get a measure of the number of pulses in a given burst through the ratio
{{formula:13b990bf-fc14-419a-a5b3-3686a4742490}}/{{formula:8284eecd-e8f3-412d-9fd2-795dd2807521}}. In the simple model in which a pulse is produced every time two shells collide, the ratio
{{formula:9aad2f1e-b7bc-4c17-ba34-98fa838dd659}}/{{formula:43aa2955-3e81-4cc3-abe4-bee3572ece6d}} should show a correlation with
the duration of the burst.
A plot of this ratio versus {{formula:78710eb7-8302-402d-a380-5632997d0419}} is shown for a sample of short and long bursts in Fig. REF .
The correlation is apparent.
| r | b305641d10e66e39229213e6170d3615 |
where {{formula:22ab5059-fb68-4939-ad1c-95caba179358}} accounts for the isospin admixture within the modest shell-model space, while {{formula:f3722d55-3b71-4699-adcd-3b9f70bdf1b8}} accounts for the mismatch between proton and neutron radial wave functions, simulating the admixture between states that lie outside the shell-model configuration space.
The isospin-mixing component, {{formula:1e8ba04e-fa5a-4425-a6be-0f0ba7dfc990}} , is generally very sensitive to the effective INC interaction,
because of the strong dependence on the energy difference between admixed states (analogue and non-analogue {{formula:f25e8269-9c97-436a-84d1-c847e47fa437}} states).
Fortunately, the contribution of {{formula:799b0dbd-fc33-46e4-9c7a-087f2e94d2b7}} to the total {{formula:bcebbac1-db11-4ab4-b2d9-2647c9f02eec}} correction is normally less than 10% {{cite:a683f725b0b7c617871f1d561baf8bddf4e581d1}}.
As a traditional technique to improve this undesired property, the calculated {{formula:08011209-87e7-497a-bb5a-094309ad1de7}} values are scaled
with the measured energy separation between the first and second {{formula:032e3cd3-14fd-4230-b9eb-f5bc98ad086c}} states in the daughter nucleus.
Although this technique is based on the limit of two-level mixing, it works adequately for many cases {{cite:f49f7d174a5195681e2f7852e89f8bbe93fb7cf1}}.
On the other hand, the study of the larger component, {{formula:ae4b2650-5032-468a-a425-0f73b1b37166}} , cannot be satisfactorily concluded because the values calculated by Towner and Hardy (TH) using WS radial wave functions {{cite:f49f7d174a5195681e2f7852e89f8bbe93fb7cf1}}, {{cite:34df5b87a8d9f96a217e07a031780a0dbadba169}}, {{cite:61d4b2a0a1802fedcd3a1553f980ba41bb377e7a}} yield considerably different {{formula:8850d2bc-33c3-46f9-b35b-47293d52a339}} values than do those of the evaluation of Ormand and Brown (OB) using Skyrme HF radial wave functions {{cite:af92ce9b9edca5273de05dea18b66fb7d6f27015}}, {{cite:1a525d1cced24b8d37574b320cd9e39edb7d121c}}, {{cite:5c74cbb5a9030c2656610fabaf84221056d15156}}, {{cite:3c565acfa1c6911f58720182d87b3e1dd152f60d}}.
This discrepancy has been thought to be partially due to the use of the Slater approximation for treating the Coulomb exchange term.
In particular, it has been shown that the resulting Coulomb potential is overestimated at large distances {{cite:7ec9e1931844bb4eb6a0c40308aec8dea3acc8f6}}.
Unfortunately, in addition to its numerical inconvenience, the exact treatment of the Coulomb exchange term generates a nonlocal component in the Skyrme HF mean field {{cite:cfa3129a993a76db5d4b0443a7e92c4f381542ce}} and is thus technically unsuitable for implementation within the OB protocol, which incorporates the multiple intermediate {{formula:de64a7ad-8083-4461-9a1d-d2409fb0c91e}}-particle states {{cite:af92ce9b9edca5273de05dea18b66fb7d6f27015}}.
As an asymptotic correction, Towner and Hardy proposed to replace the HF potentials of the parent and daughter nuclei identically with the HF potential of the intermediate {{formula:0c5e1448-9874-4328-8fee-efaf3cfe091a}} nucleus; the calculation based on this modified protocol yields {{formula:d4e09b6a-99c0-4c03-812c-d431f1aac785}} values that are very close to those obtained with WS radial wave functions, except that their local variations (odd-even staggering) tend to occur in the opposite direction {{cite:7ec9e1931844bb4eb6a0c40308aec8dea3acc8f6}}. We remark that, although this method ensures the correctness of the Coulomb potential at large distances, it may disturb or even destroy other important properties of the Skyrme HF mean field at small to medium distances. Furthermore, the dependence of {{formula:60a674e7-a6a0-4593-afa9-7dc970b68c5a}} on the choice of the Skyrme force parametrization has been studied in Refs. {{cite:a0228a9caf200ec137927e4b2d00ff2b819eba69}}, {{cite:3bb53a9eaee0631844091b62f1cc36d9c127a7dc}}; this effect was found to be quite small compared with the gaps between the OB and TH values.
| i | 1913d803e9ea107624295d27b66f4af3 |
In summary, by applying the first law of thermodynamics to the
apparent horizon of a FRW universe one can derive the
gravitational equations governing the dynamics of the universe in
a wide range of gravitational theories including Einstein
{{cite:56e8134c8ca62edcbd6e21cda858d540121cbb71}}, Gauss-Bonnet, Lovelock {{cite:c1d17f429cda5686e9b489a8fe67c9304473b965}}, and {{formula:9ad92517-8b49-4d1c-a063-3fe79200d243}}
gravity {{cite:d5e60712986cade29c45997b27178b224493bbcd}}. Can this prescription be applied to other
gravitational theories? In this paper we have shown that the
answer is not always positive. Having the entropy expression
associated with the horizon of spherically symmetric black holes
in infrared modified HL gravity, and applying it to the apparent
horizon, we failed to extract the corresponding Friedmann
equations in the modified HL gravity. This is the main result we
found in this paper, which originates from the fact that HL
gravity is not diffeomorphism invariant, and thus the
corresponding field equation cannot be derived from the first law
around the horizon in the spacetime {{cite:76150a8719e769085b4e452fec053bd385eb5c9e}}. To justify
this result, we offer the following comments.
| d | 3f5086e29b756fbd26a7865d6dd31e88 |
AI agents capable of having human-like conversations find various applications, such as providing automated help desks for customer service and technical support, and serving as language-learning tools, personal assistants, and sources of entertainment/recreation. The research community has access to a number of datasets for the task of dialogue generation {{cite:cc59574ff52b10fc5ae4ead9bd9cf65840c074dd}}, {{cite:eb5f4384dc704f8148a989aac07443ac77be65ba}}, {{cite:2e972d122376530d1cc380a60508f30651c6e208}}. This has led to the emergence of goal-driven as well as non-goal-driven conversation models {{cite:6bee3e0b2edcbb430f6f7608f8c93a240db04964}}, {{cite:51b55d46daa8112228dac3181c269b433c673406}}, {{cite:303f0b95ca24dfcf148fdd2133b0bda1996d67e0}}, {{cite:a50ccd2fe7fd0fa53cfe0880278516c9354f2fcf}}, {{cite:03b51a874879f3546ce03d31e1887b8d8efa13a0}}, {{cite:688ba8548bd243df97ec369dbf748cf954db3bfc}}. However, it is hard to measure scientific progress towards a conversational agent due to the lack of good evaluation metrics. Most works report human evaluations and comparisons on samples of the data, but it is infeasible to have a human in the loop giving feedback and scores while training and evaluating dialogue systems. This has led to the adoption of existing automatic evaluation metrics for scoring the generated dialogues. Popular word-overlap based metrics such as BLEU {{cite:24f7f369e079155833dd48abc41334bb64ae6f1b}}, METEOR {{cite:0e4b5cc68ccc3d147553bb77e0fda787c59a29f8}}, ROUGE {{cite:fe11e09e598cf217e8baceacd2ea0deba4d04e05}} and various word-embedding based metrics have been used to score dialogue generation systems. However, they cannot handle the diversity of the range of valid responses and have been shown to correlate poorly with human judgement {{cite:f1627f193c5c98b0515f495ca3394cac35063b33}}.
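As an illustrative sketch (not taken from the cited works), the snippet below scores a single candidate reply against one reference with sentence-level BLEU via NLTK; the smoothing choice and the toy sentences are our own assumptions. It also demonstrates the failure mode noted above: a perfectly valid but differently worded reply receives a low score.

```python
# Minimal sketch: sentence-level BLEU for a dialogue response (toy data).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "i am doing great thanks for asking".split()
candidate = "pretty good thank you".split()  # valid reply, little word overlap

# Smoothing avoids zero scores when higher-order n-grams never match.
score = sentence_bleu([reference], candidate,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")  # low, despite the reply being perfectly sensible
```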
| i | 1fbce54f6f7e7f04c108007a8afdc0a1 |
Counting-by-regression {{cite:00ae20ff1b1d22d758b2f01ee4e53a7c093b07b3}}, {{cite:cfbb5f15daf02b78f679d13f5e51b7c9e497160c}}, {{cite:84314f25c2559f5018bc5ef3f54ad462809ab158}} schemes learn the mapping of the input image or patch to its crowd count, whereas the density-map estimation methods {{cite:75241d26c54f671e510ee124ff8f39001dc8fe0f}}, {{cite:518d411a60a5e5d510f3786a78e9c8a96586d904}}, {{cite:f670e9f440066e99de404ad78e49a2e62ce8c4e6}}, {{cite:e23bd2dd8485199481e5c81ce14bf8551509aaf1}}, {{cite:31c1ba970275b62d2da413634700f0838d61a356}}, {{cite:088930fbcbe89f4b7bee5fc1d45b7406381c38e6}}, {{cite:4e829d4f1e60e829b45a6f5d3bf220a2bf854979}} yield a crowd-density value per input image pixel; these values are summed to get the image's final crowd count. In general, counting-by-regression schemes do not perform reasonably well without special additional mechanisms. On the other hand, density-map based methods rely heavily on accurate density-map generation for the training images from the available ground-truth dot-map annotations for the learning process of their models. In principle, a point-spread function (e.g. a multivariate Gaussian kernel) is deployed to produce pixel-wise crowd-density values. Although the density-based methods produce reasonable results, the challenge of choosing the Gaussian spread limits their overall performance and relatively compromises their efficacy. Some works {{cite:792277daaf98f3b5222298dd871f1ce01b8d184e}}, {{cite:d593b139dd16ca1a80fe2bec95b5b6e395b01f2b}} also aim to detect people by their head or body using well-established CNN-based detectors for crowd counting; however, the few pixels per head in medium- to high-density crowd images make this almost impractical.
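As a hedged sketch of the density-map generation step described above (the function name, image size, and the fixed Gaussian sigma are our assumptions; many works instead use geometry-adaptive kernels):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def density_map(shape, head_points, sigma=4.0):
    """Spread each ground-truth head annotation (dot) with a Gaussian
    point-spread function; the resulting map integrates to the crowd count."""
    dots = np.zeros(shape, dtype=np.float64)
    for y, x in head_points:
        dots[int(y), int(x)] += 1.0
    # gaussian_filter with reflective boundaries preserves the total mass
    return gaussian_filter(dots, sigma=sigma)

dmap = density_map((128, 160), [(30, 40), (32, 44), (90, 120)])
print(dmap.sum())  # ~3.0: summing the density values recovers the count
```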
{{figure:40cb8731-967c-4fe7-8cf1-28945d7746f2}}{{figure:02eefe39-3301-4363-9c35-8c0485930cf2}} | i | 19663d645b1d415d3eecb94639363348 |
The SemanticGAN {{cite:9accf92ff4115d160420707d3a59601b9dbaab6d}} authors hypothesised that modelling the joint distribution {{formula:d0c54d35-cd28-45f5-81be-fb4c4d18b412}} of images and segmentations yields superior robustness to domain changes in comparison to discriminative methods, or methods only modelling {{formula:fe08f91c-6b4b-4417-b9be-a754bb1eec49}} . We aim to verify this hypothesis in a real-world medical imaging context. Specifically, we compare a generative SSL method, SemanticGAN {{cite:9accf92ff4115d160420707d3a59601b9dbaab6d}}, to a state-of-the-art fully supervised semantic segmentation architecture, DeepLabV3 {{cite:b99b3b0bb8838faf7de43a29e2e0d4da49c300e5}} and to an ablation study on SemanticGAN. Our proposed ablation study pertains to the generative arm of SemanticGAN to determine whether adversarial training is sufficient for SemanticGAN's success.
| m | 3cd45dbcffa087bed673f241f5de435b |
Elastic weight consolidation (EWC) {{cite:61671df2eb65e910526e641bb5f3dcc9d8d17e2d}} is a pioneering and widely cited regularization method. EWC quantifies the importance of the learned parameters by estimating the Fisher information relative to the objective likelihood and preserves parameters with high importance values by restricting their drastic changes against new tasks. However, EWC assumes that the Fisher information matrix (FIM) {{cite:56bf6b43dc1cc450fdaa8099cbe80834641b393f}} is diagonal, which is unrealistic in the original parameter space. R-EWC {{cite:7f20758340a2c10c33ded1d33fcb5fa4c9764ab1}} enhances this diagonal assumption of EWC through a reparameterization strategy. This strategy rotates the parameter space by singular value decomposition (SVD) {{cite:6d141cccd76dabd994eba8ef665a25f0dd7b1c85}} such that the output of the forward pass is unchanged, but the FIM computed from the gradients during the backward pass is approximately diagonal. In this rotated parameter space, EWC can effectively optimize the new task. Although several methods similar to EWC have been proposed, these approaches calculate the parameter importance using different techniques. For example, according to the memory-aware synapses (MAS) {{cite:8aace99dc7ca515dd861cd01d90989b42132feae}} technique, the change in an important weight can influence the output function of the model more significantly than changes in unimportant weights, and thus, this approach computes the importance of a weight by measuring the magnitude of the gradient when the parameter is perturbed. Compared with EWC, an apparent advantage of MAS is that it can update the model in an unsupervised and online manner by avoiding the need for labeled data. Synaptic intelligence (SI) {{cite:4611e71aa9bb861094ab02a9a7f95cfecbfad81b}} computes the path integral of the gradient vector field as the weight importance along the entire learning trajectory. The most notable difference between SI and EWC is that SI computes the importance online and along the entire learning trajectory, whereas EWC computes the importance in a separate phase at the end of each task. Orthogonal weight modification (OWM) {{cite:2d0ea45c1d2f539ffdf1e9ca748131949ff41daf}} optimizes the update direction of parameters to be orthogonal to all the previous input spaces. This strategy can avoid mutual interference among different tasks. To protect the most important weights, ABLL {{cite:bbf3653c3dddff0ad3a06c58e4790b2c5971c9e5}} leverages an autoencoder to capture the submanifold that contains the most informative features pertaining to a past task. When training for a new task, the features projected onto this submanifold are constrained not to be drastically updated. Rather than focusing on the weights in all layers of CNNs, LFL {{cite:0b5b470623c5eed786f9239d7d2026f185655673}} restricts drastic changes in the learned parameters in the final hidden activations to preserve the previously learned input-output mappings and maintain the decision boundaries. IMM {{cite:c3865d081dde7cc9bcafaa25e5913550b56e21bf}} progressively matches the Gaussian posterior distributions of the CNNs trained on the old and new tasks and uses various transfer learning techniques to render the Gaussian distribution smooth and reasonable.
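To make the diagonal-EWC idea concrete, here is a minimal sketch (our own naming; `fisher` and `old_params` are assumed to be dictionaries keyed by parameter name, and `lam` is the regularization strength):

```python
import torch

def diagonal_fisher(model, loader, loss_fn):
    """Estimate the diagonal Fisher information as the mean squared gradient
    of the task loss over the old task's data."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / len(loader) for n, f in fisher.items()}

def ewc_penalty(model, fisher, old_params, lam=1.0):
    """Quadratic penalty anchoring important parameters (large Fisher values)
    to the values learned on previous tasks."""
    loss = sum((fisher[n] * (p - old_params[n]) ** 2).sum()
               for n, p in model.named_parameters())
    return 0.5 * lam * loss

# On the new task: total_loss = new_task_loss + ewc_penalty(model, fisher, old_params)
```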
| m | ae52815bbda0b7a8e38c32030f8177af |
The pure YM expansions of EYM amplitudes, together with the gauge invariance conditions of gravitons or the cyclic symmetries of gluon traces, induce nontrivial identities for color-ordered YM amplitudes {{cite:298628e713cbeacb50cd9a0804391909d44ac7a1}}. These identities guaranteed locality in the Britto-Cachazo-Feng-Witten (BCFW) {{cite:2e1a2c8061395a46b9e9c1c3ffa639afc0cfad45}}, {{cite:494719e3b025725e00cebefb8edb8b2753a22862}} proof of the recursive expansions {{cite:b63b9b4d0e06c2c00793ad5253eda514ea700242}}, {{cite:298628e713cbeacb50cd9a0804391909d44ac7a1}} and played a crucial role in the proof {{cite:ebb49a2e52df673b170ac4e536526bc796e6a9d0}} of the equivalence between distinct approaches {{cite:d2bd771829057bef44b2d34914ac50e1d057b9cf}}, {{cite:6cdf48d00daf34a6a8b1db1cf8a8793916eeb240}}, {{cite:f766a2d9ef5c2032662022a583eb142470e2f6c5}} to nonlinear sigma model amplitudes.
| i | c6a1685ac74765c147ef1567aa32a0a7 |
The SoMoF benchmark {{cite:8b312c7b0eeff21f9ed69aa4b4bfcdba46f5c0b8}}, {{cite:0bbcc1d4f5f7dd26fa8fdc7e9e0c41ab6ce8779e}} provides a benchmark for multi-person human pose trajectory forecasting. Each sequence has 16 frames (1070 ms) of input to predict the next 14 frames (930 ms), where each frame consists of joint positions for multiple people. Results are reported as the mean VIM at multiple future timesteps. As in {{cite:90b6722bbb78084f90ea39730519ec0ca498cee2}}, we train on the 3DPW {{cite:3877a212273fc2590f2feed50ee9ba53ee9a324b}} and AMASS {{cite:35cbdf5c42eebd5e583a27316ccb906e19be7bbe}} datasets, which provide both multi-person and single-person data, and finally we finetune on 3DPW. Since SoMoF only uses 13 joints for evaluation, we use just these joints during training as well. We report in Table REF a comparison of methods on the SoMoF 3DPW test set (our submission, currently anonymous, to the SoMoF benchmark is dated March 7, 2022). Our model consistently outperforms all previous methods.
| r | bcb9209e5c7bfb934105ba8448cd17ed |
It is noted that these rogue patterns as reported in this article are universal when the {{formula:d7fb2b31-e3dd-48c0-9032-d41305037027}} functions of the underlying rogue wave expressions can be expressed through Schur polynomials with index jumps of 2, which is the case for the three integrable systems of this article and many others {{cite:7f29cb618da3761769a715e4e54d1ccdf587c64e}}, {{cite:ac196cee126eff4e6f8fae62eec443ce3dd4db23}}, {{cite:987eafbace48c968dc3e0507cee406f4e5040756}}, {{cite:88597b1eaa17f71f97e31e9deafe1acf8ffe545c}}, {{cite:56ec70b73a8ce5af3ef07c91c5aebec0ff7d0dfa}}. However, there also exist rogue waves whose {{formula:e455e091-4b01-4d1f-b9ae-3dfcf2a3f1a6}} functions cannot be expressed through Schur polynomials with index jumps of 2. For example, certain rogue waves in the three-wave resonant interaction system as derived by the bilinear method in Ref. {{cite:afac4f632bb781c7d605947c88f425959691f403}} are expressed through Schur polynomials with index jumps of 3 instead of 2. When internal parameters in such rogue waves get large, very different rogue patterns will arise, and they will be asymptotically described by root structures of different types of polynomials. Pattern analysis of such rogue waves is beyond the scope of this article and will be pursued in future publications.
| d | b339a552898ba52e91a6584710c40704 |
Baselines on carton datasets: To establish baselines, both RetinaNet {{cite:ca6d30ba4487959bef16fb47e1ec3758a22eada7}} and Faster R-CNN {{cite:6457c633cc4c8939a98b53591079bd25918aa5ab}} equipped with ResNet18 are fine-tuned on the training set of CPLC and tested on 500 images from ECLC and 492 images from FM, respectively. The overall results are reported in Table REF, which shows that a huge domain shift exists from CPLC to FM and ECLC, up to {{formula:c0457b3f-290f-4da7-b832-97e0431ae6fb}} in AP.
| r | db8a343a514bdbee708d7d56e242aa29 |
Proof. The sub-problem of HOOI is the rank-{{formula:c8dffc95-f32d-4938-b838-b252f9bef82e}} approximation of {{formula:e062dbbb-8b61-4f1d-af8b-d0ae5bba133a}}, so by the Eckart-Young theorem {{cite:54bac1b1336db00f10d9a95d5d6139c6ef844f10}}, the rank-{{formula:e819b852-271a-45a0-8072-6c13983d825e}} SVD of {{formula:630d7312-c66d-4cda-ba51-548dadd8882f}} is a solution of it. Suppose that {{formula:c126cec0-17f4-43f1-a029-73651453be4c}} are the truncated rank-{{formula:1e3d4466-364f-49ef-affc-40ac69734b08}} SVD factors; then {{formula:965eaf46-c505-402a-8e97-d42de0ddb5a0}} is the updated value of {{formula:df607561-b826-4a54-af7e-c4e5fb326461}}, and
{{formula:5b9bed6c-a268-4533-a7ce-f268b9d08fb2}}
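For concreteness, a small NumPy sketch of this sub-problem: by the Eckart-Young theorem, the best rank-r approximation in Frobenius norm is obtained by keeping the r leading singular triplets (variable names here are illustrative):

```python
import numpy as np

def truncated_svd(B, r):
    """Factors of the best rank-r approximation of B (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return U[:, :r], s[:r], Vt[:r, :]

B = np.random.randn(50, 30)
U, s, Vt = truncated_svd(B, 5)   # U plays the role of the updated factor matrix
err = np.linalg.norm(B - U @ np.diag(s) @ Vt)
# err**2 equals the sum of the squared discarded singular values
```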
| d | 53d9795531b07ed069d5939013231d4f |
This is the condition of uniform negligibility of {{formula:aab01cfe-7684-4829-a867-1ead4e4125e9}} in the sums {{formula:f3eb9af9-8ce6-4e76-bed8-e44ce195475c}}. It is known that under this condition and (REF ) the limit distribution in (REF ) is self-decomposable (see {{cite:64f01fb6c162cfd8f0bae1118611dbbebe5930a0}}, p. 101, or {{cite:5565cbc2b3c5286e666d61520b69f0f2ff6cbb2b}}, Theorem 10). Let {{formula:af44b0af-3141-430a-ba6e-8853192c84cf}} denote the Lévy spectral function of {{formula:e4063ce9-4498-48a1-aeca-98a3f3c20470}}. Due to the non-negativity of {{formula:d5bd603c-e056-44c1-aa76-b446f909d37b}}, we have {{formula:9227ad38-f5ff-4e0e-9822-fadf87549742}}, {{formula:0aa5c4f7-9136-4c6c-9cd7-7af6fd6966fc}} (see {{cite:ad0c4f6c9ee1dfb85f046fbead9cf6e0b92f1240}}, p. 124, or {{cite:5565cbc2b3c5286e666d61520b69f0f2ff6cbb2b}}, Theorem 11). {{formula:725c94aa-c38c-43c1-b5bc-4ca70108970b}}
| r | 9b4d8f89d3ed819794760fac0c426e3f |
This specific choice of the rescaling comes from the fact that the macroscopic density of {{formula:6601b462-6c89-4dec-af50-ff2326a027fd}} with respect to the area measure is {{formula:afccdee6-38ed-446f-8fd9-469964f75d43}}; see {{cite:243f7569f9e0972cd41af622b8e989d7861d9e87}}, {{cite:de2dfdb58cb0fd60fb940e53b5d4b1dbe24b8a2b}}.
| r | 593a7d90bf05a91d95bf5d0676479c9e |
In Table REF, we list the masses, decay constants and total widths of the mesons involved in the quasi-two-body decays.
We take the masses and widths from PDG {{cite:232e6af604a335f7957ed471ddc99ead75aaf795}},
use the decay constants updated from the Laplace QCD sum rules for the light and {{formula:a35c8f79-b432-45d4-b935-04334814b10f}} mesons {{cite:7f61fe35c048b2af8a496472033df82f2b9070fd}},
and use the four-flavor lattice QCD result for the {{formula:1b6431b1-f1d4-4d5f-b161-3a5e6bf3a8ea}} meson decay constants {{cite:841eb383506661e9f56a400eb8adf0870e2c64c9}}.
For the first inverse moments of heavy mesons, we take {{formula:63c6c0a8-6e51-43e4-a981-a1bb107a1225}} MeV and
{{formula:bf724b3b-b2a5-49b6-8ef2-440c2889ee23}} MeV for the vector charmed mesons,
take {{formula:7e7b5d18-4d3a-4639-9d04-f67d9fb33b0c}} MeV and {{formula:49674232-c305-42f8-ab57-d7f04e45daa6}} MeV for the {{formula:65a3e89d-4041-42e5-b9b9-8c489da06e72}} mesons {{cite:e7362c1727a9881e4c674fdc41d66a1fa6d2e1e0}}.
The Gegenbauer moments in the leading-twist LCDAs of light mesons are taken from QCD sum rules {{cite:24a0083eda74ccbc380413e4b86209bfc69148e4}} as
{{formula:b101d84e-8fbc-4255-b1ba-30cf6f7710b6}} ,
the moments of the vector {{formula:f5b074a4-c7aa-4744-8abd-399b6a09c48e}} meson are taken from the pQCD fit to the {{formula:523f92ff-be9f-49d2-a586-8a99fbff850e}} decay data {{cite:1eedc14bc0500764bd02d08b64414bd1c903ed3c}} as
{{formula:8a28d58e-f661-476f-858b-228d7d27b046}} and {{formula:879b3a6c-bec8-4c6c-8370-9ea4f680bd21}} .
Besides these, the CKM matrix elements in the effective Hamiltonian
are determined by Wolfenstein parameters {{formula:348e0fec-c3c3-4d39-b152-828d57606145}} , {{formula:61594261-35de-4f71-9a50-475f39af60f7}} ,
{{formula:6574d0fd-d6dc-4513-99cb-6a54bae94ecd}} and {{formula:2ee04963-b5c5-4bab-ac44-8cd8c643610b}} {{cite:232e6af604a335f7957ed471ddc99ead75aaf795}},
the masses of {{formula:9a7159eb-0b14-4f50-af24-58df99097da4}} mesons are also taken from PDG with {{formula:2ae267de-3f34-420b-98bf-c384b7fae258}} GeV, {{formula:146bf7e0-bc1b-4402-b647-3e27afc2b576}} GeV and {{formula:5d865d18-64c2-43bd-baad-33dafbb82522}} GeV,
the chiral masses of light mesons are chosen as {{formula:815975d8-2a4e-40a0-986e-efe1445fb295}} GeV and {{formula:82a8f0d8-521a-4265-a599-57a90943c47f}} GeV {{cite:e7362c1727a9881e4c674fdc41d66a1fa6d2e1e0}}.
{{table:ef5b471b-90af-49ee-8e5d-092fb8fc0359}} | d | 0a6bf2dfd91e6e6acf24beedd2f9b692 |
There exist two systems having two interacting electrons, i.e. the helium atom
and the lithium ion. In these systems the external potential acting on the electrons
is the Coulomb potential of the nucleus. The Schrödinger equations of both systems do not separate into the center-of-mass and relative motions, and to find the eigenvalues or eigenstates one has to use approximate methods, e.g. variational calculations, molecular orbital approximations or perturbation methods {{cite:a4492e9345af0a4f68272eaae46e35ffc8ca1e64}}, {{cite:e941cba0808fcf66117ad6759dd391a1f9691fee}}. These methods work well for low-energy states, but their accuracy decreases at high energies. For this reason it is practically impossible to calculate PEO for the helium atom and the lithium ion by summing the eigenstates
in Eqs. (REF ) and (). However, the results in Figures 1 and 3 suggest
that for both systems the position representation of PEO is also given in
Eqs. (REF )–(REF ). Finally, for a hypothetical {{formula:27d9e2c5-78c0-4881-913a-49cbe3834db6}} helium atom
PEO is also given by Eq. (REF ).
| d | d432c3bac49d7a8a64aca86651c9affc |
Thus, the matrix in (REF ) is nonsingular if (REF ) is valid. Now, applying Theorem 2.1 from {{cite:76831462f43a204a970d1beb9884347402031f31}} to the parametric generalized equation (REF ), we can assert that if (REF ) is strongly regular at {{formula:ac1ad935-4e74-48c7-ad4a-027b7f538a36}} , then the implicit multifunction {{formula:a965506a-b82e-4677-8f29-e378e0f43cb1}} has a single-valued localization {{cite:629310e25f3403eb4b7d6bdcf39fff8c20470282}} around {{formula:b6b14b18-6add-4750-a946-b463395f338d}} for {{formula:f3efcc8e-b8ee-4367-a236-85fe1a39a2f7}} which is Lipschitz continuous in a neighborhood of {{formula:796ec0bd-f20d-4017-93fc-59271e53bf83}} . This means that there exist {{formula:a48ada5a-20d9-457d-abcf-d34f09498dd8}} , a neighborhood {{formula:1aea5ea2-f2a4-4d3a-8198-531fc145b0ef}} of {{formula:656f2759-606f-4f88-938f-43eed5157226}} , a neighborhood {{formula:6671a480-de4d-46e4-9b08-d92dceb77e32}} of {{formula:e89890d0-269c-451c-b404-030bc533a4e8}} , and neighborhood {{formula:eb4fa910-bac8-4fce-a57b-b4cf6f5473b2}} of {{formula:1d7778e7-552a-4d6d-81d6-762d1b22c365}} such that for each {{formula:583dcd29-0710-4395-ad61-def6913a5ec7}} there is a unique vector {{formula:60d9702f-855f-4efc-8284-c3c24de440d3}} , denoted by {{formula:91888f33-4bb8-45ff-8916-1052987396d9}} , in {{formula:30124c5f-89f9-4b2c-83f9-9d2578488c3d}} satisfying the equation (REF ) and {{formula:c75c81fb-b7da-48b8-b27e-733a93c65653}} for any {{formula:c2cf61af-b209-48e6-9d5c-79a7797b014a}} . Therefore, thanks to (REF ), we obtain the following result.
| r | d832d2c28ca67a9e6f88c4e528447ce0 |
Given an {{formula:f381c786-1fb0-4407-a365-d80d21905645}}-partite generalized GHZ state {{cite:42fe96b413644790c359da95aef1a1e021322519}} {{formula:16772935-f553-4182-981b-423a46bc6ad8}} with {{formula:295b6f41-393c-4823-8eb5-2e8607f47738}}, the corresponding Werner state {{cite:8f27d139007d00ccfb21f1da97c3b21ee9c1865a}} is defined as a mixed state of {{formula:1cb353ff-bab6-4b17-934c-9ccfcd829756}} and the noise state {{formula:baba14b8-1adc-4185-8a60-f068b7623733}} with the following form:
{{formula:2947845e-2282-44c6-a868-bc9a0d71e492}}
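As a small illustrative sketch (our assumptions: the symmetric GHZ amplitudes 1/sqrt(2) rather than the generalized ones, and white noise I/2^n as the noise state), such a mixture can be constructed as:

```python
import numpy as np

def ghz_werner(n, p):
    """Werner-type n-qubit state: mixture of the GHZ projector with white
    noise, rho = p |GHZ><GHZ| + (1 - p) I / 2**n."""
    d = 2 ** n
    ghz = np.zeros(d, dtype=complex)
    ghz[0] = ghz[-1] = 1 / np.sqrt(2)   # (|0...0> + |1...1>) / sqrt(2)
    return p * np.outer(ghz, ghz.conj()) + (1 - p) * np.eye(d) / d

rho = ghz_werner(3, 0.6)
print(np.isclose(np.trace(rho).real, 1.0))  # True: unit-trace density matrix
```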
| r | 1023ba56de9c0f920ddaf6a7029ec6f3 |
Generally, when it comes to a learning problem, there are two key metrics that people care about, both directly related to the optimizer being used.
One is the convergence property, such as convergence guarantees and convergence rate, which offers insight into the fundamental quality of the optimizer in terms of reliability and run-time complexity {{cite:c1c98dc214a00c849ed94e52a4db5a74ba4d0159}}. The other metric is generalization, which evaluates how well the optimizer does on unseen test data compared to its performance on the train data {{cite:15477c3762a73f004f707f85e9822adff1363e7e}}. Oftentimes there is a tradeoff between these two metrics: an optimizer usually cannot perform well on train data (i.e., convergence) as well as on test data (i.e., generalization) {{cite:dba5dc3446857b8736753c8ac257faa0d0b41ed4}}. Specifically, the generalization error is usually large when the training loss is small and vice versa. Importantly, the generalization error is upper bounded by the stability of the optimizer {{cite:05bc5f6e2f628ff5e2d8d49574c1b620e3f625fe}}. Roughly speaking, stability measures the sensitivity of an optimizer with respect to changes in the train data. This is an important concept because we do not want an optimizer that finds very different solutions when we only slightly change the train data. In this paper, we study adaptive optimization methods through the lens of stability and, as a consequence, draw some conclusions about their generalization.
| i | b0f14c07123a9279f713e17b43ef1b1d |
In recent years, end-to-end (E2E) modeling for automatic speech recognition (ASR) has been intensively studied and significant progress has been made (e.g. {{cite:7eed136a36642c045b8a79d8ace13f9134b3293f}}, {{cite:d294c5bf3e957bafe8d737ab79d039f6efad9502}}, {{cite:45234ea3cc17af29f6c3c96ae91c33c16bcbd7c3}}, {{cite:e268a20a90f3a40fa713a411a0be3de380a52872}}, {{cite:4d81f3dc81e75710d18a26c8631c2ed6a160d52b}}, {{cite:b834a0ccae2002ad60dcb627772087a5661e058f}}, {{cite:7ac42151fbfe1044a6452fd79a6ea7ff13068cd2}}). Broadly speaking, there are 3 different architectures under the E2E ASR category. Firstly, the connectionist temporal classification loss {{cite:0a462ad2c124b2cf11cf9404f7a61ac0160dcf94}} can be used to optimize the likelihood of word or wordpiece {{cite:9312b898e652024d9a93f0ba3d4c20f01cd38a14}} sequences (as compared to using phoneme sequences and finite state transducer in the traditional hybrid system, e.g., {{cite:6341e24e4603b83ce27110c974484a16c42d1e9c}}). However, the lack of language modeling in this architecture usually leads to a sub-optimal recognition accuracy; Secondly, the attention-based sequence-to-sequence (S2S) modeling {{cite:4753f8e5806798ffc7a72ba959be574db79f0055}}, {{cite:d4bda2616ca482b6ff50f126b96ccb031a4aec5d}} can be adopted for E2E ASR, e.g. {{cite:d294c5bf3e957bafe8d737ab79d039f6efad9502}}, {{cite:45234ea3cc17af29f6c3c96ae91c33c16bcbd7c3}}. However, this approach cannot naturally fit into the streaming requirements in many speech applications {{cite:5fa612ca2dfcfaea8cf383d675e9a92c4df2dc99}}; the third approach, based on neural transducer loss {{cite:1c5b4853b7f884e9308879beced9eb267beee4e3}}, namely Recurrent Neural Network Transducer (RNN-T), integrates language models in the E2E model and fits well with the streaming requirement, therefore it has been widely adopted {{cite:665def0d1ef67bf4be5e575dd42a91af02d077fa}}. Both RNN-T and CTC use a so-called “blank" symbol to deal with the fact that the decoder input sequence is usually much longer than its output sequence. Notably, all these 3 architectures are equipped with an acoustic encoder which converts acoustic signals to a sequence of acoustic embeddings with a fixed frame rate.
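As a hedged illustration of the ingredient shared by CTC and RNN-T noted above (the "blank" symbol that absorbs the length mismatch between encoder frames and label sequences), here is a toy CTC criterion in PyTorch; all sizes are made up:

```python
import torch
import torch.nn as nn

T, N, C, S = 50, 2, 20, 12   # frames, batch, symbols (index 0 = blank), label length
log_probs = torch.randn(T, N, C, requires_grad=True).log_softmax(dim=2)
targets = torch.randint(1, C, (N, S))             # label sequences, no blanks
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), S, dtype=torch.long)

# CTC marginalizes over all frame-level alignments containing blanks.
ctc = nn.CTCLoss(blank=0)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()
```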
| i | 3f4d4363c9e3beebef12154c531a03f8 |
Note that problem (REF ) differs from unconstrained distributed optimization {{cite:2db15859d38d392a49163086a4de7ab14adb8341}}, {{cite:44f2007042b4b3a44c40e29727cd737a716151e6}} due to the feasibility constraint {{formula:d14480dc-2655-44f9-b2ba-ffbab3a1285d}}, which is of dimension {{formula:f1d64685-95c7-457f-9b18-4611fb29178d}}. Some works consider inequality constraints {{formula:63680ff0-092a-4127-b4b9-05ed51b98b57}} {{cite:feeb9faa81dc0eefcf9c533fbcf1077b9bd1336b}}, {{cite:34e0ed920290db201c013cd2c73f128e3eda41a7}}, {{cite:7a210d87f71518a79c3ec7c3767b9073cf0a6b7c}}, which represent a half-space of dimension {{formula:47ac6963-f41d-40fa-86d2-d37c59577ab3}}, with an example application in network utility maximization, where the weighted sum of utilities {{formula:ac3133fc-a50b-42d2-841c-d206b92130b4}} should not exceed a certain value {{formula:7dcf6e3f-96ca-48d4-a483-b51de26d1db9}}. These problems may involve many such relaxed inequality constraints. In contrast, with one equality constraint, e.g., in EDP, the weighted sum of generated power {{formula:afbdcea5-349b-42c5-87d9-c6cd43395bfa}} should exactly meet the load demand {{formula:7e641bcd-bd98-42f1-bfe9-54352de5f95e}} at all times, i.e., {{formula:40195e9a-8537-4624-a697-6e46b2d17918}} {{cite:1727371a5f45c44503c19d80040647350e14ca31}}, {{cite:99161d00c9ea1589473bc38ce43d4bef8fa2aec2}}, {{cite:4eeb9c38b29e20a552798500777ecf67be6e7af2}}, {{cite:90e896a6ec3a0a70f3559642702a66f4cfe56838}}, {{cite:e9ed711727c55a1251516b8f673088a4c37cc9a3}}. With {{formula:b49745fe-8034-4006-bd40-da4630cad3f1}} equality constraints, the problem can be algebraically reduced to a cost optimization over {{formula:3c8db648-3658-4387-981a-2efc44e31660}} states subject to one feasibility constraint of dimension {{formula:64b6c827-85b6-42bb-a35c-d021bb24a254}}, where the other {{formula:a4035d00-4cf4-48f4-82a8-48387877116f}} states are dependent variables. For a comparison of different constraints and solutions, see {{cite:4675e1ef9be0150def12bc8fe09b0b40e39314ff}}.
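For intuition (a standalone sketch, not the distributed algorithms of the cited works, which enforce feasibility via local exchanges), the Euclidean projection onto such an equality constraint has a simple closed form:

```python
import numpy as np

def project_onto_hyperplane(x, w, d):
    """Project x onto {x : w^T x = d}, e.g., generation levels that must
    exactly meet the load demand."""
    return x - ((w @ x - d) / (w @ w)) * w

w = np.array([1.0, 1.0, 1.0])        # weights of the equality constraint
x = np.array([2.0, 3.0, 5.0])        # infeasible generation profile
x_feasible = project_onto_hyperplane(x, w, 9.0)
print(w @ x_feasible)                 # 9.0: demand met exactly
```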
| r | ee96dacc07119aff254eb157fdbf6be9 |
While preparing this manuscript, we became aware of a
recent preprint by Gao and Remsing {{cite:4f28897bec223c3ce5244c6a8c95442ccecdf3ac}}, in which the
electronic structure information encoded in the Wannier centers is used to construct an ML model of the PES including long-range electrostatics. This model, called self-consistent field neural network (SCFNN), differs from DPLR because the separation between short- and long-range
contributions is not done in the way in which the training data are handled, but in the way in which they are generated.
The model requires short-range DFT data obtained, in principle, from calculations with a truncated Coulomb potential.
A drawback of this formulation is that it introduces a self-consistency condition for the Wannier center positions.
As a consequence, the simplicity of the ML construction is lost, and the dynamics of the model is no longer conservative, unless
formulations such as those proposed by Car and Parrinello are introduced {{cite:515401055fb5b3ee476769a1fa20a2ef31111223}}. In practice, separating standard DFT data into data
for a truncated Coulomb potential and data for its long-range counterpart is not straightforward, and the authors overcome this difficulty by invoking linear response conditions.
| i | fd03bb7aa64c0787b5f6895e11f5f989 |
The 22 {{formula:641e43c3-4793-47a1-960c-6871a53a43b2}} m flux density yielded a SFR that was {{formula:441f897a-4f49-401c-afdc-99127ed14318}} 3{{formula:a201dded-a023-42cf-b1de-e18f0c080b45}} higher than the H30{{formula:40bcda56-4759-4296-817b-57f76899d9ac}} SFR and is also significantly higher than the SFRs calculated using most other methods. The aberrant SFR from the mid-infrared data is a consequence of the low metallicity of NGC 5253. As stated in the previous section, the conversions from flux densities in individual infrared bands to SFRs are based upon the key assumptions that the total infrared flux originates from light absorbed from star forming regions and that the individual bands will scale linearly with the total infrared flux. When the second condition is not met, the SFRs from individual infrared bands will be inaccurate. This problem had been anticipated for low metallicity galaxies like NGC 5253 {{cite:4da8683ecf21bc0a95173a9e452da3551f8fdf51}}. Low metallicity galaxies contain less interstellar dust, so the light from star forming regions is not attenuated as much as it is in larger galaxies. As a result, the dust that is present is irradiated by a relatively hard and strong radiation field, which makes the dust warmer than in spiral galaxies {{cite:22bafd4a4f9c20365ff56ce01d9f10592287dd18}}, {{cite:ce44c6089453d1ff2b029fcfb3272bbde3ea44fb}}, {{cite:b4f1742a4da252946034f68dd24ebd5888583ffc}}, {{cite:664cd385475ab565ac4b346bbc0e42d4305701c4}}, {{cite:92faa7455b25ea0dc5422c5d7340bc68431b549a}}. The resulting change in the shape of the infrared SED results in biasing the SFR from 22 {{formula:96f5d385-05ce-4832-b67a-345d818d006c}} m data upwards.
| d | eb411b3d045a3e829bdcd139bb7e3da5 |
To overcome such computational limitations, a new type of computation technology known as the Ising machine was developed. In 2011, the first commercial quantum annealing machine was presented{{cite:3808b0afe6382ac4a337ea72ffea9f75e045c1be}}. The hardware of existing quantum annealing machines has been developed based on the theories of quantum annealing{{cite:490283566984b75345ce51e3a597ad1f40e6c0ad}} and adiabatic quantum computation {{cite:970afe597284f293111a9d12311502dd9712aae5}}, {{cite:0a78e5d6cf09637054d4fd29b8baeb5c6fc6266d}}. Ising machines are inspired not only by quantum annealing but also other principles that have been developed since the emergence of the first commercial quantum annealer {{cite:8e2612e0e92d6ee700870bc3bb5867c742ad7e23}}, {{cite:bcfd0f657a72dfdf3f52b5f8adc53067e3eaa6d6}}, {{cite:fa051014599c1e91a369f52d6e43aadbb9eb0208}}, {{cite:48a80b2645fffacb85c18c0bb0286de21b5c53bb}}, {{cite:b1c70565c5547d41fbd29269a5772cb691bedac8}}, {{cite:f0552eafcf70bf7a18a2c924496356d02831eb4f}}. A number of studies utilizing Ising machines have been conducted in various fields: portfolio optimization{{cite:214205d5498093ccef5fa511da4d29240ef5c7ab}}, traffic optimization {{cite:64357136aac8ab500edd58f714c0ac0548feadbe}}, rectangle packing optimization{{cite:7a4cebae6fa8887267178209d41ca9e29d8a1b64}}, item listing optimization for e-commerce websites{{cite:6d11c9c116ace048977434c664ee08c92be380fc}}, and materials design{{cite:0fae2acf0ce029f01b206f00de8812d0aec0ad24}}.
| i | ea83bb09d57a7e61cf712cf2f9c170e4 |
Non-classical states of light, e.g. entangled photons, squeezed light are ubiquitous in optical quantum sensing and quantum communication and simulation. These states are conveniently generated at room temperature through parametric down conversion in materials with second order nonlinearity{{cite:9cab278be26051ce4d97240cdf3e920c81ba7dac}}. Resonant enhancement in optical cavities is used to increase the efficiency of these sources, which have been miniaturized in integrated photonic circuits. As silicon lacks second order optical nonlinearity, spontaneous Four-Wave-Mixing (FWM) is exploited as an alternative. Here, two photons from the pump decay spontaneously into a pair of photons under the constraint of energy conservation. If the interacting waves are all on resonance with the corresponding cavity modes, the spontaneous generation rate scales as {{formula:aa6f8abb-211c-47d6-8b2d-eddd60aecdee}} with {{formula:5f5a7e6a-241d-48d7-8460-34b46f529c2e}} the Kerr non-linear index, Q the quality factor, V the volume of the resonator and P the pump power {{cite:978e314dd831830f3581c6d64a56d0fef844dbef}}, {{cite:385ab36c04686c41fafbdfaf404d1a56f570c76e}}. Time-energy entangled photon pairs have been demonstrated on a silicon chip via FWM {{cite:5643168618b7caf9f0d4ef24d0f90878f8ec682c}} with a microring resonator. By optimizing the nonlinearity of the material and the Q factor, large efficiency can be achieved{{cite:23f756d925bebf4e0c521dcceaf5277177df9c5a}}
| i | c800b68b99874f22a42ed60c0ee6c8c7 |
The base models are trained from scratch for 600 epochs to ensure the convergence, which is longer than the usually adopted benchmarks (160 {{cite:33f1cf4c75aec695dfd8918a419a2ea779219750}} or 300 {{cite:8dc64929557422f9e9175d160e4aff318aa92ac7}} epochs), because we expect to perform pruning on a fully trained base model, such that the accuracy increase (in the case of VGG, Res56, Res110, Dense40) cannot be simply attributed to the training on a base model which has not fully converged. We use the data augmentation techniques adopted by He et al. {{cite:33f1cf4c75aec695dfd8918a419a2ea779219750}}, i.e., padding to {{formula:ace38ad3-d253-4d48-a3e3-1eb9410be0fb}} , random cropping and flipping. The hyper-parameter {{formula:ac7cd0ac-a0d1-4a1d-b81c-5291e242330d}} is casually set to {{formula:41eea2f5-c001-467d-b74e-bf7753628431}} . We perform C-SGD training for 600 epochs with batch size 64 and a learning rate initialized as {{formula:e3a9b0cf-184b-4db5-b5e9-b0c93b1101e4}} then multiplied by 0.1 when the loss stops decreasing.
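A sketch of the augmentation pipeline described above (assuming, as in He et al., 4-pixel zero padding of the 32x32 CIFAR images before a random 32x32 crop; the initial learning rate and the scheduler are our reading of "multiplied by 0.1 when the loss stops decreasing"):

```python
import torch
import torchvision.transforms as T

train_transform = T.Compose([
    T.Pad(4),                  # zero-pad the 32x32 image to 40x40
    T.RandomCrop(32),          # random 32x32 crop
    T.RandomHorizontalFlip(),  # random flipping
    T.ToTensor(),
])

model = torch.nn.Linear(3 * 32 * 32, 10)   # stand-in for the actual network
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.1)
# per epoch: scheduler.step(epoch_train_loss)
```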
| r | 97f43ba175a6da840131ae74d16f6e40 |
It is interesting to investigate {{formula:03235eab-01e3-44d2-a1d6-1b778b9d8545}} in the {{formula:28dfe270-a353-458e-9b5b-97f67f6427a9}} plane. Not only have many researchers recently paid more attention to measuring {{formula:e9dc221d-61bd-4f42-b42b-c27d503104ee}} in that plane {{cite:be574aff5a1afc6a2e8cc6bf202f09bb539b9baf}}, {{cite:6266e48d7517c5130a52ea41df2e895ad7d70e85}}, but many superconductors, such as the cuprates, have also been thought by many authors to exhibit {{formula:771302c1-90c1-4ed7-8d16-6d9d8969a2df}} OP symmetry {{cite:97f28dabb16d4fd031288dbc16cd49acb72a4fdd}}, {{cite:514d31a91b102008d60fef6ab09f312a546aa9d3}}. Here we present results for {{formula:386517b7-6bf4-4914-8bfe-1f68af31d406}} in the {{formula:c7ffbbfc-efee-497a-98c8-b8c444f61357}} plane, with and without the Zeeman interaction. It is intriguing to find that {{formula:fbd5f046-48e3-40c4-9d44-64081d841861}} including the Zeeman energy at different {{formula:241845e1-001c-470d-b46c-89c9770c43b7}} values does not always follow the commonly held belief that the maxima {{formula:69eddba8-c616-48ca-850b-978da9d211c3}} always occur along the antinodal directions of the OP. Instead, there is a {{formula:dd10057d-6407-4b34-a464-54b02c3b7c91}} azimuthal shift in these maxima if this slope were to persist to low {{formula:ed7ccfc5-3d1f-4699-8ae1-a90e4bd25653}}. That is, the maxima in {{formula:6513ecf9-824d-4747-a570-1fac9864fa24}} for a {{formula:86bdcfa3-1b9b-46f4-8f24-7e596ac997c5}}-wave superconductor just below {{formula:a0bbe775-c6c6-438a-857a-ecae28bbf206}} are indeed along the antinodal directions, but at low {{formula:ab087253-b4a1-4436-99b4-8624bbf9c955}}, the maxima are along the nodal directions.
| r | 2688ca8a780804e3a3bc4d3b0e9dcd41 |
Interest in unmanned aerial vehicles (UAVs) is growing rapidly for applications such as monitoring, surveying, precision agriculture, construction, remote sensing, or product delivery {{cite:acc32da7da402984acd515567bb2884a1504247d}}, {{cite:dde3a59c26da27eca7f8e692a3d90a01dc4eeaaa}}, {{cite:4a49686cc17c18adaa83f0c046bee0ed74d10e3b}}, {{cite:400cc548507648f21ed0b055a3a5410161c35c03}}, {{cite:6c59065f1bbecb6e57631eaa157259cbd19b554c}}.
Low-latency connectivity at high data rates is required for many of these applications, and the millimeter wave (mmWave) frequency range offers bandwidths that can be instrumental in meeting these requirements
{{cite:84b54a0f5f4cdd0b1b1b880cce1cebd6ab80a162}}, {{cite:bf44ada37bdd9d29e3e4aa616f30eec30cf005e9}}, {{cite:dd095591e3f189a345b4a8c4c78fa18f06270365}}, {{cite:d2b9789aeb7e2e23d939498d2586bfefc5362653}}, {{cite:9b784ba424e204915e921e50de47e47ad58a92b8}}.
Also, links to UAVs are often line-of-sight (LOS), which is desirable due to the limited diffraction at these frequencies {{cite:bf16cbff0dcd80fe0a81a4d5a157c63087adfae3}}, {{cite:3efd3f13fadf323cd7b93dd10ee7d056d6afa111}}.
| i | 9db960b8e79c351ca104dc00a0d9be23 |
A common assumption behind all the methods above is that the operator in VIs is Lipschitz continuous and monotone. However, modern nonconvex-nonconcave saddle-point optimization problems such as those appearing in deep learning go beyond monotone VIs, and hence the existing results fail to apply in non-monotone settings. This motivates a surge of interest in generalized VIs and their associated algorithms {{cite:8adff888c082a99d48bec5f4ef0ac6085aa6e1cf}}, {{cite:3c4198482eb1dc42fed2fb082fb2d5444fdf42c1}}, {{cite:d276c1c9d39f9058fe5903a1411c3491c6914ffe}}, {{cite:c0b23d8acaac6590997cc05b7a89a723ed89a030}}, {{cite:18c9a43af9e1e6e0a64fd813edf70cc6fee5854b}}, {{cite:8f823d4e49ad508986d7b4fea6a8b139ef3e332e}}, {{cite:5699ed892aef1caaee7f89f7bdb50fe010377eb4}}. We restrict our attention to the line of research that relaxes the Lipschitz continuity and monotonicity assumptions.
| i | 96279d08089425da4d25d87e4e3dbc67 |
Deep learning based top-k recommendation algorithms significantly improve the recommendation performance and become the mainstream research direction in recent years, especially the collaborative filtering based methods. These existing algorithms extract advanced semantic features and perform complex feature interactions by employing MLP{{cite:2eec2421d1011993b892ce57337c059daab3fba8}}, CNN{{cite:ca6545ac41f17feb5d78f595013768de85d836f0}}, RNN{{cite:3321ff1b2373555aefd97779d208eff26aeb3cdd}}, attention mechanism{{cite:a0b98d33175fc1b4e91b86fd31df78e52ffbff0c}}, {{cite:03f58d170c3eef5782f80bd78d2d75e7281317b0}}, etc. The user-item interaction is naturally viewed as a bipartite graph. Graph convolutional networks (GCN) based methods are increasingly integrated with recommendation systems, such as NGCF{{cite:0ec1d48670ec35dc3e1b51d2263bb3b3ae6bd4f1}}, LR-GCCF{{cite:4810695649272eb6ea33294dbbdcc1808bdf5aa0}}, LightGCN{{cite:0bd4e2f5f4058b2143195491bf82a0e3d9bca98a}}, DGCF{{cite:2e3b0b88a447f8a76c4888195a48ffe1cdc3b6fe}}. GCN based methods aggregate features of neighbors as well as higher-order neighbors to obtain better feature representations of users and items and the performance has been further improved.
| i | e288752e2368f39bee2796d25b4552b0 |
In Sections REF and REF we presented two new survival methods based on case-control sampling and neural networks: a proportional Cox method and a non-proportional Cox method, which we will refer to as Cox-MLP (CC) and Cox-Time respectively.
We will compare our methods to a classical linear Cox regression referred to as Classical Cox (Linear), DeepHit {{cite:e616711e0d38fdfab565947fc5bd0cb71606bda1}}, and Random Survival Forests (RSF) {{cite:f031fd75bb7d2bbca316715e51c135ef8b01c601}}.
We will also compare to a proportional Cox method similar to DeepSurv {{cite:e07a2980ab5dd8770a63ee065f0cc283e022779b}}, but our version performs batched SGD by computing the negative partial log-likelihood in (REF ) on a subset of the data set. Furthermore, we choose not to restrict the network structure and optimization scheme to that of {{cite:e07a2980ab5dd8770a63ee065f0cc283e022779b}}. Hence, this method is identical to our proportional Cox method in Section REF , except that it computes the negative partial log-likelihood of a batch, while we use case-control sampling in the loss function.
We will refer to these two methods as Cox-MLP (DeepSurv) and Cox-MLP (CC) respectively.
We do not compare with {{cite:269f6b88ca1c7308afcd5459f6708f2366e1a08d}} as their method is another proportional hazard method and is therefore restricted in all the same ways as our other proportional methods.
As we will show in Section REF , the proportional hazards assumption is very restrictive, and methods based on this assumption are therefore not able to compete with other methods such as DeepHit, RSF, and Cox-Time.
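As a minimal sketch of the batched negative partial log-likelihood used by the Cox-MLP (DeepSurv) variant above (our own implementation; ties in event times are ignored for simplicity):

```python
import torch

def cox_ph_loss(log_h, durations, events):
    """Negative Cox partial log-likelihood over a batch, which approximates
    the full risk sets; log_h are predicted log-hazards g(x_i), events is
    1 for observed events and 0 for censored observations."""
    order = torch.argsort(durations, descending=True)
    log_h, events = log_h[order], events[order]
    log_risk = torch.logcumsumexp(log_h, dim=0)   # log-sum over each risk set
    return -((log_h - log_risk) * events).sum() / events.sum()

log_h = torch.randn(8, requires_grad=True)
loss = cox_ph_loss(log_h, durations=torch.rand(8), events=torch.ones(8))
loss.backward()
```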
| m | 63db3022b85bfe552430222b334d7b58 |
In contrast, R-NDF s more accurately localize the task-relevant object parts and assign coordinate frames to these parts that are consistent with the demonstrations, leading to the highest success rates.
Consistent with {{cite:15ca5e13600de5cd8a521ac0505378723bdb4bfd}}, the performance gap between the “upright” and “arbitrary” pose settings is small, which can be attributed to the built-in equivariance of the features used in R-NDF .
| r | eea33e3c650dd80e4eb70b34b7578c5b |
There are a number of review papers devoted to the phenomenon of supersolidity
{{cite:d8cc10ab4516ad26fcec29e7927cd986c90f65b4}}, {{cite:12506eb05e95975a626b3ec0eb27099af26bf9f0}}, {{cite:439cd041c471806095b3b666e4f658a42f5198d2}}, {{cite:d5e7dd9bf6617e74f9e7e9d3a6e993a16614bd3a}}, {{cite:f87b228110f30fc3ded4cfe984deb7d9b29dfaac}}, {{cite:4b83c8ab855005c72f6992f11608bfa01adfdc2d}}, {{cite:610b727ecd3199b1c17d102060722abde4b8a174}}, {{cite:1e1937ee7b54c0ab8df4f1eb07315158b39d3cb6}}. In Ref. {{cite:d8cc10ab4516ad26fcec29e7927cd986c90f65b4}}, experiments searching for manifestations of supersolidity up to 1990 were reviewed. The review {{cite:12506eb05e95975a626b3ec0eb27099af26bf9f0}} describes the progress in theoretical and experimental studies achieved during the period from 2004 to 2013 and stimulated by the experiment {{cite:8b2e7b229e60da732b437dd12b361dfa8ca8e7b7}}. A review of the experiments on mass flow through solid {{formula:68435a70-df40-45a9-9e57-e1329418cb7e}} He was given in Ref. {{cite:439cd041c471806095b3b666e4f658a42f5198d2}}. The papers {{cite:d5e7dd9bf6617e74f9e7e9d3a6e993a16614bd3a}} and {{cite:f87b228110f30fc3ded4cfe984deb7d9b29dfaac}} are excellent colloquium-style reviews on the supersolid. Refs. {{cite:4b83c8ab855005c72f6992f11608bfa01adfdc2d}} and {{cite:610b727ecd3199b1c17d102060722abde4b8a174}} are written for a very broad audience of scientists and describe the main achievements and problems in the understanding of the phenomenon of supersolidity. A short historical review {{cite:1e1937ee7b54c0ab8df4f1eb07315158b39d3cb6}} presents an overall portrait and the basic ideas related to the problem of the supersolid.
| i | bde5bce09c2d81ac90c628da9fc6244e |
In recent years, the performance of automatic speech recognition (ASR) systems has seen dramatic improvements due to the application of deep learning and the use of large-scale datasets {{cite:f61608e4b831d8af86439669e92e894b5cc49770}}, {{cite:9533bb77fad8cc6585d9374b465672802ede1fbc}}, {{cite:3f5bff2840364a2a62c560a39eab9a10bcaa2ff8}}, {{cite:5b690f2ac1e6b401e36121ed293d9d7acb022b05}}, {{cite:648dcd37ba830819d9074d86df107a5cb5dd045e}}, {{cite:9832f032b609b6062db886995ad2f92225619a16}}, {{cite:91ca446f58252f63bfd80161ada8c7be41540c2e}}, {{cite:b52701aa24c94e8c368512a3e539e7328d6ba275}}, {{cite:e51070e20ba89b0c7504bb7ed9c6ff8a8f006197}}. In particular, there have been two strands of research pushing the state-of-the-art in this field, namely sequence-to-sequence models {{cite:ddff8e3c328fbbe012f15597864c3dc0a3382b35}}, {{cite:ac2afc28ac14ea1a325d6ad7402874f44708ad15}} and Connectionist Temporal Classification (CTC) based methods
{{cite:157695623b8f5d9676018e0045ba37e085ce1f5b}}, {{cite:be82468b2a02043bc5c07829ba6adb97f453fbde}}. This work focuses on the latter due to its simplicity and good performance.
| i | 3e7c101c3dc11a7f7545b1f9f16a6774 |
{{cite:5a9088a3c6cb34846b66c04ea8ae6ce62145928a}} compared the performance of the MBL, AML, and GLM models, as well as a logistic regression model and a recursive partitioning tree {{cite:54b19d1cb26e68f2b86b7bc305fc63985334a95f}}, on the task of predicting whether word-final obstruents in Dutch alternate with respect to their voicing. They observed similar performance across all models, with the best performance, surprisingly, for the only parameter-free model, AML. Their results suggest that the quantitative structure of morphological data sets may be straightforward to discover for any reasonably decent classifier.
| i | 89e665570f268ddd660f8d35137882f1 |
Now we will discuss the physical understanding
of charmonium suppression due to screening in the deconfined medium
produced in relativistic nucleus-nucleus collisions. This involves a competition
among the various time scales in an expanding plasma.
From Tables I and II we observe that the value of {{formula:6cf6b075-f7b2-4244-8c72-432babd92807}} is different for different charmonium states and varies from one EoS to another. If {{formula:2b486ef4-6e85-4542-86cd-88ba47dc7eaf}}, then there will be no suppression at all, i.e., the survival probability {{formula:76092a5c-ce8d-4d16-a271-8a278b26fdaf}} is equal to 1. With this physical understanding, we analyze our results for {{formula:033279ff-e977-43a8-990b-42876323bfc6}} as a function of the number of participants {{formula:0100e927-93fc-485d-87bf-3eee6b746b6f}} in an expanding QGP.
At RHIC energy, {{formula:4152833b-282d-4b17-a1d4-2e7de6c9d70a}} yields result from a balance between the annihilation of {{formula:eb504a70-11d2-4b99-b049-3e657c6cc944}}'s by hard, thermal gluons {{cite:4161dd046efbd3e12adbd576a1cfefea953be8fc}}, {{cite:ee73bda091bc26b4904c1d7896a5d00b134da29d}} along with colour screening {{cite:177bb95976b9b261fe3ee8ea9b8149c1372b74a8}}, {{cite:434216896f8fea28a54aa7f68a9f0b7e8c8c1076}}, and the enhancement due to coalescence of uncorrelated {{formula:ce061323-c02b-43d2-9413-323f2ba47446}} pairs {{cite:0add1420cd52523b76d0b659081d8db52c46425a}}, {{cite:4453e708a6f8894531b6083752e937a209b1644e}}, {{cite:95f223158fd1ee78709a092a5cb98fe5b463abdd}}, which are produced thermally in the deconfined medium.
A detailed investigation of the scaling properties of {{formula:d492190f-9629-4041-8010-b140a5a5ec49}} suppression as a function
of several centrality variables
would give valuable insights into the origin of the
observed effect {{cite:bb614ed393d657d4d9fd5ff57d73bf14eda9e09d}}.
However, recent CMS data do not show a fully confirmed indication
of {{formula:b246d322-732b-4ede-a8bf-5ad4620fb756}} enhancement except for the fact that
{{formula:de91dede-fc42-4bf4-b821-bf10682b702e}} of the data and the shape of the rapidity-dependent nuclear modification factor
{{formula:60d52134-6f3b-4e0f-a127-d9152ef2eabd}} {{cite:456bdd9bbee590567f235b2602fa2e5944f03458}}, {{cite:e1b666ffdfe6cd9269db8a8f9602fee594f3ff11}}, {{cite:8fa10d85a922afdb13ca35c45f82df1441e7a77a}}, {{cite:e5b1bf57cdee8afeb49a46a4ecd301ca331562c0}} show some characteristics of coalescence production.
| r | d970df2f371165606523327b93287eb6 |
I wanted to produce NFAs rather than NFA-{{formula:a584f823-ca56-4903-968a-dfe34479eaf8}}s. In large part this was due to my desire not to cover the notion of NFA-{{formula:d0eda7a4-2468-430e-8807-db6321b4c178}}. The only place this material is used in typical automata-theory textbooks is as a vehicle for converting regular expressions into finite automata. By giving a construction that avoids the use of {{formula:0e72d242-d4ef-431e-abc0-989141cba9ab}}-transitions, I could avoid covering NFA-{{formula:b609aefb-e9ec-4282-98a3-16702ba4466e}}s and devote the newly freed lecture time to other topics. Of course, this is only possible if the NFA-based construction does not require more time to describe than the introduction of NFA-{{formula:2f4c5f63-6692-4cc0-a642-eb9ae6bd6a39}}s and the NFA-{{formula:f06f6219-680b-4b9d-95e2-e6adc2040d06}} construction.
I wanted the construction to be one that students could apply during an exam to generate finite automata from regular expressions. The classical construction found in {{cite:e8b72999e3263030a4e0358743814d1c8ada404f}} and other books fails this test, in my opinion; while the inductive definitions are mathematically pleasing, they yield automata with too many states for students to be expected to apply them in a time-constrained setting.
Related to the preceding point, I wanted a technique that students could imagine being implemented and used in the numerous applications to which regular expressions are applied. In such a setting, fewer states is better than more states, all things considered.
| d | 20a751b22a93f81465c7b35913a5866a |
The extinction-corrected H{{formula:b7144bfc-484d-4fbc-81af-b587887b4da9}} fluxes calculated by {{cite:97ef6f675b6905bed0953e367937c938c8b4be3c}} using Pa{{formula:5bf7068a-19dd-41c6-80a3-03e850edeb90}} and Pa{{formula:0464588b-35fe-47ec-b437-47f88378286b}} line data yield SFRs that fall within 25% of the SFRs from the H30{{formula:7efd8c9a-db75-4d20-ad38-dbb768749124}} data. Given that the extinction corrections changed the H{{formula:a1c1415e-d463-4aca-a191-422194e7d0f4}} fluxes by {{formula:1843cee1-98cf-45cf-99e0-5efe06cf0aa7}} 20{{formula:40e5ec2f-2c31-4115-bc8d-b8ffd121ea24}} , that relatively complex dust geometries were used in calculating the corrections, and that the uncertainties in the extinction-corrected H{{formula:24475be7-4a4d-461c-ae3c-2110141febba}} fluxes are relatively high, this match is reasonably good. However, the fact that the SFR from the H30{{formula:35d57917-5531-4188-bb26-616019022a00}} is higher would indicate that the method of correcting the H{{formula:23c8c174-9ddb-4321-93b4-29d7c0a602d4}} flux could still be improved.
| d | 7267957b272e14f38b0afe6d45855bca |
In early 1983, Yosi Avron told me about the paper of Thouless et al. {{cite:8f2b6de3f1924a12e625cead495b758e9f8799d7}}, which gave a novel explanation of the quantum Hall effect, a subject that had fascinated Yosi. The striking aspect of that effect is that a resistance was quantized. In the TKNN approach (we quickly came up with that abbreviation, sometimes TKN{{formula:df40001b-6052-45a5-a1fc-0c24cf31b41f}}, especially TKNN integers, a name which has stuck), this arose because, using the Kubo formula, they found that the resistance (in a certain idealized situation) was given by an integral over a torus that turned out to be an integer (in suitable units).
| m | 26ca7334254a7f8c3b46f6c0066dec60 |
All the results reported in Section REF were obtained using their original front-end methods for global image description. SeqSLAM used the classical sum of absolute differences (SAD), and the others (Delta Descriptors, Baseline, and our approach) by default use the best NetVLAD model as we described in Section REF . In Table REF , we report additional supporting comparisons against single-frame vanilla NetVLAD (pairwise) and 6 other multi-frame filtering methods (ABLE {{cite:01fc2221db6ff3cd7ebadbd0781df36d3e254ebc}}, ISM {{cite:1cce87c7bad090edc4e44db3320ea668a5e8b2fc}}, OPR {{cite:ef207ba7aa88cc404439b370ce835768eab16540}}, VPR {{cite:a8524987327a955dae35be695b606dd9e760f0dd}}, HMM {{cite:c316a7df7e1cfbfe3b814c030f89377e6fd85187}}, SeqSLAM {{cite:cb67776724787410787d4933e44cb95c0fcd3f5d}}, MCN {{cite:257b6af377f31ffdce30fbfc5a31a6d97fb5fdae}}) that received the pairwise similarity matrix, obtained from NetVLAD {{cite:9c514bcf34b65eb145148a3e265ad7f05ac6bbc4}}, for sequence filtering according to {{cite:257b6af377f31ffdce30fbfc5a31a6d97fb5fdae}}. In addition to SeqSLAM, Delta Descriptors (DD) and Baseline (BL), in Table REF we present the full comparison of all these ten methods on the Gardens Point dataset. AUC results are calculated with the same localization tolerance used in {{cite:257b6af377f31ffdce30fbfc5a31a6d97fb5fdae}}. Our model outperforms all the others with 100% recall at 100% precision on all the required reference-query combinations of subsets: day-left (D-L), day-right (D-R) and night-right (N-R).
| m | fa56f8884abf01f6b459e3875d4185db |
Iridium-based materials with strong SOC host a variety of exotic quantum phases but also properties of interest for applications {{cite:4e2727f3f2b29f3da715988ae503e4d78a4ff3e8}}, {{cite:416d21b210284695158aa962db5a19a1a318c767}}, {{cite:3ceb45ddfb8b224b929bb4c38b5d26eca7e53eee}}, {{cite:680b6d78c4ad19aa84d1095eb3ade27d70949a03}}. In IrBiSe, for example, bulk electronic bands are split by a giant spin-orbit splitting of about 0.3 eV and are fully spin polarized {{cite:c8fb596c5faa3ed0f306886eab99c30da786ec7c}}. Electronic states in IrBiSe with three-dimensional (3D) chiral spin texture, with negative and positive chiralities along the crystallographic [111] direction, are of interest for spin sensor applications and could exhibit spin-triplet superconductivity upon doping {{cite:c8fb596c5faa3ed0f306886eab99c30da786ec7c}}.
| i | 203934d1922515e44011fdcb0460f404 |
Theorem 4.20 ({{cite:1452cebfdc4f0c400e24548496211a897e570cb9}})
Every {{formula:5c9aa328-55d1-4fbb-9a97-28e48f2a3e84}} -polynomial {{formula:46bfa7bc-413a-43b1-b93a-3b2fbcd2b120}} has constant term 1.
| r | c24fb177729627538bb5997b31c05e12 |
Since {{formula:11bf4af1-25ce-47e8-9c9b-1dc2ee9d8b07}} performs well on its own, we question the need for WAIC, as the AUROC using WAIC lies between that of {{formula:464e2d81-a199-4252-8e78-55297f58898a}} and {{formula:c68fcc40-17d2-430e-acdd-e7be5567871a}} in most of the results.
Moreover, different from the implementation of {{cite:ca04ff3c3b87043159626ce85ef5829a430df065}} with the VAE, we obtain the epistemic uncertainty through importance sampling of a single model instead of an ensemble, which we find to be sufficient.
| r | b86fa864c3d4515e1bfb7647f682478d |
Our findings show that some models are more effectively robust than others. Namely, zero-shot CLIP models have much higher accuracy on images with the lowest spuriosity than their performance on the images with the highest spuriosity would predict. After finetuning a linear head using ImageNet on top of the fixed CLIP image encoder, we see effective robustness drop significantly. We note that both these results were observed using independent distributional robustness measures in {{cite:4b6d1c618497869f8d4a2b9b0ab90fb4f64a950e}}. At the other extreme, adversarially trained ResNets have the lowest effective robustness, a result carefully studied in {{cite:647e37da2c262116c1f86079f4e820641a2342c0}}.
| r | 4261a5e693eda13e6b289b0f95901571 |
CoOp + GM applies the gradient matching method {{cite:f0d4ff94ca44e869112e4410ff9bc7f2cf0622eb}} to CoOp, i.e., we not only project {{formula:100925f5-036b-424c-83df-8e767e5e1d0a}} onto the direction perpendicular to {{formula:febf4fe7-665d-4ba1-b8e0-9238cd23fe7e}} as the updated gradient, but also project {{formula:319ebc87-5b80-47e6-97f4-15463aefbedf}} onto the direction perpendicular to {{formula:7441c2d2-10f7-454f-9f28-f671a956d60a}}, fine-tuning the model with the two updated gradients alternately.
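A small sketch of this projection step (illustrative; the function name is ours):

```python
import torch

def project_perpendicular(g, h):
    """Remove from g its component along h, i.e., project g onto the
    direction perpendicular to h."""
    return g - (torch.dot(g, h) / (h.norm() ** 2 + 1e-12)) * h

g_a = torch.tensor([1.0, 2.0, 3.0])
g_b = torch.tensor([0.5, -1.0, 2.0])
# Alternate updates with each gradient projected against the other.
upd_a = project_perpendicular(g_a, g_b)
upd_b = project_perpendicular(g_b, g_a)
print(torch.dot(upd_a, g_b).item())  # ~0: orthogonal to g_b
```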
| r | 28dfa56d78832cf68bac10baf447519f |
There is an immense literature on this subject's intrinsic difficulties and on attempts at regularising its apparent divergences (e.g., see {{cite:b5d97cf85cb8f389e493223f45855fade5c75384}}, {{cite:b159e2419128c6ba4f90d9cad46175b1afd6d129}}, {{cite:395365b788376ecc24357e51877af32280426697}}, {{cite:fc92109c07f24afd682834ff5b14c1df887e387f}}, {{cite:40248c184d6d07fb5aa63ea4e9141949eeb488ee}} vs {{cite:cf008f5a0c252b12e3822ac08fdd52b488d6d73c}}). While the BV-quantisation technique has advanced far from its sources {{cite:5db1eabe16e1e4726f7bbf0789079a031719c588}}, {{cite:5b22a35534074980ecfe26ee441721830334810f}}, it is still admitted that it lacks sound mathematical consistency ({{cite:e37ce1f967b62822411cd3e666d3222447fd57fe}} or {{cite:0f3d53fb7fc8bf81c4218501385a938c6fc05537}}). Calculus in this field is thus reduced to formal operations with expressions which are expected to render the theory's main objects and structures. Several ad hoc techniques for the cancellation of divergences, allowing one to push through calculations and obtain meaningful results, are adopted by repetition; we briefly review the plurality of such tricks in what follows.
| i | 594d1d72a2209c00b4633b4f2b3d1880 |
As the combined configuration prevents the average reward from dropping,
it is particularly relevant for real-life applications in which a Grover search (with its intrinsic overshooting drawback) is implemented.
An average reward that saturates at a high level without a subsequent drop can also be achieved by employing other algorithms such as the fixed-point algorithm {{cite:205976a2129916b435b867afa1eeb7335fa44111}}. However, these show a less favorable speed-up than Grover-like amplitude amplification, especially considering the limited size of our integrated processor.
| r | 86de992d349638fd8b746f584e0ce72a |
The consequences of {{formula:1b68ac99-dfc4-4657-869e-fad41f54e3ed}} and {{formula:ea8a1f1e-9216-47a2-ac04-4588e3914816}} at {{formula:8b52e85f-bc8f-4444-b30f-6b2df9ba5cd4}} for the contact interlayer tunneling term (independent of the spatial gradients of the atomic displacement, i.e., to zeroth order in {{formula:4a312370-e1e4-4dcd-819a-749f53204f75}} ) were worked out in Ref. {{cite:460184b223eb4e40ccafe3dbb78be2ecfd0e3689}}. There it was shown that, when combined with {{formula:7cbaa74c-98c6-4f33-ba0f-9c829aa4a91a}} , only two independent real parameters are allowed for the first shell of wavevectors {{formula:a812cc75-d78b-423e-8154-7a5dfbf2f7ec}} . Physically, these correspond to the interlayer tunneling through the AA region and the AB region, and they are the only interlayer tunneling terms kept in the Bistritzer-MacDonald model {{cite:554abd4276c5fd85ae5833315ff2516935205e99}}, {{cite:460184b223eb4e40ccafe3dbb78be2ecfd0e3689}}.
| d | bfbd5a25d811fa5393e83e24fa60ef04 |
Supplementary Material
Cost Sensitive Learning in the Presence of Symmetric Label Noise
Proofs
Proof of Theorem
Let the linear classifier be of the form {{formula:9062d7b8-6674-4b9f-95eb-0693f6fece14}} where {{formula:f9f6ed42-237d-48a9-9388-949f3747c6d0}} , and let {{formula:4d885ece-3a6b-4fce-b5de-a81b9f32dcfb}} be trained on clean data {{formula:5565feb6-bb08-417b-a616-d95303a652c5}} and {{formula:7c481ce5-c4d5-44b4-97d0-16065da06d19}} be trained on {{formula:0584c7b0-c91d-4d7b-a453-5a58302f22dd}} with noise rate {{formula:11a2ea2b-06e2-4db2-ac8d-869b10f53a98}} . For {{formula:363790ae-f399-46fc-84a8-ea35a1ad68ac}} , the weighted clean regularized risk is
{{formula:68e0f4c0-69ec-4730-a3a3-69985f9d63ce}}
Differentiating (REF ) w.r.t. {{formula:c511c194-f843-48e2-95dd-75a04793a61f}} and equating to 0, we get
{{formula:6f5972de-beeb-44a6-9934-fb33b7c633df}}
{{formula:fa54339e-b2d2-4e08-94e6-3d66fc91fbe7}}
where {{formula:bdf81280-1a02-43e7-bb78-38621bd3dc08}} is the {{formula:d3ac84a2-9428-421f-ad25-9ab03ae5afbf}} -dimensional identity matrix.
Now, consider the weighted corrupted regularized risk as:
{{formula:29599f06-7403-44e0-8d86-0873a3192e73}}
Differentiating (REF ) w.r.t. {{formula:493bb6cc-ca75-451b-899b-da70d45e765e}} , we get
{{formula:d10aabe7-2f7f-4aaf-a153-370b5f840e53}}
Therefore, we get the optimal noisy weighted classifier as
{{formula:32019c08-0a8d-4748-b815-1986d8fab7f9}}
Taking the transpose in the above equation and post-multiplying by {{formula:c729626f-93fe-4fc4-82a1-8d7381bd53da}} , we get {{formula:3a4f205b-feb5-48e4-8a32-5b773168b390}} and hence,
{{formula:fc18f086-3641-4490-b41d-cc03b5e418c6}}
where {{formula:d9cd1156-adb6-4bc3-82d7-0a029852a257}} and {{formula:c12a3e6e-8dfa-4255-8c9b-a0714727c325}} are the optimal linear classifiers learnt using {{formula:67dda97d-715c-4edf-859e-a067618f3b89}} on {{formula:003f21f1-8543-424f-b838-b8c0e1ef367a}} and {{formula:e1f405a7-fa55-48af-b7e0-bd3a10c82453}} respectively.
Since the noise rate {{formula:283c36e2-1071-489c-8d45-7da61dc412c4}} , {{formula:8c5ca7fd-ff65-4764-a807-4f2af2f90acc}} = {{formula:d46c4b62-3ac3-44ae-a58d-7a2de4313dea}} . This implies that (REF ) holds for {{formula:3d441f9d-bd45-4daf-b848-df180b19acf8}} .
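The closed-form relationship above is easy to check numerically. The sketch below is a quick illustration, not the paper's experiment: the class weighting is dropped and the data-generating choices are ours. Since flipping each label independently with rate rho scales the regression target in expectation by (1 - 2*rho), the regularized least-squares solution on corrupted labels comes out as approximately that multiple of the clean one.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, beta, rho = 50000, 5, 0.1, 0.2

X = rng.normal(size=(n, d))
y = np.sign(X @ rng.normal(size=d) + 0.1 * rng.normal(size=n))

def ridge(X, y, beta):
    """Closed-form minimizer of the regularized squared risk:
    w = (X^T X + beta*n*I)^{-1} X^T y, obtained by setting the gradient to zero."""
    n, d = X.shape
    return np.linalg.solve(X.T @ X + beta * n * np.eye(d), X.T @ y)

w_clean = ridge(X, y, beta)
y_noisy = np.where(rng.random(n) < rho, -y, y)  # symmetric label noise
w_noisy = ridge(X, y_noisy, beta)

print(w_noisy / w_clean)  # each coordinate ratio is close to 1 - 2*rho = 0.6
```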
Proof of Proposition REF
Consider the empirical noisy regularized risk {{formula:8093b94f-9f0f-4264-81ce-e95722fbad9a}} as follows:
{{formula:ce8190b0-d461-45ef-b9ee-2165e63c83db}}
Differentiating equation (REF ) w.r.t. {{formula:e73804a9-afc0-445b-954b-32c1a3a464f6}} and equating to 0, we get
{{formula:30b7ef5f-5e85-4a42-bcea-d94b212ee9a5}}
Differentiating equation (REF ) w.r.t. {{formula:e7329ecf-f77c-46b9-8a8e-322e92171d8a}} and equating to 0, we get
{{formula:ff9ae074-1488-4a76-b1de-3ffb69741b5f}}
Let {{formula:ae9ccb32-7c89-49fd-a25f-7bb5bd10fda7}} and {{formula:d186cfa7-99eb-4094-8157-c5986540142e}} . Writing equation (REF ) and (REF ) in matrix form, we get the following system of linear equations:
{{formula:a945b8b6-78dc-484b-b1f6-e585018c5ec9}}
where {{formula:c07f1f6e-4806-49b4-a11a-f1250988abff}} is a {{formula:6607f0dd-4706-4931-8d2a-c83db9fe5601}} -dimensional known symmetric matrix, {{formula:1af17960-44fe-4786-8041-d141fcecf731}} is a {{formula:2f783900-3f93-43a9-92dc-130b9d3e7a63}} -dimensional vector of variables, and {{formula:e2514b8f-9511-4aac-88a0-1e1d3f1a5c69}} is a {{formula:883d3526-7e9e-4f02-9d2b-49af793af5b4}} -dimensional known vector. The above linear system of equations can be written as {{formula:e594ff60-894b-4843-8b92-ed79a6da7501}} , and the cost-sensitive, {{formula:8a211f01-c1a7-4621-a0a0-1b80a1c23702}} -robust classifier {{formula:36db691e-90b1-4c37-b079-2a0d4b8bd6c8}} is as follows:
{{formula:f1290dc3-3f82-453a-a2e4-1105dee46f69}}
Proof of Lemma REF
Consider the difference between corrupted true risk and corrupted empirical risk, i.e.,
{{formula:5192c5c9-8ec1-4f8c-b4dd-8b9f40ffd2cd}}
Now, using the Rademacher bound ({{cite:82bb58514dafcc2450c02210e5dc0ef19c68c4bd}}) on the maximal deviation between risks and empirical risks over {{formula:b425c2c8-2257-4102-b1bc-33eada715ede}} , we get with probability at least {{formula:274c9fb2-4b1f-4a22-9bdb-2e3744f85ae6}}
{{formula:ed95d0c3-722b-436d-8040-1791bc27d5a2}}
where {{formula:9173a94b-98e7-4736-898f-7fa5f7d48f75}}
Now, using the Lipschitz composition property of the Rademacher averages, we have
{{formula:d634bf6c-3350-41d6-951e-5b2de443e180}} Hence, the inequality in the lemma statement follows.
Proof of Lemma REF
Consider the corrupted {{formula:228757f3-65ef-46fd-bff9-ca2951e1a895}} risk of a classifier {{formula:acd722b2-88aa-48fd-89c1-7178255afb9e}} as given in equation (REF ).
{{formula:438af4e9-6e11-4d37-9440-7445b31378a4}}
Proof of Theorem REF
Let {{formula:4c29d8fc-0d48-45ba-9279-264e60b2d430}} be the minimizer of {{formula:fd7ec1c1-144d-4c06-ba9f-06211283455d}} . Then, the excess {{formula:d13e6fe3-db4e-4da4-aa68-b004cc043e40}} risk of {{formula:a1fc66b5-ba33-43f7-98e3-f0c8a48245f7}} is
{{formula:44ed1996-a029-4281-98c2-e0293ecdea8f}}
The last inequality holds because {{formula:71d8b3a0-5645-49b8-bb7f-c84704402abc}} is the minimizer of {{formula:669f0671-3dd3-4785-97cb-517d3211979a}} .
Using Lemma REF , we get
{{formula:df91f908-9369-4b0f-9d7b-d12db3f976cb}}
The second-to-last inequality follows from the triangle inequality for absolute deviations. The last inequality follows from Lemma REF and equation (REF ). Next, we simplify the term involving the expectation
{{formula:fdf508c1-e2da-4397-a017-ff3fefaf40a7}}
Substituting it back in the upper bound we get,
{{formula:de9e3bc5-72a5-413d-af2d-0e52c168431a}}
Subtracting {{formula:665a729f-b889-4ae4-89eb-f9f9f1ea0b04}} from both sides of above inequality and invoking equation (REF ) we get the second result in the theorem statement.
Proof of Lemma
First, consider the case when {{formula:3ad2ca18-7a94-4501-b725-5ca4c112b3e1}} , with {{formula:0c74fdc4-667f-4bfa-8fae-c41c8f327b54}} given as follows:
{{formula:1e86199f-aad2-4ca4-aee6-4708e7f6cd89}}
The case of {{formula:72a39290-eeb9-4843-aec8-b18ae2d0a6e8}} can be proved along similar lines by reversing the inequality.
Since {{formula:59d23ab3-6fe5-49aa-97b5-3521e93df035}} has the same coefficients as those relating {{formula:2c08add3-9ddf-482b-8d20-151f923f770a}} and {{formula:66a0ae8b-833d-46da-82d3-1fc9f4aa1f1c}} , monotonicity w.r.t. {{formula:40932a54-d112-450f-bd1c-b14afc13952c}} holds for {{formula:3363a4dc-6530-473b-8f0a-0729e015132c}} too.
Details about various counter-examples
Details of the counter-example
Let {{formula:b84bad64-7ee7-4ed8-8f02-8248f585bbb8}} have a Bernoulli distribution with parameter {{formula:867a073f-e2ae-43ff-b19a-e3224a8a5ded}} . Let {{formula:d7a8f38b-eb89-4213-950d-df33c3d3091b}} be such that {{formula:9569ca5f-5174-4837-8bc2-8e449ac8731f}} and {{formula:afee8ed7-b704-4331-8b88-4b5023f1ccb5}} . Then, the in-class probability {{formula:cb4b5f8d-317c-41f5-a5ff-1fe051749f13}} is given as follows:
{{formula:62a138f0-b736-4771-89f5-ffd66ce0613d}}
Suppose the uniform noise rate is {{formula:b326e1c2-7182-4df7-b3b3-08934257b8ee}} . Then, {{formula:8da746a9-39c7-486d-8372-e0e2f5be4ab5}} . Now, if we are given that the positive class is more important, with misclassification costs {{formula:1752a6df-69fd-412a-a794-47bf7e7454b6}} , then the corresponding {{formula:e3bf573a-1a98-4b3d-b218-d017ef6bd3b0}} based optimal classifiers on {{formula:d0bf35ef-bd61-4eba-944f-594585194be6}} and {{formula:ed5ebeb9-d450-4763-aac7-7220a3aac3a8}} are as follows:
{{formula:76f18fe9-c60d-4951-9f15-49d113383ce2}}
Consider the {{formula:69ee9d7e-e54d-4cb1-894b-47af8446e959}} weighted 0-1 risks of the above classifiers, given below:
{{formula:93646188-cf02-4003-87ab-4b0fb7516b49}}
{{formula:6d97f170-69a2-478b-8d0c-dea28f6e9742}}
Therefore, {{formula:75ab2142-df06-4714-89b5-0535f38c6323}} , implying that the {{formula:10943a05-89f8-458d-8479-73e7dcaadb69}} weighted 0-1 loss function {{formula:4ebbf277-334c-43ca-bd32-9f5665552bb2}} is not uniform noise robust. Note that in this example {{formula:a089f481-5fd6-4b44-8b1c-ba0996d4258a}} is linearly separable because of {{formula:0b53fe21-63b2-46d7-b17d-e120bfcb02f6}} . The counter-example suggests that even in the easiest case, when the clean distribution is linearly separable, {{formula:08164bfd-87f1-4ddc-9f2b-867031a9e70a}} is not uniform noise tolerant. If {{formula:3a451b1b-a6c8-4dcf-8b81-1c660c665a11}} , {{formula:10992c81-edde-40e9-ae89-844bed128f9c}} will be linearly inseparable, and by changing the value of {{formula:ca07e3b9-f6e5-44ae-aa34-a684e9b8fbc6}} another counter-example can be generated.
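The structure of this counter-example can also be verified numerically. The sketch below uses illustrative parameters of our own choosing (the paper's exact values sit in the formulas above): Bernoulli labels, unit-variance Gaussian class conditionals, and the alpha-weighted Bayes rule, which thresholds the posterior at 1 - alpha. The clean and noise-corrupted optimal classifiers disagree, and so do their clean weighted risks.

```python
import numpy as np
from scipy.stats import norm

# Illustrative parameters (ours, not the paper's).
p, mu1, mu0, alpha, rho = 0.5, 1.0, -1.0, 0.75, 0.3

def eta(x):            # clean in-class probability P(Y=1 | X=x)
    a = p * norm.pdf(x, mu1, 1.0)
    return a / (a + (1 - p) * norm.pdf(x, mu0, 1.0))

def eta_noisy(x):      # corrupted posterior under symmetric label flips
    return rho + (1 - 2 * rho) * eta(x)

# The alpha-weighted Bayes rule thresholds the posterior at 1 - alpha.
xs = np.linspace(-8.0, 8.0, 160001)
f_clean = np.where(eta(xs) > 1 - alpha, 1.0, -1.0)
f_noisy = np.where(eta_noisy(xs) > 1 - alpha, 1.0, -1.0)

def weighted_risk(f):  # clean alpha-weighted 0-1 risk, by numerical integration
    dx = xs[1] - xs[0]
    fn = alpha * p * norm.pdf(xs, mu1, 1.0) * (f < 0)              # false negatives
    fp = (1 - alpha) * (1 - p) * norm.pdf(xs, mu0, 1.0) * (f > 0)  # false positives
    return np.sum((fn + fp) * dx)

print(weighted_risk(f_clean), weighted_risk(f_noisy))  # the two risks differ
```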
Details of counter-example
Consider the training set {{formula:1ffd6b94-6ef1-4b2d-aab8-41d3fe971f02}} . Let the probability distribution on the feature space be uniformly concentrated on the training dataset. Let the linear classifier be of the form {{formula:e975559d-66e9-4674-a591-36615d8b8d6d}} . Let {{formula:64d37b06-3dad-4cf7-bc1a-e2b63b06f559}} and let the uniform noise rate be {{formula:ba2df6a2-dfff-45ac-9786-d5d49a3208bb}} . Then,
{{formula:6c7025ce-6d54-4163-b49b-e80bf912070f}}
Therefore, {{formula:f2e95dfb-edcd-41b0-ac2c-9095cebf8229}} Also,
{{formula:348a4f6e-d723-4cc7-89eb-b2cabb1a2000}}
Therefore, {{formula:068685ac-dd8d-4e9b-8183-ac2b0920e477}} with {{formula:a7b53029-5938-4327-8145-92e12ca601d7}} . Hence, {{formula:a27c5a24-33fd-4b37-ab50-ead11b5b34bd}} , implying that the weighted 0-1 loss is not uniform noise robust even with a classifier in {{formula:ff38a608-098e-434f-bed5-161dd1bc747f}} .
Details of counter-example
Consider the minimizer of {{formula:a09264e2-15f2-4045-89fa-03b447d92abc}} , the clean {{formula:f9ddfc18-e79e-4045-ae39-a37e735b8287}} -risk of {{formula:f07e3e41-689a-49ab-a06b-f3da4162a9e0}} given below:
{{formula:a534d5c3-2fcf-4509-802f-788a0af1d419}}
Given {{formula:947c672c-8190-4c0d-88fc-a05c5fa381b7}} , the optimal decision would be {{formula:f691b2bf-d526-4b11-b0fe-db90d49dce9a}} since the denominator in the RHS of equation (REF ) is always positive.
In similar lines as above, one can show that the minimizer of {{formula:c8c206e6-7640-40bf-8f4d-d767a29ba0c1}} is as follows:
{{formula:9366cfd7-e4cc-4e58-9e2e-8a13f7367a7e}}
We show, using the following counter-example, that the clean weighted risks of {{formula:0cb59cb0-a51b-46fe-8548-9feb34873142}} and {{formula:cdeb80cf-2919-42ec-9cc1-23ced2f1147f}} need not be equal.
Consider the settings as in Example . Let {{formula:1bd98e09-e9f9-4b81-b328-18f5c161f6e3}} and {{formula:b158f956-2070-43f2-b5a6-ec7e8a528e7f}} . Then, from equation (REF ) and (REF ), we get
{{formula:e3389979-3de5-4e5d-be7a-15370205ae83}}
Now, we need to check the clean {{formula:05632897-b6ce-4a92-89ed-bff471aa107b}} risk of the above two classifiers.
This can be computed in the same way as in Example . Therefore, we get
{{formula:922818b7-858f-42ad-b2bf-d35254d67259}}
Hence, {{formula:eecbf57f-1f54-4653-aa03-a07d1904c9c8}}
implying that the classifiers {{formula:6849f339-2d19-4554-9beb-73364957aa4f}} learnt from {{formula:7593a81c-dc3d-42c8-a01f-baca701dd700}} based ERM may not be both cost-sensitive and uniform noise robust.
Another counter-example showing that the weighted 0-1 loss {{formula:e192da7f-cd87-4035-86f6-7cd7070275cd}} is not cost-sensitive uniform noise robust
The following example shows that cost-sensitive risk minimization under {{formula:70381af9-b90c-4266-b34e-d7f4f63b0716}} is not uniform noise tolerant.
Let {{formula:85eb2f3e-28a1-4525-8d08-f036a9c4e307}} have a Bernoulli distribution with parameter {{formula:24c8134c-cf1c-4724-af55-9c154c9cf415}} . Let {{formula:8c11c1e3-08fa-4994-8290-18577adbd509}} be such that {{formula:aa0971cf-8c30-4cbc-bd3d-1031e5398be6}} and {{formula:604fccf9-ab1d-4661-93de-6875656c74ca}} . Then, using Lemma , the in-class probability {{formula:a9b99112-45f6-4ac8-99be-d7bcf3802126}} is given as follows:
{{formula:6875d688-0b9e-4dfd-a54f-45b47c3671e4}}
Suppose we are given that the negative class is more important, with misclassification cost {{formula:46bfe55f-93f8-48fe-9755-7c9b9ca1052a}} . Also, let the uniform noise rate used to corrupt the data be {{formula:c08d98fe-6832-4959-8284-f201b281552e}} . Now, using equations (REF ) and (REF ), the thresholds corresponding to {{formula:179dde23-52d5-4939-8116-4560bcd3d893}} are {{formula:0aa60c5a-bb55-4ba4-8bfd-883ffdea7ee9}} and {{formula:eec9079d-7f28-4605-b29a-7eb31c39fd56}} . Then, the optimal classifiers are as follows:
{{formula:3b44c30c-f470-4d6b-9292-d85f83551a3c}}
{{formula:5795a7a4-899a-4f97-9719-3a3d30d2167c}}
The last equality is obtained by substituting the value of {{formula:6bbf4e9e-cd2a-4e3e-8b23-efb6490aa241}} in (REF ) and solving for {{formula:6c100f1e-b241-43e5-a51f-fe53f83b9ccd}} . Now we compute the weighted 0-1 risks of these classifiers and show that they are different.
{{formula:219945e2-254c-43fc-9f75-74615ae6a36e}}
{{formula:ced4ab4b-dd3f-41ba-9afb-2f9699672bc2}}
Therefore, {{formula:6d16f88d-c416-4639-b7e5-04333541a4b8}} , implying that risk minimization under the weighted 0-1 loss {{formula:b7757a6c-9e76-446b-88dc-c2f86de06a46}} is not uniform noise robust.
Explicit form of {{formula:71091f40-b34f-469c-9425-858ef731e809}} when the conditional distributions are normally distributed
Here, we present a result which provides the explicit form of the in-class probability {{formula:ecdf885b-8aa9-4b00-87fb-08c0b9483334}} when the conditional distributions {{formula:671af28f-0320-49c1-bd8a-2216b23714f5}} are normally distributed.
Let Y have a Bernoulli distribution with parameter {{formula:9ad7c985-74f6-4821-9a7c-35c2f09ae135}} . Let {{formula:253d8f5a-5bf2-47f9-b49a-aba3933b3db3}} be such that {{formula:1cd3f9d6-b4f2-44e1-a43d-f736bdf287d3}} and {{formula:a2c7bdc9-45ab-4ee8-9867-e27b4ec9d205}} . Then, the in-class probability {{formula:1acf8b87-9362-4993-84c7-73685bbf9c16}} is given as follows:
{{formula:af0bfeb6-4db6-400e-875f-b9599daf8aaa}}
where {{formula:814aea9f-6dc9-43df-891d-b4eca39dceae}} and {{formula:1621e0d7-4e46-4b94-a230-a79cd8dcde17}} are the inverses of {{formula:d7edad7e-c89a-4087-a130-0256422c6d0d}} and {{formula:57713152-19d2-4237-9b55-4c921cd8b463}} respectively.
Consider the in-class probability {{formula:cfe87b26-ed72-45d3-9918-6294738be9b8}} given below:
{{formula:9c2ba3a9-a166-42df-a6da-855916484cdd}}
The second-to-last equality uses the fact that {{formula:9c3d59ea-5840-4042-8bc5-72c38c7b0015}} and that {{formula:e4b3164e-e011-46fe-9ff9-d4230f24dd84}} is normally distributed. Now, expanding the product terms in the exponent of the last equality and collecting the common terms, we get equation (REF ).
This result was used in the synthetic data experiments to get the Bayes classifiers using {{formula:d2008690-5ba0-4c19-adde-3159fd0cdc82}} .
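As a quick illustration of this result, the posterior can be evaluated directly via Bayes' rule; the class prior, means and covariances below are placeholders, not the values used in our experiments.

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

p = 0.4                                            # assumed P(Y = 1)
mu1, mu0 = np.array([1.0, 1.0]), np.array([-1.0, -1.0])
S1, S0 = np.eye(2), 2.0 * np.eye(2)

def eta(x):
    """In-class probability P(Y=1 | X=x) for Gaussian class conditionals."""
    a = p * mvn.pdf(x, mean=mu1, cov=S1)
    b = (1 - p) * mvn.pdf(x, mean=mu0, cov=S0)
    return a / (a + b)

print(eta(np.array([0.5, 0.5])))
```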
Experiments demonstrating the roles of {{formula:9597b37e-5d21-4b48-bd40-73a325973536}} and {{formula:14f6a622-7d42-4cfd-86fd-88be5ec048dd}} in cost-sensitive learning on clean data
In this section, our goal is to understand the roles of {{formula:9c94d4dd-b4d8-4919-8116-1991bf11003a}} and {{formula:71983979-4d49-4d36-b11a-790ba58c0763}} in {{formula:a06e2323-61fe-41b2-b262-11f1bd8ad98b}} . Specifically, we work with the {{formula:64fb7307-9dce-4aaa-a705-f53a7d1acc19}} -weighted uneven margin squared loss defined below:
{{formula:e8cddc85-bd56-40cd-af94-287b83f5df01}}
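For reference, one common parameterization of a weighted uneven-margin squared loss is sketched below. The exact functional form used in the paper is in the formula above, so the form here (weight alpha on the positive class, margin parameter gamma on the negative class) is an assumption for illustration only.

```python
def weighted_uneven_margin_sq_loss(f_x, y, alpha=0.5, gamma=1.0):
    """Assumed form: alpha*(1 - f(x))^2 on positives and
    (1 - alpha)*(1/gamma)*(1 + gamma*f(x))^2 on negatives,
    where gamma sets an uneven margin on the negative class."""
    if y == 1:
        return alpha * (1.0 - f_x) ** 2
    return (1.0 - alpha) * (1.0 / gamma) * (1.0 + gamma * f_x) ** 2
```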
We consider 3 performance measures, viz., Accuracy (Acc), Arithmetic mean (AM) and F measure.
Every dataset is partitioned into training and test sets with an 80-20 split. Further, for tuning parameters like {{formula:55328e2f-ebc7-4068-a6d6-4d347d12d216}} , we split the training data 65-15 and use {{formula:c89b3398-9b91-4ddd-bc22-bc4634a67da1}} as a validation set.
To account for the randomness in partitioning the data, this process is repeated 10 times. The reported values are performance measures averaged over the 10 trials, along with their standard deviation. To choose the value of the regularization parameter {{formula:4b0927e8-a592-4bf1-9704-04955e97cdd1}} , we perform cross-validation (CV) over the set {{formula:33cf7b38-4066-42a8-86ed-fe753fc25a57}} . Next, we separately provide examples for the cases where there is a need for differential costing of misclassifications.
Data has only class imbalance and no user given cost
This is the case when there is only imbalance in the dataset and no explicit costing requirement from the domain. In the literature, researchers use only {{formula:38d2a39e-c4a5-4f05-b394-e4e9aad7aa6c}} to account for the imbalance, i.e., they use the surrogate loss function {{formula:1ed2dc70-9f26-4d41-984a-e0fde81011f0}} given below:
{{formula:7706431b-5fb3-42e9-bff7-18e3ca349c4e}}
where {{formula:efff70b1-a757-4190-bb49-1cbbb156752c}} is tuned. However, as stated in {{cite:b312dd8a6aef136eda43760b8fd7e02faf740a07}}, uneven margins {{formula:dbf231e9-b8c7-4eb0-a460-fa659937bc0b}} have been found to yield improved empirical performance in classification problems involving imbalanced data. With this motivation, we demonstrate the performance of the cost-sensitive linear classifier from {{formula:ccbcafcf-d8ec-4951-a28b-f6d89bf19075}} based regularized ERM on two synthetic datasets, across the different levels of imbalance given in the set {{formula:288943a8-1092-422c-972d-b1bdc36fe2a8}} , with the 3 approaches given below:
Ap1 : {{formula:ada0e28e-bff2-47a2-9d9f-b35d8d5e688f}} is tuned and {{formula:604c8949-f90b-4cdc-bd87-deb105950918}}
Ap2 : {{formula:fceb1c16-d282-4ef5-b015-a2eab1faaa08}} is tuned and {{formula:c7e8180c-70f8-4188-a1d9-93d4d6d3272e}}
Ap3 : {{formula:9fa8cc93-97d9-4c7d-a6a5-dd75970e0c69}} and {{formula:e4eedf5c-89c3-4c9a-97ab-9d523869be11}} both are tuned
Results on the following two synthetic datasets w.r.t. Accuracy, AM and F measure are presented in Tables REF , REF and REF respectively.
Syn-dataset1
We first generate 1000 binary class labels {{formula:dd73e8b7-ca96-4981-8d39-a776f61bb67d}} from a Bernoulli distribution with parameter {{formula:1ea8ca15-b885-45e9-9579-7fd1b35e0dd8}} , where {{formula:a4cfb6c9-fcef-4aac-949b-2fa131f8f24e}} . Then a 2-dimensional feature vector {{formula:03c9f39c-77a9-475a-ba83-b294eb1b791d}} for each label is drawn from two different Gaussian distributions: {{formula:b05140f4-6cdb-4428-b03c-9bc663dec8ae}} {{formula:518995c2-497a-424a-b317-225b410fda3b}} {{formula:deaf59f5-e748-4f30-913e-1f7bc60b39c0}} where {{formula:1a3fea91-7bfc-46a3-838b-7c2883cc4d75}} .
Syn-dataset2
We first generate 1000 binary class labels {{formula:639b04b4-4902-4454-8213-f3598527c1be}} from a Bernoulli distribution with parameter {{formula:db0d7e43-b9aa-4daa-b1a7-b913e4407e82}} , where {{formula:47e76f28-a982-4856-ad75-319223948d26}} . Then, a 2-dimensional feature vector {{formula:c6276fc4-73dc-4d06-a1a1-85649c59ea86}} for each label is drawn from two different Gaussian distributions: {{formula:788388d6-58d1-4dfe-85fb-08e8b41459b4}} {{formula:28abac6d-0b18-4b27-b4f0-0d49f7af3fc7}} {{formula:1ce734ba-19c9-453f-869d-ded03baf45e7}} where {{formula:13cd718a-55c9-4c2e-b537-411af9606f92}} .
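Both datasets follow the same recipe, which can be sketched as below; the means, noise scale and class prior here are placeholders to be set per dataset, not the paper's values.

```python
import numpy as np

def make_dataset(n=1000, p=0.5, mu1=(1.0, 1.0), mu0=(-1.0, -1.0), sigma=1.0, seed=0):
    """Bernoulli(p) labels in {+1, -1}; X | Y=+1 ~ N(mu1, sigma^2 I) and
    X | Y=-1 ~ N(mu0, sigma^2 I). Parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    y = np.where(rng.random(n) < p, 1, -1)
    means = np.where((y == 1)[:, None], mu1, mu0)       # class-dependent mean
    X = means + sigma * rng.normal(size=(n, len(mu1)))  # Gaussian features
    return X, y

X, y = make_dataset(p=0.3)   # e.g., an imbalanced instance
```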
{{table:e71f3883-7762-403a-9432-c933b3619aa0}} {{table:24a75193-2a75-4f7d-b9d3-91cdc6acf47a}} {{table:7649aade-f9f4-4039-8e91-83a245677bfa}}
Observations
In Table REF , we observe that when there is only imbalance in the data and no user-given cost, all three approaches, i.e., Ap1, Ap2 and Ap3, have almost equal accuracy on both of the synthetic datasets considered. When the AM measure is used for evaluation, one can observe in Table REF that Ap2 has marginally better performance for some values of {{formula:46d73647-accf-4211-a809-762b7604ec36}} . Again, with the F measure, deciding on one approach to handle only class imbalance is not easy. We believe that one can consider either tuning {{formula:e7b964f8-ab1a-4533-90b6-4cde7afc8c4d}} with {{formula:8dbb749b-e140-4aa3-8ece-f9b60c3ee431}} or tuning {{formula:24c325fd-3938-4e33-98ff-06fb99e97147}} with {{formula:9cc5be69-9608-4fe2-b31e-1b09aa941c60}} . Also, as can be seen in Table REF , the Accuracy values exhibit convexity w.r.t. the imbalance {{formula:a4caa29d-6239-4a69-b190-31cba1de7d21}} . This provides evidence that Accuracy is not the right measure when there is imbalance in the data, since imbalance alone leads to high Accuracy values.
Data has no class imbalance but there is a user given cost
This is the case where there is a user-given cost {{formula:a587d968-33f2-45ad-84fa-2d07e768fb4a}} and {{formula:aa208def-7c38-45d1-86bf-60b838b3e7f2}} can be set to 1. However, using the Bupa dataset {{cite:e2c281076b3fa50d08e0c771be11f631ee9f747f}}, we show that tuning {{formula:6898d0b1-da04-4445-8965-1f37ccd86581}} can improve the performance of the classifier in this case as well.
{{table:a97b32c4-9190-4654-82a2-bf5ce2698e96}}
Data has class imbalance and there is a user given cost too
This is the scenario where there is a domain requirement of differential costing and the available dataset is imbalanced too. In this case, we fix the value of {{formula:d6a9885b-bbfc-42d9-bee5-911b81fd9b2c}} and let the data decide the value of {{formula:a1784297-b851-4d5c-b2f8-10d8b1ff0947}} . Here, we present the performance of linear classifiers learnt from {{formula:8f13f4b3-b0c1-43b9-843d-27f58711a335}} based regularized ERM on Syn-dataset 2. The results are presented across the different levels of imbalance from the set {{formula:f77d5f06-5245-4d92-8521-9a3f21bf1939}} , with the corresponding cost set as {{formula:399c8b54-e47b-42c6-85cc-c0b364cdeac4}} , i.e., {{formula:bc6b8e82-1b66-489d-ac41-73969680b43f}} .
{{table:5f6be0a6-8917-44d3-ade7-516296a8b693}}
Observations
In this case, the user-given cost is {{formula:346c4c37-5302-4c67-bf4f-f65892b67bae}} and the value of {{formula:ec631765-749c-45ba-96e5-ba39cf99adfd}} is tuned to account for the class imbalance in the data. We observe that the values of {{formula:6ed9e3b3-0e20-4e25-925b-2daef3f6a514}} picked when the evaluation measure is Accuracy are almost complementary to the values of {{formula:09718a80-7972-4d46-a74a-4687450e42c8}} picked when the evaluation measure is AM. Specifically, with the Accuracy measure, for {{formula:253198fd-7f3b-4832-9240-d11891a6de5d}} the {{formula:a88a3fd1-9f05-4e16-955c-dccfc79d0240}} value is less than 1, and for {{formula:109aaa83-5627-4d9f-ae8f-d3af747ed0e6}} it is more than 1. However, with the AM measure, for {{formula:114f4d29-b24a-4b8b-b19c-ca5bd32cf60d}} the {{formula:ad745944-14d8-466f-80ec-c86539833306}} value is greater than 1, and for {{formula:3e4a6cde-474d-444d-8c31-f55c1f7a1108}} it is less than 1. Based on the functional form of {{formula:bf94b922-edc5-4066-85be-9b563471f650}} , the correct choice would be the value of {{formula:564271a8-3ffb-4372-82e7-36717744a845}} given by the AM measure. The F measure based evaluation leads to {{formula:b49adcdb-cb1e-4311-875b-b06049d61a1e}} values lying somewhere between those obtained from the Accuracy and AM measures. For {{formula:602c79c6-c5cf-42e0-a555-cc4fd00325fa}} , there is a mix of {{formula:a772d949-be0a-467f-abb3-da27f00a7475}} values less than 1 and more than 1, but for {{formula:94cee8d1-b97f-458c-9def-2db20b413b0a}} , all values of {{formula:4b24bd9c-505d-4527-a931-1f05ca87c317}} are more than 1.
Additional experimental results when there is a need for differential costing and data has uniform label noise
F and Weighted Cost measure of {{formula:b19ceb9d-ef63-4cc7-bae7-f46a60b9edc4}} -regularized ERM based classifiers and Re-sampling based algorithm on UCI datasets
In this section, we present the performance of and {{formula:5cd5eea8-51d4-4454-b57a-13fd07fc87a7}} based regularized ERM on UCI datasets w.r.t. the F measure and the Weighted Cost measure.
From Table REF , we conclude that for the Breast Cancer and German datasets, performs better among the two proposed schemes, and for the other two datasets {{formula:c8ea6922-62c5-4d1a-ba1f-4e8c4e5907be}} based RegERM is good. With the WC criterion in Table REF , does well on all datasets.
{{table:c0d9d7db-d426-4192-a4d1-4270b3791c28}}{{table:5621d9b8-27b3-4287-9558-55184a26c589}}
Results of on the Bupa dataset when {{formula:3ff4f222-a1f4-4aed-a3c2-7a222ad8c037}} is estimated using Lk-fun with squared loss
In this section, we obtained the classifiers from when {{formula:3e4f5f11-90c5-4356-8643-833e0f6b9ab5}} is estimated using Lk-fun with {{formula:12c5f6d7-6924-416c-9c3a-c16ff5a188c6}} on the Bupa dataset. We compared the cases when the classifiers are learnt by tuning {{formula:38fb5278-563f-4525-9f8c-3b4ac618881c}} and when {{formula:9e50fd32-0f0a-4442-8911-a9528fe59969}} . We observed that tuning {{formula:ccaaf323-9ca0-42d7-861f-1cf973597d5b}} is always better than fixing {{formula:1ef8f253-2d0b-43b6-bcb6-211b13abf066}} at 1, because all the classifiers show improved values of all performance measures with the former.
{{table:3017178e-b149-4ce8-a9f2-bc93337ad55f}}
Performance of {{formula:7e4afc2e-19ac-4a3f-ace8-ac4e39c752e1}} -regularized ERM based classifier and Re-sampling based algorithm on Synthetic dataset
{{formula:11d9f5ad-b030-496c-98b2-379d0294021c}} (adapted from {{cite:521538da98e5c385c093f391b3b60633912a3bfa}}): Generate 4000 binary class labels {{formula:1710ba59-b94d-4f60-91a6-bf707bfd0952}} . Then, a 3-dimensional feature vector {{formula:580d6196-438c-48f4-8071-009156ce0aa2}} for each label is drawn from two different Gaussian distributions: {{formula:6b65ce96-ff4b-4b41-98e0-9af678994e33}} {{formula:dcba53fc-a50c-4872-b021-060cac21b4db}} {{formula:f1f0dc0b-d0a4-4dd2-9864-6a5f3459f3e8}} where {{formula:d8147a6f-4902-48bb-9e52-a0dcd0d9939c}} .
For {{formula:25d8021e-2641-4b57-806b-8f99a5c561d1}} and {{formula:114ca64d-beab-4655-908c-7aa27e052881}} tuned suitably for the measure under consideration, we compare the quality of cost-sensitive predictions from and {{formula:3d4c04a8-b4d7-47d9-909b-cd4a776ebeb1}} based regularized ERM, implemented on corrupted data, with the cost-sensitive Bayes classifier {{formula:dcdf61fc-af16-41f8-8fdb-e8734125df7b}} learnt on clean data.
It can be observed in Table REF that the Accuracy and AM values for and {{formula:acabbc8c-2ac4-4c20-a786-d72b642f73d1}} based classifiers are comparable to those of {{formula:fa251587-a470-45fc-8088-ae8ed127f652}} until the noise rate {{formula:ebe7ee0e-34e7-461c-a8ec-3114dc65c24b}} crosses {{formula:ef6ef90d-4a24-413e-9ca5-69c558f106f0}} , after which performance starts deteriorating. However, as shown in Table REF , for the F measure and Weighted Cost this threshold is reached close to {{formula:c5258266-dfbe-4a67-a291-feac694ce3eb}} , implying that these measures are more sensitive to noise.
{{table:219ebafd-6342-4139-b260-a4a867fb1902}}
{{table:c28cad7c-5035-4d36-bd01-c7135e3369fc}}
{{table:6e1b37ef-edb2-465c-86a1-23b837843c2a}}
Various in-class probability estimation methods
In this section, we present a comparative study between four existing in-class probability {{formula:93ae6067-e26b-4102-8080-f20b9aa883e2}} estimation methods and a minimum squared deviation method. The latter method is based on our idea that {{formula:65afaaad-5e9f-4340-b4b2-4f59b7b816a3}} is a conditional expectation and hence the minimizer of a suitable expected squared loss. We first present the details of the four existing {{formula:d985da00-087d-445a-8f7a-6768639c0398}} estimation methods, viz., the link function based method, LSPC, KLIEP and {{formula:0df487a0-528a-4c42-b1b9-466249b07f4a}} -Nearest Neighbour ({{formula:24bf7352-cbc4-4971-a289-baf5610a54ce}} -NN). Then, we present our method and various estimators based on KLIEP.
Link function based approach {{cite:45b289b93889c2abcf717d5ee0c73bfd8d25d5ad}}
This method is the most commonly used approach for getting the in-class probabilities from a given dataset. In this method, first a classifier is learnt from the data using a special class of loss functions and then a suitable transformation is used to get the {{formula:05483a1d-b0f2-4287-8d89-b52be7a75fb0}} estimates. This special class of loss functions is the class of strongly proper loss functions defined below:
Definition 4 (Strongly proper loss {{cite:d26cd8009a22ff4b270682559b25a61522601633}})
Let {{formula:278183fc-4d51-44be-b884-74e12c241c0c}} be a binary class probability estimation (CPE) loss and let {{formula:93e45d98-2198-4b6b-9b26-47dfc7a413dd}} . Then, {{formula:1be1acf2-c7d2-4f24-8759-19f99174fbbf}} is {{formula:e6c566ea-60f9-40c0-a0ba-6865f7d535d2}} -strongly proper if {{formula:d5e8e4fe-4774-4e91-ad1d-bebe8a01c159}}
{{formula:3524cce0-c74a-4b6d-aa08-a010efb804e5}}
where {{formula:76639f4e-9226-4624-bc14-7e3a5b0b06d5}} .
Therefore, a loss {{formula:047a768f-4d37-4f67-8124-e1ae3bbf7c31}} is said to be strongly proper composite if it can be written as
{{formula:2f34a710-fdef-45bb-9eb9-eb82d8f1a07a}}
for some strongly proper loss {{formula:ad87a925-6553-4aa6-b5b6-f21aade79429}} and strictly increasing link function {{formula:0d672862-bff4-466a-8e99-72e9a17343eb}} .
Given a strongly proper loss function {{formula:a7f73161-6331-4de7-b924-2378f285c8ad}} , one can obtain an optimal classifier {{formula:4234515a-0047-4da1-9182-1f536ad8df6d}} via {{formula:cbc1fcf2-8ff2-4861-b10c-1ea86d9600d0}} based empirical risk minimization (ERM). Then, the in-class probability can be computed as {{formula:7d3dcf6b-3fce-4111-b40d-66b1c70e54a3}} . Some examples of strongly proper composite loss functions, along with their corresponding inverse link functions {{formula:93881fe9-0b3c-4449-9866-256ac205a6d0}} {{cite:d26cd8009a22ff4b270682559b25a61522601633}}, {{cite:f0d3ba5a46d8603a28f38ba0e16b487c8b508f9c}}, are as follows:
Logistic loss: {{formula:1677fafb-4ee5-44b8-9b1e-3154923c4d5e}} , {{formula:abdb3ca9-1aaf-4294-a87f-5570d14c2113}}
Squared loss: {{formula:39a5107a-f615-4002-8093-230d23b6e1f0}} , {{formula:c8240838-cb97-4c33-b741-e3a7aa34d352}}
Mod-squared loss: {{formula:022c142a-8e94-4f0a-b5ee-64126958efdc}} , {{formula:75d55d30-bb40-4c6b-a886-6810f6e48dcc}} where {{formula:848a491e-674d-4cbf-ac7a-49aec7c0b940}}
Exponential loss: {{formula:7bdcd5be-6a0d-4002-93d6-101c98032a98}} , {{formula:89777ed4-18b1-4f8e-aa55-20125d74e8b0}}
It is shown in {{cite:d26cd8009a22ff4b270682559b25a61522601633}} that if {{formula:444c1f85-90bf-4465-8b8f-04a75ae3ff17}} is a strongly proper loss, then the {{formula:8d43fa02-1526-4e75-9657-014ea40d99b2}} distance between {{formula:72bce851-52e2-4d6a-b2bc-cdde701a26d3}} and {{formula:0a009e1e-78f8-4d1c-bd5c-e42b242a45bb}} can be upper bounded in terms of the {{formula:dc2d30ed-e000-4e25-8802-461bf8570fea}} -regret of {{formula:e78f5595-eee8-4572-a370-456b81243957}} . Some special cases of this result are also presented in {{cite:f0d3ba5a46d8603a28f38ba0e16b487c8b508f9c}}, {{cite:4f5d78a12a6c3fd02a8f467bfd1c904f07de0143}} (for logistic loss). In this method, for some loss functions like squared loss and mod-squared loss, one has to suitably truncate the inverse link function {{formula:cea35a5f-9609-4216-8148-d9219a88b136}} so that it yields a valid probability.
The quality of the {{formula:970981f5-26a6-41b3-a88d-a114182b9070}} estimate will depend on the quality of the classifier too, so whether one uses linear or nonlinear (kernel) classifiers is expected to make a difference. Also, in the case of non-linear classifiers, regularization parameter tuning is required.
As hinge loss is not a strongly proper loss, one cannot use the above approach to get {{formula:ab7e2631-be4d-4c82-8554-bf1ba76e0d05}} . It was shown in {{cite:fa53f8e53b5210b6ed3cc3eddbb18c280ce6ab8e}} that one can get the in-class probability by using a sigmoid transformation and solving an optimization problem in 2 variables. However, in {{cite:f0d3ba5a46d8603a28f38ba0e16b487c8b508f9c}}, it is shown that the SVM formulation does not lead to reliable probability estimates.
For the sake of convenience, a compact form of this scheme is available in REF .
Algorithm ( {{formula:c5a8b825-4e5d-478c-b404-c80e1dd55941}} via classifier scheme):
Input: Training data {{formula:02ed8ad6-7cfa-473f-8564-5e4b81329fb9}} , test data {{formula:8131fe2b-1717-402f-8b84-050b304cc2b4}} , strongly proper composite loss {{formula:8d3e4c0d-9904-41a1-bed6-d9398d5247e0}} , its inverse link {{formula:4d48d188-80e7-4043-bba2-b28360e8be1e}} .
Output: {{formula:5a9e0f8a-6d0d-472e-8374-4bc89f2d1aae}} estimate of {{formula:be198be2-34ee-46c7-8ca8-9e7ebbf8c12a}} on test data {{formula:038faa8c-f8e6-4b1f-a0ea-dfa5a67f04a4}} .
1. Compute {{formula:4212ee92-7d4b-4fa2-be84-59be5fa223b0}} .
2. {{formula:2ffaba3a-0ee9-422c-bd4b-6434047e6486}}
3. Compute the estimate {{formula:b54bc45a-4c2e-4347-a451-fa72397e13d9}} .
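A minimal sketch of this scheme with the logistic loss is given below, using scikit-learn's logistic regression as the ERM step; the regularization strength is a placeholder to be tuned by CV.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def eta_via_link(X_train, y_train, X_test):
    # Fit the scorer by regularized ERM with the logistic loss
    # (C plays the role of the inverse regularization parameter).
    clf = LogisticRegression(C=1.0).fit(X_train, y_train)
    scores = clf.decision_function(X_test)   # linear score f(x)
    return 1.0 / (1.0 + np.exp(-scores))     # inverse link: sigmoid
```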
Least Squared Probabilistic Classifier (LSPC) {{cite:88a6defbe40c8614bc0af8192c25f3bfbee5bed1}}
Least Squared Probabilistic Classifier (LSPC) {{cite:88a6defbe40c8614bc0af8192c25f3bfbee5bed1}} is a classifier which makes predictions based on the in-class probability {{formula:ea110339-5d72-4c83-8616-5c9fc1d65222}} ; it is a direct {{formula:a17ec203-67fd-4dad-8f19-034ef4611dc0}} estimation method. Suppose the in-class probability is modeled as follows:
{{formula:53a256ee-03d1-4cf8-946f-ce87f3ba378d}}
where {{formula:7b79f28b-9b12-470c-900c-df729b2f51ca}} , a {{formula:04e433c1-810d-415e-af8e-595f8c016206}} -dimensional vector, is to be learned from the sample, and {{formula:846d6218-cb10-4b93-9dc0-20ace40e652b}} is a non-negative {{formula:ad8f006f-3450-4f81-9095-6c52e66e5c3f}} -dimensional vector of basis functions.
The parameter {{formula:99409944-2b21-41ba-9297-971eea266404}} is obtained so that the following squared error {{formula:35b9622f-267b-433e-83f7-bcf0ec40f4ac}} is minimized:
{{formula:24475f89-0749-4b6e-9d50-6f208cb6ea80}}
Here, {{formula:314f7200-ca13-4674-a54e-c79d1c6004f1}} is the number of classes. Solving a regularized empirical version of the above optimization problem leads to the following closed form expression for the optimal value of {{formula:57108ff2-5bae-483c-9436-71646fd453a4}} ,
{{formula:78622bf2-4a27-4cb8-90c9-681cccfce429}}
where {{formula:000146ff-0485-4b97-85f8-139226a065a4}} and {{formula:25e69115-607f-40c0-9a47-986f94f5e7e4}} , and {{formula:5cd4486a-bacb-4fb9-a466-d6a585ecada3}} is the regularization parameter. The final solution is obtained by normalizing:
{{formula:577a1cbd-2558-4ef7-ba28-0d20b37d29fe}}
In particular, LSPC uses Gaussian kernels centered at the training points. The use of kernel functions allows the parameters to be learnt in a class-wise manner.
The kernel parameters are chosen based on the dataset and the regularization parameter is chosen by CV. It is claimed that LSPC can be viewed as an application of the density-ratio estimation method called unconstrained Least Squares Importance Fitting (uLSIF {{cite:7cd5be4b2c71fbaea864f1878179710446ca1c79}}), and hence theoretical guarantees like consistency and stability follow.
However, the consistency results of {{cite:7cd5be4b2c71fbaea864f1878179710446ca1c79}} hold under the following assumption: if {{formula:cdf2dbb9-6177-4de4-b6d1-225d03110e84}} and {{formula:3953dfbb-80de-42b1-9a98-29a0e9b2acc5}} are the data points from the numerator and denominator densities respectively, then {{formula:4b7c35cf-020b-4b40-a1fb-0da5f9bcd860}} . This means that there exists {{formula:1bd99ba0-f65b-4c68-886b-2c7ea06f6632}} such that {{formula:26c2c4d0-e62d-4f40-96b0-fadac98fe2e7}} . Now, the density ratio in the case of the in-class probability is {{formula:f71ac993-6c28-4b80-9f2a-ef22fe092302}} . Therefore, the numbers of sample points in the numerator and denominator would be equal, i.e., {{formula:3f6929dc-682e-41d0-b6f0-048c92bf6fdb}} , and the assumption of uLSIF is not satisfied.
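A minimal sketch of LSPC with a Gaussian kernel basis centred at the training points is given below; sigma and the regularization level lam are assumed inputs, to be tuned (e.g., by the centile method of REF and CV).

```python
import numpy as np

def lspc(X, y, X_test, sigma=1.0, lam=0.1):
    """Minimal LSPC sketch: per-class ridge fit of class-indicator targets
    on a Gaussian kernel basis, followed by truncation and normalisation."""
    def gram(A, B):
        d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=2)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    K, Kt = gram(X, X), gram(X_test, X)
    A = K.T @ K + lam * np.eye(len(X))
    cols = []
    for c in np.unique(y):
        pi = (y == c).astype(float)               # class-indicator targets
        theta = np.linalg.solve(A, K.T @ pi)      # closed-form ridge solution
        cols.append(np.maximum(Kt @ theta, 0.0))  # truncate negative outputs
    P = np.column_stack(cols)
    return P / (P.sum(axis=1, keepdims=True) + 1e-12)  # normalise over classes
```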
Centiles of pairwise distances in the data are known to be good candidates for the Gaussian kernel width. The pseudo-code given in REF is borrowed from the code made available by {{cite:88a6defbe40c8614bc0af8192c25f3bfbee5bed1}}. It generates the set of kernel widths {{formula:f5cc7f38-dbcc-4e93-a054-348af124fb76}} from which the Gaussian kernel's parameter {{formula:b913e18a-b341-47d3-a396-b3fb1fa2a0f7}} is tuned.
Algorithm (method for generating a set of {{formula:d9673219-8642-418a-a25c-b74c909c0a7c}} values for Gaussian kernels):
Input: Sample points {{formula:d8952d47-d58d-45c6-9238-7491192ac29e}} , centile vector {{formula:22c93cd6-d96e-4cc4-9777-0ca7e2017ce3}} of size {{formula:489d88d1-26ff-47dd-956f-af51a06349ca}} , e.g., C = {{formula:0ed53a8b-26cd-4292-acde-e56d2d0b09f3}} implies k = 9.
Output: {{formula:34ff7c51-4e10-4a2d-849a-d731b94c88df}} , a vector of size {{formula:7880b3b5-2ef6-4673-91ac-666d106996bc}} containing candidate {{formula:13f0bfd3-771a-45d0-9eeb-2f89b5b75fac}} values.
1. Set the number of pairs for which the distance is computed as {{formula:eed8c4eb-071d-47ac-97ed-241ed5ca184a}} and {{formula:224a50b0-a236-4e11-9131-407226f268e8}} .
2. Set randOrder1 = {{formula:eeab9d3b-d36d-49e6-9443-6bce95a76a75}} and randOrder2 = {{formula:d00b6db6-babd-4dc6-a408-96dd65d72692}} , where {{formula:01f333aa-a3a0-42fe-9dd3-59bae4ea8f8c}} is a permutation of {{formula:b555887b-0348-4d8a-b997-aae2019b75f3}} .
3. For {{formula:0695d128-3195-465b-bc9b-6b796c92387c}} : dist[i] = {{formula:54170794-84f6-4d67-8f1d-71547960a43d}} .
4. Sort the vector dist.
5. For {{formula:d94cab28-91c9-4d4a-a072-0bb01408f9ec}} : out[i] = {{formula:eb73980c-e7d4-4cf4-996c-d0c1c2e0be5c}} .
6. Return {{formula:27ecd46f-896d-4034-be60-0081963d2536}} .
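A Python rendering of this procedure is given below: sample random pairs of points, collect their distances, and read off candidate kernel widths at the desired centiles.

```python
import numpy as np

def centile_sigmas(X, centiles=(10, 20, 30, 40, 50, 60, 70, 80, 90),
                   n_pairs=5000, seed=0):
    """Candidate Gaussian kernel widths from centiles of pairwise distances."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(X), size=n_pairs)   # randOrder1
    j = rng.integers(0, len(X), size=n_pairs)   # randOrder2
    dist = np.linalg.norm(X[i] - X[j], axis=1)
    dist = dist[dist > 0]                       # drop coincident pairs
    return np.percentile(dist, centiles)        # candidate sigma values
```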
Kullback-Leibler Importance Estimation Procedure (KLIEP) {{cite:e00bf511ecd58adaa5a4075b6e3b8fd7a4692624}}
The in-class probability for a class {{formula:963bc6bd-4127-43b3-a63b-b6b08d88a570}} can be written as
{{formula:734d8ce7-772c-4bdf-b2bb-55232245c6bb}}
The authors in {{cite:e00bf511ecd58adaa5a4075b6e3b8fd7a4692624}} deal with the problem of estimating the importance {{formula:39c4d53d-575d-4842-b282-b50ecbe13e57}} using samples {{formula:649220de-5ac2-4763-953e-82abe28bd758}} and {{formula:2c7307ce-e42a-43b4-a4bb-c068884a9f11}} from the probability densities {{formula:aa16fb89-48ee-4a9a-9d77-b93dd81e7067}} and {{formula:7b15dc0f-f28d-401a-b4e5-428f48f6bc4f}} respectively. The estimated ratio is given the linear parametric form {{formula:1a07b57d-428b-4de2-ad87-1078e543f14e}} . Here, {{formula:ce22cc3b-f84c-4804-bb96-b346d6d28420}} are known non-negative basis functions and {{formula:c75360fe-ee9a-4fc8-b898-8f4b81932715}} are the parameters to be estimated. The numerator density can then be estimated as {{formula:495dae24-fd34-4a41-91cf-87987b888715}} . The parameters are determined by minimizing the KL divergence between the true and estimated numerator densities as follows:
{{formula:9381a7ba-16ba-441d-bd7e-95d0b1826f73}}
The first term in equation (REF ) is independent of {{formula:cb9b6732-e5e0-45cb-bbd7-f9c40ee194ca}} , and incorporating the fact that {{formula:6d1ffd04-a176-4008-b3d6-b79bf3d45680}} is a probability density, the following optimization problem is obtained:
{{formula:418b8dea-e1e2-4f4c-9a35-90cd1094d902}}
This is a convex optimization problem, and the authors provide a gradient ascent based algorithm to solve it. The solution is claimed to be sparse, which leads to a reduction in computational complexity.
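A minimal sketch of the procedure is given below, using Gaussian basis functions centred at the numerator samples and a simple projected gradient ascent; the authors' actual implementation may differ in its step-size selection and stopping rule.

```python
import numpy as np

def kliep(X_num, X_den, sigma=1.0, n_iter=1000, lr=1e-3):
    """Minimal KLIEP sketch: maximize sum_i log(phi(x_i) . alpha) over the
    numerator samples, subject to alpha >= 0 and unit mean ratio on the
    denominator samples."""
    def phi(X, C):
        d2 = np.sum((X[:, None, :] - C[None, :, :]) ** 2, axis=2)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    A = phi(X_num, X_num)                # basis evaluated on numerator samples
    b = phi(X_den, X_num).mean(axis=0)   # normalisation constraint vector
    alpha = np.ones(len(X_num)) / len(X_num)
    for _ in range(n_iter):
        alpha += lr * A.T @ (1.0 / (A @ alpha))  # gradient of the log objective
        alpha = np.maximum(alpha, 0.0)           # non-negativity projection
        alpha /= b @ alpha                       # enforce the mean-one constraint
    return lambda X: phi(X, X_num) @ alpha       # estimated density ratio
```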
{{formula:dc23d9ab-c945-4d5f-99ff-37fab97007f7}} -Nearest Neighbour method for {{formula:c8b18b28-106b-4268-9c52-37b2e60a9a1b}} estimation {{cite:7f909ec6f0060a1d4bd2170bd3540a0809e09f2b}}
In the classical binary setup, {{formula:f0dce5e3-d9e1-413d-993a-44f16cb38731}} -NN works based on the labels of the neighbours: first, an estimate of the in-class probability is computed, and then it is thresholded to make the label prediction.
{{formula:880efc69-d4b1-412a-af72-0761126c6f86}}
Here, the {{formula:23aa891a-38d8-4cda-a0c7-770dd6e3319a}} 's are ordered w.r.t. some distance measure {{formula:c2580afd-6e3e-4aa9-b2b9-52289c0ea187}} . The choices of {{formula:297c329d-15f1-4b46-a758-5be1db8c1380}} and {{formula:69586c41-39cf-4534-98d7-8552c02b8ed9}} are the two levers available while learning. Also, there is a trade-off between bias and variance.
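A minimal sketch of the resulting estimate is given below: the fraction of positive labels among the k nearest training points, with Euclidean distance as the assumed distance measure.

```python
import numpy as np

def eta_knn(X_train, y_train, x, k=10):
    """k-NN estimate of P(Y=1 | X=x), assuming labels in {+1, -1}."""
    nearest = np.argsort(np.linalg.norm(X_train - x, axis=1))[:k]
    return np.mean(y_train[nearest] == 1)
```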
Minimization of Squared Deviation method
This is a direct {{formula:305f0595-f4e9-4788-bee8-27fe1bca47b0}} estimation method, i.e., it does not involve learning a classifier. It is known that the expected squared loss is minimized by a suitable conditional expectation. In particular, given two r.v.s {{formula:46f0a01c-0f52-4108-ad5e-5dab713ac9d2}} such that {{formula:1f76616e-60dd-47eb-a679-6d9fe120d559}} is integrable, {{formula:82f307fd-d3dd-488b-b00d-92cb128900b2}} is the a.s. minimizer of {{formula:340cdff5-64b3-48a7-a229-a9a9874359b4}} among all random variables {{formula:2a61a735-7624-4722-a484-29323a51f4da}} , i.e.,
{{formula:f7bda896-3d34-4348-89da-c42c714576e3}}
where {{formula:c5f76e1d-a182-46a3-bd0a-467385b0a50b}} belongs to the class of all measurable functions.
Given {{formula:c5b08e38-1b0d-4316-9b0f-d39537b709c4}} and the interpretation of {{formula:cc5cc433-6238-4dd4-bdee-ef56b05bef6a}} as a conditional expectation, we are interested in knowing the corresponding {{formula:0da51c29-9fd0-4b8a-9fc8-44a264c0aff7}} in the binary classification framework.
{{formula:9f2d9696-340f-4e94-ac9a-1143a6f13c34}}
Therefore, we have {{formula:608e66e6-b644-4c1a-8a23-eb516b1c7b82}} and {{formula:cb28dd94-3061-4cf6-82aa-f9b6b20c7a3b}} and the minimization problem is
{{formula:0bb67e88-c346-4a45-95e2-8df8389f341b}}
From the above, {{formula:0be7c56c-b46f-4b7c-8d8a-04b081693230}} and hence {{formula:aa6441da-4e5f-4a3a-a075-96cb47912d8e}} as a function of {{formula:14f61656-ad29-449a-9d41-a65b9f0980be}} . This has some resemblance to the squared loss for CPE, which is {{formula:20ae1938-9981-4f7b-b15f-5676e0419d80}} . The advantage here is that the minimizer is the conditional expectation, which in our case is the conditional probability; this need not be true for the log-loss ( {{formula:5ecac6f6-2ec9-44dc-91b7-4ab261f430d1}} ), etc.
Consider the class of positive-valued functions with an element denoted by {{formula:dd752c18-893e-4b99-abf8-c74765e95f9f}} . Let {{formula:3768bada-d728-4dd4-9593-756161924ce9}} be called a basis function; later, we will specialize this class to the class of Gaussian kernels. Now, consider the following functional form of {{formula:e84661b2-365f-4c99-8e4e-5f787c77350f}} in equation (REF ):
{{formula:f16f7f44-57a8-413d-bee0-f114afee15b8}}
where {{formula:38006bed-b6e2-4757-a0ee-8f6e0a79ef17}} is the variable to be optimized over and {{formula:00e07c32-cb0b-4c61-a270-aaf9fbcd3d11}} is the vector of basis functions {{formula:fdf8194a-3acf-406e-8f92-7170fefdab54}} , i.e., {{formula:b662733c-604c-463f-b2f3-638ab73e652e}} is a weighted combination of the positive basis functions. An {{formula:dc392f7d-1245-4398-a92b-89d26b20f609}} -regularized version of equation (REF ) can be written as follows:
{{formula:df6b0a65-021e-4881-a20b-1803f132e7a0}}
Here, {{formula:bf308e61-461f-4a65-9ae6-a95363256134}} is the regularization parameter which is usually determined using CV.
If one considers the multi-class classification problem with {{formula:769b6755-3d4a-4519-bc67-cf1a46005460}} classes, then the in-class probability for each class is defined as follows:
{{formula:c500f4a0-7a9e-474a-91fc-b26922540375}}
The idea that the expected squared loss is minimized by a suitable conditional expectation can be extended to multi-class problems too, by formulating the following optimization problems, one for each class {{formula:1e5ecb6c-3946-44d1-8f38-979c2cd53bd4}} , to get the {{formula:32f7e42f-b89f-4ee3-bcdf-53fb04dd8517}} 's.
{{formula:90ba3d24-fde2-46de-b208-ac6cb5b573fc}}
Solving for {{formula:f7c97e11-4307-4bad-a6b9-255b9cbf4b0c}} will lead to the following system of equations,
{{formula:9884cfae-a3ec-4b9c-960a-5a7cc832f163}}
Given the multi-class data {{formula:5025fc4a-6034-4258-a944-961ac492243d}} , {{formula:5016f8bd-bce9-4131-ba9a-ce51c3e9d2f2}} , we have the sample versions of the solution as follows:
{{formula:a923df01-920a-4a3a-bb46-526155f0ae1e}}
where {{formula:ae0adadb-11c8-44c4-9176-587d68249bd8}} is the index set of examples with label {{formula:55745a05-2181-4629-ae48-37ecb0fa764c}} . One important question here is the choice of the basis functions {{formula:c52629c3-5625-439e-8d35-fe24c1555cfe}} . Kernel functions {{formula:ce654c95-4d7c-4732-a321-c3aa504d9271}} come as a natural choice for the basis functions due to the information they carry. Using these kernels, the elements of {{formula:02f88ba3-27b7-45c8-8bce-374e31106fdc}} and {{formula:cbb67b9b-52cc-4d45-96f6-812ed522f184}} can be written as follows:
{{formula:560a9a2e-90e4-4e9b-8d42-e00837d4e317}}
Therefore, here we would get {{formula:e620018b-f231-444e-95ff-de92bc2bca75}} sets of {{formula:556bf01e-9dbe-4612-ae1b-68845123f885}} , one from each system of equations. To get the final estimates of the in-class probabilities, one applies the max and the normalization as follows:
{{formula:5eb4942e-2789-4982-8b32-744835150a45}}
The in-class probability for a binary classification problem can be written as a special case of equation (REF ).
Note that it is not necessary that the regularized expected squared loss is minimized by the conditional expectation. Mathematically,
{{formula:9b9e88cc-5e21-4cdc-b451-cad1581a3cff}}
where {{formula:f4be9a8a-b2e7-41bb-9be8-352fecaf0414}} . Here, it is not necessary that {{formula:8d846ce8-a30b-47d9-888f-250d22f0ba29}} . So, there would be an approximation error if one tries to use the minimizer of the regularized expected squared loss to estimate the in-class probability. Let us denote the estimates of the in-class probability from the regularized and unregularized versions of (REF ) by {{formula:c4d64150-d547-479f-9be6-70a7eec4226a}} and {{formula:c08383f4-d51d-418f-920e-e1affa179d73}} respectively. We expect {{formula:091a35db-bb0c-4576-8157-586071299212}} to have model-explaining properties and {{formula:db9230fb-cf7c-41d9-aeec-c7150a99ec7f}} to have good prediction properties, due to regularization and the avoidance of over-fitting.
It can be observed that the solutions in equations (REF ) and (REF ) are the same even though the initial objectives are quite different. Therefore, in the experiments, we denote the results by the name LSPC.
Various estimators based on KLIEP
We use KLIEP to estimate {{formula:ccbdbecd-4361-4d13-9991-8f0683711511}} , the ratio of two continuous densities. One requirement in this method is that the ratio is such that its product with the estimated class marginal {{formula:d9cefbed-0b9f-4aac-88d0-82a7d080533e}} is a valid probability, i.e., {{formula:ad94d6fa-ee4d-4cdf-9b28-dab60b89e480}} . This condition need not hold in general, as we observed in our experiments.
For choosing the number of basis functions {{formula:09b44a27-3232-478d-bb3d-c44cf6776ca2}} , the authors in {{cite:e00bf511ecd58adaa5a4075b6e3b8fd7a4692624}} suggest using {{formula:c1b7e952-901f-40df-8281-46484f30ca28}} . In our case, {{formula:7681dba3-0f4f-4569-a5b6-f861435235a5}} would be the number of points from class {{formula:ca80f764-8ab8-4fa2-805b-c1d5a120bcbd}} . As seen in equation (REF ), one can either use KLIEP once, with positive class points in the numerator, and get {{formula:ea03abff-a043-4da5-b624-212c5877e23a}} , or use KLIEP twice, once with the positive class in the numerator and once with the negative class in the numerator, and get {{formula:2a9c61a7-0098-400f-a27b-8c7c3bdfa939}} .
Based on these observations, we have come up with the following estimates of {{formula:b30abf41-b4d2-4bf2-813e-4ef917edb7d8}} using KLIEP.
{{formula:1e5e4726-54b0-4c53-afb3-117908ec5efd}} This estimator is obtained by implementing KLIEP with positive class points in the numerator and obtaining the ratio {{formula:224c572c-fd04-4a8d-a9f8-7c3364c37a0b}} . That is, the density ratio to be estimated is {{formula:350cb60c-dc7d-44dc-b806-d624c8b57096}}
{{formula:ba31bcce-ea91-40ed-8f1a-b3723e3baeb6}}
This estimator of {{formula:4e5e8cd7-611e-40d5-a1f9-aa9cc9906539}} integrates to 1, but there is no guarantee that it will always be less than 1.
{{formula:298cda6c-0588-4a54-a5c0-9d9d37a7077d}} This estimator is obtained by implementing KLIEP with negative class points in the numerator and obtaining the ratio {{formula:b9e1585c-fbf7-4599-93ba-7c8ad84a0a73}} . That is, the density ratio to be estimated is {{formula:68d3a3e2-60a8-444d-aacb-beb2774eeaa0}}
{{formula:03d5cd06-8a21-42ad-90c1-ad4517beb733}}
This estimator of {{formula:ec1f3c3e-a9f0-47d0-b45e-98b3303530dc}} integrates to 1, but it is prone to being less than 0.
{{formula:55204234-8e9a-4a77-8372-a03d036c61f5}} This estimator is obtained by implementing two KLIEPs, one with positive class in the numerator and other with negative class in the numerator. Then, the estimates are combined as follows:
{{formula:85b09248-6a4f-4b82-ac05-6afe303a3a76}}
This {{formula:5022c27b-9f1d-40be-9258-c759e2c0aa19}} estimate is a valid density, as it integrates to 1 and always lies in the interval {{formula:65ccd389-14bd-426c-b57a-b18c9e4372b3}} ; a sketch of this combination is given after this list.
{{formula:cf743bd9-262d-4167-8513-3d062cfc92ca}} This is a scaled version of {{formula:fbf3d56a-7307-4901-abf0-7033995e00bb}} . Let {{formula:1ca8a884-21be-4a89-a200-41a2d946abde}} be the maximum value of {{formula:20a9ebb2-f9d5-45d9-b7d5-d5782daa59eb}} in the training data. Then,
{{formula:19e6bce6-1a53-4497-9550-e42c23ae370c}}
This estimator will integrate to 1 and will lie in the interval {{formula:a6ad3973-cd02-4141-a2d6-16d3b4020311}} for training points, but for test points it can exceed 1.
{{formula:5a8cdf13-80d8-40d9-a4e4-ce2d0b18ff88}} This is a scaled version of {{formula:8dda3de8-2671-4034-a3eb-c0ef3cf7422b}} . Let {{formula:de67eb60-776b-478a-8ea5-25e765d1f790}} be the maximum value of {{formula:f522a330-ae0b-44be-b05e-97bc41f2bfb9}} in the training data. Then,
{{formula:bb1e5d5f-8da8-44fb-b22c-5549df646418}}
This estimator will integrate to 1 and will lie in the interval {{formula:41fbedbc-dd27-4ae2-af11-18b2f4a08cd8}} for training points, but for test points it can take values less than 0.
{{formula:6811e896-a282-4576-bb22-0a71f2494eba}} This is obtained by normalizing the estimators {{formula:117bd703-779f-45da-919f-3ad224abc7eb}} and {{formula:9c1f1a5e-11ef-45e6-bfa1-941397dd14a4}} as follows:
{{formula:b3f54af6-d0c1-42e9-bff5-a3c3e515c9dc}}
This is a valid probability density estimate as it integrates to 1 and lies in {{formula:f9dd783a-d2cf-4106-a5b4-5b74ef22f881}}
{{formula:d48a6d3d-ce1e-44d2-830c-d4e714736055}} This is also a normalized estimator, but the normalization is different from that of {{formula:99e730af-959c-4570-b0c5-d334b0519dd1}} . It is given below:
{{formula:e5266765-3be6-4503-8757-e37147211210}}
This is a valid probability density estimate as it integrates to 1 and lies in {{formula:3ff73dde-942d-4345-b261-100721124636}}
{{formula:89b54f53-26db-4e4c-8067-9da427196b23}} This is the averaged version of the scaled estimators {{formula:65bcb9ec-5198-448b-8a1b-523b009e4377}} and {{formula:2febd1d6-5213-4a54-96de-3df4141a7c89}} and is given as follows:
{{formula:05351948-801e-4adf-97a1-6b0cea4506ce}}
This estimator integrates to 1, but for test data points it is prone to lying outside the interval {{formula:9ef58396-5b99-4565-a172-90d27a10f111}} .
The estimators which do not lie inside the interval {{formula:92475d8f-8452-4961-a76d-fe0c9a87a30c}} can be truncated to lie inside it; however, their performance might be affected. A compact form of the above estimators is available in Table REF .
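As an illustration of the two-KLIEP combination above, the following sketch assumes the natural Bayes-rule form (the exact combination used is in the corresponding formula); by construction, the output lies in [0, 1].

```python
import numpy as np

def eta_from_two_ratios(r_pos, r_neg, pi_pos):
    """Assumed combination: r_pos ~ p(x | +1)/p(x) and r_neg ~ p(x | -1)/p(x)
    from the two KLIEP runs, pi_pos = estimated P(Y = 1)."""
    num = pi_pos * np.asarray(r_pos)
    den = num + (1.0 - pi_pos) * np.asarray(r_neg)
    return num / den
```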
{{table:c127da83-1e5d-410f-8316-922cf01e51db}}
The comparison between these estimators w.r.t. various measures is done in Section REF .
In our implementation, we have used Gaussian kernels as the basis functions, with centres at the sample points from the numerator density (positive/negative class).
The kernel width {{formula:b0ae9e78-b9e4-4804-b9d0-720a58293544}} is tuned by KLIEP's inbuilt CV procedure, where the criterion is the maximum objective value of the KL based optimization problem. The candidate kernel widths {{formula:dc3e66cd-2d3a-4923-853f-e2516398c343}} used in KLIEP's CV procedure were generated through the centile method described in REF .
An experimental comparative study of various in-class probability estimation methods
In this section, we compare the performance of the {{formula:b6465ad4-6027-4100-b558-af1af4b9a7b0}} estimation methods described in Sections REF , REF and REF . We implemented REF with squared loss, logistic loss and modified squared loss, as well as LSPC, KLIEP and {{formula:d44f09b4-359e-445b-8461-ab63d99a1995}} -NN, on 3 synthetic datasets so that we can comment on the quality of the {{formula:53c596c7-eeb1-4554-8386-1e8b61fd62fa}} estimates. In the case of {{formula:a64894ad-2f08-4dab-abad-27a8fd3a1e9b}} -NN, the number of neighbours {{formula:77172b3c-0c59-4c41-8535-05f3f429dd83}} is selected by cross-validating w.r.t. the measure to be evaluated on. The dataset generation scheme is adapted from {{cite:521538da98e5c385c093f391b3b60633912a3bfa}}. We consider 9 performance measures, of which the first 5 (MSE, RMSE, MAD, MD, KL) are purely for {{formula:ebe919cb-f3d1-4deb-b5ab-fdb132b8c3da}} estimation, the next 2 (Acc, BS) are for the scenario where these estimates are used for label prediction, and the last 2 measure the algorithms' estimation capability at the boundary, i.e., at the maximum and minimum. For an estimate {{formula:3d09e633-0761-4434-8cb2-139a362610b8}} , when the training data is {{formula:99925da0-10a8-4fbf-aae6-0ff409444961}} and the test data is {{formula:242059f1-c73f-4950-8b37-adbec1a6bc38}} , the details of all these measures are given below:
Mean squared error (MSE): {{formula:f595016e-3602-4d3f-8236-9065006b9be0}}
Root mean squared error (RMSE): {{formula:c813cb3d-8e21-4333-93a3-059722d7d2f3}}
Mean absolute deviation (MAD): {{formula:e859e6c2-959b-4e25-9579-66215d58ef54}}
Mean deviation (MD): {{formula:088e220f-e103-4452-aadb-510699042c2e}}
Averaged Kullback-Leibler divergence (KL): Since {{formula:5dd8321b-1143-4336-87e4-257bd7beaba7}} is a density, we can compute the KL between its estimate and true value as follows:
{{formula:e3363cfa-48a1-4f8e-97f6-ae3acfe7991a}}
Accuracy (Acc): {{formula:06332a1b-9f3c-4891-b44d-5b8a5b5cfb3b}}
Brier Score (BS): Brier score is a proper score function which measures the accuracy of probabilistic predictions and is defined below:
{{formula:a0469956-5c00-4c19-b842-4acaca0c4530}}
DiffMax: In some applications, one needs to know how good the estimates are at the boundary. In such cases, we compare the maximum value of the true {{formula:beb61c2d-7524-4bfb-bfff-6d40fe155cb2}} with the maximum value of {{formula:d25b94b6-ee04-46d6-9a41-e64ccdeb3e51}} on the training set. This measure plays an important role when one is interested in estimating noise rates, as some noise estimation schemes use corrupted {{formula:ef2e76d7-1306-4da5-b539-e5b652b71484}} values {{cite:b2323c5166dcd0c2abaa3c3e460110dce0431bd9}}. Therefore,
{{formula:be331fb8-82ab-452c-be35-cfd2d0c2bf7f}}
DiffMin: Similar to DiffMax, this measure is also a deciding factor for the quality of noise rate estimates.
{{formula:4a24ebf6-f3b5-4f22-9c88-1f910355eb5e}}
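Since the defining formulas above appear here only as placeholders, the following numpy sketch assumes their standard textbook forms (MD as the signed mean error, BS as the mean squared gap between the estimate and the 0/1 label, averaged KL between the Bernoulli distributions induced by the true and estimated probabilities, DiffMax/DiffMin as absolute differences of extreme values on the training set). Treat it as an illustration rather than the paper's exact definitions.

```python
import numpy as np

def estimation_measures(eta_hat, eta_true):
    """Measures comparing estimated vs. true in-class probabilities
    on the test set."""
    err = eta_hat - eta_true
    eps = 1e-12  # guard against log(0); an implementation choice
    kl = np.mean(
        eta_true * np.log((eta_true + eps) / (eta_hat + eps))
        + (1 - eta_true) * np.log((1 - eta_true + eps) / (1 - eta_hat + eps))
    )
    return {
        "MSE": np.mean(err ** 2),
        "RMSE": np.sqrt(np.mean(err ** 2)),
        "MAD": np.mean(np.abs(err)),
        "MD": np.mean(err),
        "KL": kl,
    }

def prediction_measures(eta_hat, y):
    """y in {0, 1}; labels predicted by thresholding eta_hat at 1/2."""
    y_pred = (eta_hat >= 0.5).astype(int)
    return {
        "Acc": np.mean(y_pred == y),
        "BS": np.mean((eta_hat - y) ** 2),
    }

def boundary_measures(eta_hat_train, eta_true_train):
    """DiffMax / DiffMin compare extreme values on the training set."""
    return {
        "DiffMax": abs(eta_true_train.max() - eta_hat_train.max()),
        "DiffMin": abs(eta_true_train.min() - eta_hat_train.min()),
    }
```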
The synthetic dataset is partitioned into training and test sets with an 80-20 split. The first 7 performance measures are computed on the test set and the last two on the training set. To make the comparisons more robust, this process is repeated for 10 trials. Finally, the reported values are the performance measures averaged over the 10 trials, along with the standard deviation.
Synthetic dataset 1 ({{formula:1edfa281-6aa9-42aa-a9e1-1c0a40507c6c}} )
This is a 2-dimensional dataset. We first generate 1000 binary class labels {{formula:22017d2d-c024-49cc-b1f7-09d2ba94647c}} from a Bernoulli distribution with parameter {{formula:d1f59472-61f8-42c6-b818-b19fc06f2165}} . Then a 2-dimensional feature vector {{formula:acd45046-d666-48cb-8878-e27bc3b75669}} for each label is drawn from one of two Gaussian distributions: {{formula:987084ef-f4c8-43b3-be46-4adf2981efef}} {{formula:b9c5f1e1-4d4a-4b81-a7a2-a8e6d968504c}} {{formula:cab185fe-6bca-4a97-b216-17305c2e1636}} where {{formula:93777195-826d-4bb4-bc78-ab3670ec52f2}} . Tables REF , REF and REF present the values of the various measures for this dataset.
{{table:a22f85d1-159b-4687-aa39-d1c4fb4e4c90}}{{table:c434c301-5d30-498f-9479-70fc3a01112e}}{{table:16a9f7ff-eb19-4392-852e-5866a4b9a6a2}}
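A minimal sketch of this generation-and-splitting scheme is given below. Since the Bernoulli parameter, class means and covariance are placeholders in the source, the numeric values used here are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical parameters (the source gives these symbolically).
p = 0.5
mu_pos, mu_neg = np.array([1.0, 1.0]), np.array([-1.0, -1.0])
cov = np.eye(2)

y = rng.binomial(1, p, size=n)  # binary class labels
X = np.where(
    y[:, None] == 1,
    rng.multivariate_normal(mu_pos, cov, size=n),
    rng.multivariate_normal(mu_neg, cov, size=n),
)

# 80-20 train/test split; the experiments repeat this over 10 trials
# and report mean and standard deviation of each measure.
idx = rng.permutation(n)
train, test = idx[:800], idx[800:]

# With known class-conditional Gaussians, the true eta(x) = P(y=1 | x)
# follows from Bayes' rule and can be fed to the measure functions above.
```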
Synthetic dataset 2 ({{formula:f9de2471-3865-47f2-a376-53eeb9f44d92}} )
This is a 3-dimensional dataset. We first generate 1000 binary class labels {{formula:b0109348-70d6-468f-8f99-aecbbfc8a7b3}} from a Bernoulli distribution with parameter {{formula:7b92daab-f303-4248-96ed-405984ddd24b}} . Then a 3-dimensional feature vector {{formula:511b49ef-6780-416f-93d2-c9255f2cc73f}} for each label is drawn from one of two Gaussian distributions: {{formula:6855c631-5779-48d1-8c03-1e35f7d00673}} {{formula:b6f047cd-fd0b-4f10-bb2b-16425d7dfbc1}} {{formula:f6abf469-710f-4287-983e-b447db8f509d}} where {{formula:cdfcab18-b274-4adb-b7fd-442acafd0fc6}} . Tables REF , REF and REF present the values of the various measures for this dataset.
{{table:8ae3c214-cc88-411d-8ce1-b6d4e4e30f3b}}{{table:0d7d5917-8337-4de7-92ef-4b70b4f2331f}}{{table:cc3385aa-9857-4fb1-b0cf-cfaf1c440c73}}
Synthetic dataset 3 ({{formula:d12e94b9-a69d-4216-ac40-5fcade70f597}} )
This is a 10-dimensional dataset. We first generate 1000 binary class labels {{formula:1b2ae856-6dec-43e3-8df2-d640ac071b61}} from a Bernoulli distribution with parameter {{formula:8868dd61-2313-484e-9d6d-ad513f165893}} . Then, a 10-dimensional feature vector {{formula:55458423-e4ca-464d-9fdd-1449efbf3d7e}} for each label is drawn from one of two Gaussian distributions: {{formula:320b1844-67ed-405a-8fee-192b27f444a2}} {{formula:59dfdcc7-60bd-4d5e-8619-fc8450639779}} {{formula:138691f7-cfd1-4d93-ba55-bb7a6bad44e4}} where {{formula:73bffef7-2570-4bae-b657-9378e754ef35}} is such that all the diagonal elements (variances) are 1 and the off-diagonal elements (covariances) are 10. Here, the means are {{formula:b82547a7-b685-42f0-bd2b-4e0009d3d129}} and {{formula:88ddff3f-ecd7-49ae-af09-d31654009717}} . Tables REF , REF and REF present the values of the various measures for this dataset.
{{table:f85690b1-2725-45d8-855d-0b85cac2af56}}{{table:f192796d-76a0-4681-98b8-e8129c7e8d6e}}{{table:c3d70854-72c9-4aef-81ed-32d89a003a37}}Observations
We observe that, as far as label prediction is concerned, REF with squared loss, logistic loss and modified squared loss, LSPC, {{formula:483d3071-2464-4362-8fdc-1a3b31468918}} -NN and KLIEP's {{formula:2ad84148-16f0-4723-b6f8-cede5ba46f74}} estimators perform equally well, as their accuracy is the highest in comparison to the others. This trend is observed across all 3 synthetic datasets.
With respect to the DiffMax and DiffMin measures, REF with logistic loss and modified squared loss, LSPC, {{formula:efadbe5f-3695-429a-b4a5-8f91497e8ae0}} -NN and KLIEP's {{formula:a86fc55e-b7a1-443f-a112-f3764775b1b0}} estimators perform well.
Risk minimization under weighted 0-1 loss in the extended cost space
In view of the negative result obtained with the {{formula:cc3b6712-de55-4796-b76b-b9d3cee25208}} -weighted 0-1 loss function, we explore a larger cost space. In a representation similar to that of {{formula:ad65edb2-147e-47d2-aa8f-a2052d1efc55}} , one can extend the parameter space from {{formula:527c95b8-05f0-4f73-943b-5b7af800d24f}} to {{formula:44ad50d9-b988-4fab-a85e-e8185382b1a6}} as follows.
{{formula:0386fa66-51d9-47d3-a689-454b63c86361}}
{{formula:01ac1506-2e07-4ed3-8099-e1683eaf6d26}} (or {{formula:c27bd598-71e2-4f3a-9d0f-2ea11b59c387}} ): loss when a positive (or negative) class point is misclassified
{{formula:78dff813-eb69-45af-aec8-21edfc59d108}}
{{formula:fc47fb3f-0bfd-4142-9f18-e4f86db9c19d}} (or {{formula:20d5f2f5-97d4-48f7-8769-c9b641ce0fb9}} ): loss when a positive (or negative) class point is correctly classified
Let the extended cost sensitive 0-1 loss be defined as
{{formula:0d69d41c-3899-491c-9c95-b539e139fba8}}
Then, let the corresponding risks on {{formula:75e6eae1-5ed3-4336-843f-9a7f6d3fd23e}} and {{formula:3e1acc86-af87-4210-b046-fc710200c936}} be denoted by {{formula:bf5a8c4b-234f-4bdc-ae01-64d86dc80648}} and {{formula:08563251-404a-41b4-9fa4-5c35dbfd64db}} respectively.
One can again reduce the extended {{formula:162df350-9862-4b34-8d85-7db43e01df48}} -based 4-cost space to the {{formula:57a70ad4-884e-46b7-9f5f-fdf51fc7df7f}} -based 2-cost space, as the threshold of the optimal classifiers does not change when the parameter space is reduced. This has also been shown in {{cite:c98d88d234baa3e6739c5fabe6456b40b38714fe}}. However, we still compute the minimizers of the risks {{formula:30f269ab-e5c4-4038-bc2e-2cbe77ee24ca}} and {{formula:3940bc52-e6d3-45f4-8604-505e87526f33}} and show that they need not be equal.
Using the same technique as in the {{formula:7e6544e1-fcdb-4023-86c0-3a9eef3fc6de}} cost case, one can show that the minimizers of {{formula:d0cf43df-5173-4a49-8061-0fcef34e1fb0}} and {{formula:7032103e-54a1-4c8f-8bd5-0dddadbcf122}} are as follows:
{{formula:62b51787-ef5d-417f-8e4c-432567806171}}
{{formula:607d257d-c1be-46df-b910-ac8ff29de79e}}
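As a hedged reconstruction of where such thresholds come from (writing α₁, β₁ for the misclassification losses and α₂, β₂ for the correct-classification losses defined above; the source's exact notation may differ), the standard conditional-risk comparison gives:

```latex
% Conditional risks of the two predictions at x, with \eta(x) = P(Y = +1 \mid X = x):
R(+1 \mid x) = \eta(x)\,\alpha_2 + (1-\eta(x))\,\beta_1, \qquad
R(-1 \mid x) = \eta(x)\,\alpha_1 + (1-\eta(x))\,\beta_2 .
% Predicting +1 is optimal iff R(+1 | x) <= R(-1 | x), i.e.,
\eta(x) \;\ge\; \frac{\beta_1-\beta_2}{(\alpha_1-\alpha_2) + (\beta_1-\beta_2)} .
```

Note that the threshold depends on the four costs only through the differences α₁ − α₂ and β₁ − β₂, which is exactly the reduction to the 2-cost space mentioned above. Moreover, under uniform label noise with rate ρ the corrupted posterior is (1 − 2ρ)η(x) + ρ, so the noisy-risk minimizer thresholds the clean η(x) at a ρ-shifted value, which is why the two thresholds can coincide only in the degenerate cases noted next.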
The thresholds of {{formula:0f3fe295-cc29-4d12-a32e-5292f707f4ea}} in equations (REF ) and (REF ) cannot be the same unless {{formula:ccbf7dd3-acdc-4e71-9852-1cc53f9a74b3}} or {{formula:bcd1ef54-c624-4554-9f30-6ceaa3979dea}} . Therefore, even in the extended cost space, under uniform noise and differential costs, {{formula:9f2fde29-7912-43da-9311-62df107fa424}} , i.e., learning a classifier from {{formula:2d65c69a-fe72-4a69-aa70-a9c825d90370}} that is both noise robust and cost sensitive is not always possible. This is independent of the extended sufficient condition (E-SC) that we introduce next; as we will see, this condition does not help with joint uniform-noise robustness and cost sensitivity.
A natural extension of the sufficient condition (symmetry condition SC {{cite:95ad2c48956292193665f49c47acd754c337fa68}}) to the cost-sensitive case with the weighted 0-1 loss can be written as follows. For an arbitrary classifier {{formula:41222889-fa87-443d-91cc-8ce711790b65}} and {{formula:8122fda5-2051-43c9-af42-a62b380967f8}} ,
{{formula:1841e0d0-662b-49ef-8089-ed7f8139ef2d}}
Clearly, {{formula:326a9ca0-4337-4a79-9ad4-716068cf8c25}} , which can be equivalently written as {{formula:24e835a0-6433-4cce-bd08-77d3aa22f4a7}} . This implies that the effective cost for each class is the same, i.e., {{formula:f4c370d6-f06c-4aef-aeea-39bc177d6ad9}} , and hence there are no differential costs for the classes. Therefore, E-SC cannot be a sufficient condition for SLN robustness if there is differential costing of {{formula:2ffe2de1-5bab-435b-8614-a570581d3c08}} .
| d | b5460292bb225538269995dcade8d47c |
Studies related to the pion form factor within the AdS/QCD program were mentioned in Sec. . For the sake of completeness, we list them again: Refs. {{cite:9d729b60f7ca748bc60be527737839b0e62eeee7}}, {{cite:e72f98abe92237c2779b8cc483226c322cae7129}}, {{cite:7fdda353562bc0c0f38d47bfbb3b912a7be47f8b}}, {{cite:83044688718c263bb83c0435c5d744cbb24fdd4d}}, {{cite:aa7356139cbca8746e1b6d7df0670e7716d64ae7}}, {{cite:1b25f343374c83a4199957894c7b37e2b62cc75f}}, {{cite:6ffa1a15ca7bfd4ad5a2e43fba66702d888885f7}}, {{cite:b72f55b411ae08bd51113120be29f713c95d559d}}. Such works achieved good agreement with others found in the literature, especially for soft or intermediate processes.
| r | a2d6e0df2944234d632d9d7434170ca1 |
Binary code scale and diversity In our experiments, we adopt AnghaBench as the only source code dataset and use the compiler method to obtain LLVM assembly code from it for COMBO pre-training.
While it is an extensive dataset of one million C programs compared with other datasets in AI-based source code and binary code analysis, and serves as a benchmark, its scale still lags far behind that of datasets in natural language processing and computer vision.
For example, the large version of RoBERTa {{cite:c248b8aac5f17f234d168f80ff1006700bb6f250}} is trained on a corpus of approximately 135 GB, and MoCo {{cite:598297a7c3c8a85e0ca7c23c9368528430d1db60}} uses the 150 GB ImageNet dataset for thorough contrastive learning.
Since real-world code can be diverse, the software engineering and security communities may benefit more from AI when a large-scale dataset is used for code pre-training and code contrastive learning tasks.
| d | b2c11c9aa5cb50dd5ae397efe560fc0a |
Please refer to Kurtz {{cite:19dfd5857a4f0465534553b071bea825afcc03f4}} for this lemma. We give a proof for the readers' convenience.
| r | 04fdac5454e490bee8c311dd21c534c8 |
Overall Architecture Fig. REF (a) shows the overall architecture of our proposed method. We first train an in-domain intent classifier using IND data in the training stage. Then, in the test stage, we extract the intent feature of a test query and employ the detection algorithms MSP {{cite:f3de616bbbbff27df9ed2ffb94443f14cfe7ef96}} or Energy to detect OOD queries. Fig. REF (b) demonstrates the effectiveness of our method in distinguishing OOD distributions from IND. (Because the max softmax score is higher for IND samples and lower for OOD samples, we use the negative energy score to align with the conventional definition where positive (IND) samples get higher scores.)
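A minimal numpy/scipy sketch of the two scoring functions is shown below; the temperature T and the OOD threshold tau are our assumptions, and the logits come from the trained IND intent classifier.

```python
import numpy as np
from scipy.special import logsumexp, softmax

def msp_score(logits):
    # Maximum softmax probability: higher for IND queries, lower for OOD.
    return softmax(logits, axis=-1).max(axis=-1)

def neg_energy_score(logits, T=1.0):
    # Negative energy -E(x) = T * logsumexp(logits / T); using the
    # negative sign keeps the convention that IND samples score higher.
    return T * logsumexp(logits / T, axis=-1)

def is_ood(logits, tau, score_fn=neg_energy_score):
    # Flag a query as OOD when its score falls below a threshold tau
    # chosen on validation data (tau is an assumption, not given here).
    return score_fn(logits) < tau
```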
| m | 141e4ab1f7dc72f156457d21b537f529 |
Appendix 2 shows that this expression, which corresponds to a Bayesian {{formula:53aee991-c9f2-4364-8f19-7426930c2d21}} -test (cf. {{cite:8b2dd90ac5a8c97a824b853807ea717917efc737}}, Eq. 6, and {{cite:467dcdc8dd76a05a8ae0ec0cce29493ba8ee0635}}), approaches Jeffreys's default approximate Bayes factor (e.g., {{cite:af806eff3be51f07795951b1191983ca2e680abc}}) with increasing sample size and under a unit information prior.
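To make the limiting behaviour concrete, the well-known BIC-based approximation (which is what a unit information prior yields for large samples) can be sketched as follows. This is an illustration of the limit being described, not the exact expression from Appendix 2; the function names are ours.

```python
import numpy as np

def bic(loglik, k, n):
    # Bayesian information criterion: -2 log L + k log n,
    # for a model with k free parameters fit to n observations.
    return -2.0 * loglik + k * np.log(n)

def approx_bf01(loglik0, k0, loglik1, k1, n):
    # BIC-based approximation to the Bayes factor BF_01 in favour of
    # the null: BF_01 ~ exp((BIC_1 - BIC_0) / 2), the large-n,
    # unit-information-prior limit the text refers to.
    return np.exp((bic(loglik1, k1, n) - bic(loglik0, k0, n)) / 2.0)
```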
| m | c11d21c9a66ad93b08ff36095cca88e5 |
astropy {{cite:e9843ce13037dd97e97de9e359bf8b37e135936e}},
Dolphot {{cite:7bf396ed269d0486db96dfbd1ab22a4242f4ad5b}},
Tiny Tim {{cite:c146d3fc16550505ba82627f30ee9e8b895001a3}}
| d | 16e3378195f1f1fa73e7a2444f519fc1 |
We also evaluate the performance of Seq2seq / transformer-BERT to better demonstrate the merits of using fastText as the semantic-neighbor word prediction model. Their performance is shown in Table REF in terms of accuracy metrics, Ent, Dist, and Sen (the average of Ent/Dist/Sen-n, with n = 1, 2, 3). EA and the BERT variant show equivalent performance on both base models, because BERT embeddings without tuning on a specific domain can perform even worse than a universal embedding {{cite:eedf6ebb0b46de9b3caf85f6d3789e4c8499242a}}. However, the huge BERT model increases the computational load: in our experiments, the training time of our EA is approximately 1.98{{formula:e91bbe1d-2ac1-475e-9f2c-65a2e3e5af99}} that of the raw models, while it is 5.87{{formula:44918adc-a0ff-49d4-aae0-2dfeec7e19b0}} for the BERT variant and 1.37{{formula:d8d7c07c-8e62-4f56-9fd2-a87447dc04c2}} for replacement. Clearly, fastText is the better choice for predicting augmentation candidates.
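For illustration, retrieving augmentation candidates from fastText is a single nearest-neighbour lookup. The model file below is a hypothetical stand-in for the embeddings actually used in the paper.

```python
import fasttext  # official fastText Python bindings

# Hypothetical pretrained vectors; the paper uses its own embeddings.
model = fasttext.load_model("cc.en.300.bin")

def semantic_neighbors(word, k=5):
    # get_nearest_neighbors returns (similarity, word) pairs; these
    # neighbours serve as augmentation candidates without the cost of
    # a BERT forward pass.
    return [w for _, w in model.get_nearest_neighbors(word, k=k)]

print(semantic_neighbors("movie"))  # e.g. ['film', 'movies', ...]
```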
| r | a3b0187e946974abb650eaaa3324670c |