Table REF reports the comparison results.
Our method significantly surpasses both baselines on all evaluation metrics on the SOBA-VID test set.
The IoU Tracker relies heavily on the quality of the detection results, whereas Mask2Former can only recognize objects of the same category as those in the training data {{cite:716302c05a0cab481cb9773d682ec6c53b79a86f}}.
In contrast, our approach benefits from the end-to-end training and knowledge learned from the unlabeled videos; see Fig. REF for visual comparisons between SSIS-Track and “SSIS + Mask2Former”.
{{table:4faed111-0ba8-4aa0-bb09-4956028f8ac8}}{{figure:048a326f-e9e4-4144-a558-ebcb78f9e095}}
A standard model for learning in the mammalian brain is CLS (Complementary Learning System) {{cite:f1bb7672622dd5be0846d38bfde648e94c68746c}}, {{cite:6eb4812cafd8e77716ed030ef177904de4d9ee31}}. In CLS, the Neocortex and the Hippocampal Formation (HF) comprise two complementary learning systems with bidirectional connections. The Neocortex gradually learns structured representations of the environment, while the HF learns specifics quickly (one-shot). The HF slowly consolidates memories into the Neocortex without interfering with existing memories (continual learning), through interleaved replay of stored memory patterns. Replay occurs when the animal is in a passive state, and also as a response to external cues when in an active state. In this model, the HF constitutes short-term memory (STM) and the Neocortex constitutes long-term memory (LTM).
The setting for Out-of-Distribution Generalization has been defined {{cite:96b72768cd05903dbd603b6d33ed188362a866d6}} as follows. Data is collected from multiple environments, and the source of each data point is known. An environment describes a set of conditions under which the data has been measured. One can, for instance, obtain a different environment by using a different scanner or studying a different group of patients. These environments contain spurious correlations due, for instance, to dataset biases.
The use of Bayesian optimisation (BO) can alleviate the issue of high computational cost {{cite:09c69d79816d8dbece343119e4174f733ff0bc84}}, {{cite:40cc01c53d802fd6f456a1c563ceaa9ef77a31d8}}. The BO framework is captured by the sequential model-based optimization algorithm {{cite:0569e27be26bb0009d8d83f03ecf2800705cb2db}}, which iteratively exploits historical data to fit surrogate models or other transformations, while the most promising candidate is drawn according to a predefined criterion (i.e. an acquisition function). The work presented in {{cite:f2f1d34fa652b06fb31fa94e306397f5c87b0fec}}, {{cite:04598828805d3a8ea6ac50a6e109a1f4757ed383}}, {{cite:fd63e93c2dd51ac4605e20a17e144420e8012928}}, {{cite:e8694ea54a59be8ec54439445c562cf47129d648}} makes use of Gaussian processes (GPs) to approximate the distribution over the objective function, and the candidate is selected by the acquisition function: probability of improvement (PI) or expected improvement (EI). In contrast, TPE {{cite:f2f1d34fa652b06fb31fa94e306397f5c87b0fec}} employs Parzen estimators as a surrogate to directly model promising and unpromising hyperparameters, and candidate solutions are drawn according to EI, which is proportional to the ratio of two density functions. During the search for optimal solutions, the acquisition function plays a significant role in balancing exploration and exploitation.
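As a hedged illustration of how an acquisition function such as EI trades off exploration and exploitation, the sketch below computes EI from a GP posterior mean and standard deviation; the toy objective values and hyperparameters are illustrative assumptions, not taken from the cited works.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best, xi=0.01):
    """EI for minimization: large when mu is low (exploitation)
    or sigma is high (exploration)."""
    improvement = f_best - mu - xi
    z = improvement / np.maximum(sigma, 1e-12)
    return improvement * norm.cdf(z) + sigma * norm.pdf(z)

# Toy GP posterior over 5 candidate hyperparameter settings.
mu = np.array([0.30, 0.25, 0.40, 0.28, 0.35])     # posterior means
sigma = np.array([0.01, 0.02, 0.20, 0.05, 0.15])  # posterior std devs
f_best = 0.27  # best (lowest) objective value observed so far

ei = expected_improvement(mu, sigma, f_best)
print("next candidate:", int(np.argmax(ei)))  # balances low mu vs high sigma
```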
In fact, private schemes such as multi-party computation (MPC) {{cite:8a57e6b6655c757acdcac9148f5110ce4bbc9a62}} and homomorphic encryption (HE) can enable provable privacy while handling users' data. However, the large computation and communication overheads of these private methods make them impractical for public use (though they can be a good fit for cross-silo use cases). Thus, companies have started to investigate cheap but privacy-preserving learning schemes. Google presents Federated Learning (FL) {{cite:bb305d6ebc12ff806da4034185f352d6eba0a22b}} as one such scheme and deploys it in its Google Assistant and Google Keyboard applications.
Besides THUMOS14 and ActivityNet, we also evaluate the effect of end-to-end learning on HACS Segments {{cite:007676c36153ead2bb9e5ed52aa150f3e9afddd5}}. As can be observed in Tab. REF , end-to-end training improves the performance by 4.71% and 6.42% in terms of average mAP with TSM ResNet-18 and TSM ResNet-50, respectively. This again demonstrates the benefit of E2E learning and its generality.
{{table:b752d6ad-c25a-4525-b4c6-99066af3d3df}}{{table:f8dc230b-dd85-40ad-a24e-f67ff8d5921e}}{{figure:5d48c16b-f26d-4300-9bc8-2bee5f06548f}}{{figure:f8e01d4c-ca14-46a2-a2f3-679fca6182b5}}
The adversarial attack in ITS has been studied in the context of recognition tasks for traffic signs {{cite:56496b3fe4d18021ee562598adb3c76869d9bc30}} and licence plates {{cite:7bfd95a2617806d51a8fb97a4af77241025c79be}}, and of control and coordination of vehicle platoons {{cite:1265fe00772a5a560850a9fdd538302e06d5bf58}}. However, its influence on traffic state prediction has not received equal attention {{cite:31911a1a0106e63cc1786a13db8a567771d2e302}}. In this paper, we propose an adversarial attack framework to degrade network-wide, multi-step traffic state prediction models by treating the model as a black box, i.e., we assume no knowledge of the model architecture, training data, or (hyper)parameters. Nevertheless, we assume the adversary can query a deployed model (i.e., the target model) as an oracle with arbitrary inputs to obtain its outputs. The input-output pairs are then used to train a substitute model to mimic the target model's behavior. Next, adversarial signals are generated from the substitute model via the Fast Gradient Sign Method (FGSM) {{cite:0838220cf082be75f460a6adad3b7e5a5bdebd69}} and the Basic Iterative Method (BIM) {{cite:7ebceb22f8048bb0f7f34d00ad351b0bd411cee9}}, respectively. Exploiting the transferability property {{cite:e1d932080c10f7590db6c00a1c4d4401556d473f}}, the adversary then attacks the target model using the produced adversarial signals with the goal of degrading the target model's prediction performance.
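The sketch below illustrates FGSM and its iterated variant BIM against a substitute model; the model architecture, loss, and perturbation budget are illustrative assumptions, not the cited framework's exact configuration.

```python
import torch
import torch.nn as nn

def fgsm(model, x, y, loss_fn, eps):
    """One-step FGSM: perturb x along the sign of the input gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def bim(model, x, y, loss_fn, eps, alpha, steps):
    """BIM: iterated FGSM with projection back into the eps-ball."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv = fgsm(model, x_adv, y, loss_fn, alpha)
        x_adv = torch.clamp(x_adv, x - eps, x + eps)  # stay within eps-ball
    return x_adv

# Toy substitute model: predicts the next traffic state from 10 sensor readings.
substitute = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 10))
x = torch.randn(4, 10)   # historical traffic states (batch)
y = torch.randn(4, 10)   # oracle outputs collected from the target model
x_adv = bim(substitute, x, y, nn.MSELoss(), eps=0.1, alpha=0.02, steps=5)
```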
Different from most existing works, which only consider part of the BS-RIS-user association problem with an ideal RIS model, we model a more comprehensive and realistic RIS-assisted multi-cell network, in which we consider the joint design of the BS-RIS-user association, the active beamforming of the BSs, and the passive beamforming of the RIS.
It is worth mentioning that we adopt the practical RIS reflecting model which describes the frequency-selective characteristics of RIS {{cite:34b44fca5a1035bbee2d1758423fdc1f0c62ac8e}}-{{cite:ecb632fcb8dcd2d308567ec54a94fd13d5b51fca}}.
The sum-rate maximization problem is then formulated, which aims to maximize the sum-rate of all users in the cellular network subject to the power budgets of the BSs, the association constraints, and the constraints on the RIS elements.
To facilitate the joint association and beamforming design, we utilize {{formula:8798589c-f328-458a-8571-66b75aa46cad}} -norm to combine the active beamforming with BS-user association and integrate the passive beamforming with BS-RIS association.
Then, to solve the non-convex, non-smooth joint association and beamforming optimization problem, we first adopt fractional programming (FP) and the block coordinate descent (BCD) method to deal with the logarithmic and fractional parts, and then decouple the problem into several sub-problems.
Efficient algorithms which combine {{formula:1028cc86-9243-487e-8358-2260878300d6}} -norm approximation, majorization-minimization (MM), and alternating direction method of multipliers (ADMM) are developed to alternately solve the sub-problems.
Finally, extensive simulation results are provided to verify the importance of the joint association and beamforming design as well as the effectiveness of our proposed algorithm.
To be specific, the proposed joint association and beamforming design algorithm facilitates load balancing between BSs, which is beneficial to the utilization of resources. Moreover, our proposed algorithm achieves significant performance improvements over benchmark algorithms, which demonstrates the advantage of the joint BS-RIS-user association design.
To attain a model-agnostic local FI measure, it is possible to utilize a local surrogate model. For example, Local Interpretable Model-agnostic Explanations (LIME) {{cite:5af65ff9698d5a07557019caf2e18acc15e25146}} was proposed to attain an FI for a specific observation by training an interpretable model in the proximity of this observation. Specifically, for each observation, LIME creates an artificial dataset and trains an interpretable model on this data. The observation's FI is estimated as the global FI of the interpretable model. SHapley Additive exPlanations (SHAP) {{cite:4fa395f7ec7f468fd3be5f3ee864846ca575e4b4}} unifies surrogate models with ideas from cooperative game theory by estimating the Shapley value.
The Shapley value of each prediction is the average marginal contribution of a feature to the prediction over all possible feature subsets. The Shapley value has many theoretical guarantees and desirable properties. In general, obtaining the exact Shapley value in a model-agnostic scenario requires exponential time. Later, {{cite:79827bc16521b3990eab262a22688dc2dda10a92}} introduced an exact polynomial-time algorithm to compute SHAP values for trees (TreeSHAP) and tree ensembles.
Although theoretically promising, TreeSHAP can assign a non-zero value to features that have no influence on the prediction {{cite:f9232d4920d424619e21c3c63be6fa38d91f4123}}.
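To make the exponential cost of exact model-agnostic Shapley values concrete, here is a minimal sketch that enumerates all feature subsets for a toy model; the masking-by-baseline scheme is an illustrative assumption, not the cited papers' estimator.

```python
import itertools
import math

def shapley_values(predict, x, baseline):
    """Exact Shapley values by enumerating all 2^d feature subsets.
    Features absent from a subset are replaced by baseline values."""
    d = len(x)
    phi = [0.0] * d
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for r in range(d):
            for S in itertools.combinations(others, r):
                w = math.factorial(len(S)) * math.factorial(d - len(S) - 1) / math.factorial(d)
                with_i = [x[j] if j in S or j == i else baseline[j] for j in range(d)]
                without_i = [x[j] if j in S else baseline[j] for j in range(d)]
                phi[i] += w * (predict(with_i) - predict(without_i))
    return phi

# Toy model: only the first two features matter.
predict = lambda v: 3 * v[0] + 2 * v[1]
print(shapley_values(predict, x=[1.0, 1.0, 1.0], baseline=[0.0, 0.0, 0.0]))
# -> [3.0, 2.0, 0.0]: the irrelevant third feature gets zero attribution
```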
We tackle the challenging large-hole guided completion task where the goal is to complete whole objects or large parts of objects that could be arbitrarily located in a natural scene.
Different from methods {{cite:7687f8f25507ed5788feb01a9179de3a886a7cfa}}, {{cite:e24a90ec150d6d0ae5b0371f90facc7de3fe883e}}, {{cite:60ffc824b7bdf80033aedad1a15912f45217f1f5}}, {{cite:e2f2fb25c077431ae163415a39f4071d347a0670}}, {{cite:20d621383febd25bdd2419717a08bde75bbfa96b}} that treat guided completion as a straightforward extension of inpainting {{cite:19c09f0acfeb733f95a3217dcc72bb85dbe03919}}, {{cite:20d621383febd25bdd2419717a08bde75bbfa96b}}, {{cite:6337fb8c01309da110f8d3abbd79989ab8396e7c}}, {{cite:f4886fcf56a1f571714c0360fdf304085189e9d4}}, {{cite:113a0f2d0f28ed443b4d1b99cdf47dd2aa04cc8a}}, {{cite:0b33889c6a98ba21c164108f47f89f213317a370}}, {{cite:022c9a13ea8227578a9c455f1733842ddcccf256}} with additional conditions,
we argue that properly imposed semantic and object-level constraints on the generated image are crucial for improving the realism of complex semantic layouts and object details.
Lemma 3.13 ({{cite:2013b719c7879d2a8a495ab37d0e0fc14f00c223}}, Lemma 3.7.10) Let {{formula:cd1e96a8-98dc-4973-ba77-f1d34e28fd88}} be a finite graph. Then
every vertex in {{formula:c73b9a62-4dd0-4489-b8ca-e295548f6d80}} connects to a sink, a no-exit cycle, or an extreme cycle.
With the advent of deep learning, medical image segmentation has drawn great attention and substantial research effort in recent years. Traditional supervised training schemes coupled with large-scale annotated data can engender remarkable performance. However, training with massive high-quality annotated data is infeasible in clinical practice, since annotating large amounts of medical data incurs considerable clinical expertise and time. This poses the question of how models can benefit from a large amount of unlabelled data during training. Recently emerged methods based on contrastive learning (CL) significantly reduce the training cost by learning strong visual representations in an unsupervised manner {{cite:a38ff3f92822e23456252abf8040393e15a54d01}}, {{cite:5c0d5202392b80126e48474f3aa3efd64457e4d1}}, {{cite:f3c1ef83ff81a10835c005e71cc51acb21961045}}, {{cite:750bd2e7624e586c0e6aea820862652e93361d00}}, {{cite:c31911cd3d1511542e4a6f5091ed74dbae8b1ef8}}, {{cite:92411b56024570bc5efb45f87ad414e17447aa5b}}, {{cite:c7be5ec71282679dd44bb4e0b0c5c306c3801a24}}, {{cite:7f1c2244b4fc024d6ccab6e596e8e234f1a464cd}}, {{cite:0105d2e25ced05038959ab3035482bedd8e5e3f9}}, {{cite:e73e361e8d55687c579d4187d6d5aaf0673a03d1}}, {{cite:429939192033ac93488ea58d202cca0398a42c0c}}. A popular way of formulating this idea is to impose feature consistency between differently augmented views of the same image, treating each view as an individual instance.
{{figure:e1fdd44a-039e-4623-91af-42ed6f204391}}
First, we recap the method of {{cite:1865069f80df0fb444997c9f3bb79210614a081b}}. Eigenvectors of the covariance matrix {{formula:7ae7e550-13f2-43ef-8f9b-0ee24b0c9a09}} can be interpreted as the best linear subspace approximation of {{formula:988760b9-5295-45ef-88f3-f076b6d89038}} because they minimize the squared residual errors
{{formula:e437891b-3f39-4625-8081-1b21d70d185a}}
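As a quick numerical illustration of this subspace property (a sketch with synthetic data, independent of the cited method), the covariance eigenvectors with the largest eigenvalues give the rank-k linear subspace with the smallest squared residuals:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5)) @ np.diag([3.0, 2.0, 1.0, 0.5, 0.1])
X -= X.mean(axis=0)                      # center the data

C = X.T @ X / len(X)                     # covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)     # eigenvalues in ascending order
V = eigvecs[:, -2:]                      # top-2 eigenvectors

def residual(X, V):
    """Squared residual error after projecting onto span(V)."""
    return np.sum((X - X @ V @ V.T) ** 2)

W, _ = np.linalg.qr(rng.normal(size=(5, 2)))  # a random 2d subspace
print(residual(X, V) <= residual(X, W))       # True: eigenvectors are optimal
```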
Transferability and discriminability are two key criteria that characterize the superiority of feature representations to enable domain adaptation {{cite:12a7501ee9c88caff8df4c82bbd27b89df47250f}}, {{cite:fa636dc98e521341f7d319771d7a43cae6e6598b}}, {{cite:9993968432a8c42b3c9214214357024fd609047b}}, {{cite:b7d68a581300151ee58cb92ae0096d8d155027c7}}, {{cite:0f7253b0123faf1c595b3d8a3fc734153b5c415f}}. The transferability indicates the ability of feature representations to bridge the discrepancy across domains, and we can effectively transfer a learning model from the source domain to the target domain via the transferable feature representations {{cite:9993968432a8c42b3c9214214357024fd609047b}}, {{cite:b7d68a581300151ee58cb92ae0096d8d155027c7}}, {{cite:0f7253b0123faf1c595b3d8a3fc734153b5c415f}}. Discriminability refers to the ability to separate different categories easily by a supervised classifier trained on the feature representations, and the model can achieve better classification performance via the discriminative feature representations {{cite:fa636dc98e521341f7d319771d7a43cae6e6598b}}, {{cite:12a7501ee9c88caff8df4c82bbd27b89df47250f}}.
Theorem 1 in Query2Box shows that any embedding-based method that retrieves entities using a distance-based measure cannot handle arbitrary disjunctive queries. To overcome this problem, the authors transformed each query into its Disjunctive Normal Form (DNF). By doing so, the disjunction is placed at the end of the computational graph and can easily be aggregated. The transformed computational graphs are equivalent to answering {{formula:900dd2e4-39af-47d9-b80c-6d53207de123}} conjunctive queries. {{formula:9ca2379a-d198-45f9-addf-0d8b8c377dfa}} is meant to be small in practice, and all {{formula:c16f5157-27b4-4336-b958-caf803cbadcf}} computations can be parallelized. As expressed in {{cite:b84f622767b15fe3816cdded09dae742b9f85b29}}, every first-order logical query can be transformed into DNF. We refer readers to {{cite:763d12952ef40b6f79a89dcce14850a82ad6dc74}} for the transformation process.
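As a small illustration of the transformation itself (using sympy rather than the cited procedure), distributing conjunction over disjunction pushes the disjunction to the top of the formula:

```python
from sympy import symbols
from sympy.logic.boolalg import to_dnf

a, b, c = symbols("a b c")
# A query mixing conjunction and disjunction ...
query = a & (b | c)
# ... becomes a disjunction of purely conjunctive branches,
# each answerable as an independent conjunctive query.
print(to_dnf(query))  # (a & b) | (a & c)
```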
Table REF shows results on the 5000 samples when defending against the SOS attack. We first show CACC and ASR scores when no defense is applied, followed by the baseline ONION defense and our proposed defense methods. We use cosine similarity, computed with {{cite:682d01485882117f8a7dd5fde8ecafb61a6b61dd}}, to estimate the level of similarity between transformed and original sentences. For backtranslation, we also use the BLEU score to analyse the backtranslated text. Figure REF shows how changing the percentage of changed words for the word-synonym-replacement and random-character-deletion transformations affects the CACC and ASR.
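A minimal sketch of both checks, assuming the sentence embeddings come from some encoder (the actual embedding model is the one cited above and is not reproduced here); BLEU uses nltk's implementation with smoothing for short sentences:

```python
import numpy as np
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def cosine_similarity(u, v):
    """Cosine similarity between two sentence embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Assumed embeddings of the original and transformed sentence.
emb_original, emb_transformed = np.random.rand(384), np.random.rand(384)
print("cosine:", cosine_similarity(emb_original, emb_transformed))

# BLEU between the original sentence and its backtranslation.
reference = "the movie was surprisingly good".split()
candidate = "the film was surprisingly good".split()
print("BLEU:", sentence_bleu([reference], candidate,
                             smoothing_function=SmoothingFunction().method1))
```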
Note that in this paper we consider a class of perturbations of the idealized equation (when {{formula:a46b1369-8a15-4bf8-913a-b0128506bbcd}}). This is quite meaningful since physical models are sometimes damped and hardly ever come with a pure power source term (see Whitham {{cite:0aced6d780788606bb9cb185e94f3c8a3dc02b69}}). For more applications in general relativity, see Donninger, Schlag and Soffer {{cite:28ed777c5196d809d6fbbbb8c5a73882e00b3582}}.
Spectrum Sharing:
In this technique, the communication and radar systems share the same spectrum, each using it when it is free from the other {{cite:75373b2fe02cb680e6570028901659db2cac5252}}, {{cite:f0015134b4f3b0f81a920b37bb03b3f4d4a912a9}}, {{cite:8050977b91c848137f58eb367d60184732f253c7}}, {{cite:6db8ec9e027b2cf22158dab80caabf43967f9742}}, {{cite:c56b8b31a37d99787f6efe16c097e8d0684a47ae}}.
Communication Centric Joint Communication Radar (JCR):
In JCR, radar parameters are estimated from the transmission of the communication system {{cite:4a05431fbe324b25aed379a45e9648e5d7a11577}}, {{cite:9341f693f696639f8b81d67cb14ba0376a526147}}, {{cite:357c4f70cc11dbf488ce95c4060532c2f3f9c5be}}, {{cite:5a6f36401e9bf04a59e46ea74f322796926bd9a7}}.
Radar Centric Joint Radar Communication (JRC):
In the JRC technique, communication is realized on a primary radar system. The pioneering work in this respect is presented in {{cite:166087faad9f11cadade0c7459a2154ab3f9bc04}}, {{cite:a43236a2abeead38319b4a4af063d7c3be90f200}}, where OFDM communication is employed on a MIMO radar.
Dual Function Radar Communication (DFRC):
In DFRC, radar and communication functions are implemented on a single hardware platform. Here, a single waveform serves the purpose of both data communication and radar parameter estimation. Based on the system model, DFRC can be further classified into two categories: i) Communication Centric DFRC {{cite:00b836ad31e3b0d0173d1993195161285007c0d8}}, {{cite:265054b7f9d377a24ab03ae735ae45e2cbc38701}}, {{cite:1690acd1bd583506232e969ef143ed5f6c08334d}}, {{cite:51a1aff6076c1ad7c803d36260d4a0cede23fdec}}, and ii) Radar Centric DFRC {{cite:adcedec8b03cb0f322fe62ce7be768321f35c379}}, {{cite:c534f896490c5f2b460705b9bebebbaabdbe5925}}, {{cite:2f02e6197006f5c16a4de01c541757e1cccad311}}, {{cite:0f66627e7c720ef1f918152b0d333bd2bf316724}}, {{cite:48e9e1a8573760858ae7043defc35f092132a388}}, {{cite:46b8c6767b0a748fb875a1a012dce75313ec87d5}}, {{cite:9416c622367479bef3ade6f145ec18be9b6c013e}}, {{cite:1b8d09e32e3ea3217848bac68fe84c338a21351e}}, {{cite:0c8eb7f0134013729306d6e528c46c4924d46ef9}}, {{cite:c48166b725bd21ab3055d8fdef1358ee76d5a629}}, {{cite:a7575a0f2085fdafcc4e0361a545556a8fe2847d}}. Both categories rely on either receive-filter design or waveform design.
The obtained results are shown in Tables REF to REF. The performance of the different approaches is evaluated using three well-known external quality indices: the adjusted Rand index {{cite:d8c4959ca6c178a0254c4baadc06e86580e05802}}, the normalized mutual information {{cite:b0403073f017fa9de26091da2b1d15b520f083ec}} and the V-measure {{cite:cfbb8a5d65b6e71639eddd5a8f19e5073c27505a}}. These indices take values in {{formula:d44f20bf-4f99-4117-a2ba-68bf8ce84ab1}}, 1 being a perfect match with the expected clustering and 0 denoting a random solution. Internal indices such as Silhouette, Davies-Bouldin or Dunn {{cite:bf28f37ab36cfcfa0d48ed7473a3251f689b33f6}} are not considered in this comparison because they are directly based on the similarity measures, which differ across the approaches, resulting in non-comparable values despite normalization.
{{table:7ec71ffa-ddae-4a0f-a90c-3ff0a39f4c65}}{{table:ebc199cf-42bf-4438-83ad-e38ba5fa4eeb}}{{table:d3b8ccf8-20c3-4858-b03a-5987160af906}}
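All three external indices are available in scikit-learn; a minimal usage sketch (with made-up label vectors) follows:

```python
from sklearn.metrics import (adjusted_rand_score,
                             normalized_mutual_info_score,
                             v_measure_score)

labels_true = [0, 0, 1, 1, 2, 2]   # expected clustering
labels_pred = [0, 0, 1, 2, 2, 2]   # clustering produced by one approach

# All three scores equal 1 for a perfect match with the expected clustering.
print("ARI:", adjusted_rand_score(labels_true, labels_pred))
print("NMI:", normalized_mutual_info_score(labels_true, labels_pred))
print("V-measure:", v_measure_score(labels_true, labels_pred))
```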
GWs detected by the LIGO and Virgo experiments {{cite:a547f2c195a1dc5fd8d57e8943f51175c3dfc821}}, {{cite:21bb41eef818d434b2c07ed77946bfabef04b000}} have been used to infer {{formula:61b7b910-c91f-4c16-925a-049a9a589d05}} using various approaches and data sets. A first approach {{cite:8b24b663158c5eea3c5829252e23f4805a9573ad}}, {{cite:11e9bbbcee2e0038de155cded69f0a916be9a773}} is to obtain the source redshift by locating the host galaxy thanks to an electromagnetic counterpart to the GW signal. This approach has so far been applied in two cases. The measurement {{formula:3ad05196-441e-4f77-b09c-1329c0759514}} in {{cite:00529a2daa3caf07472484a980c8467c2c44c706}}, {{cite:b8ad79199a98f605eb7091a93553736a36dcc2c2}} was obtained after the observation of the kilonova optical transient that allowed the galaxy hosting the binary neutron-star (BNS) GW170817 to be pinpointed {{cite:215c817cfd3bdbcaa8e8c4f860719ee41c606d07}}.
Similarly, the optical transient {{cite:4dec2f7b14f10afb0400fc3e51e3823172a62a53}} tentatively associated with the binary black hole event GW190521 {{cite:2b6f7528cb371daff8a97aff262fc68843089921}}, {{cite:39eea89fb0293338b5b62632602755020f86ec7c}} led to {{formula:f6f899f1-7d94-499d-b1b0-3baca417154a}} {{cite:13dd6aca5b6b7bc8e08e80e2045d3799601d6b4c}}, {{cite:1754fb3c9184b23dd2ea89202e5b197cf063d693}}. For GW sources with electromagnetic counterparts, it is also possible to test the general theory of relativity through GW propagation {{cite:81b73723c0b292dd354ac4364e95b97e50437c06}}. In order to make accurate measurements of cosmological parameters and to test general relativity, it is also crucial to correct for the peculiar velocities of galaxies, which is mainly essential for GW sources situated at low redshift {{cite:1a7ec3c0eaec40baaa0b71cc9a578a21981e9e67}}, {{cite:f5dbc17928fc142e91d90fee834b7aadfbd21729}}.
Compared with Aug, Cndt shows statistically significant performance improvements on the four test sets with long {{formula:598d6419-9c92-4af8-a57c-c67702e8b903}} (R055, R068, R077, R134) ({{formula:86675d8e-cd5d-4271-bf0e-711b87fed673}} {{formula:23b68bef-6f6e-4f78-8780-35f0876ebd1b}} 0.01). This suggests that the proposed conditioning method complements Aug, and that the longer the reverberation time of the test set, the greater the effect.
The results on R055 and R068 indicate that the proposed conditioning method provides an additional enhancement even when the reverberation times of the target rooms are covered in the training set by the augmentation technique. The results on R077 and R134 indicate that, for target rooms unseen in the training phase, the proposed method can improve performance using non-training information about the target rooms. This means the classifier does not need to be re-trained, which reduces the amount of data required for training by alleviating the need to force the model to generalize to all rooms.
For the scaling and biasing operations used for conditioning, similar to {{cite:9918565b337c9040413575f95886d781c1b4fe0c}}, using only the scaling layer yielded a larger performance gain than using only the biasing layer, but using both layers achieved the best results. An extensive analysis of what information each operation conditions on is left as future work.
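A minimal sketch of such feature-wise scaling and biasing (a FiLM-style conditioning layer; the dimensions and the room-descriptor input are illustrative assumptions):

```python
import torch
import torch.nn as nn

class ScaleBiasConditioning(nn.Module):
    """Feature-wise conditioning: y = gamma(c) * x + beta(c),
    where c encodes target-room information (e.g., reverberation time)."""
    def __init__(self, cond_dim, num_features):
        super().__init__()
        self.scale = nn.Linear(cond_dim, num_features)  # produces gamma
        self.bias = nn.Linear(cond_dim, num_features)   # produces beta

    def forward(self, x, cond):
        gamma = self.scale(cond).unsqueeze(-1)  # broadcast over time frames
        beta = self.bias(cond).unsqueeze(-1)
        return gamma * x + beta

features = torch.randn(8, 64, 100)   # (batch, channels, time) acoustic features
room_info = torch.randn(8, 4)        # per-utterance room descriptor
film = ScaleBiasConditioning(cond_dim=4, num_features=64)
out = film(features, room_info)      # same shape, now room-conditioned
```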
The most established way to study these theories is through
holography. At large {{formula:0bfa20ef-9edb-4832-82f5-a1cf598e3aa6}} , the {{formula:509fac8f-3577-43f5-a4a4-f3561f710a65}} and {{formula:ca39aa15-d6b6-4aa0-93cb-a4b9807146ef}} {{formula:f80dce5c-008c-486a-a179-8e1862e3e511}} theories are
dual to 11d supergravity on an {{formula:2b99d50e-9b61-4e95-9a2a-ce59f54eb059}} {{cite:6e4d690dda2d863b7409ab7bfe659414e58aa0b5}} and
{{formula:4afc544a-78a4-4795-abe1-d4d1ff196ab4}} {{cite:7e063ccaff7a1a94d05bce2bda2c874a991e40f5}}, {{cite:b87b5ae2d9a1124ee6a78d23e906b79dd57607cc}} background
respectively, where the radius of {{formula:00126798-8b1d-49bc-b7f5-ebc46e618106}} (in Planck
units) is related to {{formula:4a6e9ca3-cab4-4234-b863-18d8d1e71fe8}} as {{formula:7b9492a5-383c-455b-abad-e0c1bbc06440}} .
This supergravity description is useful and leads to concrete predictions;
unfortunately it is also impractical beyond the large {{formula:73624f86-137b-45e4-8ec4-2b3364529ed9}} limit: Subleading
corrections probe high-energy corrections to 11d supergravity coming from
M-theory, and we currently have no way to determine these systematically. It is
therefore imperative to find new ways to calculate observables beyond the large
{{formula:13fa94dc-e20b-4729-9a1b-cf795f119e6b}} limit.
In this section, we compare our method with other representative ship detectors, including RetinaNet-Rbb {{cite:5feacda39bb065494ca141cb06437b111a394015}}, ROI-trans {{cite:e4ed085813c3b2db6c7a181a42905864ab37601d}} (https://github.com/dingjiansw101/AerialDetection/), R{{formula:fec22c6f-6258-4834-9830-7d6abe92b4f4}} CNN {{cite:a3da39e41be980ca1e278f63a57c9ec0fbcd58f3}}, CSL {{cite:642fbcae10813c709e6fbd8f0c968bf92a3325bf}}, DCL {{cite:f926a26418ed46c0bb3ae1c5811aeb54be8a5163}}, RSDet {{cite:ac4baf6effe6e478091ebc110322b25ff983eb42}}, SCRDet {{cite:8a99ac135d3a2ccccb225f6601fd4971e45f7039}} (https://github.com/yangxue0827/RotationDetection), and S{{formula:027ccee2-2a10-4cc2-8ddc-586805871ca1}} A-Net {{cite:739a9e48eb2ef96215964299dccf60e2446eb07b}} (https://github.com/csuhan/s2anet), on three benchmark datasets: FGSD2021, HRSC2016 {{cite:a20071332ffa78f9c80c294124ea83087bc2f3f7}} and UCAS-AOD {{cite:5da3f9874abdd63d48d7cf913415afb7f27e7a3b}}. For a fair comparison, we used the default settings of the original codes on the DOTA dataset, including the same data augmentation strategy and number of training epochs.
{{figure:eb799d03-ce9a-4641-8e3a-7eff4613a0a5}}
We provide a review of state-of-the-art adversarial example (AE) detection methods and categorize them with respect to the knowledge of adversarial attacks and with respect to the technique used to distinguish clean and adversarial inputs.
We provide the first experimental study of state-of-the-art AE detection methods, tested on inputs crafted using different adversarial attack types, i.e., white-, black-, and grey-box attacks, on four publicly available datasets: MNIST {{cite:a37d866c0c48608d3621cd7d26a0887a7d5b0180}}, CIFAR10 {{cite:0775387703177a365dbd8f55575831d4059974f9}}, SVHN {{cite:4b268bebd3a26890fa06326eed3389dc3c5be160}}, and Tiny-ImageNet {{cite:7fa30a189b7b3a61acd8657467eb820048207253}}. The summary of the experiments is shown in Figure REF.
We provide a detailed discussion of AEs from the point of view of their content and their impact on the detection methods.
We publicly release the testing framework, which can be used to reproduce the results. The framework is scalable, and new detection methods can easily be included. Moreover, a benchmark website is released (https://aldahdooh.github.io/detectors_review/) to encourage researchers to contribute and to publish the results of their detectors against different types of attacks.
A natural way to discover data properties is to study the underlying probability density function (pdf). Parametric and non-parametric models {{cite:5255831ff0b1b8d34d23cd39105f9df67e34604b}} are viable solutions for density estimation problems dealing with low-dimensional data. The former are typically used when prior knowledge of the data structure (e.g. the distribution family) is available. The latter are more flexible, since they do not require specifying the distribution's parameters. In practice, the majority of methods from both classes fail to estimate high-dimensional densities. Hence, some recent works leveraged deep neural networks as density estimators {{cite:9e2c2ba642fefcc0c14da81485373466006c27c0}}, {{cite:749d24bb1201bdf00bda79cce22ec55530e1ce7d}}.
Although significant efforts have been made to scale neural network architectures to improve their modeling capabilities, most tasks translate into conditional distribution estimation.
Instead, generative models attempt to learn the prior distribution in order to synthesize new data from it. Deep generative models such as generative adversarial networks {{cite:fc25a8f5856847444f59548eed45a1534abdddb9}}, variational autoencoders {{cite:9a800690e60df1abfa6ed5e2206132996acfb268}} and diffusion models {{cite:60a69972d7a8d89f67a8c276ef792e77682d27a9}} tend to either implicitly estimate the underlying pdf or explicitly estimate a variational lower bound, providing the designer with no simple access to the investigated pdf.
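To ground the parametric/non-parametric distinction, the sketch below fits both a single Gaussian (parametric) and a kernel density estimate (non-parametric) to the same 1-d sample; the bimodal data is a made-up example:

```python
import numpy as np
from scipy.stats import norm, gaussian_kde

rng = np.random.default_rng(0)
# Bimodal data: a single-Gaussian (parametric) fit is misspecified here.
data = np.concatenate([rng.normal(-2, 0.5, 500), rng.normal(2, 0.5, 500)])

# Parametric fit: assumes the distribution family in advance.
mu, sigma = data.mean(), data.std()

# Non-parametric fit: no family assumption, bandwidth chosen automatically.
kde = gaussian_kde(data)

x = 0.0  # the trough between the two modes
print("parametric pdf(0):", norm.pdf(x, mu, sigma))   # overestimates
print("kde pdf(0):       ", kde(x)[0])                # near zero, as it should be
```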
Now, we can minimize the two-body energy Eq.(REF ), with
respect to the variations in the function {{formula:47ff1978-4ccc-478a-b84b-db690ce95c47}} but
subject to the normalization constraint {{cite:cfedd06ce53295c8820fc86bfdd7a9a45b8bc1dc}},
{{formula:122e6518-fe09-4ca9-ade1-c46e6f16a77e}}
We consider boundary conditions with {{formula:b8930376-1c3f-44ff-909e-abb4e5d53286}} , {{formula:8c0ef901-ac63-49f7-a876-e6d056bc68fc}} and recall that Ising symmetry implies {{formula:8e5197ec-15b4-4fe0-8445-6864592684bb}} . The spontaneous magnetization {{formula:ae7e40eb-dbb8-409a-abbd-cbf41db24d54}} is given by {{cite:22e10cfc568e8d4dbff4f528334a16194ce0cabe}}, {{cite:4326e73afe8dfbb193cef1cf21d612ae9b0aa3ac}}
{{formula:dfb74da8-ae11-4d8c-8655-07f6837c6daf}}
Theorems REF and REF involve reductions from the Positive Not-All-Equal SAT (Positive NAE-SAT) problem: given a formula {{formula:a2e6a984-ca14-4f0b-a5bf-b85c475cac65}} in conjunctive normal form with no negative literals, the objective is to determine whether there is an assignment of True or False to each of the variables such that, in each clause, at least one but not all variables are set to True. Such an assignment is called a not-all-equal satisfying assignment. Positive NAE-SAT (also referred to as Monotone NAE-SAT) is known to be NP-complete {{cite:00d35a7ee4b5f4e67b97e8e02153a152f8759d1a}}. A straightforward reduction from Set Splitting or Hypergraph 2-Colorability {{cite:45c0a26aa0e3fda50303cbe70866f4c1fefaf734}} to Positive NAE-SAT also ascertains this fact.
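For concreteness, here is a brute-force checker for the not-all-equal condition (clauses given as lists of variable names; purely illustrative and exponential in the number of variables):

```python
import itertools

def has_nae_assignment(variables, clauses):
    """True if some True/False assignment leaves every clause with
    at least one True and at least one False variable (not all equal)."""
    for values in itertools.product([True, False], repeat=len(variables)):
        assign = dict(zip(variables, values))
        if all(any(assign[v] for v in cl) and not all(assign[v] for v in cl)
               for cl in clauses):
            return True
    return False

# Positive clauses only (no negative literals).
print(has_nae_assignment(["x", "y", "z"], [["x", "y"], ["y", "z"]]))  # True
print(has_nae_assignment(["x"], [["x", "x"]]))                        # False
```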
While imitation learning (IL) is a powerful paradigm for skill learning, popular approaches such as Generative Adversarial Imitation Learning (GAIL) {{cite:afd5621a38b7a953973400579f1bd3dd1ec091fa}} and Inverse Reinforcement Learning {{cite:0092de41585e2bad5db6b8fb094ee7f156e32b12}} require large amounts of data. Improvements to adversarial imitation learning (AIL) methods have made positive strides towards addressing this issue by leveraging advances in the sample complexity of RL algorithms and bringing them to bear in IL {{cite:742c5756f0ccf0583ca394c5d5dad63b3b854949}}, {{cite:6456a59911c5ea955d62268eb9688341c9ebc83a}}. Nevertheless, the problem of sample complexity in IL persists. Behavioral cloning (BC) is an exception as it is the rare IL algorithm that is also offline in nature. Aside from not requiring any additional interactions with the environment, BC is also a relatively straightforward algorithm. Given a dataset of state-action pairs that encapsulates interactions with an environment by a demonstrator, the problem of offline IL is reduced to a supervised learning problem, where a model is typically trained using maximum likelihood methods to predict the action taken by the demonstrator given an input state.
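A minimal behavioral-cloning sketch in this spirit (maximum-likelihood action prediction; the dimensions and the synthetic demonstration dataset are illustrative assumptions):

```python
import torch
import torch.nn as nn

# Demonstration dataset: (state, action) pairs from an expert.
states = torch.randn(1024, 8)              # 8-dimensional states
actions = torch.randint(0, 4, (1024,))     # 4 discrete actions

policy = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 4))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()  # maximum likelihood for discrete actions

# Offline supervised training: no environment interaction needed.
for epoch in range(10):
    logits = policy(states)
    loss = loss_fn(logits, actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The learned policy maps a state to a distribution over expert actions.
print(policy(torch.randn(1, 8)).softmax(dim=-1))
```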
Denote {{formula:2aa0d9c5-e5e8-4160-8700-1c2509a67155}}. Before giving the main result of this paper, it is convenient to introduce the following concept, similar to that in the continuous-time setting {{cite:e4f7d0859a8c391c61707998c560b0d15b5d87d1}}.
A rigorous treatment of the case with continuous parameters would be worthwhile, as would more analysis of continuous or high-dimensional data where summary statistics are commonly used {{cite:1a5fefc060ed96e94b412b66223065244921b080}}.
There are open questions both in how to choose and sample from an optimized distribution of parameters and in how to estimate the likelihood and posterior given simulation results; as we see in Fig. REF , each step matters.
A particularly interesting direction of the contextual bandit problem is the linear contextual bandit problem, where the expected reward is a linear function of the features {{cite:ccd66cb20d3a7f92e365953327444129525500a6}}, {{cite:c44905fbac5fd89f8ed2e0c2b5e6a4ebfd1419ae}}. Under this setting, {{cite:a5b6d9c9b3355df1c42f45d816a70a4c799fa3d1}}, {{cite:8e242e4dce3760825f99d5ce99a129392d39e50c}} and {{cite:d805e7b89175a6c943ccaad794c8a44471b06885}} showed polynomial dependence of the cumulative regret on the ambient dimension {{formula:4d71e125-ea97-4f5b-9a57-c9354b0cb120}} and time horizon {{formula:dde614f3-5e51-47cb-a3b7-c62651442b28}} in the low-dimensional case. Specifically, {{cite:a5b6d9c9b3355df1c42f45d816a70a4c799fa3d1}} and {{cite:d805e7b89175a6c943ccaad794c8a44471b06885}} proved a regret upper bound scaling as {{formula:3f0ea2ac-6007-4240-a8ba-911e13856c77}}, while {{cite:8e242e4dce3760825f99d5ce99a129392d39e50c}} showed a regret upper bound of the order {{formula:6025bca3-e562-46db-b64c-5c3f876bc7c3}}. It is worth mentioning that all of the aforementioned algorithms fall under a certain class of algorithms known as upper confidence bound (UCB) type algorithms. All of these algorithms rely on the specific construction of a confidence set, followed by solving an optimization problem constrained on the set. A similar idea of confidence set construction can also be seen in a very recent work {{cite:dc4e2adb11568747c8c33085cfa82dac3ca1f472}} in the context of the high-dimensional sparse linear contextual bandit setup, where the reward only depends on a small subset of features of the observed contexts. This area has recently attracted considerable attention due to its abundance in modern-day reinforcement learning applications (e.g., clinical trials, personalized recommendation systems, etc.). There have been ample theoretical works in this field. To mention a few, the LASSO-bandit algorithm of {{cite:6353b94ed75654a3b888b7fc1087583a17f53cbe}} and the MCP-bandit algorithm of {{cite:77753503a5d7270387dd200e49f6dbb3e6ff5145}} are shown to have a regret bound with logarithmic dependence on {{formula:427cdd68-dfc5-4565-825f-69b0794c31fb}}, under a margin condition. However, in terms of the time horizon {{formula:bea5989b-1e59-4d54-9388-d9009383503f}}, MCP-bandit enjoys better scaling compared to LASSO-bandit. In particular, MCP-bandit has a regret upper bound scaling as {{formula:7d3cdadd-50ed-40d8-ad63-3c028233b713}}, in contrast to the {{formula:02b6a00c-346c-49ae-adf5-eada4f4a0de8}} regret upper bound for LASSO-bandit. {{cite:d8b72f30bec97510a202ebfdbe61174ecd530a48}} proposed the doubly robust LASSO algorithm, which also enjoys an upper bound of the order {{formula:18ca334a-1f18-47fc-aec0-867136dba5c5}}, but at the cost of a {{formula:37ad9deb-0876-4055-8dfd-3f272444b053}} dependence on {{formula:bb06747a-3fcd-4964-ad8e-fcbabb077a86}}. {{cite:d2c22052bf2e3f5173d15bd62bb20af71964049a}} proposed the explore-the-sparsity-then-commit (ESTC) algorithm, which randomly draws a sequence of arms in its exploratory stage and then greedily chooses the next arms based on the LASSO estimate obtained from the data collected in the exploratory stage. They also show that in the "data-poor" regime, i.e., when {{formula:b9c29973-77a5-4791-9dbd-fc5aa4b115ac}}, ESTC has a regret upper bound scaling as {{formula:30777f95-8076-4297-83b1-3cc2d5b2e291}}.
In a very recent work, {{cite:c1eb9882539cd31dd269cd46df5ad495bc002f5f}} proposed the sparse-LinUCB and sparse-SupLinUCB algorithms for high-dimensional contextual bandits with linear rewards. Both algorithms are based on a best-subset-selection approach and enjoy {{formula:817a2745-783c-4a35-8597-3cd217b0bc04}} scaling in the time horizon and poly-logarithmic dependence on the ambient dimension {{formula:60f66518-1276-4481-b998-a40a01ccc623}}. However, their theory is only valid when the coordinates of the contexts are independent of each other, which is indeed a very restrictive assumption in the high-dimensional regime.
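A compressed sketch of the ESTC idea described above (random exploration, a LASSO fit, then greedy commitment); the synthetic environment, horizon split, and regularization level are illustrative assumptions, not the cited paper's tuned choices:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
d, K, T, n0 = 100, 10, 2000, 400       # ambient dim, arms, horizon, explore length
theta = np.zeros(d); theta[:5] = 1.0   # sparse reward parameter (unknown to learner)

def contexts():                        # K feature vectors, one per arm
    return rng.normal(size=(K, d))

X_log, r_log = [], []
for t in range(T):
    ctx = contexts()
    if t < n0:                         # exploratory stage: random arm
        a = rng.integers(K)
        X_log.append(ctx[a])
        r_log.append(ctx[a] @ theta + rng.normal(scale=0.1))
        if t == n0 - 1:                # fit LASSO once exploration ends
            lasso = Lasso(alpha=0.05).fit(np.array(X_log), np.array(r_log))
            theta_hat = lasso.coef_
    else:                              # commit stage: greedy w.r.t. the estimate
        a = int(np.argmax(ctx @ theta_hat))
```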
Besides, some works have shown that the effectiveness of FL is also related to the encryption algorithm {{cite:c3e9859b7c1df5b4b4871894f1836f833e06bd5b}}, {{cite:69282941dc5c17b6c62a01135bad6e768998e42b}}, {{cite:bcd9321d5a3c511188c243fdce285ce241aaa6c6}}, {{cite:18573778f79dab575426c7ee7f1352f7063bab2a}}, {{cite:a5f052d8ce6abf1b49b6ec60cbcf628f9e8bcfff}}, {{cite:a63e35a99094317f175c06ac669bc404a27c97a9}}, {{cite:931fad4aa2ade9f05276dcbe5862fdf41cbbc8fa}}. As the encryption strength increases, the information loss on the data worsens, and the effectiveness of FL decreases. The trade-off between data privacy protection and the accuracy of the trained model is thus still inevitable. How the effectiveness of FL relates to other factors remains an open question.
As mentioned in {{cite:2c41f3e4d139c8947b5ece79eb0041ba0c1ad5df}}, a possible reason for including reject options is to obtain classifiers that are more reliable but also less expensive to use. In our case study of Section , for instance, a possible option is to consult an expert who would be able to identify the bird species morphologically, without using the measured traits. Although expertise does not come cheap, this could still be an alternative when the expected cost of algorithmic classification exceeds the cost of consulting an expert. The latter cost might be independent of, or a function of, the number of hypotheses she needs to consider. In the former case, the reject option of {{cite:2c41f3e4d139c8947b5ece79eb0041ba0c1ad5df}} would suffice, and in the latter case a partial rejection, as proposed in this paper, could be beneficial.
Given a color image of a face {{formula:231e431b-7dde-4e2b-89bb-45c97f247f89}} , our goal is to remove all the wrinkles present in {{formula:0527ff08-96f0-477a-b177-6fc95de8ad40}} while preserving photorealism. Ideally, we should only modify the facial areas with wrinkles and, at the same time, the modified regions should preserve the local statistics of the skin of the person.
We propose to solve wrinkle removal via a two-stage model combining image segmentation and inpainting techniques: wrinkle detection followed by wrinkle cleaning.
First, we estimate a segmentation map of all the wrinkles in the image using a state-of-the-art CNN segmentation model. Once this is obtained, we use an inpainting model based on LAMA {{cite:322f0d740652e14f8faf0867f5ac4c4309152ab8}} to fill in the wrinkle regions with photorealistic clear skin. We illustrate our pipeline in Fig. REF.
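A condensed sketch of such a segment-then-inpaint pipeline, substituting a dummy segmenter and OpenCV's classical inpainting for the CNN and LAMA-based models used here (both stand-ins are assumptions for illustration only):

```python
import cv2
import numpy as np

def remove_wrinkles(image_bgr, segment_wrinkles):
    """Two-stage pipeline: (1) predict a binary wrinkle mask,
    (2) inpaint only the masked regions, leaving the rest untouched."""
    mask = segment_wrinkles(image_bgr)                  # HxW uint8, 255 = wrinkle
    # Slightly dilate so inpainting also covers wrinkle borders.
    mask = cv2.dilate(mask, np.ones((5, 5), np.uint8))
    # Classical inpainting as a stand-in for a learned (LAMA-style) model.
    return cv2.inpaint(image_bgr, mask, inpaintRadius=5,
                       flags=cv2.INPAINT_TELEA)

# Dummy segmenter for illustration: thresholds dark, thin structures.
def dummy_segmenter(img):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    return cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY_INV, 15, 10)

image = cv2.imread("face.png")           # hypothetical input path
clean = remove_wrinkles(image, dummy_segmenter)
```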
As the number of peers increases, the probability of peer failures also increases. In Fig. 3 of {{cite:f252777f0b173439e07aaa1fcaa7ed831cb8a048}}, the mean latency to consensus increases as the number of peers grows, similar to what our model (7) shows.
Chartrand et al. asked whether there exists a function {{formula:6722d9bf-359a-4807-8166-1fd45889f6d2}} such that for all {{formula:dc7c1b46-ec8c-4b42-a9e7-e6e984bc64b4}} with {{formula:d56d4d9a-936a-434f-b1b6-4f6a74efacac}}, we have {{formula:1f9af10d-21ad-4eb9-8612-4a85aa4c2198}} {{cite:84c0501454465abdc3f24f25db23ca35862e15e4}}; however, there is no such function, as {{formula:5ae5cb24-4bf4-4669-864d-e9368972699e}} for all {{formula:4a62982a-72e4-4962-bb3e-d67f7f89cdd0}} {{cite:1aa166ac1e861bd71e1970757c17bf4a1e6b34a3}}. Thus, we cannot expect a lower bound on the part sizes of an unbalanced complete bipartite graph to force the rainbow {{formula:e81970cd-4f5d-4193-aa14-55e7a7e4fc52}}-connection number down to 3, as happens in the balanced case. Similarly, for complete {{formula:924dde1e-9041-4d3c-a440-f450042790dd}}-partite graphs, if {{formula:90a7a18d-6232-48fc-ad7c-3cae6a3ad62c}} and {{formula:728e6671-997b-4307-8126-16aa15261a34}}, where {{formula:23cdf266-4520-4bc8-abe9-d03380d683e6}}, then {{formula:99074535-02c1-452a-82bd-196547cc152a}} {{cite:1aa166ac1e861bd71e1970757c17bf4a1e6b34a3}}. Thus, in the unbalanced complete multipartite setting, we again cannot expect a lower bound on the part sizes to force the rainbow connection number down to 2. One could wonder how close we can get, though.
Causal dependence between infection outcomes is a major obstacle to identification of meaningful treatment effects in infectious disease epidemiology {{cite:c480d49466d85f1ce54f6275861d2f57f12aa03e}}, {{cite:c8b6145c0030c280a80cd75df53ead03191eddca}}, {{cite:75f0783d35518a0fc6315889f8b85a6d4edde442}}, {{cite:8b1566c32a4a4de15fa9073660d3cc970483f9df}}, {{cite:deaee7005b877941639578d33b726d8396825496}}, {{cite:8fbb929e8892822ed705d797bed4a75f1a37baef}}, {{cite:bbc2ea6ea4ece8000305c2619f181328e7b6d2e4}}, {{cite:bdd3cdd5d21702983e649d9370b6bf78de146846}}. Inferential approaches that neglect this dependence can result in misleading estimates of vaccine effects, even in randomized trials {{cite:e1864a3ff78707db0d1d84bf6c5005363ac34d9c}}, {{cite:eeeda5c23b7c59d1ba1adf1493389eae54e3dfbf}}, {{cite:018831d347ba2899f93e94e4c23fe2f0dfd5f946}}. In this paper, we have presented a framework for estimating meaningful treatment effects under contagion by clarifying the temporal relationship between infection outcomes in clusters. We describe a new strategy for identifying average potential infection outcomes. The approach proceeds by first identifying the average infection outcome under a deterministic infection exposure history, then integrating this history with respect to a distribution over others' infections, in the situation where the focal individual is not present. This work is a direct extension of prior work by {{cite:018831d347ba2899f93e94e4c23fe2f0dfd5f946}} and {{cite:b416af48840fc9fab7706ad6aa8664081989616b}} to the general cluster setting, with no restrictions on the direction of transmission or treatment assignment.
Instead of operating on the original evaluations {{formula:46948af5-fe9b-4513-9c73-3259bfc008c6}} of response curves {{formula:f58c9648-34e3-43ac-8404-f1db6b2fd489}} as in all applications above, another frequently used approach expands {{formula:d8f4784b-5219-4272-bd5a-1d5bbd6d8657}} , {{formula:0021c0a9-b38e-4dbb-bb08-35fa3a2a1c60}} , in a common basis first, before carrying out statistical analysis on coefficient vectors (compare e.g. {{cite:c1398067f8f3e27082842ad5867ac74d6fd7f003}}, {{cite:83de2886a1b2d185384edf4e6075a2449ea51a18}} in FDA or {{cite:398cec9b8f87dc45ed26ad00904a872cd22c4139}} in shape analysis literature). This is, in fact, a special case of our approach, where the inner product is evaluated on the coefficients instead of curve evaluations, since the entire quotient space geometry and model is determined by the Hilbert space {{formula:57f7bd71-04d7-4e80-8c2e-251ee1360db6}} . To illustrate this explicitly, let {{formula:5c93c52e-db42-46d3-a91a-245261049af8}} be expanded in the same basis {{formula:b1dd4447-2145-4b10-ba65-85f3ca5a7afe}} used for the construction of the tensor-product effects {{formula:ed11a91f-2cf9-46e0-9768-562cdde93ef2}} in Section REF with complex basis coefficients {{formula:034d4f55-ce32-49d5-b498-8203fc11d20d}} .
Note that we may also represent {{formula:8f1b9744-3e11-4142-b8ce-1ec5fe8071ff}} with a complex coefficient matrix {{formula:5f9ad180-4d95-4578-9321-2eef7152f4a0}} instead of real coefficients.
The pole {{formula:02333baa-fead-459d-9d44-0dd0be56c1b3}} is expanded accordingly.
Then it is easy to see that modeling the mean shape/form {{formula:750b5ad0-6035-4796-8bce-0b4c0ccda140}} of the coefficients {{formula:9755b8f6-ab6a-4688-8e33-01f6ba6074f9}} as alternative 'landmarks' is equivalent to our presented model on the original level of curves – up to different inner products involved – if effects {{formula:06fd2c98-d7d2-43c1-a265-19be85e4df43}} are specified on coefficient level in the tangent space at {{formula:47590e6f-4d99-43be-af8b-6ccbc8e9bd44}} : on the original level of curves, the model induces the same tensor-product model structure {{formula:7bbfaa5c-c5d5-48de-b5b9-914e87b8df35}} .
Instead of {{formula:564d2533-96d6-4e4e-bef1-ee91d4080a72}} for two evaluation vectors {{formula:b9e07c1a-8f36-4065-a55c-45b5a68fa2ab}} , we then have {{formula:a1e13acb-bfe4-4a9c-82f0-fb6071fbc393}} for curves {{formula:9f9e757e-7ddd-425b-bfa5-9028385751c8}} with {{formula:0f6bc2ef-8f16-492f-b953-8ca430070be7}} the Gramian matrix of {{formula:786dd16f-2824-4a69-9e2c-5866b7f0ed39}} .
When, for dense grids, it can be assumed that both inner products closely approximate {{formula:a2b6e612-da02-4124-a7ee-df41c3c6efd6}} on the level of curves, the approach based on the coefficients {{formula:2f4c8db9-b02d-4fb3-a5a4-a1ce48bb0818}} may be computationally preferable, guaranteeing regular and typically sparser representations and requiring operations on smaller design matrices (in particular when utilizing the linear array framework {{cite:99c9a8dc27fce517108de8601b3347a49de0d613}} as proposed above).
In addition to prediction accuracy, the prediction time of an ML model is part of the total solving time, which should also be considered when developing efficient solution methods. Figure REF shows the increase in prediction time when enlarging the problem size for the compared ML models. Comparing graph-based models, we observe that, in practice, the mean prediction time of LG-GCN is much lower than that of TRIG-GCN. The high computational cost of TRIG-GCN prevents us from building it with a large number of layers. The computation time of LG-GCN is close to that of the linear models on VCP and MISP but diverges on DSP and CAP. This is understandable because the complexity of LG-GCN is linear in the number of edges in a graph {{cite:ba415bdc38dc837d5687a2769bc713d8e2b9de38}} and thus polynomial in the number of decision variables, compared to linear growth for LR and XGBoost. However, for MISP and VCP, where the constraint coefficient matrix is sparse (i.e., the fraction of zero values is high), the difference in the growth of computation time with increasing problem size is not as dramatic in practice, though it may be significant when an MIP instance has a dense constraint matrix, e.g., Combinatorial Auction Problems.
The feature activations of intermediate layers in a deep learning model are known to be excellent descriptors of the training data {{cite:e4ffa658dfbdfa91acb0d353008bb82257529dff}}. In this work we hypothesize that feature activations from a homogeneous set of training data (CXRs) have a specific distribution and that OOD samples will produce feature activations some distance away. We use the Mahalanobis distance {{cite:ecc186654d358af6f472273e0041ecb9634a3583}} to measure the distance between a distribution of previous observations and a new sample. This method has also been used in {{cite:f573d34b840370c964b866a0ce5a23cf5acc3777}} and {{cite:6f001e21736438f01628739218375eca5c26862e}}. {{cite:f573d34b840370c964b866a0ce5a23cf5acc3777}} investigates its use in the context of an auto-encoder. {{cite:6f001e21736438f01628739218375eca5c26862e}} shows that this metric is useful in a class-conditional setting. Both methods work with tasks involving natural images. Our method differs by restricting the in-distribution images to an anatomical region of interest and being independent of the task (e.g. classification of a pathological condition) of the model.
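A minimal sketch of this OOD score over intermediate-layer features (the feature extractor itself is assumed; here features are plain vectors, with made-up statistics):

```python
import numpy as np

# Feature activations of in-distribution training CXRs (assumed extracted
# from an intermediate layer of the trained model).
rng = np.random.default_rng(0)
train_features = rng.normal(size=(1000, 64))

mu = train_features.mean(axis=0)
cov = np.cov(train_features, rowvar=False)
cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(64))  # regularize for stability

def mahalanobis(x):
    """Distance of a new feature vector from the training distribution."""
    diff = x - mu
    return float(np.sqrt(diff @ cov_inv @ diff))

in_dist = rng.normal(size=64)            # resembles the training features
out_dist = rng.normal(loc=5.0, size=64)  # shifted: an OOD candidate
print(mahalanobis(in_dist), "<", mahalanobis(out_dist))  # OOD scores higher
```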
We recall that in the atomic decomposition, we can always choose atoms with additional vanishing moments. That is, if {{formula:14deade4-d68e-4f6c-b010-b3c84ccb6515}} is any fixed integer with {{formula:2d24d3ac-8b50-47be-9ba5-d64a405ac28d}}, then in the definition of the space {{formula:2a4c9c79-a821-408a-9f2c-813887cff09f}} we can assume that all moments of our atoms up to order {{formula:66d13300-6962-49ee-b088-7851b9aef5b3}} are zero. Thus, given {{formula:387ab8b0-3d80-460f-b775-7e265f86d5b3}}, we have that {{formula:836998f0-f239-461c-871b-b2156071ffa6}}, where {{formula:bb54ffc8-0cf7-4046-88f9-52a46c156c25}} are atoms with moment conditions up to order {{formula:c1303121-8f63-428d-b380-1026550b4983}}.
By Lemma 17 and since {{formula:d27bd12a-44df-4169-b4ac-7839e3d9cb15}} we have
{{formula:6717452f-bf9e-418d-9348-39de2a6f0254}}
{{formula:af9fae02-b13d-4c27-9736-dc7bee41d47e}}
Thus, we estimate the last integral for an arbitrary atom {{formula:4d9ef4d9-829b-42bb-a503-4596b29f0b14}} supported on a ball {{formula:1e5442b4-cd53-4acb-8579-d92519406636}} .
{{formula:ff8981b0-5d06-4ee2-90c2-1f247198a128}}
{{formula:b65536a2-cb80-4ce4-823d-16c2322c38ad}}
We first estimate {{formula:a0a07861-5891-40fa-af79-8c0e90598620}}; for these we use the fact that {{formula:3b7b9fa4-27c3-4bec-ae85-7121fb6fc7e7}} for all {{formula:b0a249d3-ee91-4b96-ab8e-6702f394cbd5}}. We have that {{formula:fe06a8ba-ef27-4e51-b635-d081366bd4fe}}, and since the Hardy-Littlewood maximal operator satisfies Kolmogorov's inequality (see {{cite:e9a3c6051dffbffac7efb48fe8813d7105ddf9fc}}), we get
{{formula:d0f3f9a4-e38b-4b27-a9a0-cc958a203822}}
To get the desired estimate for {{formula:5b9674ee-b052-4f05-9674-21f28bed7c20}}, it will suffice to show that
{{formula:9ef1b914-7f5f-43dd-b08d-b181a31b3d1d}}
To prove this, we split the integral
{{formula:6d2cedbf-28c8-446f-b872-231f97714430}}
To estimate {{formula:5b915079-e6a8-4d71-a52a-4949fca328c2}}, we take {{formula:d0f9e749-eb58-4c31-ac75-7c1b60da055f}} such that {{formula:511f09bc-6b9b-4a34-a606-0e07ed5b5537}}; then, if {{formula:11e9ae05-cdee-4b85-824c-c30426d50de6}} is defined by {{formula:9bc923d5-55f8-4a6f-89ec-ccf03a6e1fba}}, Hölder's inequality and the {{formula:ee96d653-5449-4fd4-a75b-17ced2c09bcf}} boundedness of {{formula:cebc9288-6942-441d-838b-aff4e21c4a73}} give
{{formula:aa249311-c08a-4669-aa33-eba5791238f6}}
since {{formula:1132e273-c6ff-4c59-8edb-5ecfb0679b16}} , Remark 8 gives
{{formula:6dd43e6f-d7f9-477d-876f-8437f4e7a76d}}
A computation gives {{formula:68d47c07-329d-42f8-8035-09c5cffb7abb}}; since {{formula:78ce8d1c-900d-405f-b152-6da3bc930fec}}, it follows that {{formula:1cc26071-643e-4a5f-801e-5d5c17dbccab}}, and thus
{{formula:1dcb7057-785d-44f1-9bc5-fb02d1a73619}}
To estimate {{formula:43916397-bd3b-42c9-9ace-895301f10a2b}} , we use Remark 19 and Theorem 9 in {{cite:725fcddab56abb9b78bc0473faf2e8e8a0d17fcf}} to obtain
{{formula:0f704cfa-f20d-4574-b575-2ee95d9b7b41}}
From the estimates of {{formula:8fd75460-cbda-49a9-8b78-f3d021d05335}} and {{formula:5f5f8e05-a082-4141-8ff6-3a56926a6889}}, and since the weight {{formula:d47884c3-f466-4875-9f39-6a40a1fd92a1}} is doubling, we have that
{{formula:49348656-c3c2-40a7-bb92-736c4192fb2e}}
Now we estimate {{formula:aeb2b481-4f4d-428c-bc49-ebdc35a88963}} . By Proposition 18 and since {{formula:65b347c7-394e-4925-a8cc-9bd56c03c850}} , once again by Theorem 9 in {{cite:725fcddab56abb9b78bc0473faf2e8e8a0d17fcf}}, we obtain
{{formula:6a238416-4169-42f6-be3e-e7fecddc9ead}}
{{formula:3565a5c1-e1ff-4668-84a8-c19e5432ce8a}}
Then
{{formula:a6735a8d-d087-4dee-9dc3-77888add49e3}}
So
{{formula:778212a2-bea5-4725-ab29-1ed49f1e9c49}}
the embedding {{formula:a60aa8ca-4b9b-4bbd-b4c9-4a7498c6bb98}} gives
{{formula:20c62989-e75c-4ac8-b9a4-640ecb2f1e18}}
a computation allows us to obtain {{formula:0047570d-7ff0-4af0-a381-6477c8a0a9dc}}, so (REF ) becomes
{{formula:8d1da20f-0f10-4f7d-bcc4-66f9c8a36226}}
{{formula:fb7d9101-a619-48d8-b399-f7eef80d3ef6}}
because {{formula:4a807a8d-c47d-49db-9096-1a485862dd8e}} and {{formula:c50f079a-b41f-4d2f-bdd8-f83d75df95e8}} , by Lemma 13 we have
{{formula:a5866433-c028-4f4f-b551-d9e8bbdff588}}
{{formula:139343cf-3d7e-4b14-9590-89f3e8708076}}
Since {{formula:47c4e0c9-be8f-4c1f-be89-27c2f8828655}} (see Lemma 1 and Remark 10) and {{formula:3bac84f2-732c-4971-8b25-a53669b08acd}} as sets, by Lemma 7.11 in {{cite:e7dde1901e2ff4a6cd4fb8437c7ecc9ad8014f0d}} we can take the infimum over all such decompositions to get
{{formula:dc96a66b-9a89-4896-aade-e42855dc3a74}}
for all {{formula:6f49e2e5-765c-4087-a2e8-1e2a960bf845}} .
The work by Rocha et al. {{cite:f4218d24aeed742a4ca6f87f250d453d9b09078f}} was among the first to derive a gradient-based strategy for active mapping of occupancy grids. The authors proposed a gradient of map entropy with respect to the robot pose at a cell center via finite differences of entropy values at adjacent cells. Julian {{cite:1ceec48e2ae0cf389f4e25a2b7e2564639c6211e}} proposed a divergent-beam sensor model, where the width of a beam increases radially as it travels farther through space. While the derived mutual information formula was shown to be differentiable, it suffers from high computational complexity, as it requires numerical integration of the objective function. Charrow et al. {{cite:76028231cccb105a3994f8a45e112cdc426335c5}} proposed a numerical evaluation of the gradient of Cauchy-Schwarz mutual information (CSQMI) {{cite:dc6a84f79f72f47a0bfbd58adcbd05f6b28b5270}} using finite differences of CSQMI evaluated at cell centers. Our work is most similar to {{cite:0f13bd4528fb2267cca8bb77b61432d714120247}} and {{cite:eabc99991721c9bcef8cf3bf494a8d862ed1faae}}, where the authors formulate the information gain as a sum of informative elements weighted by a discount factor. In particular, {{cite:0f13bd4528fb2267cca8bb77b61432d714120247}} defines informative elements as frontier cells between free and unexplored areas visible from a candidate pose. However, unlike the mutual information between the map and a sensor observation, using visible frontier size as a proxy for information gain does not account for sensor noise, which is inevitable in real-world sensing applications {{cite:dc6a84f79f72f47a0bfbd58adcbd05f6b28b5270}}, {{cite:af037bf9a4831b40017a180333447c0e573b944f}}.
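A toy sketch of the finite-difference entropy-gradient idea attributed to Rocha et al. above (the grid, resolution, and probabilities are made up for illustration):

```python
import numpy as np

def cell_entropy(p):
    """Shannon entropy of an occupancy probability, in bits."""
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

# Toy occupancy grid: 0.5 = unexplored (max entropy), near 0/1 = known.
grid = np.array([[0.05, 0.50, 0.50],
                 [0.05, 0.50, 0.95],
                 [0.05, 0.10, 0.95]])
H = cell_entropy(grid)

# Finite-difference gradient of entropy w.r.t. position at cell (1, 1):
# it points toward neighboring cells with higher uncertainty.
dH_dx = (H[1, 2] - H[1, 0]) / 2.0
dH_dy = (H[2, 1] - H[0, 1]) / 2.0
print("entropy gradient at (1,1):", (dH_dx, dH_dy))
```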
The key idea of {{cite:8fe1f6cef67c162953fe98dbfb888d639991880c}} is to establish various differential inequalities for the local Dirichlet integral of a nontrivial solution and from that to estimate the growth rate of the local Dirichlet integral. This idea dates back to the so-called Saint–Venant's principle, which was first used to establish the decay rate estimates for equations of elasticity in {{cite:60e0a101c6f031a69d036ce5b7c34aa7f4837fa1}}, {{cite:e0332778825ff0806b4bf0cd7f4c2ee8bd38f9ed}}, and was generalized to estimate the growth of local Dirichlet integral for solutions to linear elliptic equations in {{cite:0fc78abfa8812ccaac13f44520ded29a265eaf46}}. It was also generalized to deal with Navier–Stokes equations in a pipe by {{cite:4e881b301d11d7775de10e90a16680aac958cfc4}} and {{cite:49ae6efc5e6091fd7f44b90391a0bf5cc226da9e}}, and we refer readers to {{cite:c847b956c0dedb5c1b9c50c0df231de3cdf71dd5}} for further information.
Despite the benefits, there are key challenges that limit the applications of FL over wireless mobile networks {{cite:fd8dc330773b9f665fbc600e4254363a8f168d12}}, {{cite:81ffad2310eb550bb5aa1f79fa4de681bb378eec}}, {{cite:90ad41a725b5e7d97b0a663438ccc83db36f34a2}}, {{cite:0338211d9daae60fa3b1a832fde2f770377bb20e}}. To date, FL has been applied in a wide variety of fields {{cite:b4c9fd68556c910c10770f89a1085e57fb28510a}}, {{cite:dac1133883983aad0bbaabeef0152962a9bd9e9e}}, {{cite:73b4b87cf71ff6542436052c17ad528bae3f589f}}, {{cite:69747b6d1f56ce26d0d776b986aa7ff44db1a8f3}}. For example, mobile service providers such as Apple and Google investigate FL for enabling enhanced user experiences through learning from users' behavior {{cite:0cbc6f27362aaa4b2db220a3f78ff4940bc85a82}}, {{cite:b14a2151c88f8a38ca36b28cd9ea865b67df9f8e}}. However, in such important application scenarios, FL needs to deal with practical challenges {{cite:d503b75c5cdbf527dd643c8b03bae6f0d1e68b10}}. Specifically, FL usually involves a large number of mobile devices to ensure sufficient training data. In this case, it is challenging to deploy FL due to the limited bandwidth resources. In addition, the energy constraints of the individual devices also increase the difficulty of implementing FL. Although most existing spectrum allocation optimization methods mainly focus on total energy consumption, the energy cost of individual devices should be considered. Therefore, an effective spectrum allocation optimization method is critical to the adoption of FL in important applications.
| i | 36a794851cba73611f05d36c823afb81 |
When reduced to the case of linear constraints, the proximal ALM suggested in {{cite:6ef101192bcdaaca8143f2e1d0b1d8d3ed68b26e}} is a special case of MEAL with {{formula:743101af-64ec-4727-9a35-abeda244ca3a}} , and the Lipschitz continuity of a certain fundamental mapping at the origin {{cite:6ef101192bcdaaca8143f2e1d0b1d8d3ed68b26e}} generally implies the KŁ property of the proximal augmented Lagrangian with exponent {{formula:e8a7a222-fb6c-4e65-bf63-b64f066f7380}} at some stationary point; thus, the linear convergence of the proximal ALM follows directly from Proposition REF (b).
Moreover, the proposed algorithms still work (in terms of convergence) for some constrained problems with nonconvex objectives and a fixed penalty parameter.
| d | 6935d5a8f398e6167cafcbfc5f0eb3b9 |
Lastly, the results obtained in this study are non-trivial and against expectations.
They are “non-trivial” because they can be obtained only through the highly complicated calculations of section , together with field redefinitions and the disentanglement of mode-mixings.
They are “against expectations” for the following reason: supertranslations and superrotations map one spacetime solution of the Einstein equation to another solution, and the two solutions differ by the soft hairs, which are the charges associated with supertranslation and superrotation. The values of the soft hairs are determined by the process of black hole formation; thus the soft hairs are considered to correspond to the microscopic degrees of freedom encoded in the spacetime of a black hole {{cite:7fd5b06014fe0dbffccbcc88c34dfcff83e69aaf}}, {{cite:664951573906ec4d44d964732e09ab310d3a06ed}}. Naively, one may expect the Hawking temperature and Hawking flux to be modified by the addition of the degrees of freedom associated with the soft hairs.
| d | c531c3d482445ae86ccf1a0e65bc7ff4 |
2) Classifier Learning Stage.
After representation learning, we retrain the classifier with the currently available data {{formula:9742fcff-6ea8-4253-a6ff-a56be589b62b}} at step {{formula:50942c6c-f5fd-4c07-a04c-f0e62e9b7eba}} to deal with the class-imbalance problem, adopting the balanced finetuning method of {{cite:dd68b23263c5af862d6afd9ced236ce2bbb24f3f}}.
| m | c9f23b320a8fb0746cc610cde1ca03e6 |
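A minimal PyTorch sketch of one common reading of balanced finetuning (class-balanced resampling while retraining only the classifier head); the cited method's exact recipe may differ, and all names here are illustrative.

```python
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

def finetune_classifier(backbone, classifier, dataset, labels, epochs=10):
    """Retrain only the classifier on class-balanced batches; backbone frozen."""
    labels = torch.as_tensor(labels)
    counts = torch.bincount(labels).float()
    sample_weights = 1.0 / counts[labels]            # inverse class frequency
    sampler = WeightedRandomSampler(sample_weights, num_samples=len(labels))
    loader = DataLoader(dataset, batch_size=128, sampler=sampler)

    for p in backbone.parameters():                  # freeze the representation
        p.requires_grad_(False)
    backbone.eval()

    opt = torch.optim.SGD(classifier.parameters(), lr=0.01, momentum=0.9)
    for _ in range(epochs):
        for x, y in loader:
            with torch.no_grad():
                feats = backbone(x)
            loss = torch.nn.functional.cross_entropy(classifier(feats), y)
            opt.zero_grad(); loss.backward(); opt.step()
```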
Comparison with Fully-Supervised Transfer.
We compare the performance of GroupViT with fully-supervised transfer to semantic segmentation. For fully-supervised transfer, we fine-tune a semantic segmentation head jointly with a pre-trained representation {{cite:6f248d2659bb45c484493658de8bf091da0351d4}}, {{cite:ebf0d32e4334f8ac9669684093a7815dcec4dc7b}} on the training sets of the PASCAL VOC 2012 and PASCAL Context datasets separately and report their performances in Table REF .
For a fair comparison, we employ a ViT architecture comparable to GroupViT's for all baselines. Specifically, we append a 1{{formula:f4286f2e-98ba-46ce-b320-497d77d462bd}} 1 convolution layer to a pre-trained ViT, trained with images of size 224 {{formula:8ff02f26-b8f1-4328-9012-7f39b68de2f3}} 224 and fine-tune the whole network with ground truth masks for 4k iterations.
During inference, we resize input images to a shorter side length of 448.
We compare both supervised and self-supervised pre-training methods against GroupViT (Table REF ).
GroupViT outperforms all variants of ViT pre-trained with self-supervision by a large margin on PASCAL VOC 2012 and is comparable to them on PASCAL Context. This implies that GroupViT, without any pixel-level annotations, is able to transfer to several semantic segmentation datasets and can outperform existing state-of-the-art transfer-learning methods that require more supervision (i.e., pixel-level labels). Interestingly, on PASCAL VOC 2012, the zero-shot performance of GroupViT (mIoU of 51.2{{formula:b1e8514d-8350-46e4-af78-01617f7765b4}} ) approaches that of fully-supervised ViT (mIoU of 53{{formula:fd5dd4d4-4ea7-4499-b551-a275b58e505e}} ) trained with both image classification and segmentation labels, which is significant.
| m | c91b4447af3da5214682113b62a5a9e5 |
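As a rough illustration of the fully-supervised transfer baseline described above (a 1x1 convolution appended to a pre-trained ViT), here is a hedged PyTorch sketch. It assumes the backbone returns a (B, N, C) grid of patch tokens, which may not match the exact implementations compared in the excerpt.

```python
import torch.nn as nn
import torch.nn.functional as F

class LinearSegHead(nn.Module):
    """1x1-conv segmentation head on top of a ViT's patch tokens (sketch)."""
    def __init__(self, vit, embed_dim, num_classes, patch_size=16):
        super().__init__()
        self.vit, self.patch = vit, patch_size
        self.head = nn.Conv2d(embed_dim, num_classes, kernel_size=1)

    def forward(self, x):
        B, _, H, W = x.shape
        tokens = self.vit(x)                    # assumed shape: (B, N, C)
        h, w = H // self.patch, W // self.patch
        feat = tokens.transpose(1, 2).reshape(B, -1, h, w)
        logits = self.head(feat)                # per-patch class scores
        return F.interpolate(logits, size=(H, W), mode="bilinear",
                             align_corners=False)
```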
The statement of Theorem REF equally holds for the imaginary part of {{formula:d5bb90b5-35ef-454b-97e6-8a7706ce0765}} , with the appropriate change to the leading-order correction. (In this case, one would instead find {{formula:172e8ad7-91bf-45a2-bd08-f3cf8d6bd71d}} , the leading coefficient of the {{formula:0ab4f487-0541-44c7-b9f9-15f29e9c9035}} moment of {{formula:425808cc-4b8f-497c-8444-563f385d449b}} .) Additionally, the statement of Theorem REF can be generalized to other classical compact groups, where the distribution should again be Gaussian but with a different correction {{formula:a1a1dc9f-f363-4af5-98b8-11ea9bdc4853}} (corresponding to the relevant matrix-group moment); see for example {{cite:1de2dc65350ca5b475b2a9ad514e980488796a23}}. From such calculations one could deduce conjectures akin to Conjecture REF for symplectic and orthogonal families of {{formula:9cd7739e-a322-4004-90cd-2447da1fac09}} -functions (cf. {{cite:44c3b1c2e6d9e2ac33dc0af2ebc5b7ecc01a79d1}}).
| r | e2040a9bc7bb07a0132f035766da3371 |
We then invoked one such method, simultaneously constraining the tracer DF, the MW potential, and the LMC mass, using two complementary datasets of objects with 6d phase-space coordinates: globular clusters and satellite galaxies. We demonstrated that the method is able to recover the true potential in the presence of the LMC perturbation, and that the neglect of this perturbation biases the inferred MW mass up by {{formula:aa6441c5-6715-410c-8163-11676935b34a}} %, in agreement with estimates by {{cite:72e5197ec7f6b44c974a4adf6d5e57602c22801b}}. Applying our method to the most recent measurements based on Gaia EDR3, we measured the MW mass distribution up to the virial radius with a relative uncertainty of {{formula:72931d41-8245-45de-86e8-bea1022c9a7b}} at 200 kpc (see Table REF ). The most likely range for the LMC mass is {{formula:56a3b352-4d33-4694-8fe7-c35bb7c35439}} ; models without the LMC perturbation are noticeably worse in fitting the observed dataset (the difference in log-likelihood is {{formula:064e939f-2f0b-48a6-bcc9-0a780f32ee73}} ).
| d | 061458393c5d943608a52ce42d637de3 |
where the {{formula:c08859b5-eaa4-4b55-825e-af82c63febb7}} and {{formula:aa74032f-cab9-452e-83cd-c13bd05bf725}} indices refer to the {{formula:5e2add21-2baa-47cc-84e8-163cd1554c5b}} group symmetry. There is a universal bound for the conductivity, {{formula:ab0cfcbd-9336-4078-baea-c089f830f017}} , where {{formula:2a2ac764-3552-4a69-b079-af9bb47f3513}} is measured in units of {{formula:6016c494-76e8-4012-af14-192afb5e51d4}} and {{formula:3cde5fa4-e0ac-4eac-9c0c-994dd98db568}} is the charge carried by the gauge field {{cite:abc53f029740ce52fbc1dc7071751ca20814988b}}. This bound is violated for massive gravity {{cite:abc53f029740ce52fbc1dc7071751ca20814988b}}, for models with background fields {{cite:506cdd66aac727639413a29a8c187ee563097bf2}}, and for non-abelian Born-Infeld theory {{cite:b29b2f77957c6e4bdf898772dd279196f584f6f7}}. We investigate whether this bound is preserved in our model.
| i | 2f11088784a8cc33a261f96c808df398 |
The projected measurements and uncertainties are evaluated using Monte Carlo (MC) simulation. The MC simulation relies on a calculation similar to those described in Refs. {{cite:f72a5ee9576d00cd9ad0cf8f96c9af8d9229bd20}}, {{cite:90aa056d10514d9c19836f528a1c14d49b76b134}}. Both the {{formula:be22da79-dd5e-496d-a591-6ac880fd20f9}} and the {{formula:26bdcfa8-d849-46e9-a87f-789c00373eb5}} process are generated. For the latter, the flux of {{formula:c5797dd1-7fd6-43a4-8ec4-b22d5755b8eb}} used in the simulation is determined using a GEANT4 {{cite:d5917734121cd01b45892b477f32ad5107044336}}, {{cite:1a2bda5ebbb8d0daa3d907175df68115c44ab810}}, {{cite:fb2ed63ea0e52070d68ac304f2db8ca3cb5b4a55}} simulation of the electron beam hitting the converter (a {{formula:3252e884-4438-4a8a-bccb-508bec57371c}} foil, see Sec. ). Typically, 10% of the beam electrons radiate a photon when using a target with 35 {{formula:b1956706-e396-443c-9d72-a3160d2a0c3d}} m.
| r | 717808cb5bbf1afab7aa861a877220d2 |
For humans and many animals, interaction with the environment often involves the use of hands or feet. More intelligent species can even grasp and use tools to expand their range of control over the environment {{cite:c2b81ccae43002e424c0ebbf95288dfded4f1429}}, {{cite:0cb3a8f7d3a00c3b4940ba76edc507ba6c30e8fd}}, {{cite:07ee1caa9345979d1b938fc5d80ff09d1ae5b139}}. Similarly, a robot system that can control both grippers and tools is empowered to handle different kinds of problems {{cite:b444dddf543c259a24fd80d02ecec8a9fd5575e1}}, especially if it can switch between them on the fly.
| i | 67f374648463c90f839148eb71899dd5 |
Table REF reports the micro-averaged F1 scores of EIGNN and other popular baseline methods on the PPI dataset. As we follow exactly the same setting as {{cite:56b012a5f26a30941bfd9256d1d9c3dbc12b6926}}, the results of the baselines are taken from {{cite:56b012a5f26a30941bfd9256d1d9c3dbc12b6926}}. From Table REF , EIGNN achieves better performance than the other baselines, which can be attributed to its ability to capture long-range dependencies between proteins on the PPI dataset. Additionally, we conduct an efficiency comparison between EIGNN and IGNN: on average, training one epoch takes EIGNN 2.23 seconds, whereas IGNN requires 35.03 seconds. As suggested in IGNN {{cite:56b012a5f26a30941bfd9256d1d9c3dbc12b6926}}, training generally requires more than 1000 epochs. Besides the training time, EIGNN needs a one-time preprocessing step for the eigendecomposition of {{formula:cd3e2e25-4e7a-4863-ab67-9a805f4d0a55}}, which costs 40.59 seconds. Therefore, considering both preprocessing and training, EIGNN still requires less time than IGNN.
| r | dc50e9146f9f7cab6a9303cfb490bd83 |
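The timing claim can be checked with simple arithmetic using only the numbers quoted above:

```python
epochs = 1000                      # "training generally requires more than 1000 epochs"
eignn = 40.59 + epochs * 2.23      # one-time eigendecomposition + per-epoch training
ignn = epochs * 35.03
print(eignn, ignn)                 # ~2270.6 s vs ~35030.0 s, in favour of EIGNN
```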
A number of theoretical models have been proposed in recent times to study the nonlinear dynamics of the deoxyribonucleic acid (DNA) molecule, in order to understand the conservation and transformation of genetic information (see, e.g., {{cite:8949fcf2d81a8cfce578a5322363198add66876d}}, {{cite:f25716b2c6ab6477238c7fa243a6a2cd3515f276}}). These models are based on longitudinal, transverse, rotational and stretching motions of the bases. Among these possible motions, the rotational motion of bases is found to contribute most to the opening of base pairs and to the nonlinear dynamics of DNA. The first contribution towards the nonlinear dynamics of DNA was made by Englander and his co-workers {{cite:1685e71a99d0b52c7fb5ba2cacf8f54adc209c3d}}, who studied base-pair opening in DNA by taking into account the rotational motion. Yomosa {{cite:844539fab4355e3ba9163ec8fe8073b3973ead11}}, {{cite:b8ecb517d23092d5f357f62670f4fd2a6ff35f3c}} proposed a plane base-rotator model accounting for the rotational motion of bases in a plane normal to the helical axis, and Takeno and Homma generalized it {{cite:c194d9b29b887599eef52035a4602c2662cf4218}}, {{cite:4d0d29cf67b6dd72e9cf1be21eb674f4ba176881}}, {{cite:342532fc39ee842fd6ac93af7dae9272ef253e50}}. Later, using this model, several authors found solitons to govern the fluctuation of the DNA double helix between an open state and its equilibrium states {{cite:f6ad74f57c459ff0705fabf61d469c21a20e218a}}, {{cite:99763fac3489c8f1e07ff425aba93c4374dc6de8}}, {{cite:4b670d94898729755276d1dffc7bd14bef51f592}}, {{cite:cbee3187a354e0743377efe70c86bb68f682325f}}, {{cite:7a7bba8686c551b494502dbd58968d99952ec731}}, {{cite:b4a83588bee821281f471d5231c9eab0c165c2c5}}. Peyrard and Bishop {{cite:1bd1985762f66b9c3657776b6ef761caee9eb83a}}, {{cite:c7ddca75894bf4083c322dd201251a657776be95}} and Christiansen and his colleagues {{cite:4c0e98c54b397ce065d2b0130d1c7f8aad60420e}} studied the process of base-pair opening by taking into account the transverse and longitudinal motions of bases in DNA. Very recently, there have been extensions of the radial model of Bishop and Peyrard {{cite:9407ba284d3738444e809963b19aa6f249f7d61e}}, {{cite:52513f05838a3eac0c92678e347443c9c9327cc3}}, composite models for DNA torsion dynamics {{cite:23df67b6fe746ceda2284596cfada0f36ef913df}} and models interplaying between radial and torsional dynamics {{cite:61343fcee54fcc4affc4112326f72d07097deae9}}, {{cite:4e0cd67136f6e0335733651e013431ea47500989}}, {{cite:acbc49e453813e86fac5c180df5e6e781f149453}}, {{cite:68a37130b3c2d8e3294a4889e2c33a1e316b465e}}. In all the above studies, homogeneous strands and hydrogen bonds have been considered in the analysis.
| i | f486d44b9a060304560b3da021d82991 |
Ever since the work of Witten {{cite:b4a24c736df4386e42b07e4db1f88f1feec70a2c}}, theories based on (or inspired by) massless twistors have made tremendous contributions to scattering amplitudes of massless particles. It is tempting to conjecture that
a similar degree of success with massive twistors is on the horizon.
| d | dbc7bf4143434a46b9ab8e5e0f9578a0 |
To evaluate the quality and intelligibility of the enhanced speech, short-time objective intelligibility (STOI) {{cite:3856a163ac53b0fa1449a861aaf21ad09d9d529f}}, log-spectral distance (LSD) {{cite:ba339f851742bda07ca417ed8af8f98365b57e32}}, and P.563 {{cite:455413be90fba6bb587c93c3d2ced342d7da8470}} are used for objective evaluation. STOI is a measure of speech intelligibility, ranging from 0 to 1. The higher the score, the higher the speech intelligibility after enhancement. STOI is defined by
{{formula:316f0abf-2140-4ecb-a806-165ca4dc3957}}
| r | 953f163cd88cd8baaf5363fc8938e8f5 |
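For reference, STOI is straightforward to evaluate in practice. A small sketch using the `pystoi` package (an assumed, commonly used implementation, not necessarily the one used by the authors; file paths are placeholders):

```python
import soundfile as sf
from pystoi import stoi

clean, fs = sf.read("clean.wav")        # reference speech
enhanced, _ = sf.read("enhanced.wav")   # output of the enhancement system
score = stoi(clean, enhanced, fs, extended=False)
print(f"STOI: {score:.3f}")             # in [0, 1]; higher means more intelligible
```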
Given the resonant scattering nature of the Ly{{formula:dee4cd8c-f2b2-499f-b2a3-b9a53442ee31}} line, the emergent
profile is modified and suppressed by many physical effects, e.g. H i content, gas geometry and kinematics, and the dust content and
distribution. For instance, in an optically-thick static medium Ly{{formula:0e8ce4cb-47fc-491e-bd70-e6eec8112924}} escapes through successive resonance scattering leading to a
double-humped profile, with the position of the peaks determined by
column density, temperature, and kinematics of the medium
{{cite:0ead02f937c4f25f995cd4698369a5b30da6c681}}, {{cite:210f9e8faf54ecb0179b245579ff8a97786e3621}}. In
addition, scattering through an inflowing (outflowing) medium leads to
an overall blueshift (redshift) of the Ly{{formula:991fd845-cdaa-4665-b366-e9376db7d4d5}} profile with enhanced blue
(red) peak and suppressed red (blue) peak
{{cite:22b469d1ab062fa44b3a5101bfd48a8fb80530b2}}. In a pure static medium, the
expected velocity offset of the Ly{{formula:e3a03541-46b9-462d-9e86-57f790644105}} emission is {{formula:408d0c39-42bd-49b3-b700-cd58cbdf5cc7}} 344 km s{{formula:f7406b00-1381-43d5-885f-9e26ad50e78b}} if
we assume H i gas temperatures of {{formula:9ddedecf-c504-4395-8eeb-3a01e564ef3c}} K {{cite:210f9e8faf54ecb0179b245579ff8a97786e3621}}. For a column density of
{{formula:8046ecc2-3786-48ac-991e-5f71e014187e}} , consistent with this DLA, a velocity of
{{formula:36914024-c5cd-45ff-8578-8f13116f72d6}} is expected, in line with our observations
({{formula:30e4d968-ad9e-47c8-81fd-a08178d3500c}} km s{{formula:ecf6b476-68f0-4af7-ad3e-6622a2bed902}} with respect to the systemic redshift derived
from metal absorption lines).
| d | 7ac5b9716c776cbcfff01bca3a847fe4 |
From the results of our study, it would appear that {{formula:708f7748-c7ae-45e7-913d-4461c2dd5a6f}} evolves
significantly in the redshift interval {{formula:ee147086-129f-4117-a369-d1c4f0c2c1b1}} , but the precision
of our measurements should still be regarded as crude, particularly
taking into account the covariance between {{formula:46362ab9-69e9-4186-bc00-a5ae4a89b116}} and
{{formula:daea0e4f-4255-4012-999c-f87c64ce0a22}} . Better statistics at bright absolute magnitudes are
essential for breaking down the statistical uncertainties on {{formula:a12a4a09-c5ce-49cc-816e-ef595f501344}} ,
and so larger sky area as well as deeper magnitude limits would be of benefit.
Thanks to XMM-Newton's long service, tens of square degrees of
extragalactic sky have already been observed with XMM-OM in UVW1
{{cite:2deeecd42259d1809a4b6d7146557979832a3cd4}}, hence the XMM-Newton Science Archive could prove
a rich resource for such a purpose if it can be combined with suitable
redshift data.
| d | 47b5cc20e58bcb904de0a8d3d91b0ffc |
Baselines. We compare our method to a strong set of baselines, including both two-stage (e.g., G-TAD {{cite:28406126acbb1722ec9e5e06437981efd424f1ac}}, BC-GNN {{cite:200fdd98e629ca7ed3c8a91424249a162fc64ad3}}, TAL-MR {{cite:39d82c6c293988382ebc62921cff3c30f639f030}}) and single-stage (e.g., A2Net {{cite:4fb55a6e0808126d2defb5fc3d7c98efdb16c9b0}}, GTAN {{cite:bf88d1e3a36295f85e827338ddd5f17fe5faa2f3}}, AFSD {{cite:94128cb425c79264410f3c9464a60a0742ef2356}}, TadTR {{cite:f1d4a15483c79168276a9429a6ce9453e5ac0b33}}) methods for TAL. Our closest competitors are the single-stage methods.
| r | ec143362ba02e988a50df72b16f7a124 |
The results of the multilingual models can be seen in Table REF . The results are calculated using the classification_report method of Scikit-learn {{cite:970dfafcb7854a497cf10141c595b405e7dfd882}}. There is not a big difference between the multilingual BERT and XLMRoBERTa models. XLMRoBERTa seems to be slightly worse than multilingual BERT overall, although it has a better overall accuracy for French. All in all, the sentiments that are most difficult to predict are disgust and pained, perhaps because they depend more on audio cues than on textual ones. After the neutral label, anger and happy seem to be the easiest for the model to predict, although the overall performance is not very good.
| r | 355faf122cca89ac411abf88ce10b007 |
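A minimal example of the scikit-learn evaluation call mentioned above, with toy labels standing in for the real predictions:

```python
from sklearn.metrics import classification_report

y_true = ["neutral", "anger", "happy", "disgust", "neutral", "pained"]
y_pred = ["neutral", "anger", "neutral", "pained", "neutral", "pained"]
# Prints per-class precision/recall/F1 plus overall accuracy.
print(classification_report(y_true, y_pred, zero_division=0))
```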
Few-shot learning attempts to resolve this problem by training a model that classifies an unlabeled example based on a small labeled support set. Specifically, {{formula:0739d6dd-ff44-4916-9212-dab6130e840b}} -way {{formula:2dc93dce-0472-4406-a7e5-e4da857e9ab4}} -shot learning is the task of classifying an example, termed a query, into one of {{formula:7213334e-40aa-41a3-9758-6692746acc5e}} classes when only {{formula:9b30cab6-4df6-429f-9566-1ef6a7b4690e}} samples per class are available as supervision; these {{formula:25a3a382-1435-4a3c-8c22-3b6304c7555b}} labeled samples are termed the support set. During training, support images and some query images are sampled, and the meta-learner must distinguish the categories of the query images using only the support images. Moreover, the categories of the training set are disjoint from those of the testing set and are randomly sampled to prevent direct semantic relationships and visual similarities between them. Following {{cite:67af1d61d60ab6eb55f8858a8e427a4b50a24022}}, the batch consisting of the support set and the queries is termed an episode.
{{figure:159e12d1-1006-4e20-b39d-b37bedbd8fe1}} | i | 7a389a802f274ba56a7cbcd92ed014cc |
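Episode construction as described above can be sketched as follows (a hypothetical helper, assuming every class has at least k_shot + n_query examples):

```python
import random
from collections import defaultdict

def sample_episode(labels, n_way=5, k_shot=1, n_query=15):
    """Sample one N-way K-shot episode: indices of support and query examples.

    `labels[i]` is the class of example i; within an episode, support and
    query always share the same randomly drawn N classes.
    """
    by_class = defaultdict(list)
    for idx, c in enumerate(labels):
        by_class[c].append(idx)
    classes = random.sample(sorted(by_class), n_way)
    support, query = [], []
    for c in classes:
        chosen = random.sample(by_class[c], k_shot + n_query)
        support += chosen[:k_shot]
        query += chosen[k_shot:]
    return support, query, classes
```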
In fact, pushing even further and extending our analysis to the multiphoton extreme of continuous-variable (CV) encoding {{cite:de9f505aee6a3d2ca0d5047c7bbd0f7815c1d532}}, {{cite:1134ae686866481d18b7181108a5e646dcb78ba0}}, {{cite:a7c270a7d4fb05dd0eb712265829bc6ed4b9d6ba}} should provide valuable opportunities for QFPs specifically designed for emerging applications in CV photonics, such as Gaussian boson sampling {{cite:c66068e0dc279f81dee848422fd4a9dfa5442026}} and Gottesman–Kitaev–Preskill qubits {{cite:f12b123339a8c09769a50bc069b8e23e9b4acfe4}}.
The development of integrated CV sources {{cite:63b0e78f5358870540185dab416f985e95d9d5cd}}, {{cite:619ed64bf5abd396817aae4503ec30e2c893057e}} and theory showing the QFP's potential in non-Gaussian state engineering {{cite:4aca2c1835cd210b1803e3942d0c986b4661ca0c}} support a promising outlook for monolithic platforms combining squeezed-state generation and QFP-based control.
Nevertheless, significant enhancements to the model presented here will be required.
In particular, expressions such as eq:finiteRes—which does not preserve commutation relations—or eq:fidBroadband,eq:probBroadband—which assume postselection on a single photon in defining wavepacket {{formula:777422a3-eb07-46ab-b88b-c349bf0c549c}} —must be generalized to treat loss in a fully quantum mechanical fashion, possible in light of recent CV formalisms {{cite:b596259b34ec6c28bd8c38340a9b464e5eefd25d}}, {{cite:589fc2de0a53de10882e60844d9ea8747f63a9d0}}.
| d | 7effed0ec6f060a5e8d0725eee8c60c8 |
However, such detailed models are not feasible for very large reaction networks because of the high computational cost and unavailability of kinetic parameters.
Several other approaches have been proposed with the primary aim of capturing general and universal properties of biochemical reaction networks in self-replicating living systems.
Examples of such models include {{formula:94c19bd9-4f09-4728-a710-cfee4a6eff5e}} -systems {{cite:02c7a6e57485d8516a7e5cc67bd20a8a63031933}}, hypercycles {{cite:7e089655fa8f81e7c33c4ee45fd2aabee2455168}}, autopoietic systems {{cite:3f802724a3ef437abf987ca6981e0151ecbb6f79}}, chemotons {{cite:efba8361458f6d48eb661a07167ca5c2e2942f64}}, autocatalytic sets {{cite:c816fece265ac1ae9731d8a982acfd3809ea3940}} and chemical reaction systems {{cite:e5b9d8231bf856838ad2cffca74c1da036b38a5f}}.
The common property of such models is that they focus on the catalysis of network reactions by chemicals, which are themselves produced by reactions within the network.
They usually do not require kinetic details, but only the knowledge of all reactions.
An in-depth discussion and comparison of such approaches can be found in {{cite:0c79e898b5dd976959c5428ca621eb72a3dfab76}}.
| i | fdf59c3ee3ab147727ea8eca3f8191da |
In {{cite:431af431cd24305662104bbfd3131b4ff9f15fe4}}, a sperm head region of interest (SHROI) is extracted from the original sample. First, the center of the SHROI is located where the mouse points. After that, an Otsu adaptive threshold algorithm {{cite:6374881ab0fdc84894e992f9a6915adad9e751cc}} is employed to binarize the SHROI. Third, to improve the performance of the final detection, a morphological close operation is applied as post-processing. Finally, the processed SHROI is converted into a binary image, where the object is set to one and the background to zero. After all these operations, the centroid of the sperm head is located by analysing the outline of the sperm head.
| m | 718a62d90fe6508fe10863f07e4653d0 |
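A hedged OpenCV re-implementation sketch of this pipeline (Otsu binarization, morphological closing, centroid from image moments); the kernel size and other details are our assumptions, not values from the cited work:

```python
import cv2
import numpy as np

def sperm_head_centroid(shroi_gray):
    """Binarize an SHROI with Otsu's threshold, close small gaps, return centroid."""
    _, binary = cv2.threshold(shroi_gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = np.ones((3, 3), np.uint8)                 # assumed structuring element
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    m = cv2.moments(closed, binaryImage=True)
    if m["m00"] == 0:                                  # nothing segmented
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])  # (cx, cy)
```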
The combination of local and global features through an ROI-GAN has thus provided a benefit. The rationale was to enable the global FCNN to learn useful features with the help of the local FCNN, coordinating their training with a generative adversarial game and sharing parameters (i.e., a CoGAN). CoGANs were proposed to avoid the need for paired datasets during the learning stage and reported excellent performance on the problem of Unsupervised Domain Adaptation {{cite:34f3d582fb8d2bb705a51a3f9cf32527c879fb9c}}. The two fields of view in our segmentation problem share many more features than images from very different modalities (i.e., depth and color images, as in {{cite:34f3d582fb8d2bb705a51a3f9cf32527c879fb9c}}), and we interpret this to be the main reason why sharing parameters was beneficial only at the generator, and not at the discriminator, of the GANs (the performance of ROI-GAN-A being superior to that of ROI-GAN-C).
| d | d06d56118954c0226cee3b56b45f434f |
Future work. We foresee the evolution of future research towards several directions. First, we predict that the community interest will grow towards the exploration of behavior forecasting in the frequency space. Many works have already stated their benefits for avoiding the freezing behavior in single human motion forecasting {{cite:9581a729997436ddd1a76f704a157c2df04b048d}}, {{cite:3853b8b9386362ea5a9a9934c5c0acd22faa42b7}}. In fact, few very recent works showed very promising results in this direction in highly dyad-driven scenarios like dancing {{cite:ce96e80b0da789c4ff441d9db3e1d76f2144144f}}, {{cite:42ea50af0bcf03084b87fc3aae66011fb6746191}}. Second, we think that the biggest benefit will come from methods that are capable (at inference time) of adapting their learnt behavioral model to the specific behavioral patterns of the person for which forecasting is being applied. Meta-learning could be a good candidate to represent one of the next breakthroughs in this field {{cite:fe7b933964277a539c9d2d30dc6fe9e7ca0382c4}}, {{cite:a86700604cc2c79827d5cbf94d0198389e9068b6}}. We also think that models that promote the interpretability of the behavioral predictions (e.g., causal models) may help us to automatically detect the dependencies among visual cues/signals, which may eventually lead to the discovery of new underlying mechanisms in human social communication. Finally, the exploitation of more explicit contextual cues, and the exploration of stochastic methods are still open issues.
| d | 30a0d4b6c39d37e2a4ab2c89ee69b339 |
Using training data in the source domain, we train parameters {{formula:b2370d90-4987-4324-bc39-e1bf1422e1ec}} and {{formula:b98ce520-2899-47b0-93aa-b086384468e5}} on a diverse set of domains.
Our intuition is that, by exposing the model to diverse (simulated) domains, we implicitly constrain the learnable calibration parameters {{formula:7b5a829d-5886-4daf-938e-96f25449de10}} and {{formula:1e32c692-0f14-445c-b37a-358192477e61}} to be robust and invariant to unseen target domains.
However, since we only have one single source domain, we need to generate multiple pseudo domains based on the source domain.
Instead of adopting complex generative models to generate pseudo domains, we find that applying appropriate strong data augmentation during training leads to promising results.
We explore three different augmentation strategies: RandAugment {{cite:e4de00fda46ac461bdf3ad9d8cb21a58b3901456}}, AugMix {{cite:20531900a7fe60510fd4dc0855ca0f5ead0490d1}}, and DeepAugment {{cite:1bf84d2490304194d6f85d0493d054d4c5359218}}, and empirically find that DeepAugment performs the best (Table REF (b)).
The details of augmentations are in supplementary materials.
| m | 3abc978e43df6b705168f790115223d5 |
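A small torchvision sketch of the pseudo-domain idea for the two built-in strategies (this assumes a recent torchvision; DeepAugment has no built-in transform and is typically precomputed offline via image-to-image networks). Composition details are illustrative:

```python
import torchvision.transforms as T

# Each strongly augmented view of a source image is treated as a sample
# from a pseudo domain during training.
pseudo_domains = {
    "randaugment": T.Compose([T.RandAugment(), T.ToTensor()]),
    "augmix":      T.Compose([T.AugMix(), T.ToTensor()]),
}
```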
Arguably, for many species, their reproduction rate is often empirically estimated at densities which are away from the extinction threshold, where the Allee effect is not well-pronounced. As such, we suggest that in theoretical models the impact of the Allee effect on dynamics should be assessed via comparison with a scenario without an Allee effect, where for both scenarios per capita reproduction rates are assumed to be the same at some `safe' densities. For the current theoretical study, this mathematically signifies that we need to compare the Allee effect in model () with the same model with {{formula:0845d6b1-fa98-413d-b95a-1a0ec73069a2}} since for large {{formula:c9d596aa-edce-4811-829e-b5285b73e739}} the value of {{formula:fe6d350a-6632-4fc0-8ba8-d782eda5f083}} tends to unity, which corresponds to the classical Rosenzweig-MacArthur predator-prey model {{cite:eeceff9f5912eb0fe7a7a2e691793ebd6731a4ed}}. Note that the model with the Allee effect becomes the Rosenzweig-MacArthur model in the case {{formula:ecfabf12-2b2a-46bc-8de7-8082487167df}} .
| d | 547bd6dda92a6256af7d48f5323685d6 |
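For reference, the classical Rosenzweig-MacArthur model that serves as the no-Allee-effect benchmark can be written, in a standard nondimensionalized form (the notation here is ours, not the excerpt's), as:

```latex
% u = scaled prey density, v = predator density, a = attack rate,
% h = handling time, k = conversion efficiency, m = predator mortality.
\frac{du}{dt} = u(1-u) - \frac{a u v}{1 + a h u}, \qquad
\frac{dv}{dt} = k\,\frac{a u v}{1 + a h u} - m v .
```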
Pattern-matching methods {{cite:5cdda5c4b7b0da221376d20e515dc5ed4f4c25ad}}, {{cite:66f5f5255d054139dbff18abe05878087b59d99f}}, {{cite:9bffcef0e262753ec68f9ae91d72909ad7b9bc27}}, {{cite:d018c1812f799f354fbf3a1171d5339dd6c1f4d2}}, {{cite:9300257953b66e5e0b58158b57716f44792762f8}}, {{cite:24070d6b85db999a8ab34eec80f4366fe581ed80}}, {{cite:2c22239ad5f05101defba20920cf73b0ade70145}}, {{cite:0dba51b681777e92181e61aa10493770175f543f}} try to extract concepts from free text based on handcrafted patterns or rules. They can acquire accurate concepts but have low recall due to their poor generalization ability. In contrast, our MRC-CE performs the CE task with an MRC model, going beyond the limitations of handcrafted patterns and thus acquiring more concepts.
| m | 1bcd3aa69649cbc4ddc68e9b0b2f28c7 |
We compare against the following PEFT methods, using a linear decay with warmup scheduler with a warm-up ratio of {{formula:bbc1e156-74ff-42ff-bf0d-84dfcd16f4b5}} and the Adafactor optimizer {{cite:c9be35e76e90d3f6f2c2a46ef5f911efce6e4882}}.
Table REF shows the full per-dataset results of all PEFT methods we considered.
| r | bac1c51d80fccf75529f7529307977ab |
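A sketch of the stated optimization setup using the `transformers` library; the warm-up ratio is masked in the text above, so the 0.06 below is a placeholder, not the paper's value:

```python
from transformers.optimization import Adafactor, get_linear_schedule_with_warmup

def make_optimizer(params, total_steps, warmup_ratio=0.06, lr=1e-3):
    # Fixed-lr Adafactor (relative_step off) so the external scheduler
    # controls the linear decay with warm-up.
    opt = Adafactor(params, lr=lr, relative_step=False,
                    scale_parameter=False, warmup_init=False)
    sched = get_linear_schedule_with_warmup(
        opt,
        num_warmup_steps=int(warmup_ratio * total_steps),
        num_training_steps=total_steps,
    )
    return opt, sched
```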
This paper provided an efficient solution to L1-regularized orthogonal sparse coding and displayed the orthogonal sparse coding basis functions. It derived, from sparse coding, the forward transform of neural networks incorporating ReLU activation, extended the derivation to a complete convolutional neural network without pooling or normalization, and suggested improvements that may lead to more robust neural networks resistant to fooling of the kind described in {{cite:4365abbb84562e7aec5c63bb3b7df52247639f64}} and {{cite:af740885525aa9bb4b33fda227a464a8acaf1603}}. To our knowledge, aside from the links between sparse coding and convolutional neural networks provided by {{cite:52b1e386ee8d38e151350864fbcba0ddb91539e5}} and {{cite:15929f890cc5f9fdeb57d37dd602fafa8a0233cd}}, no comprehensive derivation of a convolutional neural network from the first principles of sparse coding has been published. Here we derived both the forward transform (ReLU of filtered input) and the loss function of a convolutional neural network from a slight modification of the sparse coding model of {{cite:473afa15ed3c0113ccd92517a0ba51c34282ba2c}}. Future work may investigate an implementation of the full model and compare its performance with convolutional neural networks on image classification, as well as its susceptibility to fooling by computer-generated images.
| d | b44513631dd3ad63578a42b2038329a2 |
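The core closed-form fact behind the ReLU derivation summarized above can be verified in a few lines of numpy: for an orthonormal dictionary, L1-regularized sparse coding reduces to soft-thresholding, and adding a nonnegativity constraint turns the soft threshold into a shifted ReLU. This is a sketch of the standard result, not the paper's exact formulation.

```python
import numpy as np

def orthogonal_sparse_code(x, W, lam):
    """Closed-form minimizer of ||x - W a||^2 + lam * ||a||_1 for W^T W = I.

    The unconstrained solution is soft-thresholding of W.T @ x; restricting
    a >= 0 yields ReLU(W.T @ x - lam/2), the link to network activations.
    """
    z = W.T @ x
    soft = np.sign(z) * np.maximum(np.abs(z) - lam / 2, 0.0)  # unconstrained
    relu = np.maximum(z - lam / 2, 0.0)                       # nonnegative codes
    return soft, relu
```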
In Section , we deal with the NC curvature corrections to NC Minkowski spacetime parametrized by spherical polar coordinates. We compute both first-order and second-order noncommutative corrections to the inverse metric tensor, Christoffel symbols, Riemann curvature tensor, Ricci tensor and curvature scalar in the twist-deformed diffeomorphism framework of NC gravity. We repeat these calculations in Section for the NC Minkowski spacetime parametrized by parabolic coordinates. All of these calculations are based on the formulas derived in {{cite:6a407f2c5d1b4739ec3aff645e6f5c6bb9b64fb6}}.
| i | 47ec1cc42c1131a2c18550f39c0f8e36 |
Since 2013, a variety of experimental quantum optics groups have implemented proof-of-principle implementations based on different architectures, such as reconfigurable integrated photonic circuits {{cite:b500b4099a140f46663bc951cc270e66e9051d56}}, {{cite:949d239c15a9410a19a4ac0ce3b3409487da05e9}}, {{cite:2ea192235d1272e36ce989a2395b798839407394}}, fiber-loops {{cite:e7d8d46d56dacf6edacdf3d0960931e1a64190a9}}, 3D waveguides {{cite:b343e53b0794b98c7d1547ecd1fd5ca7a8b36947}} or multimode fibers {{cite:bad8f4b8a6241ff3cadcfce608525e2469c53d25}}, that have the potential of being scalable and therefore are candidates for a quantum advantage demonstration.
The motivation to further simplify the experimental scheme led to the proposal of scattershot boson sampling, a sampling problem as hard to simulate as the initial boson sampling proposal. Scattershot boson sampling solves the problem of obtaining {{formula:37804aef-2cee-415a-8b5e-ecfed7eeb314}} single photons from state of the art probabilistic single-photon sources by using {{formula:1ea56207-7178-4772-9ce5-bbc9363ad602}} heralded two-mode squeezed vacuum state sources,
one per input mode of the boson sampling circuit {{cite:98fde160ee5a1299e49ae484c05041d1f287f223}}.
| i | c9d37ca8a985fbfa54a0eab29013679c |
In our material Pd{{formula:73c7d07c-64c0-446a-819c-bfd8d2493ed4}} Bi{{formula:c318952b-cb73-4fc1-899a-aa2000bd3361}} S{{formula:db6aecd6-21a8-4a52-b006-d3b4ff162244}} , the spin-orbit coupling (SOC) is also expected to be strong. We fit the experimental data with Eq. 5, using the fitting parameters {{formula:4128a4eb-4e8d-48be-9a39-b98a586dda80}} and L{{formula:71fe6d83-ab7f-4d54-90f0-e74f221b4c76}} . The temperature dependence of the obtained fitting parameters is shown in Fig. REF . The extracted value of {{formula:e88e3399-1a41-4da5-9b8a-6d92f82814af}} at {{formula:6f2db34b-ca20-4e19-8785-25719820fe40}} K is smaller than the theoretical value {{formula:479b08ac-3881-404f-a317-fee1e4267767}} expected for a single conduction channel, indicating the presence of more than one conduction channel. For example, topologically trivial bands at the Fermi level could contribute to the conductivity in addition to the topological electrons. Figure REF (b) shows that {{formula:94a5d0d2-8de0-4aa7-b372-45d99da94702}} decreases from {{formula:cb163953-d2c4-4733-a14c-a0b46931c7d9}} at 4 K to {{formula:325beae0-3f12-4120-b0a0-1c036a4686be}} at 10 K, consistent with the trend previously observed for other topological materials such as Cd{{formula:0c08b185-a2a8-4cdb-adb6-12c4e00fe2a3}} As{{formula:80d30aff-111c-4e01-8c65-0e28289fe352}} {{cite:e26cb2fc3af0900545e480fc05ea6034ba17e876}}.
| r | 72b970fcd7947521461a137a419a1e4f |
Latent ODEs: Rubanova et al. refine the Latent ODE model of Chen et al. {{cite:303b880f8828ee33fa98e10d587891e274f8123c}} by using an ODE-RNN as the recognition network, where the ODE-RNN generalizes RNNs with continuous-time hidden dynamics defined by ordinary differential equations.
In our experiments, we modify its official code and extend the ODE network to a GODE network that can be used for graph data.
| m | 97a09053ba859e76043cd6997a18c14c |
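One plausible minimal form of such a graph-ODE extension, using `torchdiffeq`; the actual modification of the official code may differ, and the normalized adjacency `A_hat` and the vector field below are our assumptions:

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint

class GraphODEFunc(nn.Module):
    def __init__(self, A_hat, dim):
        super().__init__()
        self.A_hat = A_hat                    # (N, N) normalized adjacency
        self.lin = nn.Linear(dim, dim)

    def forward(self, t, h):                  # dh/dt = A_hat @ tanh(W h + b)
        return self.A_hat @ torch.tanh(self.lin(h))

func = GraphODEFunc(torch.eye(4), dim=8)      # toy graph: 4 isolated nodes
h0 = torch.randn(4, 8)
traj = odeint(func, h0, torch.linspace(0.0, 1.0, 5))   # shape (5, 4, 8)
```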
Model training: The experiments are conducted with VGG16 {{cite:acf0dfa382fcc9352bd0a8c7be1fc7164b32fcae}} and DenseNet {{cite:99a9d80b0e6bca8d84687890575f60efa13c79b9}}. VGG16 is a 16-layer deep network with all layers connected in series. DenseNet, a more modern network than VGG, consists of dense blocks of interconnected layers in between traditional convolutional layers. Both networks work in a feed-forward fashion and are trained on the CIFAR-10 dataset {{cite:840817972dd8b2b2d58eb1bfe36ac8eaf582534f}}.
| m | 8f994f6401f6b3f5259c213511df0e65 |
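The corresponding torchvision setup is standard; a minimal sketch with hyperparameters omitted (the cited experiments may use different variants):

```python
import torchvision
from torchvision import transforms

transform = transforms.Compose([transforms.ToTensor()])
train_set = torchvision.datasets.CIFAR10(root="./data", train=True,
                                         download=True, transform=transform)
vgg = torchvision.models.vgg16(num_classes=10)              # serial topology
densenet = torchvision.models.densenet121(num_classes=10)   # dense blocks
```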
In this paper we have ignored stellar rotation (with angular velocity {{formula:f9579674-a6aa-4216-ac78-add3757c0566}} ) except in Appendix , under the assumption that the star is rotating slowly relative to the tidal frequency {{formula:9981897d-51da-49e5-85e2-dd38b2fb3fe8}} , i.e. that {{formula:82a2af10-1497-4aa9-970d-50b9ccae323a}} , and certainly that {{formula:0cabb1e5-fae3-43e5-95ba-421582fb206d}} . The neglect of rotation is likely to be valid for predicting the current and future evolution of most, but not all, of the shortest-period hot Jupiter systems observed. This is because their host stars typically rotate with periods much longer than their planetary orbits {{cite:915637e3957f5428ba4ac0fdee9cddee8b346a93}}, so we can probably neglect the corresponding frequency shifts due to rotation (as long as their radiation zones rotate similarly to their surfaces). For example, WASP-12 is inferred to have a rotation period longer than 23 days and perhaps as long as 38 days {{cite:03535b29ac81a6c706adddebf1b33366e4afeff3}}, whereas the planet WASP-12 b orbits in only 1.09 days. A brief calculation of the possible rotational correction to our results in Appendix confirms that it doesn't appear to be significant for such orbital and rotational periods. In addition, inertial waves (restored by Coriolis forces) will not be excited by planetary tidal forcing in these stars (which would require {{formula:cf92f7e8-ae22-443a-a6ea-952a10e79e2d}} , in which case we could no longer describe the modes as a single spherical harmonic in the form of equation REF ). However, the effects of rotation on the mechanism we have analysed should be explored in more detail in future work to enable us to study more rapidly rotating planetary hosts, such as young stars or those that are nearly rotating synchronously with their planetary orbits e.g. {{formula:47c26e62-1475-42b7-8f4f-6661a771ead8}} -Boo or WASP-128.
| d | 0dd942928a16d7a2fb5d11dffc00eb0e |
Given the point cloud associated with descriptors, some methods {{cite:2179793655f35f3291ae272087dd47f1835373f1}}, {{cite:1663c55025d1ce6b6d1227bfe907fbdb6cd91f7f}}, {{cite:e63f6ee148cda4c884520a1631c91f34e2858f49}}, {{cite:064bb57925c073e4be7622487f5d2e34a697fb2b}} rasterize the point cloud into 2D image space with a learning-based differentiable rendering scheme. Instead of explicitly rendering the point cloud representation, Dai et al. {{cite:b4f01485b7aa1294f1ce1dfa66d0de30b397c452}} and Song et al. {{cite:ca8b526eb625a27c4a9c2c6bf8de9eec3819e082}} proposed to extract features from the point cloud representation to construct a multi-plane 3D representation, and then render the color image from it via a neural network. Other methods {{cite:09a1a27335bfb0591033997e71f01608d64a24fa}}, {{cite:0e41da3100608d2e4f8780751e0fddfe86866465}} use the point cloud as a base geometry model, which is further fitted to a surface mesh, and generate novel views by blending weights of sources on the mesh surface.
Some methods construct the point cloud by estimating the depth maps of source views.
Niklaus et al. {{cite:2e1e575f82a767b3951b35ad54a832a517bf9d8f}} proposed to predict the depth map of the source view guided by semantic information, and then render the novel view from the colored point cloud constructed from the source view based on the estimated depth map.
Wiles et al. {{cite:ed7ba3f9f1a2586becc35b97f52346dcd3d68553}} projected the feature map of the source view to the 3D space based on the estimated depth map, and then synthesized novel view by decoding the feature map rendered from the point cloud. Le et al. {{cite:4cd68e74b7e9b9bdd39c590aa0ae193d4edf5970}} proposed to backward warp the synthesized novel view to the source one to supervise the depth estimation of the source view. Cao et al. {{cite:4bdc09fde9ddb5196dfb075ca968eb30ade1fc22}} forward warped each input view with a differentiable point cloud renderer similar to {{cite:ed7ba3f9f1a2586becc35b97f52346dcd3d68553}}, but extended to multiple inputs by fusing rendered view-dependent features.
| m | 3484840f702f9e1cb4d9c4c77cb282cd |
Table REF lists, for all-label defenses, the attack detection AUC for our
proposed defense and for three other existing defenses (i.e. feature squeezing (FS) {{cite:3a71653f2c320684265e9f68b845ee4663902a15}},
MagNet {{cite:5d1e155750b1002c5fc0da505da0294474bc761b}}, and latent intrinsic dimensionality
(LID) {{cite:c22564c605afddc51c461a72ab2cb2b9b90252ea}} described in
Section REF ). For FS, MagNet, and LID, we use the
implementations provided by {{cite:3a71653f2c320684265e9f68b845ee4663902a15}}, {{cite:5d1e155750b1002c5fc0da505da0294474bc761b}}, {{cite:c22564c605afddc51c461a72ab2cb2b9b90252ea}}. Again we consider the four tasks and six attack methods as above.
| m | 753863a46636a09f0a312ec85f21cbe3 |
Before we detail how annotation transfer is performed, we first provide a brief review of the Supervised Descent Method (SDM) {{cite:fade9900657b166edbe57da89b930362eac8d7a4}}, as it forms the basis of the proposed approach. We note that our concept of transferring annotations is not limited to the SDM method but can be adapted to other existing cascaded regression-based approaches {{cite:7adf80ebeedca87bda4f823bfdc99fb024978484}}, {{cite:efdfcd98e86961ed29e933b9751b0a7762111360}}.
| m | 33c8cf54ebf9253a73fa21edfd8e90a1 |
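For background, one SDM stage is just ridge-regularized linear regression from image features at the current landmark estimates to the desired shape update; a generic numpy sketch (our notation, not the cited implementations'):

```python
import numpy as np

def train_sdm_stage(features, shapes, target_shapes, reg=1e-3):
    """features: (n, d) descriptors at current shapes; shapes/targets: (n, p)."""
    X = np.hstack([features, np.ones((features.shape[0], 1))])  # bias column
    Y = target_shapes - shapes                                  # descent targets
    R = np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ Y)
    return R                                                    # (d+1, p) map

def apply_sdm_stage(features, shapes, R):
    X = np.hstack([features, np.ones((features.shape[0], 1))])
    return shapes + X @ R                                       # cascaded update
```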
We also achieve competitive results to other systems on the official leaderboards (Table REF and REF ). Notably, the top two systems, T5 {{cite:bdca12f05a3e38f1a5bd22d3f664a27662d0e6a9}} and UnifiedQA {{cite:ed0468394933323a50439e459a51b067217aff0e}}, are trained with more data and use 8x to 30x more parameters than our model (ours has {{formula:6d81a664-033f-4a3a-b15b-131cc3e7a871}} 360M parameters).
Excluding these and ensemble systems, our model is comparable in size and amount of data to other systems, and achieves the top performance on the two datasets.
| r | 42caa7d8a7159883455cdc4075f23226 |
A time-varying transmissibility allows us to quantify the effects of interventions and changes in people's behaviour in real time. This is of key importance to policy makers, as the interventions often have immense societal and health costs. Understanding, while an epidemic develops, whether the implemented interventions are sufficient or could be lifted is essential. The Norwegian government's strategy was to control the epidemic, and this was achieved by multiple national and local non-pharmaceutical interventions, which are reflected in the temporal variations of the reproduction number. Our approach is the first that allows monitoring a daily-varying reproduction number with a complex compartmental model informed by multiple streams of data.
Because our estimates of {{formula:61971213-7900-470a-a3cd-e84454db4ce5}} react rapidly to changes in the test data, the situation is captured with a delay given only by the generation time of the disease under study and the time gap between transmission and testing. For Covid-19, this amounts to about a week: a generation time of about 5 days plus a delay between transmission and test of about 2 days.
Picking up exponential growth ({{formula:6bade9ff-20c0-40b1-811b-5244b4bff9df}} ) before the epidemic grows out of control is essential for surveillance. The ability of our method to assess whether contact tracing succeeds in bringing {{formula:b4cc11ee-18f8-434f-bcb1-f3f1524ba940}} back below 1 is also very important.
We have shown how daily reproduction numbers and the latent compartment-wise populations in an SEIR model can be put into a state-space model, so that an SMC technique can be used for inference. Obtaining unbiased estimates of the marginal likelihoods also makes it possible to do parameter estimation within a particle MCMC framework, although more work is needed here to make this computationally efficient. Compared to a parallel effort using Approximate Bayesian Computation (ABC) {{cite:1c8e97a18388f6c836c1a5f57bafe3fe841302ad}}, the SMC approach is much faster and also easier to modify with respect to model changes, confirming the findings in {{cite:fb02f7a22470861922a2e49f6a35bde99acc351b}}. Our implementation is modular and does not depend on the specific epidemic model (here SEIR), so that alternatives can easily be tested.
So far, only simple bootstrap filters have been applied. This can be improved by utilising more efficient proposals or alternative algorithms, such as the resample-move algorithm {{cite:1f3e9c819af5f11dd233dedd220787c8b8e57b9c}}, which was used in {{cite:fb02f7a22470861922a2e49f6a35bde99acc351b}}, or the recent promising ideas of iterated auxiliary filters and twisted models {{cite:e6094eab859b00c2779af39db6e6e0ec2e35c961}}, {{cite:22ba428abbba15671b45c66d388eb1c7f163186b}}. We expect these approaches to be very useful when we expand the SEIR model to have different reproduction numbers in each of the eleven Norwegian counties.
We compared three simple dynamics for {{formula:5b118d25-3046-4b1b-93e0-942a829e7583}} and found that the autoregressive model was slightly better than the others. More work needs to be done to compare the models in terms of prediction. In our approach it is easy to predict the future hospitalisation incidence and the number of positive cases tested. Note, however, that such simple dynamic models for {{formula:9c76ee0c-1546-46ca-a4ed-e681f2dbf22e}} are mostly suitable for now-casting and short-term forecasting, because of the lack of stationarity that interventions and feedbacks imply.
There are then interesting questions on how to use our probabilistic prediction of the time-varying transmission strength to propagate uncertainty in the context of variable planned interventions.
Estimation of (static) parameters is a challenging task in SMC. Several parameters related to the SEIR model, as well as parameters related to the observation processes, were pre-estimated based on external data sources. In {{cite:1c8e97a18388f6c836c1a5f57bafe3fe841302ad}}, we used a version of ABC. Parameters in the {{formula:24ee9572-794c-4826-8d22-7465a1ea6e2f}} dynamic process were estimated online based on the procedure of sufficient statistics, a method that can lead to degeneracy. We have also tested the particle Metropolis-Hastings algorithm of {{cite:2037ffc9efc656c60dfc0c71953ce84b77b6dd1f}}. However, convergence was slow and challenging because of the computational cost of running the SMC algorithm even for a small number of particles. We validated our estimates in the supplementary material and find that our online estimation procedure worked reasonably well for the given models and data. However, some experiments with the AR model, when also including estimation of the parameter {{formula:ce8e3aa4-6a42-4892-91f4-ecff5b3665fc}}, did cause degeneracy problems, as has been reported in {{cite:60a511ba37cf1e12ad4d6bd280cd6d58f6657366}}.
A more efficient SMC algorithm might be better in utilising the potential of such algorithms to estimate static parameters. However, our estimates of {{formula:195840ca-d591-4c14-bfed-88b903533184}} appear to be relatively robust with respect to changes in these parameters. Other parameters related to the SEIR model and the observation processes can be more important.
Communicating uncertainty of estimates and the effect of stochastic and uncertain time lags from data back to infections, is a major challenge. Our current strategy has been to report estimates of the reproduction numbers one week back in time as the most reliable estimates, and this needs to be studied further.
As pointed out by one of the reviewers, it is quite easy to extend the modelling approach (and the algorithm) to include dummy variables describing interventions made by the government.
In this case, one would need to estimate a time delay between the time point at which the interventions are decided and the time at which they affect viral transmission. This delay might change in time and be intervention-specific. In our model, interventions appear in the data after a delay and are then reflected in a change of the reproduction number. It is possible to interpret changes in the estimated reproduction numbers in the light of the interventions set in place.
Finally, it is interesting to compare the estimates of the instantaneous reproduction number produced by this SMC model with the estimates obtained by models that keep the reproduction number constant over longer time intervals, for example over four weeks. The SMC-based {{formula:ac0f345a-5435-444a-92bf-5bc4a5f90d77}} is able to capture changes at shorter time scales significantly better, but possibly with larger uncertainty than estimates of reproduction numbers assumed constant over longer periods, if transmission has been stable during such periods. Predictive power is a further aspect on which the models can be compared.
Results from our SMC model are currently used in the weekly reports of the Norwegian Institute of Public Health, see www.fhi.no/en/publ/2020/weekly-reports-for-coronavirus-og-covid-19/.
| d | f63ea3e96b14adb6a4dbe29b9fe651af |
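The generic bootstrap filter underlying this approach, including the unbiased marginal-likelihood estimate needed for particle MCMC, fits in a short sketch; the SEIR-specific transition and the test/hospitalisation likelihood are left as assumed callables:

```python
import numpy as np

def bootstrap_filter(y, n_particles, init, transition, log_lik, rng=None):
    """Bootstrap particle filter; returns final particles and log-marginal-likelihood.

    `init(n)` draws initial states, `transition(x)` propagates all particles one
    step (e.g. an SEIR step plus a random-walk move of R_t), and `log_lik(y_t, x)`
    scores observation y_t under each particle.
    """
    rng = rng or np.random.default_rng()
    x = init(n_particles)
    logZ = 0.0
    for y_t in y:
        x = transition(x)
        logw = log_lik(y_t, x)
        m = logw.max()
        w = np.exp(logw - m)
        logZ += m + np.log(w.mean())     # unbiased incremental likelihood estimate
        idx = rng.choice(n_particles, n_particles, p=w / w.sum())
        x = x[idx]                       # multinomial resampling
    return x, logZ
```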
In this article, we examined matrix integrals of a certain type, which we called matrix models associated with children's drawings—the so-called dessins d'enfants. They include some well-known models that have found applications in the theory of information transfer and the theory of quantum chaos. We hope that our matrix integrals will find further use. We think that these models are related to quantum integrable systems {{cite:67f18e543314dddd33cc847f90e887297f39b932}}, {{cite:cba932a012173ef917e68778dfe796bb58a59dab}}, {{cite:0f6d8284100cc137ec4bc1c9ca6d042e6d476e4c}}, but this topic awaits development; we expect connections with {{cite:d50ecac54bc6dd205c5b4d80f03260957fc8665a}}, {{cite:e1eea9504123ac3113d6c6d9c0c85c49089aab1f}}, {{cite:a9f57fc2cc45c2d2f9c75bfc5bd1d7b311b6a68f}}, {{cite:86ce2414d5065799ad50e251dec3bfd3fbf36a6a}}, {{cite:ec74ca420140e9612e0671e921949f7808b25a1d}}, {{cite:270d2983529a52ee69d0e2a56f9bbdb7204963a5}}, {{cite:af9701937d9cdef6b32725276137ce87faa74a5f}}, {{cite:85504b186a399d3688e6cd37067983ef7c75addd}}, {{cite:23b88b7d08d07ea2a428619f10445ce418445508}}.
| d | a6d76cdd9cfddde66aa98562ec1310b7 |
Recent works employed concatenation-based feature volumes and 3D aggregation networks for better context aggregation {{cite:50be7ebd455704cffad6fc54a53ce84b9c70b906}}, {{cite:fa841770149391f55810fa43c3a6791c50fb9a8a}}, {{cite:f6c0170d931e6f82029e8d11fb3021ea66b29191}}. Kendall et al. proposed GC-Net {{cite:50be7ebd455704cffad6fc54a53ce84b9c70b906}}, the first to use 3D convolution networks to aggregate context over cost volumes. Instead of directly computing a cost volume, the left and right features {{formula:395f988a-bc26-40af-a866-3b2c167c427e}} , {{formula:985a56be-970d-4a46-a3db-bb92f4f00674}} are concatenated to form a 4D feature volume,
{{formula:3e12af52-5c06-4692-970d-5a3d8e367c84}}
| m | c7eb22a48019e0edb291cc209fcb827d |
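The construction in the displayed equation corresponds to the usual concatenation cost volume; a PyTorch sketch:

```python
import torch

def concat_feature_volume(f_left, f_right, max_disp):
    """Build a GC-Net-style 4D volume (B, 2C, D, H, W): left features are
    concatenated with right features shifted by each candidate disparity."""
    B, C, H, W = f_left.shape
    vol = f_left.new_zeros(B, 2 * C, max_disp, H, W)
    for d in range(max_disp):
        if d == 0:
            vol[:, :C, d] = f_left
            vol[:, C:, d] = f_right
        else:
            vol[:, :C, d, :, d:] = f_left[..., d:]
            vol[:, C:, d, :, d:] = f_right[..., :-d]
    return vol  # fed to 3D convolutions for context aggregation
```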
Deg
This scenario sets all the BSM Higgs boson masses to be fully degenerate with:
{{formula:a960930a-4f3f-41ef-b30a-c0370d3f0122}} . This is designed to easily evade the EWPO constraints.
Par Deg
This scenario sets the BSM Higgs boson masses to be just partially
degenerate with {{formula:b4ab7060-da89-4110-9093-0d58b7a206e3}} but {{formula:77646d4a-82bb-4498-92b7-6cb1c9087a11}} different. In this scenario
we consider two different cases: 1) {{formula:316e6d3b-305c-4ebd-a2e9-fe143311f7a0}} fixed to a particular
numerical value and 2) {{formula:44358d92-7ee3-41c0-ab22-e14cbde2dec8}} . Again the dominant EWPO
constraints from the {{formula:142f6bb5-6c50-4a25-88e7-ffcb365f373f}} parameter are evaded.
Split
This scenario is designed to possibly accommodate a new world
average of {{formula:dfb9cea6-2ceb-4a7a-bdad-5b855e822d2e}} , including the recent CDF measurement {{cite:41b89d81382a5ef72623e78af6d098d8ee35679f}}.
In this scenario we consider non-degenerate {{formula:dc4d92ec-fc39-4afd-bf9e-bd67c9ef0be3}} , {{formula:e69bfb00-5954-44e5-9c2a-7fb28fd2f097}} and {{formula:6209b2c4-42de-4d64-8e2b-76a3883f3ad0}} . We
set {{formula:82a6a800-ca01-48f4-a969-2d2f8756a5a8}} , but with mass values {{formula:4278a1a2-5685-4fd0-b651-ddfd0b97cd02}} and
{{formula:a81b3a81-04e1-4b69-aa3f-fea648b0ddd3}} split by a fixed quantity {{formula:c622a16b-0271-48c9-90ac-462c34c985c2}} , as {{formula:ef89ed56-cc33-4a93-b056-3d552549b673}} .
The other Higgs mass is again set by: 1) {{formula:cd4c4534-1bb1-4bff-bc26-73d12313f1d8}} fixed to a particular
numerical value and 2) {{formula:cc1f3ae8-a680-42b1-ba11-c6d34856b5a8}} . We explore different values for
the splitting between {{formula:180c246c-fedc-445e-8060-802d257d3529}} and {{formula:94947116-44d9-4421-bfd5-84a0ff9ba76d}} and require that the
value of {{formula:edd90295-1311-4384-bb8f-941ae2dc31af}} predicted in the 2HDM to be in agreement at the
{{formula:5f76bd4f-fd18-4b3b-a667-ed35981468f3}} level with a possible new world average {{cite:0fd50e99f1ca2797cf762d76f79841f38ce40870}},
including the recent CDF measurement {{cite:41b89d81382a5ef72623e78af6d098d8ee35679f}},
{{formula:ca69f9d8-76b2-4623-9e23-23596bb76878}} . (It should be noted that the values given so far in Ref. {{cite:0fd50e99f1ca2797cf762d76f79841f38ce40870}} are rather approximate.)
To calculate {{formula:a408bc0d-aeea-4174-b278-cc92fe42920e}} in the 2HDM we use the one-loop
approximation {{cite:b73579986603a81dc135273522bbd3eadbf0a22e}}, {{cite:6975554648fa54e64e3157796d382cb7d426b707}} given in terms
of the {{formula:e03284d1-f6b2-4c2c-a1ab-919aabdf5996}} , {{formula:70a5b1c0-1c44-4e22-993b-759b7e775fe1}} and {{formula:e646c01b-ba7c-4edc-9531-3164a3977e27}} parameters,
{{formula:45c4f9c0-8737-492f-a3d7-505bd07c644f}}
We take the numerical values for {{formula:3bd3aac8-7f87-4b1a-a02c-cb3a087a381a}} , {{formula:a132b44c-9e37-4683-8248-53738b09f71b}} , {{formula:2b541558-cb22-4333-b948-eb8d42bcd0c3}} from {{cite:0fd50e99f1ca2797cf762d76f79841f38ce40870}}.
| r | 8a3eee4256606122a40e4ef264fd30bb |
The singularities of QCD partition functions in the complex {{formula:1a321fca-93fb-4d3b-be6a-10ea38a8ed73}} -plane also affect the range of applicability of series expansions performed at real values of the chemical potentials.
Limitations in the search for the QCD critical point arising from a finite radius of convergence of Taylor expansions can, however, be circumvented by using appropriate resummation schemes for the Taylor series {{cite:71b6659b9e85e9f93540a06ef9a1aa88fea5b954}}, {{cite:acebe06e5c81d107fe7602a3b22ad3c16992cb88}}, {{cite:6a60ab618a141429bf4cc7423e80abe470ae96b1}}, {{cite:3eee2d23cf4324689e76ab7177c99c9b10275958}}, {{cite:1afcfccccacda0fdfbd8133c4c635f0fc1618d08}}, {{cite:b751f2765c0e3eb1c3e9422558e050f4cc8b6751}}, {{cite:3fc44d48b2c920ffdbd540e4b9823606af926a38}}. Using Padé approximants is one way to gain information on the analytic structure of the QCD partition function. They allow one to explore, e.g., the pressure of QCD beyond the limit set by the finite radius of convergence of a Taylor series {{cite:76f115eabde6fa0c775072ea858d45b71eee244d}}, {{cite:6a60ab618a141429bf4cc7423e80abe470ae96b1}}, {{cite:551f0c5d903e0de357e0197b0aa04b6b7c504836}}.
| i | 6f1f5c785a94ecdd6adbd7db41be66a2 |
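As a toy illustration of the resummation idea, SciPy can build a Padé approximant directly from Taylor coefficients; here for log(1+x), whose Taylor series converges only for |x| <= 1:

```python
import numpy as np
from scipy.interpolate import pade

# Taylor coefficients of log(1+x): 0, 1, -1/2, 1/3, ...
an = [0.0] + [(-1) ** (k + 1) / k for k in range(1, 8)]
p, q = pade(an, 4)                          # [3/4] Padé approximant
x = 2.5                                     # well outside the radius of convergence
taylor = sum(c * x ** k for k, c in enumerate(an))
print(p(x) / q(x), np.log(1 + x), taylor)   # Padé stays close; truncated Taylor diverges
```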
A second and related limitation of this multiple imputation analysis is that, were MAR to be assumed, the parameter estimates from the imputed data would be very sensitive to the model employed for the probability of response {{cite:3e42345aa5c059b4860a7ace10512d6944af46b8}}. The probability of response can be modelled using linear regression models for continuous variables, logistic regression models for binary variables, and multinomial logistic regression models for categorical variables with more than two classes.
| d | b32d946564cbe2fb0f4ce8bd2f14b0bd |
Future investigations could include analytic formulae for the limiting variance terms in the dependence case for various parametric families. In addition, we could compare these with other competing concepts in the literature; for example, a rank-based correlation coefficient that also captures independence as the null case was recently proposed in {{cite:41e8da7df5ab4fdea56c0b7883caf3c67cc3fd51}}.
| d | 900c8fb66f9127875cca149ac76cbd55 |
Fig. REF displays the quantity {{formula:c91018b2-0a62-46ac-82df-4a92da1ddb44}} as a function of {{formula:be003728-28eb-44be-bc9c-9421af8e9ee9}}, while Fig. REF displays {{formula:eebef07d-64b4-4d5b-88d3-a3c72f2387be}} as a function of {{formula:1bdcbb59-c744-4d1f-aed5-170fc7104c43}}, with all other relevant parameters fixed.
Each curve presented in Figs. REF and REF contains a single peak. All the peaks occur at mass parameters for which the contribution of the Higgs-mediated diagrams to the {{formula:73a55563-c03c-4646-89d5-ebf00eb7666b}} s dominates. For each curve, the mass-parameter space is divided into two regions separated by deep wells. In the first region, where the mass parameters lie to the right of the wells, the pure gaugino-mediated diagrams give the main contribution to the {{formula:2f99e2c7-a7f0-414e-9a75-6bb76c864aca}} s. In the second region, where the mass parameters lie to the left of the wells, the dominant contribution to the {{formula:f1532cfb-8d61-4136-9c9c-b22faed0af51}} s comes from the Higgs-mediated diagrams. All the maximum points of the curves in Fig. REF and Fig. REF are reached at {{formula:745fd6ee-a786-45d6-a352-4628eb278215}}, and these values depend weakly on changes of {{formula:c104f3d6-d4b3-4797-afa7-5c3f331c6780}} and {{formula:c6b3f8e9-aa4f-4eac-8635-e234c1451f5b}}. On the other hand, the maximum value of {{formula:9932b06d-9bab-4b7a-b81e-c1c4e3f5b2c6}} is {{formula:fc88ad05-91a9-4490-abab-d1921e897ad9}}, as found in the MSSM {{cite:acc06927a9da2d7d005d738ffe2b5cb1d96e7721}}, {{cite:05217f44ef3cf254caae1719b4b5f98bca2410b5}}, {{cite:4b69d545ab64418fdb80ad037fc8a79a3355997d}}, {{cite:371b8b90080c58e1ca4aaf4fe9efa599b3c31ed0}}, while the maximum values of {{formula:932e5ba9-2fda-445b-a8dc-4f95efaed41d}} are much smaller than those of {{formula:b4ccddd8-e1d2-4353-b583-ba8ae7d44de1}}, specifically max{{formula:63fee165-1b2c-4e96-9a78-ff90ade1209e}}. This large difference comes from the symmetry of the {{formula:9938cd71-93e6-4dbc-955c-5a40513f3d65}} model. In particular, on the left side of the wells the main contributions to {{formula:9cf89603-5bf0-4b90-a12e-606add0140e6}} in the SUSYE331 model come from the {{formula:37cb6f47-a190-421c-8e5d-33a1f1d35c91}} gaugino-mediated diagrams, namely diagrams (b), (c), (d) in Fig. REF . In contrast, the main contributions to {{formula:b2eca6c8-467d-454a-9c35-32463f7e57ee}} come only from the {{formula:2d4f7569-062d-49bb-b69e-d8d9a612bedc}} gaugino-mediated diagram. Figs. REF and REF also show that both {{formula:7d002d05-cf58-420c-b17c-3435756cbc03}} and {{formula:f68d040e-7f41-48ea-b06e-d2023d4ef39c}} are very sensitive to changes of {{formula:905b49aa-1930-4669-ac87-5dd5050d730d}} and {{formula:bfb37285-a055-4517-ab47-40d066ce2254}}. In more detail, Fig. REF shows the dependence of {{formula:43b50c44-0666-4d14-81f0-c0996f50ff56}} on {{formula:28009539-8140-416b-b3b7-b809b346cb25}}, with {{formula:3cfdc68d-b85f-4e73-bf35-f78e856e0fd1}} and four different fixed values of {{formula:c7fba524-f35e-4884-8ba4-d858d69d4b22}}. The maximal and minimal values of the ratio {{formula:484c7ded-ef06-46f9-960c-e8decf899c74}} on all the curves in Fig. REF take the same value at different values of {{formula:74ce8ade-7f7e-412a-9a53-24dc4f1d8b83}}. In the parameter region where the Higgs-mediated diagrams give the dominant contribution to the {{formula:85b604ff-055a-4d42-956b-dca28c6ba870}} s, the ratio {{formula:9590d267-40ff-4a1c-9f6e-2f9fe67e1b27}} is very small, {{formula:c3a7daa9-d2a0-4254-8e6c-9e4c9ddbaaad}}, but for the remaining parameters that ratio increases. In the limit {{formula:aee1aad8-2623-402a-868e-11298b712fc3}}, the ratio {{formula:e49eba82-9b55-4f63-9e6d-17b8431c0c88}} reaches a constant value. More generally, we can investigate the influence of {{formula:dbc0cca4-3f92-4dad-82bf-aff7a80dbb21}} on the ratio {{formula:6f6327b6-eec3-45b4-995e-3c574d184797}} through the contour plots drawn in Fig. REF . The plots show that {{formula:a7afb5eb-44e2-49b0-8004-737f3b8a1546}} whenever {{formula:5be38419-ff6c-48e3-977e-2abfd0a09083}}, and that this ratio does not depend strongly on the ratio {{formula:7992ac08-1928-4dd4-9a55-ce60dd3e2949}}. However, in the limit {{formula:d6e5b4a9-5adf-4cc1-ae75-3db933ef7bad}} and {{formula:c2cad1e8-716f-4f88-a78d-7264dae7cf4b}}, the ratio {{formula:133b021c-74df-4c74-a4c3-9a78bbd4ace4}} changes very rapidly under small changes of {{formula:02f0573c-c7c3-4058-acc3-682ecadb2963}} and {{formula:bad60b26-e5b4-41a7-a62f-63c160ac2452}}. This means that chirality effects in phenomena involving {{formula:c892d1d2-db00-4fd2-a58f-47a96febe108}} and {{formula:9fe65687-a3a0-43be-acc5-a34ecedcfd74}} are sensitive to changes of the ratio {{formula:834e5551-ee6c-4638-a3b0-24dff3b517fd}} at large values of {{formula:5afc3c71-8b5f-4a28-b1e6-df243da2ec9b}}.
On the other hand, the left picture in Fig.REF
indicates that when the ratio {{formula:9a19c6d6-baa3-4ee8-9dfd-32e19dd3710b}} , it
will exist in some regions of parameter space of {{formula:eb84b011-77b8-4a39-b59b-144cbf0880e1}} at which the contributions of left- and right-lepton
sectors into the {{formula:0edabeac-54b9-4ee2-b4b0-340b1d8eef0d}} decay process are of the
same order. In this case, the pure gaugino-mediated diagrams give
the dominated contribution to both {{formula:cfbaa567-335b-40b1-93ab-6476029748dc}} and
{{formula:7c70f193-addc-4b81-9457-aed4536f5fd0}} , and also {{formula:42b8dfea-4d02-4fd4-9826-a8822e68ba52}} ( see
(REF ) in Appendix ). Recalling that
large values of {{formula:a737622d-8f15-4c99-9139-64c076740170}} can strongly affect directly
on the ratio {{formula:a8a28dc1-6234-4854-9ebe-63127d37467a}} . The results presented in
Fig.REF again confirm that whenever
{{formula:f4632360-4852-45e1-92e8-2e1dbede4739}} and {{formula:4d717b7d-8892-4a77-8b6b-cd66a0c4a31c}} ,
the right-lepton sector gives dominated contribution to the
branching ratio of {{formula:f29dd86b-cf07-4cda-939c-586742c69360}} decay process.
{{figure:5f2bee1d-20df-41f7-ac7c-2ccc15a97e7d}}{{figure:9b5fe6d3-1fb5-4762-b341-9bf3121803e6}}{{figure:d7e4ac10-932c-44ee-b569-91f7166a4956}} | r | 0b1f56984683a20dc42d014fa6a95712 |
Almost all existing few-shot video classification methods {{cite:4cb806aff46addb7124eb71688fa97652bc66ffc}}, {{cite:c57b283051b25abe278b6556c16ecb3b90054c92}}, {{cite:b043d1b9a279ef31171ba249d56f95a2e7038ab0}}, {{cite:587b66f0cebc2bb873bc3d34e4420777347e20aa}}, {{cite:e98f1f1b1ff7fc085c9d7f60050f9cd0fc7da7b4}}, {{cite:e6e888746f3b4553ce242b2be5c53240353c215d}}, {{cite:e5e5ab9ef926adfa3167b597c1f7555be202cd7d}} use pre-trained weights (ImageNet or Sports-1M).
However, this pre-training step may transfer semantic knowledge learned from ImageNet or Sports-1M to the downstream few-shot task. Such transfer is problematic when the novel classes are very similar to classes in the pre-training sets, violating the assumption that novel classes cannot be seen during the training phase. As shown in Figure REF , some novel classes of Kinetics are very close to classes of ImageNet. The key idea of few-shot learning is that the network should learn to generalize to novel classes (unseen during training) rather than merely to new samples; a pre-trained network, however, may have effectively seen hundreds of samples belonging to the novel classes through its pre-trained weights, which obscures the real generalization performance and violates the principle of few-shot learning.
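As a hedged illustration of this overlap issue, a small check of whether novel few-shot class names share content words with pre-training labels; the class lists below are placeholders, not the actual Kinetics or ImageNet label sets:

```python
# Flag novel few-shot classes whose names overlap with pre-training labels.
imagenet_classes = {"tench", "golden retriever", "ice cream", "mountain bike"}
novel_classes = {"riding a bike", "making ice cream", "playing ukulele"}

def overlapping(novel, pretrain):
    """Return novel classes sharing a content word with any pre-training class."""
    pretrain_words = {w for name in pretrain for w in name.split()}
    return {c for c in novel if set(c.split()) & pretrain_words}

print(overlapping(novel_classes, imagenet_classes))
# e.g. {'riding a bike', 'making ice cream'} -> candidates to exclude
```

In practice, matching on embeddings of the class names would catch synonyms that a word-level check like this one misses.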
| d | e7321d2048150249a2b9aad174752ca8 |
The study of quantum-information-theoretic quantities in the context of holographic duality has been a fruitful path toward understanding different aspects of quantum gravity. Here, we have given a previously undescribed local instantiation of the IR/UV Connection, relating local deformations of the boundary UV cutoff to local deformations of the bulk IR cutoff. Previous discussions of the IR/UV Connection focused on global deformations of the cutoff, and in that regard we could fill a gap. Although the result for the entanglement deformations (REF ) is well known in the context of two-dimensional conformal field theory and has been described in the literature {{cite:3f4cdedde541e637ed37ff06edb049f42db33283}}, {{cite:257f21ddb4ddab77319eb3cde8cafac15b890389}}, our derivation extends straightforwardly to higher dimensions, as was done in {{cite:d601eee4e55aeb5fed40153747d05f1cc1ba13a5}} for the four-dimensional case. This distinguishes it from previous two-dimensional derivations, which either relied on the fact that the entanglement entropy of an interval in 2d CFTs can be written as a two-point function of twist fields {{cite:257f21ddb4ddab77319eb3cde8cafac15b890389}} or employed the Liouville action {{cite:3f4cdedde541e637ed37ff06edb049f42db33283}}, which results from integrating the Weyl anomaly in two dimensions. Furthermore, our calculation on the AdS{{formula:d33eafea-b6b9-476e-8045-be33f8bca4df}} side is not present in the current literature.
| d | ce9a9173cd4cfc89f011cd0b0cd57445 |
Despite increasing discussion of enabling sensing via communication systems and of communication/radar co-design, little experiment-based work on massive MIMO for ISAC has been done, except for our preliminary works in {{cite:695f5a256c627009e6a081b5298c65bb26cfe2c1}}, {{cite:1c011eb32a8e07a4b871eb1f20fbe39e7b52f072}} on single-target sensing. To this end, in this paper we prototype a sub-6 GHz distributed massive MIMO communication system and investigate its radar-like functionality for human sensing. The system adopts a standard cellular signal bandwidth and the orthogonal frequency-division multiplexing (OFDM) waveform, making it compatible with existing WiFi, LTE, and sub-6 GHz 5G communication systems. Inspired by the user-centric concept of cell-free (or distributed) massive MIMO {{cite:a88fe72672927196c17ba9d11909c1d7510ed22a}}, which aims to surround users with a large number of base-station antennas, we implement a distributed massive MIMO radar-like system with a large antenna array distributed across an indoor environment. Based on this prototype, we conduct contact-free multi-pedestrian tracking experiments and analysis. The major contributions of this paper are as follows:
| i | 008d11301210286c86cf537b08d3a0c6 |
In our paper, we use the indoor dataset ScanNet {{cite:d2e882b80e3e00af2511bb1172bd60e539c16d04}}, which contains many textureless regions, to discuss our failure cases. As shown in Figure REF , because of textureless areas or blur in the captured images, the 3D-2D correspondences are poorly distributed and limited in number. The registration of such images is therefore hard to solve and yields a bad initialization for pose estimation. As seen in Figure REF , the camera trajectory becomes unsatisfactory near the textureless wall. Moreover, because reconstruction proceeds incrementally, each pose estimate builds on the previous ones, so incorrect poses caused by textureless regions affect the whole process. To alleviate this problem, recent robust deep-learning-based 2D matching methods {{cite:6a6fd0af5b8e62a8c8711ab050c9f267a875873b}}, {{cite:aeb7695a3c1e19a3a6aeacf092bf66f610b8ffb2}} may play a core role; exploring them is left as future work.
| d | c56bfed747a8b78933ecd51f53c19a32 |
For some time now, black holes have been a popular testing ground of various
effects possibly implied by quantum gravity. Examples include quantum
corrections to Newton's law, modified horizon dynamics, implications for
Hawking radiation and the information loss problem, potential resolutions of
the central singularity, or speculations about the post-singular life of a
black hole. A large variety of methods have been applied, ranging from
effective field theory {{cite:6d97692795ee5949bbdacc33892edcd10d9096c4}}, {{cite:09aa336b4cc13ea35b8581e42c7b26ce0b194caf}}, {{cite:645a8458815314101cbb3d09c83c6315648b0e9d}} to
proposed non-perturbative ingredients of approaches such as string theory or
loop quantum gravity.
| i | a53f5e0fae8290175b46280f1082d2fd |
The number of shortcuts and approximations needed to interpret the
signal of RINGSS is impressive. However, similar approximations are
involved in any turbulence monitor (although not always recognized
explicitly), and the atmospheric theory itself is only approximate.
This point is further discussed by {{cite:c04c988547e3c80c8d962d6462eb011edc86a4fd}} in relation to DIMM
and MASS. To give an example, the `gold standard' of turbulence
profiling, SCIDAR, uses analytical monochromatic WFs that neglect
pixel and exposure-time averaging {{cite:b8523fd91f0f3de6842546ecc7d6a4d84ccf1713}}, while the deviations from the
weak-scintillation regime are also ignored {{cite:6b6107dbc815e68bf79fbd3aede05f74cd0c7ffa}}.
| d | efdb0d9ef4b7e13991e45fb3e0e4a43e |
Existing LNL methods show significant performance differences depending on their hyperparameters, and if hyperparameter values are selected improperly, performance can even fall below that of standard training. Moreover, the optimal hyperparameter values vary with the network architecture and dataset. Despite its importance, many works do not address hyperparameter selection, and achieving the performance improvements reported in the original works is challenging on real-world datasets. We combine ALASCA with existing LNL methods under various hyperparameter settings: (1) standard training with different weight decay coefficients; (2) ELR {{cite:00cb0239a3d3da165ab31d7ee31de57f5343e7c2}} with different regularization coefficients; (3) Co-teaching {{cite:827e8d9d9ed3959cde7eff5f4d3a8a2820a1d3ba}} with different warm-up epochs; (4) CRUST {{cite:35ac033e7909f3a0fbd981d7c1934c693c7ca122}} with different coreset sizes. The detailed experimental setups are as follows (see the illustrative sweep below):
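As a hedged illustration only, the sweep has roughly the following shape; the hyperparameter names and values here are hypothetical placeholders, since the exact grids are not given in this excerpt:

```python
# Hypothetical hyperparameter grids for the four settings listed above.
grids = {
    "standard":    {"weight_decay":  [1e-4, 5e-4, 1e-3]},
    "elr":         {"lambda_reg":    [1.0, 3.0, 7.0]},
    "co_teaching": {"warmup_epochs": [5, 10, 30]},
    "crust":       {"coreset_frac":  [0.3, 0.5, 0.7]},
}

for method, grid in grids.items():
    for name, values in grid.items():
        for v in values:
            # train(method, alasca=True, **{name: v})  # combined with ALASCA
            print(f"{method}: {name}={v}")
```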
| m | 60b34bcbcbe2074c38819ec65eb84cda |
where {{formula:aa0b3f9e-d1e2-4e79-a3a0-8c0539883014}} is an {{formula:f6d5fc07-c009-4e79-b080-363efc34e1d6}} matrix function of the variables {{formula:c9799184-ecbd-4c6b-989f-656709a12faf}}. The operators {{formula:0e24de2e-c3bd-44f6-b76c-3de3956bff69}}, which we define in detail in the upcoming Section , are the fractional divergence and gradient. They were first introduced in the nonlocal vector calculus presented in {{cite:54999550e3f260b843eb3e4a1f278bfdec89772b}}, and their properties were studied further in {{cite:ccb2435368a24b1e24fa159bcb4f3f4a697cd985}}, {{cite:a9545bf61a926db80c97096e62a112b8b06650eb}}, {{cite:4f57af19911d51fe6741df0f47f92a7dcb763e57}}, among others. They are fractional counterparts of the classical divergence and gradient operators; indeed, they enjoy the expected properties {{formula:135e00a9-2314-420d-8f82-e597de3b8e0d}} and {{formula:64b22809-d256-4891-88d2-8e486232e110}}. The operator {{formula:99b80b92-8f58-413f-bd39-fa040addcf96}} will be rigorously defined in Section .
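For orientation, one standard realization of such operators is the Riesz-type fractional gradient and divergence. This is a sketch under the assumption that the cited calculus reduces to the untruncated form below, with {{formula:135e00a9-2314-420d-8f82-e597de3b8e0d}}-type identities in mind and a normalizing constant written here as $\mu_{d,s}$; the cited works may use truncated or more general kernels:

```latex
% Sketch (assumed Riesz-type form; kernels and normalizations may differ
% in the cited nonlocal vector calculus):
\[
  (\nabla^{s} u)(x) \;=\; \mu_{d,s} \int_{\mathbb{R}^d}
      \frac{\bigl(u(y)-u(x)\bigr)\,(y-x)}{|y-x|^{\,d+s+1}} \,\mathrm{d}y ,
  \qquad
  (\operatorname{div}^{s} \boldsymbol{\nu})(x) \;=\; \mu_{d,s} \int_{\mathbb{R}^d}
      \frac{\bigl(\boldsymbol{\nu}(y)-\boldsymbol{\nu}(x)\bigr)\cdot(y-x)}{|y-x|^{\,d+s+1}} \,\mathrm{d}y .
\]
% These compose to the fractional Laplacian, mirroring div(grad u) = Laplace u:
\[
  \operatorname{div}^{s}\!\bigl(\nabla^{s} u\bigr) \;=\; -(-\Delta)^{s} u .
\]
```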
| i | ad275e20a083d56d8643c2a57a48ddce |
Although these approaches establish relations between semantics, they do not enforce any intra-class relation. To ensure intra-class consistency, some proposals use the center loss as regularization (the L1 distance between the center of each class and each uni-modal embedding) {{cite:69fe33afb412fb57c369784258109d4fc15f004e}}, {{cite:834086511d6b818d759cd290d823064431d66bbc}}. Similarly, to align both modalities and reduce the heterogeneity gap, other works constrain the modalities to have similar representations using a distance between embeddings (e.g., L2 distance or cosine similarity) {{cite:0e26f539fbc5232ded2fc703c00adb674f4b432a}}. All these terms are typically leveraged in a joint optimization scheme for avcl.
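A minimal PyTorch sketch of such a center-loss regularizer; the shapes are hypothetical, and in practice the class centers are learned parameters or updated with a running mean:

```python
import torch

def center_loss(embeddings, labels, centers, p=1):
    """Mean Lp distance between each embedding and its class center.
    p=1 matches the L1 variant described above; p=2 gives the usual
    center loss."""
    return (embeddings - centers[labels]).norm(p=p, dim=1).mean()

# Toy usage: 8 embeddings, 3 classes, embedding dim 16 (all hypothetical).
emb = torch.randn(8, 16, requires_grad=True)
labels = torch.randint(0, 3, (8,))
centers = torch.randn(3, 16)
loss = center_loss(emb, labels, centers)
loss.backward()  # gradients pull each embedding toward its class center
```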
| d | 829f525b6f9d38fc2c58e527cad6e89b |