where both {{formula:3980d790-eb53-4915-982f-09c58f7e33d3}} and {{formula:b8ef2380-edee-47e5-a26d-15db45e4b70e}} are assumed to be (nonsmooth) weakly convex functions and {{formula:5b154d21-e2b8-4cd5-a7e8-1388595f9820}} is lower bounded, i.e., {{formula:8ca900c4-e25b-4210-a704-d2ecc53a8687}} for all {{formula:42d001be-4bc5-4846-83d3-07c7951835dc}} . Classical stochastic optimization methods — including proximal stochastic subgradient, stochastic proximal point, and stochastic prox-linear methods — for solving (REF ) are unified by the stochastic model-based methods ({{formula:37317f70-0319-441d-9f10-b5bd3a44bac4}} ) {{cite:3cbc2769df09f977f04e4551af21a92690e16b06}}, {{cite:d0063e9675000b6a365264eda3c26f145e31dc20}}:
{{formula:6ad44f02-50dc-4daa-a14b-88135036dd22}}
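To make this template concrete, the following is a minimal sketch (not the authors' implementation) of a stochastic model-based iteration in which the model is the linear expansion of the sampled loss plus a regularizer, which recovers the proximal stochastic subgradient method; the toy least-squares loss, the l1 regularizer, and the step-size schedule are illustrative assumptions.

```python
import numpy as np

def prox_l1(v, t):
    """Proximal operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def model_based_method(x0, sample_grad, prox_r, step, n_iter=1000, seed=0):
    """Stochastic model-based iteration with a linear model of the loss.

    Minimizing f(x_k, xi) + <g_k, x - x_k> + r(x) + (1/(2*beta_k))||x - x_k||^2
    over x reduces to a proximal step on r, i.e. the proximal stochastic
    subgradient method; other model choices give the stochastic proximal
    point and prox-linear methods."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for k in range(n_iter):
        g = sample_grad(x, rng)        # stochastic subgradient of the loss
        beta = step(k)                 # step size beta_k
        x = prox_r(x - beta * g, beta)
    return x

# Toy instance: stochastic least squares with an l1 regularizer.
rng0 = np.random.default_rng(1)
A, b = rng0.standard_normal((200, 50)), rng0.standard_normal(200)

def sample_grad(x, rng):
    i = rng.integers(len(b))           # draw one data point xi = (A_i, b_i)
    return A[i] * (A[i] @ x - b[i])

x_hat = model_based_method(np.zeros(50), sample_grad, prox_l1,
                           step=lambda k: 1.0 / np.sqrt(k + 1))
```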
Next, we compare these methods in terms of their performance in foreground regions, which mostly correspond to moving objects. We use an off-the-shelf semantic segmentation model {{cite:fdeaa8d6e6f21909c7aa680431eaaf15d7aa46c3}} and consider the following classes as foreground: “person, rider, car, truck, bus, train, motorcycle, bicycle". We calculate the metrics on the masked image by setting the other regions to the mean color, gray. We show the results for Cityscapes in Table REF and provide foreground and background results separately on both KITTI and Cityscapes in the Appendix. Our conditional model outperforms the other models, including SLAMP and our combined model, in terms of PSNR and SSIM. This shows the importance of modelling object motion separately by conditioning on the static part of the scene. Similar to the overall results in Table REF , Improved-VRNN performs best in terms of LPIPS, but the gap is much smaller in foreground regions than overall.
Joint learning methods try to learn joint representations based on the relations among different modalities {{cite:727a97ec224ced6155508b54094952ef279bf973}}, {{cite:56da00df38c7e2a3a0afee60b6138a8fd71ffb30}}, {{cite:636a03f2433cd09248520d49b1d54946baad3ac6}}. Based on the idea that the cycle consistency loss can retain maximal information from all modalities, Pham et al. {{cite:727a97ec224ced6155508b54094952ef279bf973}} investigated learning robust representations via cyclic translations from source to target modalities. Zhao et al. {{cite:dc3cece5093c5b1cc0b417fe5d69767631536601}} also applied cycle consistency learning for missing modality imputation, where the CRA-based cross-modality imagination module is designed based on paired multimodal data. More recently, Yuan et al. {{cite:ac96ddac3913ad4169a4023c49c88467b6943ab1}} utilized the Transformer to extract intra-modal and inter-modal relations, and designed a Transformer-based feature reconstruction network to reproduce the semantics of missing modality.
Note that the definitions of {{formula:ad51d96a-0084-48a2-9d0c-96408555800e}} and {{formula:fb14c8b4-43ce-405f-8316-7981f777ce3d}} above
are different from the other {{formula:f8126412-6160-4612-8966-6c664b981c32}} (in fact, {{formula:6f71764a-cf0f-4e53-b95f-dc5b37adffd0}} for {{formula:e1d0f1f4-b44e-4dca-aa28-59783ac17bc1}}, while corrections to {{formula:647e0538-3e20-46a8-bc82-dd7d670a2494}} and {{formula:97066ace-290e-455a-9f72-984e180cae56}} are needed to define {{formula:95ead8d4-1e3f-4566-9354-af91c56ef43f}}). Moreover, {{formula:28cf5901-bfbc-495a-aea4-9c25d02f6cd1}} and {{formula:f297cffe-edac-4396-bb15-2c3f7aa99ee0}} are not {{formula:349eb0cd-33fb-4363-8585-efbeb2a0e340}} in (REF ); they differ by terms involving the corresponding Lagrange multipliers. See the proof of {{cite:4b1b879fd6eb266e439476477c6d2244aa5d6dbc}}
for the details. With the quantities defined in (REF ), one can define the discrete Lagrangian
{{formula:fac78181-3858-49ee-ae5c-08996fac4256}}
It is clear that, in order to optimize all detectors simultaneously while exploring the situation for a network of up to 5 detectors, the method adopted by {{cite:48ba1e9d6935c63ffe56e9906e794448acd11329}} would not be appropriate.
In {{cite:48ba1e9d6935c63ffe56e9906e794448acd11329}}, the geographical regions that are suitable as detector sites were reduced to {{formula:6df4459b-e262-4213-ae45-3a8656080ed9}} discrete candidate locations and the optimization was achieved by an exhaustive search over all of these candidates.
The computing overhead for this approach is tolerable if one optimizes for only one site at a time.
However, if we allow the locations of even 3 detectors to be free parameters in the search, then we have {{formula:f1a249d0-6373-4df5-85c8-45cadea73481}} different combinations to explore.
Extending to a 5-detector-network would increase this to {{formula:7ca66f1e-87a4-4617-b637-3118daeae56a}} combinations – which is far beyond what is currently realistic.
However, equally clearly, there will be a significant fraction of these combinations that correspond to networks with a relatively low FoM in which we are not really interested.
The question then becomes: how can we efficiently explore only those regions with high FoM, even for networks with a large number of detectors?
Bayesian inference methods like Markov Chain Monte Carlo (MCMC) {{cite:42f7b0740848d4b1c7eb5fcde719dbcadc0851ae}}, {{cite:1cd7050a7f4c5275e3b5ce3252eb0efffd20deac}}, {{cite:4238310261e33daec7e910703414fe43ba02ad31}} or Nested Sampling {{cite:450a5be6593b7bea639af745016e923e39c66fba}}, {{cite:74bc42318ced08b509cecebaec305929f7340994}}, {{cite:21d6102f64f5c4c77302affe16ac6cbddf394598}} are designed to deal with such problems, and they perform especially well when the parameter space has a high dimensionality.
We therefore adopt a Bayesian inference approach here, and assign as the posterior some monotonic function of the total FoM.
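As an illustration of this strategy, here is a minimal random-walk Metropolis sketch over the detector coordinates; the black-box log_fom function (the log of a monotonic function of the network FoM), the flat coordinate parameterization, and the proposal scale are assumptions of the sketch, not the actual pipeline.

```python
import numpy as np

def mh_sample_sites(log_fom, x0, n_steps=20000, step=0.05, seed=1):
    """Random-walk Metropolis over a flat vector of detector coordinates.

    The (unnormalized) posterior is exp(log_fom(x)), so the chain spends
    its time in the high-FoM regions instead of scanning a discrete grid."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_fom(x)
    chain = []
    for _ in range(n_steps):
        prop = x + step * rng.standard_normal(x.shape)
        lp_prop = log_fom(prop)
        if np.log(rng.random()) < lp_prop - lp:  # Metropolis accept/reject
            x, lp = prop, lp_prop
        chain.append(x.copy())
    return np.asarray(chain)
```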
Appendix B contains an overview of the flavors of test data we observed.
We found evidence to support the claim that evaluations of NLP models have “historically involved reporting the performance (generally meaning the accuracy) of the model on a specific held-out [i.e., I.I.D.] test set” {{cite:affe8186a8b2d6226eb395dd71e490b7bcd69f82}}. Two observed non-I.I.D. evaluation patterns in NLP were: a) testing on a different linguistic “domain” (e.g., training on texts about earthquakes and testing on texts about floods {{cite:c69603eb93c7aada476877fe92ca0676281e66bf}}); and b) testing a model's ability to predict properties of a manually compiled lexical resource (e.g., {{cite:79ed1cbf9c7db6d1dd599baa956e5247fa6bce77}}). See also Appendix B.
CV evaluations seem to be even more likely to utilize I.I.D. test data, and—consistent with {{cite:4bc99e922515c722eb69d063bae8a767dae2baa7}}—CV papers typically either introduce a new task (and corresponding benchmark dataset) {{cite:4e96466bd8df94a4266e97d94ae7d37ee08744a4}}, {{cite:1cec796e11fb66214fdb3b30a525698cf25d9579}}, {{cite:c1f66d5fecd983cff1de249d591498ce0e89bc39}}, {{cite:fd73eb78f51a2ff6145b3afa2bee50a87dd4db70}} or present results of a new model on an existing widely-used benchmark {{cite:a8ff0ebc6cceb725935472c6ad8fd2889472a797}}, {{cite:f6f50b263ad4e0fb22dab70c573864decc7d2297}}. An exception to this trend was CV papers which explored shared representations (e.g., in multi-task learning {{cite:f918d5b2c7c33dda4783ce7ba8a238f9b6e9f510}}, {{cite:1f618a6f23351867b37d7fdfc560b272b953d433}} or domain adaptation {{cite:846fe4e4b9ae49ba97c5a363dd1d676f5a25c0fc}}, {{cite:f351d3faa50fdb0bb243d17067eb060eddef9e60}}).
Another corollary of the strong inductive bias in DNNs is that generalization performance should be poor on data that deviates from this bias. Indeed, since DNNs are biased towards simple functions, they generalize poorly on complex data; see, e.g., {{cite:1b607a3d2c46ccc969fae091c932b59588a0895a}}, {{cite:4c26e23ce08b08f68a5c0e055a9f03e300b652a8}}. Therefore, the interaction of the algorithm with the data must be taken into account in an explanatory generalization theory for DNNs.
The dimensionality of the shared space plays an important role when performing domain-shift adaptation. While Figures REF (a) and REF (b) show the performance obtained in a shared space of dimension 5, Figures REF and REF in the Appendix show the performance when the dimension of the shared space is the number of positive eigenvalues in the empirical covariance matrix (13 in our experiments). The performance of the unsupervised domain adaptation degrades significantly, while the semi-supervised approach remains roughly the same. We hypothesize that this decrease in performance is due to the difficulty of reliably estimating metrics on probability distributions in high-dimensional spaces with a limited number of instances {{cite:e8850d3f433ffc4631541a57860a7d78d5192631}}.
Note that the problem of universal discretization for the collection {{formula:eb69685c-d775-48c9-823d-7e5e75b7eb2d}} is
the sampling discretization problem for the set {{formula:b5e8b385-d440-49f5-ad88-fdeca3570d8e}} . Also, we point out that the concept of universality is well known in approximation theory. For instance, the reader can find a discussion of universal cubature formulas in {{cite:c17f4c18637bad77590c1bad950d9883bfb2443c}}, Section 6.8.
We compare against the state-of-the-art, perform ablation studies, and provide qualitative results (Fig. REF ) using the KITTI dataset {{cite:690170db4eec40cb52df58fea2e0fa8280c389e3}}. The KITTI dataset contains 7481 training images and 7518 test images, and categorizes objects into three categories: Easy, Moderate, and Hard, based on 2D box height, occlusion, and truncation. To compare with the state-of-the-art, we follow the 1:1 training-to-validation split of {{cite:97d0e350673c7b95d247cce38dd34a161e7b7d3f}}, {{cite:8643aeae75b6f042ec16447923d616a67c7ff1aa}}, {{cite:3fb4b5be72b240aefac9eca3d6818df42ccbb843}} and the standard practice of comparing BEV and 3D AP performance using IoUs of 0.5 and 0.7. We also benchmark our results on the online KITTI test server.
Some recent works open perspectives to overcome the difficulties with
discretization of nonlinear integral equations.
Significant progress has been made on the construction of methods based on
a branching diffusion process.
As the nonlinearities to be treated in numerical discretizations
of PDEs in Finance are mostly Lipschitz-continuous
(Lipschitzian, for short),
often piecewise linear and continuous,
the most efficient Monte Carlo algorithms focus
on treating precisely such Lipschitzian nonlinearities.
Among the most interesting earlier pioneering works,
we would like to mention the article by
P. Henry-Labordère {{cite:be9eb661161ccded69099d8ba5a3b11a20441168}}.
Here, the author approximates a simple piecewise linear nonlinearity
by a polynomial in order to be able to take advantage of
the probability-based computational technique,
a branching diffusion process.
This technique was first introduced by
H. P. McKean {{cite:2901fb83dc61e6a18e25e7ac13c0a3c08c35293f}}
and
A. V. Skorokhod {{cite:e4b3d7af75cd31329d88ba10d245372b96237d44}}
to provide a probabilistic representation for the solution of
the Kolmogorov-Petrovskii-Piskunov PDE
(KPP equation, for short) and,
more generally, of semilinear PDEs whose nonlinearity is
a power series with nonnegative coefficients
from the interval {{formula:f70192e4-71a0-49c4-8b48-c75cbb37fc92}} .
Since the KPP equation in {{cite:2901fb83dc61e6a18e25e7ac13c0a3c08c35293f}}, {{cite:e4b3d7af75cd31329d88ba10d245372b96237d44}}
has only a quadratic or cubic nonlinearity,
numerical approximation using branching diffusion process
is quite efficient.
However, for applications to
the counterparty risk valuation model
in finance treated in {{cite:be9eb661161ccded69099d8ba5a3b11a20441168}}, the nonlinearity
that one is interested in is not a polynomial.
It is therefore interesting to investigate methods that can treat
directly monotone nonlinearities
that are not necessarily polynomial.
As to the speed and precision of the PDE and Monte-Carlo methods,
a brief comparison is given in the work by
G. Loeper and O. Pironneau {{cite:9c5b0eb7d5f420edf61d21e6e653d378c409ca5c}},
together with a mixed PDE/Monte-Carlo method that provides better results
for the Heston stochastic volatility model.
An entirely different approach to
S. L. Heston's model {{cite:02a0e7d91542c5f485c8db728f69a1f95ebdf386}},
based on “orthogonal series expansion”, is employed in
B. Alziary and P. Takáč
{{cite:d6d9d42dabdbcd6261fbe002ba574ed51ea2b2df}}, pp. 48–50,
in P. Takáč {{cite:51db7c2166e5468d0e51093caf05be2d3627f918}},
and in the numerical simulations by
F. Baustian, K. Filipová, and J. Pospíšil
{{cite:1c25ff9da6abff28b985720c0ca6ce56d1b9db3f}}.
Replacing a piecewise linear nonlinearity by a polynomial
introduces a significant error into the algorithm in {{cite:be9eb661161ccded69099d8ba5a3b11a20441168}}.
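To illustrate the branching-diffusion representation discussed above, the following is a minimal sketch of McKean's probabilistic formula for the KPP equation with quadratic nonlinearity, u_t = u_xx/2 + u^2 - u with u(0, .) = g taking values in [0, 1]: particles perform Brownian motion and branch in two at rate 1, and the solution is the expectation of the product of g over the particle positions at time t. The recursive event-driven simulation and the example g are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def bbm_product(t, x, g):
    """One sample of prod_k g(x + X_k(t)) for rate-1 binary branching
    Brownian motion started at x and run for time t."""
    tau = rng.exponential(1.0)                    # lifetime of this particle
    if tau >= t:                                  # survives to time t: evaluate g
        return g(x + rng.normal(scale=np.sqrt(t)))
    y = x + rng.normal(scale=np.sqrt(tau))        # position at the branching time
    # Two independent offspring subtrees started at the branch point.
    return bbm_product(t - tau, y, g) * bbm_product(t - tau, y, g)

def kpp_solution(t, x, g, n_mc=20000):
    """Monte Carlo estimate of u(t, x) via the McKean representation."""
    return np.mean([bbm_product(t, x, g) for _ in range(n_mc)])

u = kpp_solution(0.5, 0.0, lambda z: 1.0 / (1.0 + np.exp(-z)))  # g in (0, 1)
```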
In this study, we use the data collected from the ADNI database {{cite:5a188d2277886d2ffe42dfe7e47cdc4017d9beb2}}.
The views we consider include MRI volumes (90 features),
CSF biomarkers (3 features), selected SNPs (924 features) {{cite:b3a539b26ae52e72254e8c149b9032538819ef10}},
and demographics (7 features). The resulting dataset, after incomplete
data imputation/removal, consists of 589 study subjects (128 AD, 174 CN, 287 MCI).
We use SMOTE oversampling {{cite:ae57114c06b559e1fb12ac4be64b43906ce86843}} to overcome class imbalance
and {{formula:73db80c0-9394-41c0-b0b6-ffe4147098ba}} -normalize all continuous features. We report the prediction performances over ten-fold cross-validation
using the standard metrics of accuracy (Acc.), precision (Prec.) and recall (Rec.)
for multi-class classification, and RMSE for MMSE score prediction.
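A minimal sketch of this evaluation pipeline, assuming a generic scikit-learn classifier (the actual predictive model is not specified above) and applying SMOTE and normalization inside each fold to avoid leakage:

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import StandardScaler

def crossval_report(X, y, seed=0):
    """Ten-fold CV: z-normalize and oversample on the training folds only."""
    accs, precs, recs = [], [], []
    skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
    for tr, te in skf.split(X, y):
        scaler = StandardScaler().fit(X[tr])
        X_tr, y_tr = SMOTE(random_state=seed).fit_resample(
            scaler.transform(X[tr]), y[tr])
        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        pred = clf.predict(scaler.transform(X[te]))
        accs.append(accuracy_score(y[te], pred))
        precs.append(precision_score(y[te], pred, average="macro"))
        recs.append(recall_score(y[te], pred, average="macro"))
    return np.mean(accs), np.mean(precs), np.mean(recs)
```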
{{figure:de223edd-23e1-46fb-b4ae-f5a6595452b0}}
i) We note that the main contribution of Theorem is to establish the existence of a fixed-point. Indeed, for any {{formula:12488261-a189-4923-a4d5-fe0ce27b0cbe}} , in Lemma , we define the function {{formula:d7d87a79-1376-4b4a-9434-454db0de47a4}} and {{formula:ed70f270-4379-4027-a7c4-b134d1f7a028}} . Using these functions we can define {{formula:77c1002d-3f10-49e3-b2a6-3639dd4f8621}} . Then, given {{formula:d42da8a2-d8eb-485a-b8fe-ec6656b19c84}}
one can find, thanks to the Brenier theorem {{cite:342215a29c2d54de7a853c71ab45a59baef40ab3}}, {{cite:70724b3057448cbd455205fed13631f35bf3630e}}, a convex function {{formula:d5350835-f92a-4951-a641-e3f13b63bbdf}} solving
\[
G(0,0,T,x) = \det\!\big(D_x^2 \varphi(x)\big)\, f\big((D_x \varphi)(x)\big),
\]
where \(\varphi\) denotes the convex function above.
Thus, we have defined a mapping from the set of functions {{formula:da8c71b1-1771-4c30-9e27-a5acdbeb7b71}} to the set of all convex functions. Theorem states the existence of a fixed point for this mapping.
In recent years there has been an increasing focus on using blockchain in the engineering of software systems {{cite:2094ccd371b198a0196297319848bcc4ce761b3a}}, {{cite:7cf280d6e1f9d0fe681f87150bfdfef7d81cc20b}}. Compared to traditional software, smart contract development presents unique challenges due to the underlying blockchain environment {{cite:837a67b8e191b0e3b355675afc7f7ac56055d33a}}, {{cite:2c92bc389b9710a3b64ec2f4583e97de56ed34b7}}. These underline the need for appropriate testing tools that allow the developer community to write and deploy safer code. The following non-exhaustive list outlines the main reasons why smart contracts demand high-reliability guarantees and a thorough testing process.
Smart Contracts manage valuable assets: Smart contracts can control large amounts of cryptocurrency and other valuable assets. Deploying faulty code can result in the accidental loss of the assets held by the contract. The potential financial gain and the anonymous nature of the blockchain further act as an incentive for attackers. Even a small loophole in the code can allow malicious users to drain large amounts of funds. A typical example is the famous DAO attack {{cite:d6672b10f73c47ea80cf6393d57e9e66fc5c3c61}}, in which a reentrancy vulnerability was exploited to transfer 3.6 million Ether (around $50 million).
Transactions are irreversible: Smart contracts are deployed and executed in the blockchain environment, which does not allow transactions to be reverted. A transaction becomes irreversible once it receives enough confirmations from the network. At the same time, it is not possible to recover assets lost during smart contract execution.
Smart Contracts are immutable: Smart contracts feature an anomalous development life cycle {{cite:a34a80afba148cfd11c3689cfe1ce1ec8ac6848a}} that cannot be represented by traditional software development models. Enhancing the code or fixing bugs after deployment is not possible. Immutability ensures that the code is tamper-proof, but it also prevents further upgrading and code maintenance. Correcting a bug after deployment is a very costly operation since it requires the creation of a brand new contract on the blockchain.
Blockchain environment: The smart contract execution is dependent on the underlying blockchain platform and the possible interactions with other cooperating contracts. Developers must carefully consider these relationships and the peculiarities of the new distributed environment to write safe code.
New software stack: Smart contracts' execution environments and programming languages (e.g., Solidity) are relatively young and continuously evolving; many issues and vulnerabilities are still being discovered. Such characteristics make it harder for developers to program with confidence and to write safe code.
Lack of best practices: Zou et al.'s {{cite:838843418ba60cad04af686c07cfa5bc5bdf9554}} recent interview with smart contract experts exposed a lack of best practices for writing reliable contract code. Finding code examples and development standards is particularly difficult when working on new applications. This issue can cause developers to pick up bad programming habits and dangerous anti-patterns. An anti-pattern is a solution created in response to a recurring problem, which appears to be appropriate and effective. However, it ends up being ineffective or even counterproductive.
Lack of mature testing tools: Smart contract development cannot count on the wide selection of testing tools available for traditional software. The currently available tools are not as mature, and in some cases, they are ineffective at ensuring the quality of the contract code. Zou et al.’s {{cite:838843418ba60cad04af686c07cfa5bc5bdf9554}} research shows that the developer community is especially interested in code auditing tools, which help in discovering bugs and vulnerabilities.
Since both the Levin et al. {{cite:45e52725c28da4091eeac09afc70ae37574cbcb2}} and Sun et al. {{cite:205dffa1a5740534369c3529d0c66031a186b467}} benchmarks consist of grayscale images, to evaluate performance on coloured images we generate a test set using 100 test images from {{cite:21dd8df78e9049d991ecc9a7f6cccb51b02c0dfa}} and kernels from {{cite:4c628fe43022c2b0a42073c5c6b3f38b71e631b9}} at three different noise levels. As we can see in Table REF and Fig. REF , our method performs well and produces results with fewer artifacts and less noise compared to {{cite:4f298d49f1f9bb92402a63d41793ae678f1c4142}}. Our method is even able to produce good results at higher noise levels. The residual maps show that the absolute error is much lower for our method compared to the other techniques.
Previous studies {{cite:9b5bb3faaef43e180d56b37ea58543e202e7c10a}}, {{cite:7e289b54f671092e29db8ba3659d848becad3e14}}, {{cite:6b28c518d60171d91fe2d6a29c6026acfe8b7dfd}} have verified that different pooling methods might lead to very different results, and different models may prefer different types of pooling methods. Therefore, we also investigate what pooling method is preferred by our PaSeR. We mainly investigate four types of pooling methods. (i) Average representation over the last layer of BERT, denoted as Top1 avg. (ii) Average representation from the last two layers of BERT, denoted as Top2 avg. (iii) Average representation from the combination of the first layer and the last layer of BERT, denoted as First-last avg. (iv) Directly using the “[CLS]" token as the sentence representation.
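For concreteness, a minimal sketch of the four pooling variants over BERT hidden states is given below; the model name, the mask-aware averaging, and the reading of “first layer" as the first transformer layer (rather than the embedding output) are assumptions of the sketch.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def sentence_embeddings(texts, pooling="top1_avg"):
    enc = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = bert(**enc, output_hidden_states=True)
    hs = out.hidden_states              # (embedding output, layer 1, ..., layer 12)
    mask = enc["attention_mask"].unsqueeze(-1).float()

    def masked_avg(h):                  # average over non-padding tokens
        return (h * mask).sum(1) / mask.sum(1)

    if pooling == "top1_avg":           # (i) last layer
        return masked_avg(hs[-1])
    if pooling == "top2_avg":           # (ii) last two layers
        return masked_avg((hs[-1] + hs[-2]) / 2)
    if pooling == "first_last_avg":     # (iii) first + last layers
        return masked_avg((hs[1] + hs[-1]) / 2)
    if pooling == "cls":                # (iv) "[CLS]" token
        return hs[-1][:, 0]
    raise ValueError(pooling)
```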
{{table:4953d459-cb0b-4324-b919-f83cb677f824}}{{figure:aa0426a1-dab3-4c4a-92ec-58faf2f0d127}}
Numerous half-metallic ferromagnets have been predicted and verified experimentally since NiMnSb was predicted in 1983 by De Groot et al. {{cite:1a118d392d4694d48833a96ab554b0ab4b8e4a7d}}. Half-metallic ferromagnetic materials display different electronic properties in the spin-up and spin-down bands, with metallic behavior in one spin band and insulating or semiconducting behavior in the other, thus leading to {{formula:817a82d1-2e24-41e9-90a9-3e8e9abe962b}} spin polarization at the Fermi level {{cite:5084f05041f21c469cbef30c1727f4fa5a5c850c}}, {{cite:931471cc1abbbc010d3da21b1b834ced00abfcfe}}, {{cite:0984490e1ad5048f136639e0d57e172629d3936c}}. Heusler alloys are a class of intermetallic compounds with simple structures and unique properties {{cite:0974800d4f6b3f264a83b8b0d708e6718cd32bcd}}. In 1903, the German scientist Heusler found that although the constituent atoms of the alloy Cu{{formula:da0d07ff-cc19-4fe7-8c01-605a5ad6f993}}MnAl are non-magnetic (NM), the alloy shows a magnetism adjustable through heat treatment and chemical composition. During the past few decades, Heusler alloys have been favorable candidates for multifunctional materials because of their numerous excellent properties, such as the magnetocaloric effect {{cite:328824c28f7ccb846b195a19b5ea617b91074152}}, {{cite:19408dcbdd7fee7dec08e7cbf29fb9333e37ad84}}, giant magnetoresistance {{cite:004ed64d03cecc649af5828f87d92da96857039a}}, magnetic-field-driven shape memory effects {{cite:3fafca7ba8250d8a1d9e8983574ba0d73c356676}}, half-metallicity {{cite:52462c3f76c3f51a76205fc9032159471d1473e8}}, and Hall effects {{cite:91ff6261e2c8bc53007e72415ee589f3c286a304}}. In addition, some Heusler compounds exhibit excellent thermoelectric properties {{cite:749b73bd6cd9f7f810da0607e7c32fd07e7f91d1}}, {{cite:a15216569f1cff8b85e9cf5ebe0b64d66e2262d8}}.
To further validate the efficiency of GSC, we also evaluated our framework without using the ground-truth labels as input. For this setting, we restrict the comparison of the proposed methods to {{formula:ea2f271f-26e9-46ab-99d8-aed0abdd4736}} . Since our framework constructs a list of graph partitions, we use the Calinski-Harabasz (CH) index {{cite:b645a5e73878cfc115866e7c02d877678e6046aa}} as a measure of the quality of a partition of a dataset, corresponding to the normalized ratio between the overall inter-cluster variance and the overall intra-cluster variance. We estimate the parameters {{formula:fc7c33b3-cd4c-4539-b201-8f269b6ebd76}} and {{formula:1c70510d-d8b2-4447-926f-54d08d40ca14}} that maximize the CH index in order to select a solution among all the obtained partitions. The results of the comparison are shown in Tab. REF . As can be seen, {{formula:a762d242-b0b0-4d57-a56b-1b1b92f0215c}} significantly outperforms the other methods in nearly all cases and on average. Compared to the unsupervised evaluation reported in Tab. REF , the NMI of {{formula:1e370ac4-e1b1-4d9d-9911-09ac57f475bf}} here stays lower by a few percent. This indicates that the fully unsupervised version offers graph partition qualities comparable to the case where the ground truth is available.
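The selection step amounts to scoring each candidate partition and keeping the best; a minimal sketch, with the feature matrix X standing in for the node representations (which are only implicit above):

```python
import numpy as np
from sklearn.metrics import calinski_harabasz_score

def select_partition(X, partitions):
    """Among candidate partitions (arrays of cluster labels over the same
    samples), return the one maximizing the Calinski-Harabasz index.
    Each candidate must contain at least two clusters."""
    scores = [calinski_harabasz_score(X, labels) for labels in partitions]
    best = int(np.argmax(scores))
    return partitions[best], scores[best]
```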
We will first briefly summarize SpRAy from {{cite:9ec631f188947385409b965dfeaed11fcb0834e6}} (see Figure REF for a procedural overview),
emphasizing and motivating where and how we go beyond {{cite:9ec631f188947385409b965dfeaed11fcb0834e6}}. An algorithmic summary of the technique can be found in Algorithm REF .
Logistic regression {{cite:61b2611923fda00c18acc39d290d55ff28892b66}}, {{cite:dcb9cefbb2aa91515ffa5b2a997a4ebfeadd693f}}, {{cite:3a221f8792b1819f674bc3e0ea79256f2473139e}} is one of
the most commonly used tools for binary classification. Although the
logistic function has been known since the early 19th century, the
logistic regression model was developed in the second half of the 20th
century {{cite:665c10b5bccda3e87d804784b69abdac3ddd3c5a}}. Adaptations of logistic regression models have
been developed to make it more flexible, through basis expansion,
or less flexible, by means of regularization. For an overview, see
{{cite:f0fb3a34d66853684c8557590db57a1e2390c6bb}}.
Comparison with unsupervised domain adaptation.
To demonstrate the effectiveness of our method on unsupervised vehicle Re-ID, we further evaluate it in the domain adaptation setting.
Following the protocol in {{cite:362bd40b765a33e58526e12b96f5224398e290c0}}, we use VehicleID {{cite:9b49158a419c4deeb3db69166e0a60e0b1d1337b}} as the source domain and employ the repelled loss {{cite:26a82d383238b7e328a60b0495eec3413df71d3a}} for supervised training, replacing the recognition stage in REF .
We compare our method in the domain adaptation setting (VAPC_DA) with three state-of-the-art unsupervised domain adaptation methods, SPGAN {{cite:9cb2a447c16fc6cda3a463012fb7347e1d4b0839}}, ECN {{cite:32ab59bee75279fbb9595388ae80270e08716d93}}, and UDAP {{cite:362bd40b765a33e58526e12b96f5224398e290c0}}, as shown in the lower half of TABLE REF .
Lemma 16 (Hoeffding inequality {{cite:3d8526122ecdf407118a0e7752d22a47755d37f7}})
Let {{formula:e1a5667e-9c73-416a-b7ad-33b882d7b51c}} be independent random variables such that {{formula:3c86d2c1-2e2c-4000-9492-df8ad9d1f204}} with probability 1 for all {{formula:ce099cad-13f0-4be0-be9e-821733877346}} . Let {{formula:9c67c7f6-9cc6-419c-89e2-e8a7103fa2de}} . Then for any {{formula:b0f6e8fa-5ed1-4669-9c05-4c5ea134ec82}} , with probability at least {{formula:c96b63a8-1e4e-4cad-983e-e278f634de47}} , there holds {{formula:14c59eeb-4136-4e43-a221-fd264de25e22}}
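Since the statement above is rendered through formula placeholders, the following is the standard form we take it to have, assuming the independent variables satisfy $a \le \xi_i \le b$ almost surely:

```latex
\[
\Pr\!\left(\Bigl|\tfrac{1}{m}\textstyle\sum_{i=1}^{m}(\xi_i-\mathbb{E}\xi_i)\Bigr|\ge\varepsilon\right)
\le 2\exp\!\left(-\frac{2m\varepsilon^{2}}{(b-a)^{2}}\right),
\]
equivalently, for any $\delta\in(0,1)$, with probability at least $1-\delta$,
\[
\Bigl|\tfrac{1}{m}\textstyle\sum_{i=1}^{m}\xi_i-\tfrac{1}{m}\textstyle\sum_{i=1}^{m}\mathbb{E}\xi_i\Bigr|
\le (b-a)\sqrt{\frac{\log(2/\delta)}{2m}}.
\]
```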
With the introduction of the focal loss, an effective approach to mitigating the issues of long-tailed distributions, the performance of RetinaNet {{cite:f3eeef6780af4d10c02024b6aeff52d33e602a90}} and GFL {{cite:de8e6d451af3fbaa08cdab49f11905cb06dbac27}} improves significantly compared to the two-stage algorithms. We notice that the ARs are larger than the APs for these algorithms, indicating the presence of erroneous detections given the small inter-class gap. However, with an increasing number of iterations, RetinaNet {{cite:f3eeef6780af4d10c02024b6aeff52d33e602a90}} is able to achieve strong performance (up to 4.8% gains) in small object detection. Therefore, handling the long-tailed distribution is key for models to reach satisfactory performance on GLSD.
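For reference, a minimal sketch of the (binary) focal loss introduced with RetinaNet; alpha = 0.25 and gamma = 2 follow the original formulation, and targets are assumed to be 0/1 floats:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t): down-weights easy
    examples so that training focuses on hard (e.g., rare-class) ones."""
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = p * targets + (1 - p) * (1 - targets)
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1.0 - p_t) ** gamma * ce).mean()
```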
Lastly, note that while “unitary" commonly describes a matrix, we use the term in the following theorem to refer to a unit scalar {{formula:e3028c4d-c8ae-4dd7-9ab6-5f51c84e1804}} with the special property that {{formula:302612d0-32a8-4fd8-bccd-652fc7924a8a}} ; later we will use {{formula:0f0d6bf4-986e-4cea-a87e-b1aa7feeedd5}} to refer to the set of these unitary scalars. Note that since {{formula:b4cc0d92-bc87-4db6-affa-c30bc52b5d1a}} has {{formula:4facd165-1f84-44c8-a443-eb8d71ff50e4}} solutions ({{cite:4de432ac411a38385d2340a7ffa7797ef8435f36}} and {{cite:d63ddea9f57dd493101aef2d11792d44f213d918}}), we have {{formula:04a4b1e1-31a7-4e61-bbfd-99313c6f735a}} .
We report the results on the Office-31 dataset based on AlexNet in Table REF . Our method TDMDA outperforms all of the mentioned methods by a significant margin of 4.35{{formula:85608d2a-81cd-4095-a34b-a5175ddc0bc0}} . Entro {{cite:9849ba13bb3e920950f91fd9ec99af752d2a0a1d}} proposed to match the predictive uncertainty for adapting the classifier in the target domain; however, our method exceeds Entro by a large margin of 7.42{{formula:737150f0-f150-43d2-94be-77a0dfbc78ab}} in average accuracy. We also report our model's performance on the Office-31 dataset based on ResNet-50 in Table REF , with an improvement of 0.3{{formula:296f5f98-a70e-47e6-a031-ae1a215d8a51}} . TDMDA outperforms methods based on classifier adaptation such as TAT {{cite:f334e3732862c1944a87c8d06054e93ec66845b9}}, CAT {{cite:3cdf7cca450789daa158a7add980bd44e2ce4061}} and DAAA {{cite:0015baeba72f1d4642b543a086241bbf34402f1c}} by a margin greater than 2{{formula:44da3667-21ba-4397-8528-754d14a7e888}} . The proposed method even performs better than the recent approaches {{cite:49a3e8c4dfa2c032126898ef952ef83598cfa677}}, {{cite:8e02f2014054466d4ea470c47bb809d473985205}}, {{cite:766b63c712418933d82530e70e4181fc3fb61970}}, {{cite:c54c392e73f75c53302b19b1cf6ba7431885f589}}, including those that incorporate complex attention mechanisms {{cite:13c64f367cfaeb8fc45ed495c06658748ee49931}}, {{cite:b79c133877c78d5dc9a94f2edafbda815d98a1dc}}. This shows that adapting the classifier in the target domain is necessary for effective domain adaptation.
Data Collection.
We detail the procedure for building Spoken-CoQA as follows. First, we select the conversational question-answering dataset CoQA {{cite:a44275b2a5f8054c6c81b589f7568cf375eed525}} as our basis, since it is one of the largest public CQA datasets. Because the test set of CoQA {{cite:a44275b2a5f8054c6c81b589f7568cf375eed525}} is not publicly available, we follow the widely used setting in the spoken question answering task {{cite:7dd5f0cd6f8d685c8cbd1d2755cfa4ea004c06bb}} and divide the Spoken-CoQA dataset into a train and a test set. CoQA contains around 8k stories (documents) and over 120k questions with answers. The average dialogue length in CoQA is about 15 turns, and the answers are free-form text. In CoQA, the training set and the development set contain 7,199 and 500 conversations over the given stories, respectively. Therefore, we use the CoQA training set as the reference text for our training set and the CoQA development set as the test set of Spoken-CoQA.
As shown in Tables REF and REF , RePOSE achieves state-of-the-art ADD(-S) scores on the Occlusion LineMOD dataset. In comparison to PVNet {{cite:867f9c5355723dc8f25f39636fd723c4c12deeb7}}, RePOSE successfully refines the initial pose estimate in all object categories, achieving improvements of {{formula:4b9f2006-c33a-40c3-97eb-8896918c4604}} % and {{formula:2aef0a38-e43f-47ee-991b-0b4656790493}} % on the LineMOD and Occlusion LineMOD datasets, respectively. On the LineMOD dataset, our score is comparable to the state-of-the-art EfficientPose {{cite:fcd4faa84bf126da54b95320019322dae9f2d916}}, which is slower than RePOSE. The key difference is mainly in the ape and duck categories, where our initial pose estimator PVNet {{cite:867f9c5355723dc8f25f39636fd723c4c12deeb7}} performs poorly. Interestingly, for small objects like ape and duck in the Occlusion LineMOD dataset, we show significant improvements of {{formula:2c4e1a7c-4759-4610-b31c-39b12a58647a}} and {{formula:1fa1dd23-2355-45cf-8c15-4d3e08dbdd76}} , respectively, over the prior art HybridPose {{cite:679f653a02f7674e0a20df466a2392c8263f1bc8}}. RePOSE is also effective on the challenging symmetric object categories such as eggbox and glue.
For our experiments, we adopt a self/coordinated segregating multi-modular multi-layer co-attention model, consisting of self-attention and guided attention, for generating the fine-grained feature space for VQA. This is based on the scaled dot-product attention of {{cite:480e36c6d3e4aa83386a2557c6d5f3f9ad90fe6b}}.
Though the transformer is built around queried attention, we interpret the attention model as maximization of the related {{formula:74e42c08-9be3-48bc-9685-b7edb497bbc1}} feature sub-space for visual reasoning. It establishes the relational coherence between the image features and learnable natural language. Here, tensors with minimal cosine distance produce very high amplitudes, and a softmax suppresses the influence of everything except the maximum. The tensor is divided by {{formula:95ea806a-6d7f-4909-bc70-660a7609a09a}} (the dimension of the query), which prevents overexposure of the closeness between image features and language.
Though the original transformer is proposed with a query of dimension {{formula:114f03e1-7e55-4fb7-b256-6b983182576f}} , the query can be assumed to be {{formula:afbf17d7-0f27-45e6-93a0-7d0ca13f01c3}} ; in reality, the self-attention concept {{cite:480e36c6d3e4aa83386a2557c6d5f3f9ad90fe6b}}, {{cite:b4bce2d746c12fffc0a6138051b43407841791eb}} has no physical meaning there and its utilization is limited. Our approach of segregating information is more justified and realistic. For state-of-the-art performance, it is preferable to adopt a multi-modular co-attention model in which self-attention {{formula:0007c04d-73d6-4fd0-ba7a-d1de31edc3aa}} captures the relationships between the different objects present in the image. A stronger relationship brings two segments closer together: a "white-trousers" image region and a "trousers" image region will be close to each other, and their inner product will be maximal. Similarly, guided attention assumes that the inner product of the tensors for a "white-trousers" image is the same for the word embedding of "trousers", but such operations are very hard to control and can excite false associations. Our segregation approach places a check on this by learning the feature space and emphasizing usability and relevancy.
Attention creates representations that converge for both the regional image set and the word embeddings of the texts. However, in the absence of a concrete usability formulation (such as segregation), multi-head attention creates several possibilities and then encodes them into the final tensor. These approaches either encode events or stack events; in both cases the features are combinations of all possibilities.
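As a point of reference, a minimal sketch of the scaled dot-product, self-, and guided-attention operations discussed above; the projection matrices wq, wk, wv and the functional formulation are illustrative assumptions, not the exact model configuration:

```python
import torch
import torch.nn.functional as F

def scaled_dot_attention(q, k, v):
    """softmax(Q K^T / sqrt(d)) V; dividing by sqrt(d) tempers the logits."""
    d = q.size(-1)
    w = F.softmax(q @ k.transpose(-2, -1) / d ** 0.5, dim=-1)
    return w @ v

def self_attention(x, wq, wk, wv):
    """Relations among elements of one modality (e.g., image regions)."""
    return scaled_dot_attention(x @ wq, x @ wk, x @ wv)

def guided_attention(x, y, wq, wk, wv):
    """One modality (x, e.g., image regions) attended under the guidance of
    another (y, e.g., question word embeddings)."""
    return scaled_dot_attention(x @ wq, y @ wk, y @ wv)
```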
The initial motivation to introduce disorder in the original Sachdev-Ye model {{cite:84339dac42140b2b192d6e4f52e9cfc7ebd15a35}}, {{cite:ae002d9d2a7e7564ddd1d6a16ae76da9238faaef}}, {{cite:a8b2e7f06757e8cd25f8c58aff6fdef9f658465b}}
was to simulate the zero-temperature quantum phase transition between the quantum disordered spin-liquid phase and the magnetically ordered spin-glass phase.
It would be interesting to study whether our partially disorder-averaged model brings any further insights into this aspect.
In order to benchmark the proposed methods, we apply them to a set of medical image classification tasks using datasets from MedMNIST, a collection of 12 pre-processed open medical image datasets {{cite:20b0074da92960defa6b2a5eb682808788c8eb1c}}. The collection has been standardized for classification tasks on 12 different imaging modalities, each with medical images of {{formula:a4db7a8f-9726-4568-be17-fdf542c5df29}} pixels. The quantum transformers were trained on all 12 MedMNIST datasets and achieved very competitive accuracy while demonstrating a significant reduction in the number of model parameters with respect to the current benchmarks {{cite:20b0074da92960defa6b2a5eb682808788c8eb1c}}. Detailed results can be found in Section REF .
The outbreak of the COVID-19 epidemic has resulted in millions of confirmed cases and deaths, evoking fear locally and internationally. It has had a huge impact on the global economy as well as on everyone's daily life. Numerous mathematical models are being produced to forecast the spread of COVID-19 in the US and worldwide {{cite:395b4e2d2d9adcba6afa4c38159e86294f44ac0c}}, {{cite:c7d4f8c2d84aa5e5d95d040b9acfaf4ea97f9971}}, {{cite:c3224d1b028166dab9873f22e86b49e9bec26624}}, {{cite:34d2ad80a1a00a17d0baaf1dd619a5d5666cb81a}}. These predictions have far-reaching consequences for how quickly and how strongly governments move to curb the epidemic. We aim to exploit the abundance of available data and integrate existing data with disease dynamics based on epidemiological knowledge.
Possible mechanisms behind the QPOs in AGN have been widely discussed
(see, e.g., {{cite:e4159dec4a78bc30281055ce8623c4e0248cdfc2}}, {{cite:7ca0fcc45e68966ad484ad991db2b000728e9a53}} and references therein).
Since QPOs in stellar-mass black-hole binary systems have been well studied, and in most cases interpreted as related to accretion at the innermost stable circular orbit around black holes {{cite:45209a2a289842c4dc246691c057edfd4558b9a6}}, a similar scenario was considered for AGN QPOs. The most notable case is the X-ray QPO found in RE J1034+396 {{cite:8a60b117870f7e61fa20b7a69167bd13c9b80ba4}}, which has a 1-hr period.
Scaling the different QPO frequencies of stellar-mass black holes to the 1-hr periodicity, a mass range of 4{{formula:8b943de0-0bab-4c60-844e-c8a83ddd12ee}} –10{{formula:ad47da72-d52c-4a03-bfbf-c2f761e45233}} {{formula:46135677-732d-4351-aaf0-b75e5091a15e}} was found for the SMBH in RE J1034+396. We note that this source is also a NLSy1 galaxy, like J0849+5108. If we apply the same scenario to our target, an extremely massive black hole ({{formula:517f0395-c166-4b10-a66d-233ee115bc64}} ) would be implied, in contradiction to the general {{formula:c70953e8-22dc-41f7-8cff-50b87c009b39}} range of NLSy1 galaxies and the estimated mass of the SMBH in J0849+5108. A related scenario is that the jet precesses, driven by Lense-Thirring precession of the accretion disk around the black hole. This scenario has been widely considered for QPOs seen in stellar-mass black-hole systems (see {{cite:c1826207b84b4cdee9784ccdea0c13dc7e4cbc34}} and references therein). However, the above mass-mismatch problem remains, as the timescales of the periodicities are expected to scale with {{formula:a4a7081a-7abe-4a65-bece-876782075750}} .
We compute the parameters {{formula:fac740f7-f9f1-4994-8bc4-3f0ce11b1d35}} for the {{formula:9909d976-2b98-452a-b55a-643f6a642919}} -bi-fractal
and {{formula:13f126cf-f141-459c-82b2-54d7d11ff510}} for the {{formula:faf3d67e-1c4b-41d0-a285-8a9cffc81f92}} Cantor nest. For ten different
values of {{formula:5b80a44a-ada3-4ef3-87d0-f3d6a1b1d1c2}} ranging from {{formula:682b3974-a737-4db8-a5f3-642cd7c475ee}} to {{formula:30871102-6fb6-4502-81fa-259792e4832c}} we count the
number {{formula:9e755bd9-86a5-4da9-9eaa-658fccb3ed63}} of points necessary to draw the fractals. Using
Python's scikit-learn library {{cite:956b90b626d326fe6ec90c62495c8120c77764dd}}, we find the slope
for the linear regression of {{formula:7795a25d-cb6f-4884-a315-5175cdff6e74}} against {{formula:4f53c3f7-2fe7-494b-9f78-6e70d30b9ff9}} .
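A minimal sketch of this fitting step; regressing log N against log(1/ε) is an assumption here, since the regression variables appear above only as placeholders:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fractal_slope(eps, counts):
    """Slope of log N(eps) against log(1/eps) over the sampled scales,
    i.e. a box-counting-style dimension estimate."""
    X = np.log(1.0 / np.asarray(eps)).reshape(-1, 1)
    y = np.log(np.asarray(counts))
    return LinearRegression().fit(X, y).coef_[0]
```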
Moreover, inspired by the successful applications of deep generative models in many fields {{cite:94fb0c6c2bf3f51f795130746cf506421a161d5b}}, recently, a new perspective of CS has emerged, for which the sparsity assumption is replaced by a generative model assumption. That is, rather than being sparse, the signal is instead assumed to be close to the range of a generative model {{cite:8f09ace606b24c7e7c1431d20ba1e37ce29f0a15}}. In addition to the theoretical developments in {{cite:8f09ace606b24c7e7c1431d20ba1e37ce29f0a15}}, the authors also provide impressive numerical results showing that for some imaging applications, using generative priors can significantly reduce the required number of measurements (e.g., by a factor of 5 to 10) for recovering the signal up to a given accuracy. Follow-up works of {{cite:8f09ace606b24c7e7c1431d20ba1e37ce29f0a15}} include {{cite:83860eeb9fee334915beb0bf585a7514d6906f91}}, {{cite:acb001fbe4b9532b320085e3ea68090d8880f7d7}}, {{cite:ff4b4f9f7a284a1502aa552d7fd6ff50b4bc7e16}}, {{cite:1d3d05905ec84b6c64f787a6a13d19cf3765d5bc}}, {{cite:11fb471e14e9fd0a52c1a115d8a526044ead1083}}, {{cite:bad7a87da05d1ced58c6a7f2493f3987a52c502c}}, just to name a few.
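As a sketch of recovery under a generative prior (the estimator analyzed in the cited line of work, up to details), one minimizes the measurement misfit over the generator's latent space; the generator G, the optimizer settings, and the restart count below are illustrative assumptions:

```python
import torch

def csgm_recover(G, A, y, dim_z, n_steps=2000, lr=0.05, n_restarts=5):
    """Estimate x from y ~ A x by searching the range of a generator G:
    minimize ||A G(z) - y||^2 over the latent z, with random restarts
    to cope with the nonconvex landscape."""
    best_x, best_loss = None, float("inf")
    for _ in range(n_restarts):
        z = torch.randn(dim_z, requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(n_steps):
            opt.zero_grad()
            loss = ((A @ G(z) - y) ** 2).sum()
            loss.backward()
            opt.step()
        if loss.item() < best_loss:
            best_loss, best_x = loss.item(), G(z).detach()
    return best_x
```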
Further reasons to understand why results based on the AIT coding theorem should work in real-world applications can be found in information theory, research developed largely independently of Levin's work. The fundamental connection between probability and data compression has also been studied by Cover {{cite:f5c783513ca06e30005e6af597c5a70da4b3969e}}, Langdon {{cite:521a3c0dcb9fbf9a4e1d1acbf27db56f49c2d324}} and Rissanen {{cite:d93a0a8046400c12e58051a8e62e242d44f392c7}}. Since then, different communities — e.g., information theory {{cite:5e6d0bd261b1b1b2cb8b6cf8a4094baa06ee6174}}, {{cite:ed684f24c20ee2b599992158051d1356c466d8ef}}, optimal gambling strategies {{cite:f2915acd77e6f9fc9e7aab805ac7c8b5db2d5338}}, and password guessing {{cite:24841d2a90f3a78ff6b25005c67d70c669343a0f}} — have studied and exploited the probability-compression connection without utilising Kolmogorov complexity per se, but instead Lempel-Ziv-style compression approaches. In a review, Merhav and Feder {{cite:3a341586c1a360174cb57d5e0a5b821c337215a9}} surveyed results in the area known as universal prediction and explicitly point to {{formula:87b658c0-2f84-4c0b-a597-16027bea9845}} as an effective universal probability assignment for prediction, based on the results of ref. {{cite:bc7041ea05b5cf5291b513136e64db022a2e9bda}} and others, where {{formula:6c192cdc-551d-47fc-9602-5e37e305d097}} is the Lempel-Ziv compression complexity measure, essentially the same as we use here. These results all support the use of such abstract information-theoretic arguments in practical prediction contexts.
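For concreteness, a minimal sketch of a Lempel-Ziv-style complexity measure of the kind referred to above; this LZ78 phrase count is a simple stand-in (2^{-C(x)} then serves as a crude universal probability assignment), not the exact measure of the cited works:

```python
def lz78_phrase_count(s: str) -> int:
    """Number of phrases in an LZ78-style incremental parsing of s: each new
    phrase is the shortest prefix not seen before. Fewer phrases = more
    compressible = higher assigned probability."""
    seen, w, count = set(), "", 0
    for ch in s:
        w += ch
        if w not in seen:
            seen.add(w)
            count += 1
            w = ""
    return count + (1 if w else 0)

print(lz78_phrase_count("01" * 32))           # highly regular: few phrases
print(lz78_phrase_count("0110101100011011"))  # less regular: more phrases
```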
Given the multi-modal nature of the proposed method, we compare the performance of our model to the following state-of-the-art approaches: Deep Stochastic IOC RNN Encoder-decoder framework (DESIRE) {{cite:f481042a1732353b2a9b889e16f880c02464d0be}}, Multi-Agent Tensor Fusion (MATF) {{cite:7e334b8e5a863c2275a9c7ff5417db859979c82f}}, Prediction Conditioned on Goals (PRECOG) {{cite:cefa3620e63524e4332b5877c3ac103f6cfc109e}}, and Diverse and Admissible Trajectory Forecasting (DATF) {{cite:49d3ccb24881f0edbccc6d9ddf61e76190d87b54}}. We refer to our proposed model as LatentFormer throughout the text.
Forgetting the parity: We mentioned in Section the simple fact that a shortest {{formula:2b911b0b-1e51-487d-b6f3-3fae7b515ad1}} -path in a conservative graph can be determined by finding an inclusionwise minimal shortest {{formula:d5a513b2-1402-489a-8506-67f1387bb9b1}} -join, which is in fact equivalent to the minimum-weight {{formula:d56634a0-759e-41d4-8b7f-5d4954d57a72}} -join problem for arbitrary, not necessarily conservative weights. The first, well-known solution of the latter problem reduces it to non-negative weights, and then solves it as a weighted matching problem on {{formula:acb5a69c-066a-4573-8875-021ce43f0d5e}} {{cite:fe0d5776cd8dbe67c4b3b886d254f13b121dffad}}, {{cite:3b3dfc70ba1038548e58d22c39fc3c23ce107a4b}}.
Adversarial machine learning {{cite:2d2a4c3b5f68ecc584f32d786e95b1420bd98e15}} offers a natural inroad into this question. It is widely used within NLP to probe model robustness and defend against adversarial attacks {{cite:15e357e0cde2ef8e6413b1c853ce7c22ac98108d}}, {{cite:bdad62ba4a6fa66689d09b56c9691dba549b5fd0}}. A successful attack minimally perturbs an input such that: 1) its semantics is preserved, and 2) the model alters its prediction.
The first challenge is usually addressed by assuming that “small” perturbations will not meaningfully impact semantics.
However, previous works focus on lexical semantics while we are interested in logical semantics.
The
assumption that “small” perturbations will not impact logical semantics is unduly strong, as logical entailment is sensitive to any perturbation, causing standard methods to generate inconsistent attacks by inadvertently flipping the label.
Considering Fig. REF , existing methods may perturb “Mark is smart” to “Mark is intelligent”, but
this attack is inconsistent for Q1 as it is no longer entailed after the perturbation. To solve this logical inconsistency problem, our contributions are as follows:
with a constant {{formula:0951dd08-387c-45e1-a781-988fffac114d}} depending on {{formula:fbda2879-61f3-47c2-b4f0-b451b01235cc}} , especially, the determinant of the Hessian matrix of {{formula:995e674f-bab0-4e38-a7ef-1d2d95a7268d}} . See, for example, Hörmander {{cite:d0fbdcfebd7c0ca3e3c2b18297fcaa6ebfe1ccf0}} or Stein {{cite:e4c1d63108ee3d92c86b3d87530f3e31ad1d85c4}}.
The stationary phase method is an important tool in analysis which has wide range of applications, especially, to the time decay estimate for the dispersive equation, which is known as `dispersive estimate'.
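For reference, a standard form of the stationary phase estimate alluded to above, assuming a single non-degenerate stationary point $x_0$ of the phase $\phi$ on the support of the amplitude $\psi$:

```latex
\[
\left|\int_{\mathbb{R}^{d}} e^{i\lambda\phi(x)}\,\psi(x)\,dx\right|
\le C\,\lambda^{-d/2}, \qquad \lambda\to\infty,
\]
where $C$ depends on $\psi$ and on $\det\bigl(\nabla^{2}\phi(x_0)\bigr)\neq 0$.
```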
is SOS {{cite:c1419264584298f027cd41937c9e2a4dd2abf9fc}}. While this approach can be formulated as a convex optimization problem and gives a sufficient condition, it is not complete, as shown by the following.
We also note that there are many other techniques to measure structural density and velocity fluctuations as a function of spatial scale, e.g. by computing structure functions {{cite:463ec5f058e1c61616a94393c12c495eb1810e54}}, {{cite:d5e4de197468cd4d63dfc5cb47c769d279f3b748}} or the spectral correlation function {{cite:44b9600e3d5dc1c29bb1a14e1e4b1f284ee2942f}}, by using the velocity channel analysis {{cite:46ff097237f7de16e0b961dd13e761be44c92142}}, {{cite:0498fcc452cbf66f748a5ff5602a8a640af8888d}} or the Principal Component Analysis {{cite:5411a9a4e98b6624ea770297e6da8a1c7c6159a2}}, {{cite:fceab41497723e0fa3f287f2b8a952edcb45f5f3}}, {{cite:b403083282d8e6d64ec9a46c14048a4bba9eb047}}.
Remark 3.1 When {{formula:66e45b81-eb24-4c05-91b5-fddfb66087fa}} in our proposed Algorithm REF , our method reduces to the inertial proximal point algorithm studied in
{{cite:12b73cd848270026ef9cbd3171a479bdd1866ff4}}, {{cite:1d8ea2419204948e24e0cec0bca507178c1b328e}}, {{cite:b892d12fcfc943f923c503cc0d40d5a4f65030b8}}, {{cite:52bf75c5d2eee4c86f6c93d91aad8d62d901ac59}}, {{cite:e4b9ae23b874d9090e0f43f716cef5b3cafa08bf}}, {{cite:cc38c9c295547e257f1b971d45b72c24c95eee79}}, {{cite:4bae34ef278045feb5a8417fda91677770a4b31f}}, {{cite:3505923647edca08f2faebeb3ef193178c02b717}}, {{cite:2d4368a30ae6ec73101534920e0f5e1fde42f64b}}, to mention but a few, and extends the inertial proximal point algorithm of these works. We will show the advantage gained by introducing {{formula:7ead53fc-7bf5-4612-9ca4-5241202e577a}} in the numerical experiments in Section .
NCF, Neural Collaborative Filtering {{cite:94b76606197ada1fe5b6dbee7d29113c9c23760f}}, which combines the matrix factorization (MF) model with a multi-layer perceptron (MLP) to learn the user-item interaction function (a minimal sketch follows this list).
DeepAE, the deep autoencoder {{cite:f9d1399f30ebe1f8b918d9e28f10bacbd8f85831}}, which utilizes a three-hidden-layer autoencoder with a weighted loss function.
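A minimal sketch of the NCF architecture described in the first item; the layer sizes and the fusion head are illustrative assumptions:

```python
import torch
import torch.nn as nn

class NCF(nn.Module):
    """Sketch of Neural Collaborative Filtering: a GMF branch (element-wise
    product of user/item embeddings) fused with an MLP branch over their
    concatenated embeddings."""
    def __init__(self, n_users, n_items, dim=32):
        super().__init__()
        self.u_gmf, self.i_gmf = nn.Embedding(n_users, dim), nn.Embedding(n_items, dim)
        self.u_mlp, self.i_mlp = nn.Embedding(n_users, dim), nn.Embedding(n_items, dim)
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                 nn.Linear(dim, dim // 2), nn.ReLU())
        self.out = nn.Linear(dim + dim // 2, 1)

    def forward(self, users, items):
        gmf = self.u_gmf(users) * self.i_gmf(items)
        mlp = self.mlp(torch.cat([self.u_mlp(users), self.i_mlp(items)], dim=-1))
        return torch.sigmoid(self.out(torch.cat([gmf, mlp], dim=-1))).squeeze(-1)
```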
With these benefits in mind, we propose a neural network model tailored for such data with singly labeled crowdsourced annotations. It computes a latent truth for each sample and the correct bias of every annotator, while also considering the input feature distribution during training. We modify the loss function such that the annotator bias converges towards the actual confusion matrix of the corresponding annotator and thus models the annotator biases correctly.
This is novel, as previous methods either require a multi-labeled crowdsourcing setting {{cite:4a4d17b0c19cf27ce70c9aeec5397efd8cf2bfde}}, {{cite:9e2425e73a4deefb810d9041dd2a783e5c4e1519}} or do not produce a correct annotator bias during training which would equal the confusion matrix, see {{cite:2405d72f53916a01a161eb000e972f985bc2cc35}} and {{cite:61b5e1c1c15c0aedfdf5f93140b7ab7ef70a5314}}.
A correct annotator or annotator-group bias, however, is necessary to derive correct conclusions about the respective annotator behavior. This is especially important for highly unreliable annotators who label a large number of samples randomly – a setting in which our proposed approach maintains its correctness, too.
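One common way to realize such a bias-aware loss (a sketch, not necessarily the authors' exact formulation) is a per-annotator transition layer whose rows converge toward the annotator's confusion matrix:

```python
import torch
import torch.nn as nn

class CrowdLayer(nn.Module):
    """Per-annotator transition matrices on top of a base classifier: the
    model predicts a latent class distribution p(y|x), and annotator a
    observes it through a learned row-stochastic confusion matrix T_a, so
    p(label = j | x, a) = sum_k T_a[k, j] * p(y = k | x)."""
    def __init__(self, n_classes, n_annotators):
        super().__init__()
        # Identity-biased initialization: annotators start out "reliable".
        self.logits = nn.Parameter(torch.eye(n_classes).repeat(n_annotators, 1, 1) * 4.0)

    def forward(self, p_latent, annotator_ids):
        T = torch.softmax(self.logits[annotator_ids], dim=-1)  # rows sum to 1
        return torch.einsum("bk,bkj->bj", p_latent, T)

# Training: cross-entropy between the CrowdLayer output and the single
# observed label per sample; T_a then estimates the annotator's confusion.
```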
We start by stating the convergence guarantees for the single-pass NSEG method. This is obtained as a direct corollary of
{{cite:952289b0b8e54dde701a5dd877ccd92ed5a06c9b}}, where we use an explicit bound on the oracle error with the variance of the Gaussian.
Our method (DFPA) showed a lower MOS than VITS (VSPA) when the proposed alignment method was used: DFPA had a MOS of {{formula:9a350269-d410-424b-95b6-14c6d2499ff6}} , whereas VSPA had a MOS of {{formula:e3afb8ad-98c2-4db9-8b59-d9feaa02e67f}} . In contrast, when ground truth alignments were used, our proposed method (DFVA) achieved a higher score than VITS: DFVA had a higher MOS of {{formula:ce644237-f316-4e83-92c9-4fafd754f6a6}} than VSPA, which showed a MOS of {{formula:cb470c57-423b-4d5a-8eda-52d0d8366721}} .
We suspect that one reason for the poor performance of our alignment method is the lack of acoustic distribution modeling, unlike VITS, whose flow-based acoustic model also serves as the alignment model; however, further investigation of alignment methods, including that of VITS, is required (it is suspected that durations derived from VITS differ from manually annotated durations: https://github.com/jaywalnut310/vits/issues/9).
Our proposed method benefited from the high naturalness of the waveform model: ABS for our methods (ABSD) had a MOS of {{formula:6c74d815-50fe-486f-b771-ad8818500b5d}} , whereas that for VITS (ABSV) had a MOS of {{formula:ddb5ca8b-707f-4927-ad36-673bca675e0c}} . We inferred that the low naturalness of the VITS waveform model was caused by joint training, as originally reported in their paper {{cite:fd262fdae529cdf892fc9312100fb6b2c814a007}}, and the independent training of our waveform model gave more consistent quality. VITS showed a MOS closer to that of ABS than our method, which indicated that the acoustic model of VITS performed well or latent features from the VITS waveform model were more predictable.
A 0.5 μm–wide 6.9 μm absorption feature is of interest in a wide
astronomical context because the feature is seen in young stellar
objects and molecular clouds which have a complex inventory of dust
features due to ices, carbonaceous materials and silicates. The
feature also seems to underlie narrower PAH absorption in 2005–2008 Spitzer
spectra of dust obscuring the carbon-rich born again post-AGB star
known as Sakurai's Object (V4334 Sgr) {{cite:f3445a786d5a52abc16f73f5f81fbb249db42433}}.
This convergence forms the basis of our consistency argument.
We first introduce a result of {{cite:df5238a79699f516c09528b39f3f2e312b58675b}}, which shows that
with probability 1, as {{formula:e44d2852-13a1-493f-8cfb-e5d2e78b4be5}} , the system (REF ) with
{{formula:55ba66fc-4ea5-4ee5-900b-c36ab2c02ea2}} consistently models the Laplace-Beltrami operator in the
{{formula:43f0b90e-70a5-4cd6-b18b-f31b50ddd6bc}} sense.
and it was first conjectured by Dalitz {{cite:644548b8220ff0df3490e461ef4d0f0a1951bdcd}},
The analysis of individual calibration methods shows that isotonic regression ({{formula:e0a485ee-57c0-42c7-a24f-8c251bc8da1f}} ) and Platt calibration ({{formula:340054db-40c8-4a72-add6-5d265cd269c8}} ) have a detrimental effect for both {{formula:73a34309-5c3e-4fd0-a999-2083966b9aef}} and {{formula:59ca272a-eb49-45c7-b5e8-4d55ac773513}} imbalance configurations.
Both methods rely heavily on the number of available class samples.
The negative influence of {{formula:8469e5b3-c1ae-4419-b6f7-ab5ed25d6adf}} and {{formula:7be39db7-4fc9-4f32-8521-0fd446d5cd5b}} is larger for lower memory sizes and, within each {{formula:56f23be6-d328-480a-a175-60800a1d9b33}} size, for later incremental states.
This is probably an effect of the lack of sufficient data to obtain a stable parametrization of the methods.
The behavior of {{formula:f4041a37-10b0-409e-95ce-0ff51ab91b66}} and {{formula:2ffd86f7-2709-4add-b7d9-4b3e3d59852a}} in imbalanced incremental learning is different from the one previously reported in {{cite:f127c9e550f6e917fc19edf91c9a364e9a57ae95}}.
There are two main differences between the two studies: (1) the algorithms used are different (deep models here and shallow models in {{cite:f127c9e550f6e917fc19edf91c9a364e9a57ae95}}) and (2) the amount of data available for calibration which is much smaller here.
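For reference, minimal sketches of the two calibration methods fitted on held-out scores (using scikit-learn; the classifier scores and the validation split are assumptions):

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression
from sklearn.linear_model import LogisticRegression

def platt_calibrate(scores_val, y_val):
    """Platt scaling: fit a sigmoid on held-out scores."""
    lr = LogisticRegression().fit(scores_val.reshape(-1, 1), y_val)
    return lambda s: lr.predict_proba(s.reshape(-1, 1))[:, 1]

def isotonic_calibrate(scores_val, y_val):
    """Isotonic regression: fit a monotone step function on held-out scores."""
    iso = IsotonicRegression(out_of_bounds="clip").fit(scores_val, y_val)
    return iso.predict

# With few samples per class (small memory, late incremental states), the
# fitted step function / sigmoid is unstable, consistent with the drop above.
```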
This latter bound can be recast in terms of other commonly-used density parameters and fractions: {{formula:2185b2aa-de61-4734-8761-8feb882dcae8}} , {{formula:a72f9fcc-3539-44c5-9e6f-c895125a27a1}} or {{formula:1f711611-0959-4973-ac95-4555b18851d9}} . It is interesting to note that the constraint on the non-relativistic energy density is only an upper bound. This is another way to illustrate the fact that Planck CMB data is compatible with neutrinos being massless {{cite:f1f001e07ab983b864b019fb488f756d27740198}}. Also shown in Fig. REF is the posterior distribution for the sum of neutrino masses {{formula:5d46e379-d2eb-4157-a7c5-db014bee1eaa}} . We see that within our setup, where we allow the number density of neutrinos to vary, the bound on the neutrino masses is naturally relaxed as compared to the case where they have a Fermi-Dirac distribution with the expected temperature in the Standard Model. Indeed, we see that in our case the upper limit on the sum of the neutrino masses extends all the way to the end of the prior range and is therefore essentially unbounded. For comparison, in the case of the Fermi-Dirac distribution with {{formula:00d90fb3-a335-4286-bfb7-47e4ae5f0758}} , this limit instead reads {{formula:149d1c0e-dadf-485b-acab-c21c9125ad0f}} {{cite:f1f001e07ab983b864b019fb488f756d27740198}}. Importantly, however, we can see that this limit is nonetheless fully compatible with our bound on the non-relativistic energy density by considering {{formula:565d7636-d4ed-489c-bd70-88594fd355f7}} . A larger triangle plot with more parameters can be found in App. .
where {{formula:9a168fa4-5b90-4a0b-aa19-bf0d38971ce0}} and {{formula:f25e5ecd-cda4-4246-9d98-ad9d54ccf036}} (an additional dimensionful
constant may be present if {{formula:58aa6aa7-d7fa-4d77-bffa-4ff34063f6aa}} and {{formula:09f3b938-a04e-4bf5-9ed4-c5bcc0887e22}} have different units). By
considering a real {{formula:98be8067-a5eb-4567-9b92-825b2ff8729c}} , we can restrict to the
Heisenberg-Robertson inequality {{cite:7e9fc351324752c7b35aefb50b10c2271e510e26}} {{formula:4547a167-73b3-4ae8-a1ee-5e2b59e206f7}} and squeezed states: the best metrological
advantage is obtained in this case (this can be easily shown by
repeating the derivation below with complex {{formula:451a8ec3-ec09-4936-a1c3-d176a5a53735}} ), so we will
consider only real positive {{formula:c39e5619-8a81-4982-a464-06fc24c7a806}} .
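For reference, the Heisenberg-Robertson inequality mentioned above reads, with $\Delta$ denoting the standard deviation in the given state,

```latex
\[
\Delta A\,\Delta B \;\ge\; \tfrac{1}{2}\,\bigl|\langle [A,B] \rangle\bigr|.
\]
```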
It is shown in {{cite:70f67ff230eaa2e27d7e28155053c178f7c3b5f5}} (p. 185) that there is an infinite Blaschke product that has an angular derivative at no point of {{formula:7179e07d-776c-4c61-82f7-3aa844d36111}} or, equivalently, there is a sequence {{formula:0ecc0e3d-f878-440d-b40b-2aa3dbd0535e}} such that
{{formula:57e43ca5-44d0-4922-ac26-5800ca2a661b}}
| r | 155a5bba72dbd57417684d9f1d5e3e10 |
In fact, efficient schemes
of this class up to order {{formula:d4014297-3767-45b1-8b8f-b6bf59e4f376}}
have been designed over the years (see e.g. {{cite:31dd465f7211609fba1f079000b8ee68deec661d}} and references therein).
In addition, they preserve qualitative properties of the continuous system and
show a very good behavior with respect to the propagation of errors, especially for long time integrations {{cite:a9c7527a5fb1822e2c8189085a00f75ab10ff311}}.
| i | 2740d44cbb4b12052bee2268c23a2cf4 |
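As a concrete toy instance of the splitting schemes discussed in the row above, here is a second-order Strang (kick-drift-kick) step for a harmonic oscillator; the oscillator, step size, and step count are illustrative choices, and the high-order methods referenced there compose such steps with tuned coefficients.

```python
import math

# Second-order Strang splitting for the harmonic oscillator
# q' = p, p' = -q.  The kick-drift-kick composition is symplectic,
# which underlies the good long-time error propagation mentioned above.

def strang_step(q: float, p: float, h: float) -> tuple[float, float]:
    p -= 0.5 * h * q      # half kick:  p' = -q
    q += h * p            # full drift: q' = p
    p -= 0.5 * h * q      # half kick
    return q, p

q, p, h = 1.0, 0.0, 0.05
for _ in range(int(2 * math.pi / h)):  # roughly one period
    q, p = strang_step(q, p, h)
print(q, p)  # close to the exact (1, 0) after roughly one period
```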
The rapid emergence of the global pandemic, followed by the deployment of pervasive mitigation policies, had uneven impacts across society {{cite:c3bbdcab23fd2d6c46575e2afb57217e5a059e51}}, {{cite:8f48f18890ffd8d14a3d8fa93f8e78ea6e070b8d}}, {{cite:e14a32ccfada216dbcf224d83c00cb9c4230a60e}}, {{cite:fc16ec613e5e2c7cc05a462f2f26b388fb63f7d4}}, {{cite:45a3a5a754ee07125727e968d38275bee4d0f2cd}}, {{cite:4c32bc327d0d817d673bc6a7d732cbf6a9abb105}}, {{cite:790a6edc43f2fcf15a71e23ece89b02589fc8851}}.
Against this backdrop, we contribute to a rich literature emerging from this global crisis {{cite:6a9c60362f750675fdd8897d43e4019dbf05732f}}.
One pertinent analysis separated US counties according to when and whether they featured government-mandated stay-at-home orders, representing a significant interruption to everyday life and perceptions of uncertainty,
and found relatively higher house sale prices in the shut-down counties {{cite:cd6c2cf56af7f14db45fdb6c7307ab679cef1cec}}. Our results are also consistent with economic theory and empirical analysis identifying reduced supply elasticity attributable to COVID-19 supply constraints {{cite:9025def50463e3cf5c68b926d42abb82ed2187c7}} that negatively impacted building costs and other factors that exacerbated supply inelasticity {{cite:6b3ab1a75846b8f0488c37ac54c174424d3eb03f}}, two features that are central to the theory of emergent housing bubbles {{cite:be17c4431b1ad91f965191acb0e7ff0e44a18273}}, {{cite:c9a8fe68a65f286c39993e7c3d8da96c2a10c382}}.
These supply factors were complemented by remote-work options that expanded the effective radius of search accompanied by a decreasing demand for amenity density {{cite:acb3dc1ae439b9d05b8d8b76a8679fa5bdc2cf81}}.
California is also characterized by prominent regulation on new home construction, which was further exacerbated by supply-chain disruptions {{cite:c5f04a82ee7bd1aa288b418416f5fa1832a38ef1}}.
| d | 1bf6cd535efa9c498900cf863a90d0a2 |
This paper considers distributed estimation under heterogeneous distributions among the data blocks, which is closely related to the Federated Learning and especially the multi-task learning (MTL) {{cite:ba044a5bbff400edf89842e162701b83676b6ac2}}.
We consider distributed M-estimation where there is a common parameter shared by the distributions of the data blocks and data-block specific heterogeneous parameters. Our treatment of the heterogeneity is made by explicit parameterization, which is different from the MTL where the heterogeneity is regularized by penalty terms.
It is noted that {{cite:2ba35e89a34ebb1191cc5eec6b78b6aff0793f67}} considered a heterogeneous setting, but under a fully parametric likelihood framework.
Our study reveals that in the presence of the heterogeneity
the full-sample M-estimator of the common parameter, obtained by requiring full data communication, can be less efficient than the SaC estimator. However,
this phenomenon disappears if the objective function of the M-estimation satisfies a generalized second-order Bartlett's identity,
which is satisfied by the parametric and quasi likelihoods, and by least squares estimation in parametric regression.
| i | 75b99d6a4fac84721a5a1d533d998a4f |
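A minimal sketch of the split-and-conquer (SaC) aggregation idea referenced in the row above, assuming a toy quadratic-loss setting in which the block M-estimate is the block mean; the block sizes and data are placeholders, not the paper's setup.

```python
import numpy as np

# Split-and-conquer (SaC) sketch: each block produces a local
# M-estimate of a common parameter (here the mean, i.e. the least
# squares estimate), and the SaC estimator averages the block
# estimates without communicating the raw data.

rng = np.random.default_rng(0)
blocks = [rng.normal(loc=2.0, scale=1.0, size=500) for _ in range(10)]

local_estimates = [block.mean() for block in blocks]  # per-block M-estimates
sac_estimate = float(np.mean(local_estimates))        # SaC aggregate

full_sample = np.concatenate(blocks)
print(sac_estimate, full_sample.mean())  # coincide for quadratic loss
```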
{{formula:b4a29a51-169c-4c25-9486-499b3785b55a}} Weakness. Although our model can achieve good results on the one-shot affordance detection task, more effort should be made in future work to address its limitations. First, there is still a large number of parameters in the proposed model, which can be further reduced by designing efficient and light-weight backbone networks and modules {{cite:07333163ee8ed351acaf4bdd42b6b59d157d716e}}, {{cite:7dc0971f4eb86b10f17b41225bd40b34748e395f}}. Second, it is difficult for the proposed model to segment fine edges or complete object regions when dealing with objects with complex structures (e.g., “bicycle”) or slender objects (e.g., “chopsticks”), as shown in Fig. REF . In future work, we can explore a refinement module {{cite:3170f9518b6671cd81f9410b9a6c3e746214f37b}}, extract high-resolution features {{cite:fff978c43e6dd00d84d3adbe3f6f2e0257a911d9}}, and model long-range dependencies using transformer blocks {{cite:6c4c1d6ad9387e4871a3d156cd0b4d70d9a654ab}} to improve the performance.
| d | 431399e13bd6f918abda79250d9e5045 |
The results lead to several ideas for improving secure scan chains to protect against such algebraic attacks. For example, obfuscating the structure of the LFSR or using non-linear LFSRs {{cite:530dd01e537ac548d749a1d44ca3a8e026f59998}}, {{cite:afc2af97dc10c6936e32d52155a3d818dd1a8ab6}} may
make anticipating the scan output vectors more challenging. Studying whether such additional defenses can be circumvented with more sophisticated algebraic attacks becomes an important and interesting area of future work.
| d | 75c26b8531aa7437f2c438a1d659f08b |
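For context on the scan-chain defenses mentioned in the row above, a minimal Fibonacci LFSR sketch; the width, taps, and seed follow a common textbook maximal-length example and are not taken from the cited designs. A non-linear LFSR would replace the XOR feedback with a non-linear Boolean function of the tapped bits.

```python
def lfsr_stream(state: int, taps: tuple[int, ...], width: int, n: int):
    """Yield n output bits of a Fibonacci LFSR.

    state: nonzero initial register contents.
    taps:  bit positions (0-indexed) XORed together to form the feedback.
    """
    for _ in range(n):
        out = state & 1
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (width - 1))
        yield out

# Classic 16-bit maximal-length example (x^16 + x^14 + x^13 + x^11 + 1).
bits = list(lfsr_stream(0xACE1, taps=(0, 2, 3, 5), width=16, n=16))
print(bits)
```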
Comparison on clean datasets. The results for the original Cars196, CUB-200-2011, and Stanford Online Products datasets are summarized in Table REF . Note that bold numbers indicate where our proposed method improves the original metric learning approaches. We observe that the proposed adaptive hierarchical metric learning boosts the performance of the original metric learning approaches on all the benchmark datasets. Moreover, combined with the Multi-Similarity loss {{cite:72df129e39350422cc34523f3a1b57e282c2775d}}, our method achieves comparable results; in particular, on CUB-200-2011 our method has the best Recall@1 value {{formula:6aa6c147-ee31-48c8-96ec-7c95263d00fb}}. This demonstrates the effectiveness of our method. Owing to the adaptive hierarchical similarity, our approach can learn more intrinsic semantic information during training.
| m | 077f7cfad7da2f7c232e1341faf54375 |
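Recall@1, the retrieval metric reported in the row above, can be computed from embeddings as sketched below; this is a generic implementation with placeholder data, not the evaluation code of the cited work.

```python
import numpy as np

def recall_at_k(embeddings: np.ndarray, labels: np.ndarray, k: int = 1) -> float:
    """Fraction of queries whose k nearest neighbors (excluding the query
    itself) contain at least one item of the same class."""
    d = np.linalg.norm(embeddings[:, None] - embeddings[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)           # exclude self-matches
    nn = np.argsort(d, axis=1)[:, :k]     # indices of the k nearest neighbors
    hits = (labels[nn] == labels[:, None]).any(axis=1)
    return float(hits.mean())

rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 8))           # placeholder embeddings
lab = rng.integers(0, 5, size=100)        # placeholder class labels
print(recall_at_k(emb, lab, k=1))
```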
Over the past three decades, there has been a vivid interest in the area of robot navigation in pedestrian environments {{cite:ef7255637a16addc956a24a5d68813a8bca6a946}}, {{cite:9f187a63810cc82ae159e18f70573281ee3d8619}}, {{cite:e8cd987dd9fc1062b369393720954c92a47c2251}}, {{cite:886d694923ec141b5ca51dabd15d89e62bcde2c0}}, {{cite:180726b695be8829a6344028322947db4ff1a7a3}}. Planning robot motion in such environments can be challenging due to the lack of rules regulating traffic, the close proximity of agents, and the complex emerging multiagent interactions. Further, accounting for human safety and comfort as well as robot efficiency adds to the complexity of the problem.
| i | 332dfc50a2ee0bf7cad4acd0374b7ab0 |
The conductivity {{formula:af3b9d3c-f166-45ee-aff7-0b97a3f8f457}} and Fano factor in ABC-TLG through {{formula:6c68df5e-2715-479a-ae51-0b5f1e0a3b2f}} and {{formula:6b74a4a0-3ed5-43b6-b7b5-0502427310f1}} junctions of height {{formula:40cda763-c5de-4cbe-ba6d-db1da79fc120}} and width {{formula:efc8ae9b-5f84-46f8-94ed-c781b997a2a5}} nm with and without {{formula:8c288289-331c-45c5-9eb4-ecf330f02dc4}} as a function of the Fermi energy for different values of the applied magnetic field {{formula:e06e88e0-9c68-44cf-94a3-932e90449028}} are shown in Figs. REF and REF .
When {{formula:20eb7c0b-49c1-44e5-8900-cdc88f126fc8}} is less than 1400 T, the conductivity and Fano factor show several sharp peaks in the {{formula:73c26af9-946e-4f42-afcf-ec0bb8d45fdd}} junction, while for {{formula:98231dbb-ceaf-44f5-bec8-b46b4b642f69}} T both quantities behave like those in SLG {{cite:1b7e1578e89b8aa68a8301049fe8bfe88028cfb3}}, {{cite:8b54ec225d0b01746e2f1bbef920662e1c36292c}}, as shown in Figs. REF (a,c) for {{formula:53800592-4971-4f3f-a598-20221092582d}} and in Figs. REF (b,d) for {{formula:684abf96-1e22-4f67-8e58-8322e723b39e}}.
Furthermore, similarly to the case without a magnetic field, the ABC-TLG scattered by a single barrier with a parallel magnetic field also has a minimum conductivity associated with a maximum Fano factor. The conductivity minimum {{formula:23f22479-74dc-4dd6-9de8-bb5bf7ebfe92}} and Fano factor {{formula:db072d16-684d-4507-9fd0-cddb564d73e5}} observed in SLG {{cite:1b7e1578e89b8aa68a8301049fe8bfe88028cfb3}}, {{cite:8b54ec225d0b01746e2f1bbef920662e1c36292c}} through the {{formula:fb46c841-c90a-49b8-a8ca-bd389b8f04d3}} junction are remarkably reproduced at a high magnetic field of {{formula:caf0e11c-5ceb-48c7-9f18-6d9847027c2d}} T applied to ABC-TLG, with {{formula:8f866436-016a-4a3d-9008-42cc28614412}}.
The gap induced in transmission at high magnetic field with inter-layer bias, shown in Fig. REF (f), results in a nonzero conductivity at {{formula:5ee48671-0168-4bcf-aee2-b7281b0bc2cd}}, as illustrated in Fig. REF (b).
| r | f834b444b5f5c6f9a81a6587861fae8c |
Since Greengard and Rokhlin invented FMM, the topic has attracted researchers from many different fields, including physics, math, and computer science {{cite:c6285d714aec513567c02db20950a0dc77cabe81}}, {{cite:501d8b04183bf5d845fa8cd7a3076adfc3618113}}, {{cite:a22469ad19a4e30739e478a9cc6cf2574c3a5497}}, {{cite:91691ab3c69c3b15889e2ef84705f970a91ebbe5}}, {{cite:35e1cd62d32351f0f6c4cebe22ec6734a4897b38}}, {{cite:320901ee2cc8db8753f9fdf65b8106645be61625}}, {{cite:6a23fb93399d3d393eedf27138621e9ec563d89a}}, {{cite:52c85c44a7e18aba93bcc2c6c363a6c42f7eda69}}, {{cite:f3f590f10e105696315234be3c19463d18423cbf}}, {{cite:ceb8112e7313398ec85909962409ba2f8e08d4d3}}, {{cite:40c05216dd88f9c5902cf0ee4c00bac940455ec9}}, {{cite:c8240077288fbca7d715d10079d26aee21a6aaf0}}, {{cite:7a6d7e1f31a4a1f86861c1f70b8481a927755fbb}}, {{cite:1dbadb1a723a71fa1b24d75dd0c18a381b1ed7a1}}.
| m | 3cb5a529d86dda9c28287dedc7cd0eb8 |
The fractional radii were found to be {{formula:3ba1ef19-0cce-4b68-a8e7-874c798a01b2}} for the primary component and {{formula:a8d4a98e-583d-4092-af03-a88e3da20eb0}} for the secondary one. In this case, the sum of fractional radii was computed as {{formula:b83fc022-5a2d-460a-b20a-73eeae02670e}}. Thus, V1464 Aql seems to be in agreement with {{cite:5132404a9bb75d63956646fe35917b2401190eb1}}'s criteria for overcontact systems. The period analysis indicates an orbital period of {{formula:394d748d-f9f6-4d98-89e5-59b720279f93}}. In addition, the temperature of the primary component is 7420{{formula:b802d38b-036a-4a59-b368-7f1ad034221e}} 192 K, while that of the secondary is 6232{{formula:f58bfc2b-4982-484d-b78e-db1a8f7dce21}} 161 K. Although some contact binaries have components with somewhat different surface temperatures, the components generally share the same surface temperature. Here, the primary component of V1464 Aql is hotter than the secondary one. Considering characteristics of the system such as the short orbital period, small mass ratio, hotter primary component, etc., V1464 Aql seems to be in agreement with the A-type W UMa binaries {{cite:526366f299a6a4594b930465c567bddaa2814337}}, {{cite:bb8ea48f7c634eb816265af8237460803a5dc3d3}}. The period analyses reveal that the period of the short-term variation is 58.482{{formula:b3a017e2-a619-4967-adda-51d1a0e1a2c7}} 0.002, 58.482{{formula:554a9433-cf22-47b8-98cd-799c8f81cac6}} 0.001, 60.966{{formula:ad590214-efe8-47f3-bd34-79b24f0f1b0f}} 0.002, and 60.964{{formula:672fe234-9098-4a98-ad81-30dfe3bd6c19}} 0.003 minutes in the BVRI bands, respectively. However, it must be noted that we did not find any secondary frequency for the short-term variation. The period differences between the bands should be caused by the different sensitivity of each band. The sensitivity decreases from the B band to the I band because the amplitude of the pulsation decreases in the same direction. As seen from the standard deviations given in Table 4 and from the light curves shown in Figure 6, the scatter in the light curves increases from the B band to the I band. The period-analysis methods we used are statistical, and the analyses gave the statistically best period for the pulsation in each band. In this case, the most reliable periods are the ones found from the B and V bands.
| r | 1130c976695b12d5271bb1587a77cfb0 |
Selection cuts are similar to publications from LHC {{cite:b997f24c7ac86ba83ce060004a874dcbd6212262}}, {{cite:15936efcdaf47a2ccedbb149962f69f7674ffd49}}. The dijet mass, {{formula:4a83cfe7-6ec9-4076-9c36-28dbb7392e80}}, is calculated for the two final state partons with pseudo-rapidity {{formula:f83a0ae7-c969-401f-953c-9aee99cbcb63}}. To suppress the large background from QCD t-channel scattering of the partons, we require the pseudorapidity separation of the two partons to satisfy {{formula:89697ab5-1a83-429f-a305-732b0a15c2eb}}, which is equivalent to selecting events with a center-of-momentum frame scattering angle {{formula:0565a075-0407-48e6-9b17-c45dd83dfcb1}}.
| m | 378e0dc631f2412d12ae2f72f1f05ea0 |
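The selection quantities in the row above can be sketched from the two partons' (pT, eta, phi); the massless-parton invariant-mass formula is standard, while the cut values below are placeholders mirroring the text.

```python
import math

def dijet_mass(pt1, eta1, phi1, pt2, eta2, phi2):
    """Invariant mass of two massless partons from (pT, eta, phi):
    m^2 = 2 pT1 pT2 (cosh(eta1 - eta2) - cos(phi1 - phi2))."""
    m2 = 2.0 * pt1 * pt2 * (math.cosh(eta1 - eta2) - math.cos(phi1 - phi2))
    return math.sqrt(max(m2, 0.0))

def passes_selection(eta1, eta2, eta_max=2.5, deta_max=1.1):
    """|eta_i| < eta_max and |eta1 - eta2| < deta_max (placeholder cuts)."""
    return abs(eta1) < eta_max and abs(eta2) < eta_max \
        and abs(eta1 - eta2) < deta_max

print(dijet_mass(500.0, 0.3, 0.0, 480.0, -0.4, math.pi))
print(passes_selection(0.3, -0.4))
```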
tab:spatialsampler shows the quantitative results of our spatial sampler on EPIC-KITCHENS. The first two rows are TBN {{cite:e39e8127d437b25f738277129b137e795a12aa51}} and the SAN19 baseline with an FC classifier, both using high-res RGB (224x224) and Spectrogram (256x256). Since TBN relies on an Inception backbone, its model complexity is not directly comparable with our experiments, which use the SAN19 backbone. However, our baseline model provides comparable accuracy. Since the main objective of the paper is to increase the efficiency of a given model, we focus on comparing performance and complexity with the baseline SAN19.
| r | 5e69f85c086f531779282eb5054e0296 |
A remarkable application of fine-grained entropy formulas {{cite:c3144d9d1738f7b8dd824211f1a24b2332e84181}}, {{cite:742c122cd441f83d915c5715a0d441c69729aacc}}, {{cite:226a39c34c91fbf82e5df3e14630a12682728e10}}, {{cite:2038f56c9f4147681ea6de311acbef9775c68cf0}} obtained by refining or generalizing the Ryu-Takayanagi formula {{cite:354a7620be63954439a614b689654ea7116042e6}} is the derivation of the Page curve {{cite:9984188cecc38725dfb2b219ff0d591f94b822ab}} from gravity side based on the entanglement island proposal {{cite:e03971849ebe10d426598ff46bddb51ae32a01c7}}, {{cite:a751b7104832a29a4b762ea5cf3497ec3d9f2607}}.
Interestingly, and a little bit frustratingly, such a derivation does not use microscopic details of quantum gravity. In fact, it is believed that these entropy formulas are generic features of quantum systems coupled to gravity, which does not require holography {{cite:2d85fdd96a13ab1253933c5fc068c80d0cec67ec}}.
To make progress in the understanding of quantum gravity, we have to learn how such formulas can be understood in terms of microscopic degrees of freedom. In the holographic approach, the matrix model can likely provide us with a dual description of the evaporating black hole {{cite:0faa64cd3065ca8675c2b8a6fa44a1d68bf995de}}, {{cite:1b0927ea89feef2a3d49e31bb0ccf14666c67cd7}}, {{cite:0de20a6aaf33e149eab23165eaf70abaffae9520}}, {{cite:3b04962ca4b6878300d39cf1239f88480fd5c22b}}, and hence entanglement between matrix degrees of freedom should play the key role in such a direction of study. It would be natural to expect the same situation whether we consider the matrix model or QFT.
| i | 08990d7f06080508397b20f958ccc57a |
Reference {{cite:f7c894444c9527514b19cd4eaf704fa068e48572}} investigates many potentials with constraints from late-Universe data. In this work we applied the autonomous system method and showed that under certain circumstances the DST includes stable late-time attractors. The asymptotic solution approaches the {{formula:d273570b-6869-4c7a-9e33-3fcd66f4c1ec}} CDM model for those potentials. The observational constraints regarding the Hubble constant are in agreement (within {{formula:0e8a1b44-65a6-4550-ae5b-9533c1912095}} ) with those of Planck {{cite:53f80c8cd064e61d301803296d9c0f8735003bdb}}. In addition, the results are compatible at the {{formula:c624ab2a-a3f4-4ce8-aef3-cf2a00850017}} level with the {{formula:136657cf-c3f3-43d9-a464-7868fd23560b}} measurement obtained from Cepheids {{cite:d5e4634b4734f7e9d16cc82295e5e72f6d458c27}}, {{cite:403e40a6bece4ec45df20571233107cf09143653}}, {{cite:a190a5fa55c94c9ce71ff6bf4c49a7bd912dd39c}}, {{cite:945909d3579c575d39c39c76411ad105ef08f1a3}}, {{cite:d2c8b7b496484006f49c004ce92d69ec4c62d6ab}}. On top of that, {{cite:f7c894444c9527514b19cd4eaf704fa068e48572}} finds that one of the models with constant potentials has the smallest deviation from {{formula:227013e3-1a9d-45f6-9cc4-c6e478880288}} CDM, where the confidence level is close to {{formula:a64622c1-0616-4e3a-bb24-1c3e10b59142}}. In addition, {{cite:f7c894444c9527514b19cd4eaf704fa068e48572}} explicitly checked the compatibility of DST with standard BBN using the average bound on the possible variation of the BBN speed-up factor. Reference {{cite:f7c894444c9527514b19cd4eaf704fa068e48572}} shows that the deviation from the Hubble rate of {{formula:b9255bb4-e78b-4a00-851b-fc0b1d2a17e7}} CDM in the radiation-dominated era is not larger than {{formula:a0fcb5a9-8717-4b08-a41c-f6fe77591199}}. Therefore, the standard BBN predictions remain applicable for those potentials.
| d | 57e5f37caf53f83b0d6c6977ee2a04a9 |
In terms of the use-cases explored with the TERM framework, we relied on benchmark datasets that have been commonly explored in prior work {{cite:317319dae14e60520b935192f8ebc77f1ea4f4af}}, {{cite:7a02026ada8c3dc8c834bb8d74ba124421fd2a67}}, {{cite:6c07567772cb240629406a48e0b4258b879fed28}}, {{cite:e0cb51a2dd712d18350eedbc88bd5a0bf28c5f37}}. However, we note that some of these common benchmarks, such as cal-housing {{cite:77e56c54f5bd202bfe30a04b62f8600a47064034}} and Credit {{cite:f2bb926515d1ca531a7139c15966955158e87e02}}, contain potentially sensitive information.
While the goal of our experiments was to showcase that the TERM framework could be useful in learning fair representations that suppress membership bias and hence promote fairer performance, developing an understanding of—and removing—such membership biases requires a more comprehensive treatment of the problem that is outside the scope of this work.
| d | e32ff32b141fcc6138f0a2ad5e2e19ee |
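Assuming TERM in the row above refers to tilted empirical risk minimization with the usual objective (1/t) log mean exp(t * loss), a minimal sketch of the tilted loss follows; the sample losses are placeholders.

```python
import numpy as np

def tilted_loss(losses: np.ndarray, t: float) -> float:
    """Tilted empirical risk: (1/t) * log(mean(exp(t * losses))).

    t > 0 emphasizes the worst losses (useful for fairness), t < 0
    suppresses outliers; t -> 0 recovers the ordinary average.
    """
    if abs(t) < 1e-12:
        return float(losses.mean())
    m = (t * losses).max()  # log-sum-exp shift for numerical stability
    return float((m + np.log(np.mean(np.exp(t * losses - m)))) / t)

losses = np.array([0.1, 0.2, 0.15, 3.0])  # one outlier sample
print(tilted_loss(losses, t=-1.0))        # damps the outlier
print(tilted_loss(losses, t=+1.0))        # emphasizes it
```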
Ensembling is arguably the most trustworthy technique for improving the performance of a given machine learning model {{cite:76300a53ead80d1302545051912ed66c8fabebbb}}, {{cite:78446e3edc62cdc28837b95ffbfbaa0bc81500cc}}. The ensemble method provides room to appropriately control the trade-off between the bias and variance of the model.
The effect of an ensemble is largely associated with the expertise of the individual models. That is, model diversity is one of the essential factors for a successful ensemble {{cite:ebc4f1751bff81722c5e9b9964be512a728eb664}}.
For this reason, many ensemble methods seek to promote diversity among the models they combine {{cite:fda48851b3ad0234ba35499c9d637c7a7452271a}}.
As a standard choice, multiple predictive models trained independently are averaged in order to obtain an ensemble effect.
Recently, several attempts have been made to improve performance by directly learning the (collaborative) loss of ensemble models {{cite:cadb6d92c39098b6f7c3f25cc2c9cc0eda9672bf}}.
While the preceding methods along this line consider standard classification tasks,
our proposed method targets the more complex task of auto-regressive generation, in which
each token is generated by ensembling the decoders' logits.
| m | 725bb48e21cc401e34a0045b914513b3 |
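A minimal sketch of the per-token logit ensembling described at the end of the row above: at each auto-regressive step the decoders' logits are averaged before the next token is chosen. The toy decoders and the greedy selection are stand-ins, not the proposed method.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def ensemble_decode_step(decoders, prefix):
    """One auto-regressive step: average the decoders' next-token logits,
    then pick (greedily here) from the combined distribution."""
    logits = np.mean([d(prefix) for d in decoders], axis=0)
    return int(np.argmax(softmax(logits)))

# Two toy "decoders" over a 5-token vocabulary (placeholder callables).
rng = np.random.default_rng(0)
decoders = [lambda p, w=rng.normal(size=5): w + 0.01 * len(p)
            for _ in range(2)]
prefix = [1, 3]
print(ensemble_decode_step(decoders, prefix))
```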
Difference-based UDA alleviates domain differences by minimizing statistical differences. {{cite:f0d214fbeef1ae2330708c02dccc11cc7d27eb43}} minimized the maximum mean discrepancy for task-specific layers to explicitly narrow the domain gap. {{cite:971df6a3c8ba1e91ac368141e456d241aee95478}} introduced the joint maximum mean discrepancy to enforce joint distribution alignment between domains. {{cite:2e8b6f4e708af2391a09e40ecf8dd894eef4ce80}} and {{cite:bacf3475d60a26cb71289214ce7ba4a1c8147b89}} minimized inter-domain differences by aligning the second-order statistics of the source and target distributions. Based on the optimal transport distance, {{cite:1230abdc772de813a9db72befd67aa5a4c0e306d}}, {{cite:a8f1a34e9348a320f69e8c7a378b170ddeb1beb1}}, {{cite:c8536459269c9cc37e0ca8b98bde9cd99f87316d}}, {{cite:ec1175bb4e6da7d086c88e28df822b5b8a8f73fa}} designed optimal transport models to perform feature alignment between source and target domains. Regularizers such as entropy constraints {{cite:fa9c06abdd0c66ecc18727b484a98baf14b9dad2}} or maximum prediction rank {{cite:ea56ff135caefc5ed596d9f479ee9451a089ae9e}} can be used to implicitly constrain the cross-domain feature space. Although such methods help to narrow the distance between similar features across domains, due to the lack of labels in the target domain, the feature extractor may incorrectly pull together similar but different-class features, resulting in a decrease in discriminability.
| m | 8f36cc0d66c91864c55db1f02329a2be |
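A minimal sketch of the maximum mean discrepancy statistic that the difference-based methods above minimize, using an RBF kernel; the bandwidth and the toy "domains" are placeholder choices.

```python
import numpy as np

def mmd2_rbf(X: np.ndarray, Y: np.ndarray, gamma: float = 1.0) -> float:
    """Biased estimate of squared MMD between samples X and Y
    with kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    def k(A, B):
        d2 = ((A[:, None] - B[None, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(200, 3))  # "source domain" features
tgt = rng.normal(0.5, 1.0, size=(200, 3))  # shifted "target domain"
print(mmd2_rbf(src, tgt))
```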
Bastings et al. {{cite:7b0d1f53b0199938d3ef4c956e579275a2631b01}} argue that post-hoc methods should be privileged over attention models when it comes to faithfulness, as post-hoc methods take the whole computation path into account whereas attention maps only reflect input importance at one point in the computation.
However, we showed here that depending on the way the map is evaluated (Highlight or Mask), attention models can provide superior faithfulness.
| d | 9db91b88c5d9623e8af46f464dab1574 |
Similarly, from {{cite:54f353fc139ea6dee990dfd0af878ea62a661fce}}, see also {{cite:0bef102eaddd46109577c434413f6c0dc7bf7618}} and {{cite:aa1bb69b8f2669f061fcef8b500976889a1fa944}}, for {{formula:d96fff70-bc1c-4482-a77a-3f37fb118b51}} , the point process {{formula:1f050255-79f8-497e-8319-4fc35dfaf1e3}} follows the law of {{formula:43ea2498-c77f-4ff9-9cf0-706e63dcdc8f}} , where {{formula:d3278fa4-ab0e-4d51-8f94-5b3cd05660c9}} is a family of independent random variables taking values in {{formula:bff78f6d-8396-4e21-8059-e551c748de35}} such that
{{formula:514cac93-00ac-499a-bda1-3f0398e085fd}}
| r | e1b26c6e7c17d8b615250c64c0e8c360 |
Note that Eq. (REF ) differs from the free KG theory ({{formula:3268a97c-a492-4d61-a3e3-3e55cb6cadb9}} ) by some {{formula:e0464025-c5dc-4c26-84ff-68008fdd99cf}} -dependent terms.
Since the wavepacket is more or less arbitrary, we can consider a static homogeneous wavepacket. In
this setting, {{formula:8427eeae-4c35-40cd-b5fc-cb7e22c39303}} . The positivity of the energy {{formula:d5138fb4-87f6-4d75-88c2-a960e85dc491}} immediately leads to {{formula:bd14db0d-260b-4027-8dc2-94d0b3d05c0e}} , which has been speculated for some time as the necessary condition for the stability of matter {{cite:054a93116e782400e976d6c66d9f20cc74aef487}}.
Unfortunately, this argument does not constitute a proof, since the terms associated with {{formula:5d7c2b13-8a17-4f1d-b082-54f56ca9af94}} are total derivatives.
| d | c703acc76c9ed0b71893c710c68794e5 |
Currently the best constraints on primordial non-Gaussianity come from the CMB bispectrum. In the near future, several CMB experiments will improve on these constraints, predominantly by increasing the spatial resolution, reaching higher {{formula:b647b780-627c-421f-bb15-9daee73364ae}} {{cite:38edb8a03fe191be1235507e29b110936d23b5cb}}, {{cite:64438ca43f041f9939fb1af93f097ee638999f16}}, {{cite:f4de864e354b4d993112ee0b681681921e2b9fda}}. (In addition, the sensitivity to polarization will improve, which will double the number of modes and, as our analysis shows, when combined with temperature measurements can result in non-trivial improvements on shapes with {{formula:7fca0a70-bc6d-46c6-8faf-fa3f04e3273a}}.) An often quoted threshold for any type of non-Gaussianity is to reach {{formula:1a30f471-3885-44f8-8c70-8af951502d2a}} {{cite:bcdf86e81ffd6d75e6734837fbf3da8acc8abc80}}. Given the current bounds on orthogonal and equilateral non-Gaussianity {{formula:83bbf9a0-28f3-47ea-b009-42237cfc4e47}}, reaching that threshold will be challenging if not downright impossible with CMB measurements alone. (An estimate suggests that this would require {{formula:c11a7e28-a4cc-4e11-ae14-c7241713ea10}} for orthogonal and {{formula:9037dc0a-8fb6-4926-af77-78030925ecf1}} for equilateral, requiring a CMB dish of {{formula:08db8273-ea55-4404-a648-6997a92dfce2}} meters in diameter with a focal plane loaded with detectors. This assumes we would have unconstrained access to primary modes, which we know is not the case already for {{formula:f70f3372-0745-43cd-83d1-e8be32260fff}} few thousand. We should note that this extrapolation relies on the orthogonal template of Eq. (REF ). The correct orthogonal shape should have {{formula:87869520-d794-4613-bfce-4d0fe6f28512}} (see App. B of Ref. {{cite:a43338682c90e2934e846c7984737f48fbbf8b6a}}) and the expected scaling would be similar to the equilateral shape.) While this is a somewhat pessimistic reading of our analysis, the fact that the bispectrum is not the only measure of non-Gaussianity, and that spectra with {{formula:a9616b1b-91c6-460f-9753-ed24c06a5b1c}} can exhibit very favorable scaling with resolution, implies that the CMB can certainly contribute to the search for primordial non-Gaussianity in general. For example, graviton exchange trispectra are qualitatively similar to {{formula:14deff9a-0c5f-4dd1-9ab1-778cbad681f2}} -like non-Gaussianity, while the corresponding bispectra are equilateral-like. Of course, this example poses a challenging detection regardless, due to the overall coefficient being Planck suppressed. It would be interesting to consider other examples displaying a similar mismatch of scaling behavior of the collapsed trispectrum vs. squeezed bispectrum.
| d | ef18636e96424406fd406a16be5f07c3 |
In conclusion, we propose a method to control spin wave
transport by weak magnetic fields based on the theory of chiral pumping of
spin waves. By exploiting two nanowires that communicate by unidirectional
spin waves, we achieve new functionalities such as magnon trapping,
amplification and a valve/transistor effect. The spin pumping by active and
passive magnets differs from the conventional situation, as it gives quite
different behavior of the pumped current. The spatial distribution of magnons can
be detected inductively via microwave emission of a third magnetic wire
(supposing weak disturbance on the magnonic cavity) {{cite:3a784ef3f777ca81f5fb5b9a8465a4c8fb7a8f49}}, NV center
magnetometry {{cite:c2e4d2b1f05db970378b6a830e4a39a2bb349592}}, Brillouin light scattering {{cite:5a360901c7eccea445efd1c5f7f932b121ad28b4}}, and
electrically by the inverse spin Hall effect with a normal metallic wire such
as Pt {{cite:bfbef326fbb911130546bfe7bf9b783f04653cca}}. Replacing the nanowires by other objects such as
magnetic spheres or qubits, and the unidirectional spin waves by other
propagating quasiparticles such as waveguide photons, surface plasmons,
electrons or phonons, we envision our mechanism to be extended to other fields
including optomagnonics, nanooptics {{cite:b0110331e4d43dac14f0a172122e2a14a8d401fb}}, quantum optics,
plasmonics {{cite:27f6918edd2c8eb4438b8ebc7d5c79a618108d7f}}, {{cite:0aa55408fbd1bacb88fb91bcad120990fc0a75a6}}, spintronics, and spin mechanics.
| d | 2a3bddd9c6d85355af6d4311ae73687f |
Further results on AANets {{cite:4518de037dd87b0aa64c42baa446b349828bcccc}} and DER {{cite:ead9c1d412d4253590a3594a314c41c90414b01e}}. We report results for
AANets (based on LUCIR) on Shuffle LT-CIL scenario (CIFAR-100 with 10-task setting). AANets outperforms LUCIR by a large margin achieving 38.53 in average incremental accuracy, and adding our method still improves over it by
about 1%. We found that DER does not
work well on long-tail scenarios (with only 29.54 in average accuracy), but our method
improves it by about 4%.
{{figure:39a3b6f9-a82e-4155-8ad9-10fb9c504dac}} | m | d0e34f11c6009396277b25f18f444825 |
Moreover, as in Table REF and Sec REF , we found that more powerful backbones can further boost the accuracy, even with fewer parameters. Specifically, AET-EFN can achieve an impressive 91.95% accuracy on the N-Caltech101 dataset using an EfficientNet backbone {{cite:3b1aeccf5d6eabce898a14440f352f1cacd1d8c6}}.
| r | 0c108f2bde7fe702dae1337fb67a062e |
Such a strategy might be valuable in higher dimensional BO settings.
Continuous optimization theory tells us that gradient-based methods have local
convergence rates independent of dimension, which has made L-BFGS-B and
similar optimizers the tool of choice in solving for acquisitions. This,
together with the exponential growth of input space volume, might at first
blush suggest that tricands' performance is limited to low dimension. However,
in practice, the highly nonconvex nature of the acquisition surface
significantly cheapens the theoretical results associated with gradient-based
optimization. Indeed, recent work has achieved state-of-the-art performance
in high {{formula:0cdc4d5a-b9fb-43c1-b3a2-59a0744736ac}} using candidate sets focused within a certain region of the
input space {{cite:efe0ab6ac2166e7a512298981f3a95461af0b2c2}}, {{cite:e398e62d5f0b8f42320a03da93df197ef01719d1}}, though
still built on traditional space-filling points such as LHS. It would be
interesting to see whether replacing these space-filling points with tricands
would be as beneficial in that setting as we have found it to be in low {{formula:75de7888-c392-4762-b78d-d6b0d6f52073}} .
Furthermore, a popular approach in scaling BO to high dimension is to reduce
the input space by screening input variables or finding linear or nonlinear
embedding spaces, rendering the problem a low dimensional one (see
{{cite:caec1e9e3daa1097379c257db223697ec6710704}} for an overview). The aim of such an approach is in
part to make solving the acquisition problem easier, and there's no reason to
believe this wouldn't extend to tricands.
| d | bb154d9dc3d19b4eae059e266f27e9fa |
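A sketch of the candidate-set alternative to gradient-based acquisition optimization discussed in the row above: evaluate the acquisition on a finite set and take the argmax. The acquisition surface and the uniform candidates below are placeholders; tricands or an LHS design would supply the candidate set in practice.

```python
import numpy as np

def optimize_acquisition_on_candidates(acq, candidates: np.ndarray) -> np.ndarray:
    """Return the candidate maximizing the acquisition.

    Sidesteps the nonconvex inner optimization that L-BFGS-B style
    solvers face: no gradients, no multi-start restarts, just an
    argmax over a (well-chosen) finite set of points.
    """
    values = np.array([acq(x) for x in candidates])
    return candidates[int(np.argmax(values))]

# Toy multimodal acquisition surface in 2d (placeholder).
acq = lambda x: np.sin(3 * x[0]) * np.cos(2 * x[1]) - 0.1 * (x ** 2).sum()
rng = np.random.default_rng(0)
cands = rng.uniform(-2, 2, size=(512, 2))  # stand-in for tricands/LHS points
print(optimize_acquisition_on_candidates(acq, cands))
```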
The design of our anatomy-guided registration framework by learning segmentation without ground truth also suggests several interesting topics for future studies. First of all, our method could be adapted to other multimodal registration tasks to which conventional registration techniques are not applicable, such as MR-Ultrasound (US) registration for neurosurgery {{cite:62fc6777732999204213bfbd122ec646fa0b5f11}} and prostate interventions {{cite:202807ef2bc6c435d29efd0fe1619290c9cf23ed}}, where occluded FOV and intensity inhomogeneity are often observed in US. There are several public datasets containing MR brain tumor and MR prostate images with ground truth segmentations {{cite:167e08d51d4204be4ece9a2a0e9ccefa8cad95c5}}, which make it possible to adapt our method to these applications. More specifically, we could use APA2Seg-Net to obtain the US and MR segmenters, which are then embedded into our anatomy-guided registration pipeline for real-time MR-US alignments. Similar to our idea in registration, {{cite:8830958fc38df6a036ebfb4c2f651c0f6708aeac}} recently proposed a prostate US-PET/CT registration algorithm based on segmentation for dose planning {{cite:7a7cf07e430637def9499de728ca4421b2de5e6e}}, in which our APA2Seg-Net could potentially provide the US and PET/CT prostate segmenters as well. Secondly, our method could also be adapted to landmark-based registration tasks. While an anatomy-guided registration framework based on segmentation is demonstrated in this work, the segmenter in APA2Seg-Net could be replaced with a detector for learning keypoint detection without ground truth on the target domain. Then, the keypoint detector could also be embedded into our anatomy-guided registration pipeline for keypoint-based alignments.
| d | 9f085bd000aa0babbcc9b81d6f001531 |
While inference on the suite of IMIFA related models has been demonstrated to be efficient and practically feasible, there is scope for further finessing. Implementation of the third label switching move of {{cite:366940f561827f6ca674b43c32ca38e65e5492dd}}, exploration of the utility of the collapsed Gibbs sampler for DPMMs {{cite:f7b6bd77a50ab774ed6b0fbca6bc310875d3a5b3}} or of posterior tempering to encourage better early mixing are all of potential interest. Further, as proposed in {{cite:004701e0a9435bb37dbbcbe33f02fe47740b80f8}}, the hyperparameters {{formula:34f06193-100c-43f0-b724-b614ccc79099}} and {{formula:8119789b-df7d-438a-b0ee-81b58d649d15}} of the multiplicative Gamma process shrinkage prior could be learned from the data rather than fixed as in the suite of IMIFA related models considered here. However, such learning requires introducing extra Metropolis-Hastings steps which would be computationally limiting.
| d | a9164c420f49dfd3271c176b86a40b13 |
If so, there would be immediate advantages. Models are highly compressed compared to the datasets they represent and therefore easier to share and store. Synthetic data also circumvents some of the concerns around privacy and usage rights that limit the distribution of real datasets {{cite:3da7807c962d3798528fd7115a7fe316705ab76e}}, and models can be edited to censor sensitive attributes {{cite:80bf05bf2333e3df666c88a90df3b2c091f417cc}}, remove biases that exist in real datasets {{cite:d73fb007a104062311406e69172e8dbb6e7ad511}}, {{cite:4b55830529de6e5f8da9704f5c1a69941a24c65c}}, or steer toward other task-specific needs {{cite:e08caa0b1ee82fd7e7692146ec613e9dd7397e67}}, {{cite:785ba57a6c971713776ae8dde4a4545980c6a8da}}, {{cite:6ddc94994ceaef530f4236953728a5ca3b7b0fe3}}. Perhaps because of these advantages, it is becoming increasingly common for pre-trained generative models to be shared online without their original training data being made easily accessible. This approach has been taken by hobbyists who may not have the resources or intellectual property rights to release the original data (e.g., see https://github.com/justinpinkney/awesome-pretrained-stylegan, hosting many pretrained models without datasets), and in the case of large-scale models such as GPT-3 {{cite:42b06adb0b38e16f3ba9f32a0812e3dd49b93c42}}, where the training data has been kept private but model samples are available through an API.
{{figure:79e60a0c-10f8-414a-9702-dcdf9d9c458f}} | i | e598eecf7d2fc4f986d1899f93ab4fec |
the HJI PDE is augmented so that trajectories that enter the target set get frozen before the final verification time,
our equations closely resemble those of Jacobson and Mayne {{cite:206b64ad31aa5d80cc029049b7823207c8b398c4}}, and Jacobson {{cite:922d801b6c74212723e2c208d410e42f8912ae13}}: the equations are evaluated at {{formula:6cce9e3f-1161-4ab1-894a-63d30dd57f10}} without the presence of the third-order value functional derivative, {{formula:bc11cbd2-9738-4c19-8f1f-5077dc336922}}. Standard step-size adjustment mechanisms characteristic of second-order gradient methods {{cite:a0b08c8d5c6c45f67841f493d314168491c3ab7f}}, {{cite:0af6e1cc6ee6f90cf56e708327d18d8b2904cbf1}}, {{cite:206b64ad31aa5d80cc029049b7823207c8b398c4}}, {{cite:59e8ce95a7b5ecce2735e75d74bd37fee9fb5c15}} are applicable here for ensuring strict positivity of the Hessian and a monotone propagation of gradients during the optimization scheme throughout the state space.
Conclusion.
In this letter, we have presented a second-order state-feedback local approximation scheme for computing the minimizing disturbance and maximizing controller that constitute the optimal value function in a minimax reachability game setting. The iterative method presented is a direct extension of second-order gradient methods {{cite:59e8ce95a7b5ecce2735e75d74bd37fee9fb5c15}}, {{cite:0af6e1cc6ee6f90cf56e708327d18d8b2904cbf1}}, {{cite:a0b08c8d5c6c45f67841f493d314168491c3ab7f}}, {{cite:922d801b6c74212723e2c208d410e42f8912ae13}} and has quadratic convergence guarantees similar to those methods. Future work will rigorously evaluate this scheme numerically on different physical phenomena.
| d | 02b29620c5cd2fdda9dd8b7df30a9745 |
where {{formula:6ad8c0d5-559e-42af-9a9f-71980816bc7e}} is a coefficient of the random force's correlation and {{formula:da4394bc-a045-4e25-95a1-bf644923b5ed}} is the friction coefficient {{cite:e4405e5b94cd25480981a0d1c262710056b83467}}.
On the right-hand side, in addition to the BGK collision term, there are the diffusion and drift terms from the Fokker-Planck (FP) equation.
When the tracer particle's dynamics is much slower than the solvent particles' dynamics due to differences in the particles' mass and size, the timescale of the BGK collision term {{formula:6abaa933-425e-4982-b0e3-8aa695b09d3f}} is much larger than that of the last two terms, the FP terms. Then the long-range interaction force becomes dominant and Eq. (REF ) can be approximated by the FP equation.
If these two time scales are comparable, all terms in Eq. (REF ) should be considered.
In that case, if {{formula:3ac33809-afa5-4211-b50b-e7e031f35269}} is applied to the FP term at the incompressible limit {{formula:0aeee1c5-8dcf-43d9-becf-8afe5daaf962}} , we obtain,
{{formula:6ff57436-3a08-4a9f-88a0-f40f45fdb089}}
| d | c4a5ef5ed09f397daea50ab2be9acbd2 |
Perhaps one of the greatest advantages of our SSE-based approach is the transparent description of entanglement which is, in most cases, obvious from the explicit analytic form of the state vector. In contrast, the characterization of entanglement within the master equation formalism is a separate problem, since convenient universal figures of merit exist only for the simplest systems; see, e.g., {{cite:e46686040013eef992959227efc7c70e34b10c0a}} for a review and {{cite:64186bc80f8ee7f2d73177835a61ac46bb074a60}}, {{cite:ead85672dc5d56a8725adc9c50353907c48d0f52}} for recent studies of entanglement in strongly coupled nanocavity QED systems based on the master equation. A variety of entangled state control scenarios were studied for microwave cavities within superconducting circuit QED; see {{cite:7a6f90c6495ef7e358c6572e64fce12bc34bbe2d}} and references therein.
| i | b74618d6712c0e236fe849b7987c9b1d |
We also noted the extension of calibration to naturally shifted data. Akin to the observations made by {{cite:1736fa463b5040491315748828d5bdc326cc841c}} in their evaluation on synthetically shifted datasets, we observed that existing solutions provide calibration on naturally shifted datasets as well. However, this calibration comes at a cost: as a result, the refinement aspect of the models is comparably poorer than in their uncalibrated counterparts. An important point to note is the failure of Mixup under data shift. {{cite:9c13ca33527595c59408f05c0c6fff72225c0017}} demonstrated Mixup's ability to distinguish OOD samples; however, we believe that natural shift is a weaker notion of data shift than OOD evaluation, and MX fails to provide any benefit in this regard. We also noted the varying impact of this degradation across datasets. We suspect that the lack of evident over-fitting on ImageNet is the root cause behind the visibly lower calibration-refinement impact on it.
| d | 526908389f49e56a44e4924fa453da66 |
With this paper, we aim to make researchers aware of the value and complexity of challenges and their design and provide a framework to put challenges in perspective and determine next steps in challenge design. Most challenges organized in the field of medical image analysis thus far are insight challenges. The first challenges often used a form of convenience sampling, i.e., evaluation data that happened to be available in the organizers' group (e.g., from another study) was used for organizing a challenge. This already led to new insights into algorithm performance on these specific datasets. It was a huge step forward in comparison to researchers writing papers claiming that their algorithm performed best on their private dataset {{cite:8cb9d2442fc15842892c698392851dd1cb7fb2aa}}, which did not give other researchers the opportunity to test their algorithm against the proposed algorithm. Later, a form of snowball sampling was used (i.e., groups that knew each other or had participated in a previous challenge teamed up to share their data in a new challenge on the same topic), which led to insight into the performance of algorithms on data from different hospitals {{cite:cb99a32bb34ed464d03ff9cb56b1bddb2662f648}}, {{cite:8e2f0731b2571281bd87cc3f591c2e0c03127707}}, {{cite:dc60e81e9f804a3691301fe67d5768ee7b7468b9}}.
| d | fb2086fecdda82216cd2eb10632605ef |
Five-dimensional {{formula:21e7acde-7af5-4e16-b176-6a23652cd215}} SCFTs are inherently strongly coupled {{cite:94dc68f6cdb5359146fd04de65b61426de6d3632}} and do not admit a Lagrangian description. They therefore pose a challenge to traditional approaches to calculating CFT observables. A fruitful strategy to obtain information about their strongly coupled dynamics is to deform the theory away from the conformal point, where it may admit a quiver gauge theory description, and compute their partition functions, possibly decorated with Wilson loops. (Complementary approaches that directly address the conformal point are geometric engineering of the SCFTs in M-theory {{cite:b5cc49db60ffe258d2fc7f7ec7a718f9000d1551}}, {{cite:cdc762e2fd36936bf4c6e6e94db35df4472bdddf}} or fivebrane webs in Type IIB string theory {{cite:d8cfa129b3f78e83cda0f9748f452704a8e29d8f}}.) Three-dimensional {{formula:78ac490d-b5f2-40cb-b7ad-b94aed85aa26}} SCFTs also enjoy many rich properties, chief amongst them the infrared symmetry enhancement and mirror symmetry {{cite:9da5354f9a3db9212ee58f6f0b7f849359899d4b}}. The method just outlined applies to three-dimensional {{formula:7d0306f8-3ad7-47ce-903a-581b0255ca81}} theories as well. These, though, have the advantage that the gauge kinetic term is {{formula:09541691-a9d0-4104-ab55-2ad3c9188005}} -exact, thus the {{formula:131c8c1b-255c-41ee-80c6-acaa6f7e372a}} partition function can be evaluated directly in the SCFT.
| i | 47e7846786bb25f50d7a854c2b078519 |
Random xor-sat in a nutshell. Random {{formula:9c443536-4450-4771-948d-8e9818073ccc}} -xor-sat {{cite:78dcb0d0228fa095074abcda64067b55b9725e60}} searches for assignments to a Boolean vector {{formula:d55c389f-e78d-41a6-b1a8-8c5610b9a0cd}} satisfying {{formula:a648087d-e25d-463d-a221-1e7554471e18}} , where {{formula:2ffbeeae-7878-4df5-b4f4-c3a5fc16ba36}} is randomly given and {{formula:4add1c90-21c7-4d38-b919-615ee38fea1f}} is an {{formula:463593e0-93ef-489d-a757-89d24a8764b0}} matrix underlying the structure of the {{formula:ef9495c7-2c77-4564-94f3-a39427fe72da}} randomly chosen {{formula:dfe2115e-6c27-4a21-a335-283576da1c18}} -tuple logical constraints.
For {{formula:1d56f0ee-d30f-4cf3-879f-54702e14cc2c}} , e.g., this corresponds to finding solutions to {{formula:52f2e795-c747-4b71-a371-5e3026b34095}} , where
| m | afc8be7099b6f3b127883622051e1684 |
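Since each xor constraint in the row above is linear over GF(2), an instance A x = b can be solved by Gaussian elimination mod 2; a minimal sketch with a random placeholder instance follows.

```python
import numpy as np

def gf2_solve(A: np.ndarray, b: np.ndarray):
    """Solve A x = b over GF(2) by Gaussian elimination.
    Returns one solution (free variables set to 0), or None if unsatisfiable."""
    A = A.copy() % 2
    b = b.copy() % 2
    m, n = A.shape
    pivots, row = [], 0
    for col in range(n):
        nz = np.nonzero(A[row:, col])[0]
        if nz.size == 0:
            continue
        r = row + nz[0]
        A[[row, r]] = A[[r, row]]          # bring a pivot into place
        b[[row, r]] = b[[r, row]]
        for i in range(m):                  # eliminate the column elsewhere
            if i != row and A[i, col]:
                A[i] ^= A[row]
                b[i] ^= b[row]
        pivots.append(col)
        row += 1
        if row == m:
            break
    if np.any(b[row:]):                     # a 0 = 1 row means UNSAT
        return None
    x = np.zeros(n, dtype=int)
    for r, col in enumerate(pivots):
        x[col] = b[r]
    return x

rng = np.random.default_rng(0)
A = rng.integers(0, 2, size=(6, 8))         # 6 random xor constraints, 8 vars
b = rng.integers(0, 2, size=6)
x = gf2_solve(A, b)
print(x, ((A @ x) % 2 == b).all() if x is not None else "UNSAT")
```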
The implementation of the gradient descent method requires the specification
of {{formula:92079fa6-a935-45a1-8e5e-c993fe7b5643}} and {{formula:ab702ef2-6247-4050-b168-e4b93d34e344}} , both of which depend on unknown problem-specific parameters
{{formula:59037b3b-3a41-4e39-988c-ded41b5d4fb3}} and {{formula:3f2c05f7-c424-48e7-a717-71a517f1a9a6}} (see (REF )). In practice, we implement
the gradient method using a backtracking line search {{cite:c7ea8d38b0c27fbe4884b87b8b62f72d48fbc3c3}} and
terminate the gradient descent method at {{formula:4629cf28-453a-45e8-a1b3-4aa7b83520f5}} when
{{formula:96765fa3-a302-47c0-8a9d-37f6b20984b4}}
| m | 2eeea771f7b33439ca3943f54f229a69 |
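A minimal sketch of gradient descent with Armijo backtracking of the kind referenced in the row above; the constants and the quadratic test function are conventional placeholder choices, and the gradient-norm test stands in for the elided termination criterion.

```python
import numpy as np

def backtracking_gradient_descent(f, grad, x0, alpha0=1.0, rho=0.5,
                                  c=1e-4, tol=1e-8, max_iter=1000):
    """Gradient descent with Armijo backtracking: shrink the step until
    f(x - a g) <= f(x) - c * a * ||g||^2, so no Lipschitz constant
    needs to be known in advance."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:   # placeholder termination test
            break
        a = alpha0
        while f(x - a * g) > f(x) - c * a * (g @ g):
            a *= rho                   # backtrack
        x = x - a * g
    return x

f = lambda x: 0.5 * x @ np.diag([1.0, 10.0]) @ x   # ill-conditioned quadratic
grad = lambda x: np.diag([1.0, 10.0]) @ x
print(backtracking_gradient_descent(f, grad, x0=[3.0, -2.0]))
```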
The setup of the recent 2018 FICO Explainable ML Challenge exemplified the blind belief in the myth of the accuracy/interpretability tradeoff for a specific domain, namely credit scoring. Entrants were instructed to create a black box to predict credit default and explain the model afterwards. However, there was no performance difference between interpretable models and explainable models for the FICO data. A globally interpretable model {{cite:51be7293d47654a3e269fe58cef0af5bad45ef05}} won the FICO Recognition Prize for the competition. This is a case where the organizers and judges had not expected an interpretable model to be able to be constructed and thus did not ask entrants to try to construct such a model. The model of {{cite:51be7293d47654a3e269fe58cef0af5bad45ef05}} was an additive model, which is a known form of interpretable model {{cite:b4ede0e43f3ca9bf90e242fc78d7e0e6f7f12a2d}}, {{cite:96d25bc5e530a8f3332e1cd5b6d4b6850edd7174}}. Additive models could be optimized using similar techniques to those introduced in Challenge 2 above.
| d | 22815f2bc118611ab48a2d83081fdb82 |
{{formula:da38f431-4ba6-4865-99e4-c3d95ad4f21c}} See {{cite:ec9ed3eed97e71ac388d023d4318a4e944ddeb77}}, {{cite:69466ff6ffc948667cfced7952bd10a73afdf4d4}} and {{cite:e1aefc3f164909887a35ee9009cd43bc9bb82ebb}}.
| d | 4966b14d538451b6b16f90923eb22e26 |
In Theorem REF , the linear convergence of ARock is given in terms of the expected quadratic distance from the iterates to the fixed point. Note however that the literature on coordinate descent algorithms (e.g., {{cite:fa28e6d19887b22dfd38077e5acb8ea2447a1c9f}}, {{cite:9044c84fde3529b7e27b3db2e489783b6d5a3ea8}}, {{cite:ebe47674184f15026dea8464d5b7f7b19601c1fc}}) usually establishes convergence results using coordinate-wise Lipschitz constants of the function {{formula:51874c28-7e16-4eaa-b2ce-c26962c0c406}}. This allows for larger step-sizes, which can lead to potentially better convergence bounds, especially in terms of the function values {{formula:5ec388ed-d0bc-417d-a7c0-a1dbc5517452}}.
| m | 61770b1471cb1ec6a759ca2100568c18 |
By Theorem REF , solving minimization problem SLQ{{formula:4377568c-e175-460a-a201-31136a810560}} is
equivalent to solving the system of coupled forward-backward difference equations
(REF ) and (REF ). We may exploit the variational
character of problem SLQ{{formula:4bb9222c-f1da-48e4-a61a-7571d964866d}} to construct a gradient descent method
SLQ{{formula:68207189-3e83-4240-802d-5a363d5fd77f}}
where approximate iterates of the optimal control {{formula:c4309bb9-026b-4f1e-bec0-5aab2249436f}} in the
Hilbert space {{formula:7ba2089c-d64d-413b-a739-ae7f9c632766}} are obtained; see also {{cite:4c3d505f1542f17a18c5da2faa725b47fafa0def}}, {{cite:51e596dae14a08f5d78d773d8a5f25f25a647a34}}.
| m | 7d57fe125895e332fa57ed02037593c2 |
In the dyadic stochastic blockmodel, it has been proven that, for the stochastic blockmodel on graphs with two clusters, the regime in which nonbacktracking spectral clustering fails coincides precisely with the regime in which no algorithm can detect clusters {{cite:49b56bf4c570edd99f0377b563944b3ae4ddc415}}, {{cite:990a66c14ce486ea16eac4bfb887a1d28e988795}}.
We pose a similar conjecture for the hypergraph blockmodel: within the ellipsoid described by eq:ellipse, the cluster detection problem cannot be solved by any algorithm.
We propose a proof of this conjecture as a direction of future work.
| d | e05e80d7f7272fa15a9189107c675fb8 |
is the “pseudoinverse” of {{formula:5310ce4a-59de-4324-b0ab-89bf68179e1a}} (assuming {{formula:375d24d2-63ce-40f4-b688-55f5234d6bff}} has full column rank,
which we will typically do for simplicity). This classic solution
takes {{formula:677656cc-0231-4af6-8397-073dd6702a8f}} time, where {{formula:f5040d88-318d-40d4-9899-d73e58801213}} is
the matrix multiplication constant {{cite:ce79680e5ccfdb3bc68c64f49f2fa2a97218738b}}, {{cite:558640395b800e0d006c41e9d2a99819605d8441}}, {{cite:4868129d999a952081f855b986b58fa0e97c4ccd}}: {{formula:f5c98093-df2e-4765-a366-5843ee125bc3}} time to compute {{formula:d6a0ea69-ce42-4bc5-a6da-74d36790248a}} and {{formula:97eb18e6-4350-4a6d-8934-65dd637c2875}} time to compute the inverse.
| i | 4753e644743aac774be327b58e2595c1 |
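The classic solution referenced in the row above, as a sketch under the stated full-column-rank assumption; the dimension symbols in the comments are placeholders for the elided ones.

```python
import numpy as np

# Least squares via the normal equations: x = (A^T A)^{-1} A^T b.
# For an n-by-d matrix, forming A^T A costs O(n d^2) and inverting it
# O(d^omega); in practice one factorizes rather than inverts, but the
# asymptotic cost matches the classic analysis referenced above.

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 5))          # full column rank w.h.p.
b = rng.normal(size=100)

x = np.linalg.solve(A.T @ A, A.T @ b)  # pseudoinverse applied to b
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))
```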
Continuous subset sampling: Our objective function (REF ) requires sampling subsets, which is a non-differentiable operation. Similar to {{cite:c11a568e812de08e8069a348d457e90631edc579}}, we use the Gumbel Softmax trick {{cite:772af876cdccdf97081236447c63eef68dce2a55}}, {{cite:30fb5557b0b566ac9cb66b9ba2bf2c3261a1618e}} for continuous subset sampling.
| m | b6a11e1e9b4505ef3b3beeec512e9104 |
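A sketch of one way to realize the continuous subset sampling mentioned above with the Gumbel-Softmax trick: perturb the logits with Gumbel noise, then build a soft k-hot mask by successive softmaxes that down-weight already-selected mass. This particular relaxation is an illustrative choice, not necessarily the construction of the cited works.

```python
import numpy as np

def gumbel_topk_relaxed(logits, k, tau=0.5, rng=np.random.default_rng(0)):
    """Continuous relaxation of sampling a size-k subset: add Gumbel
    noise, then take k successive softmaxes, suppressing mass already
    assigned to selected items (a common relaxed top-k scheme)."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0,1)
    s = (logits + g) / tau
    selected = np.zeros_like(s)
    for _ in range(k):
        w = np.exp(s) * (1.0 - np.clip(selected, 0.0, 1.0 - 1e-9))
        p = w / w.sum()
        selected = selected + p   # soft "select one more item"
    return np.clip(selected, 0.0, 1.0)  # soft, approximately k-hot mask

mask = gumbel_topk_relaxed(np.array([2.0, 0.5, 0.1, 1.5, -1.0]), k=2)
print(mask, mask.sum())  # total mass is approximately k
```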
We also submitted our results to the official evaluation benchmark to evaluate our performance on the KITTI test set. For BEV performance, we surpass PLUME-Middle {{cite:5bf907402649710a0923e1330cbdaf8e07746bf8}} by 10.48% mAP. For 3D detection performance, we surpass CDN {{cite:09d9af09effb0127181b97cbe6c39f4c32abb79a}} by 10.44% mAP. Compared with PL++ (SDN+GDC), which is Pseudo-LiDAR++ {{cite:cb068168f3e4370646cac1d82df97a69f1f58b5d}} using 4-beam LiDAR for refinement, we achieve even better results. Our method significantly reduces the gap between LiDAR-based and stereo-based 3D detection methods. For the pedestrian and cyclist categories, our method exceeds CG-stereo by 5.69% and 5.97% mAP for 3D detection, respectively. Please see Sec. 3 of the supplementary material for visualization results.
| r | 31f60d7456a051017e9c677579b5fad3 |
Although both BERT and GPT-2 employ the Transformer {{cite:76e1c0e69a0d1c39248f555392b6ac462f600d0b}} architecture, they have very different ways and locations for storing knowledge in their internal representations {{cite:1270937f9ac141a0569f7fa396bc213aa8ca2526}}, {{cite:087a0fe672da7e4c30d7d321c1c034df6d6b05f3}}, {{cite:a4b42df8c61337ffe3a5851c7c01efdb9447c096}}, {{cite:970d25f0da9cef83c562986ee786087bb6c046e3}}, {{cite:c8d0c23f4a19e94f877ef2504d2bf77951509aa3}}, {{cite:459afb1bdb83dcd25015ec6c5a41ddb1e8be7851}}.
The CLS representations outperform the mean representations in only a few cases.
This is expected since, without fine-tuning, the CLS token in BERT is trained for the next-sentence classification task.
{{figure:d567e730-cc47-4b81-89ae-10db0205bd22}}{{figure:1237d9e0-749c-420b-8839-d82de0a23236}} | d | aa3414baaa77aee2ac9315a4d54177b1 |
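The CLS vs. mean comparison in the row above corresponds to two pooling choices over BERT's token states; below is a sketch using the Hugging Face transformers API, with the model name as an illustrative choice.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Two common ways to turn BERT token states into a sentence vector:
# the [CLS] token state vs. the mean over non-padding token states.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

batch = tok(["an example sentence"], return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state    # (1, seq_len, 768)

cls_repr = hidden[:, 0]                          # [CLS] position
attn = batch["attention_mask"].unsqueeze(-1)     # mask out padding
mean_repr = (hidden * attn).sum(1) / attn.sum(1) # mean pooling
print(cls_repr.shape, mean_repr.shape)
```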
More analysis of the stability problem in the CoSOD/CoSEG task.
In this paper, we delve into the stability problem in the CoSOD/CoSEG task. In the most recent review article {{cite:2da3d54e6e3dee440fcea5050b8959cb1d5af0a8}}, stability is highlighted as one of the most important unresolved issues in the CoSOD task. In fact, as an inherent problem, instability exists widely in CoSOD methods. Although many CNN-based methods {{cite:2da3d54e6e3dee440fcea5050b8959cb1d5af0a8}}, {{cite:8295aa27f4e1bb6ecd258801b27a9afd061fb1b1}}, {{cite:60cdf3885ec0a4875a244a0298ecc7be5f13efb2}} and graph-based methods {{cite:264dea70df396a71c1c17ed95c35ba6b271e41d8}}, {{cite:b98a27e2b53166d6b727e4cc1cfcafedbcd3fbf3}} have greatly advanced the CoSOD task in recent years, before our previous study {{cite:acec790f19383bfaf0e7707b7b3bcf4b4c668af2}} the stability problem remained untouched. When dealing with image groups containing a variable number of images, these CNN-based and graph-based methods detect co-salient objects by dividing the image group into image pairs or sub-groups. Since there is no principled way of dividing image groups, this strategy inevitably makes the overall training and testing process unstable, which hinders the application of co-salient object detection. We conduct the experiment on the CNN-based method CoADNet {{cite:8295aa27f4e1bb6ecd258801b27a9afd061fb1b1}} and the graph-based method GCAGC {{cite:b98a27e2b53166d6b727e4cc1cfcafedbcd3fbf3}}. In the red boxes of Fig. REF , the same image yields different detection results when it is placed in different sub-groups. Moreover, we repeatedly test these two methods on different sub-groups of Cosal2015 five times, and the performance is shown in Table REF . It can be seen that different sub-groups heavily affect the performance; we call this problem sub-group instability. To
address the sub-group instability, in our previous work RCAN {{cite:acec790f19383bfaf0e7707b7b3bcf4b4c668af2}} we proposed an RNN-based framework to make use of all available information in an image group. However, as an RNN framework, when the images in the group are presented in different orders, RCAN faces another instability. We call this order-sensitivity problem order instability. As can be seen in Fig. REF , the co-salient results of RCAN vary under different orders. Hence, in this paper, we further explore how to alleviate the order instability of the RNN-based framework. In Fig. REF , when processing different orders, our proposed network consistently achieves good results. Comparing Table REF to Table REF , we find that introducing an RNN-based network already improves CoSOD stability by addressing the sub-group instability. Finally, through the MSRU and COCL, the stability is further enhanced by a large margin, since the order instability is addressed. We believe that a sustained and in-depth exploration of stability issues is of great significance in advancing the CoSOD/CoSEG field.
| d | eec2e183c835905a2fda497a302d80f1 |
where {{formula:740ff24a-aded-4389-bdf8-0115410aaa5b}} are generalized Laguerre polynomials {{cite:e1043683dabb13ddc5f504b1fa21c855ecb2bfb6}}, {{cite:811f8248978e05612dc46d19819c64ac003c73cd}}, {{cite:b46f1b88d7ab9fa968c2f8a9325714fe9549cd64}}.
The condition (REF ) is equivalent to the existence of
a positive integer {{formula:a5aa4e1d-6b50-45bc-8f90-a03d14d45910}} such that
{{formula:4b81d252-b237-4e8b-bb14-6de0390499b6}}
| r | 3b970c63c96fbc74b403302528955c4a |