The OpenMax {{cite:029adcafac3b2738655829e1d97b9683e8e205b4}} algorithm fits a Weibull distribution via Meta-Recognition calibration for each class {{formula:0ed5107b-fc26-4707-ac42-a714125e5c83}} , where {{formula:f0c47cb0-cfd5-45eb-8d86-8eb079c6381d}} is the largest accepted distance to the class mean {{formula:cc605c30-445a-4b03-8af4-ccd87f357456}} . The rejection criterion first revises the scores of the {{formula:1f2d6b65-20b3-4fe5-a13a-c2cf4ca0210d}} top classes, accumulating the revised mass into an "unknown unknown" probability. A sample is rejected if the most likely class is the unknown unknown, or if its probability falls below the threshold {{formula:5114eac6-a237-4eb7-89c2-91b7b51eaf55}} .
Output vectors of the penultimate layer from the training set {{formula:4c94532d-af9f-4a5e-9d3d-48cee90b2aa4}}
Hyperparameter {{formula:20661e34-0736-4565-b7df-4bef1bbce325}} : largest accepted distance from a sample to the class mean.
Meta-Recognition toolbox {{formula:912cbe76-fdf8-4d6c-bdfd-1b5dd4a4be8d}} for Weibull model fitting
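A minimal sketch of the OpenMax rejection step, assuming a per-class Weibull model has already been fitted; the function names, the simplified one-dimensional class means, and the two-parameter Weibull CDF below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def weibull_cdf(x, shape, scale):
    # CDF of a two-parameter Weibull distribution
    return 1.0 - np.exp(-(np.maximum(x, 0.0) / scale) ** shape)

def openmax_reject(activations, means, weibulls, alpha=3, threshold=0.9):
    """Revise the top-alpha class activations and reject if the
    'unknown unknown' class wins or confidence falls below threshold.
    activations: (C,) penultimate-layer activations
    means:       (C,) per-class mean activations (simplified to scalars here)
    weibulls:    list of (shape, scale) Weibull parameters per class
    """
    C = len(activations)
    top = np.argsort(activations)[::-1][:alpha]
    revised = activations.astype(float).copy()
    unknown = 0.0
    for rank, c in enumerate(top):
        dist = abs(activations[c] - means[c])      # distance to class mean
        w = weibull_cdf(dist, *weibulls[c])        # prob. the distance is an outlier
        damp = 1.0 - w * (alpha - rank) / alpha    # rank-weighted damping factor
        unknown += revised[c] * (1.0 - damp)       # mass moved to "unknown unknown"
        revised[c] *= damp
    scores = np.append(revised, unknown)           # class C is the unknown unknown
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    pred = int(np.argmax(probs))
    reject = bool((pred == C) or (probs[pred] < threshold))
    return reject, probs
```

A sample whose activation lies far from its class mean has a Weibull CDF near one, so most of its score is shifted to the unknown-unknown slot and it is rejected.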
Finally, following previous research {{cite:acb67aae26b7d4def0649f8aed3e8845ed48e034}}, {{cite:124b2981ef31164e94ac6e03ca8adef730e973f0}}, {{cite:394fe18d2208da92ef01c1397f6c5f04aee0bfb2}}, we excite relevant features and inhibit non-relevant features in the original input: a convolution increases the number of channels from {{formula:c3639d41-318e-4d7d-83f4-c815d4126771}} back to {{formula:589ee2e2-123e-452a-8ec6-d76cc3fe40e8}} , a {{formula:33dc17ca-1aaa-4fbc-8fd9-41d0bfcc1e14}} activation follows, and the result is multiplied element-wise with the original input, followed by a skip connection.
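A numpy sketch of this excite-and-inhibit block, writing the channel-expanding convolution as a per-pixel 1×1 channel mixing; the tensor shapes and the sigmoid gate are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def excitation_block(x_orig, x_reduced, w_expand):
    """x_orig:    (C, H, W) original input feature map
    x_reduced: (Cr, H, W) channel-reduced features
    w_expand:  (C, Cr) weights of a 1x1 convolution (per-pixel channel mixing)
    Returns the gated input plus a skip connection."""
    gate = np.einsum('cr,rhw->chw', w_expand, x_reduced)  # 1x1 conv: Cr -> C channels
    gate = sigmoid(gate)                                  # gate values in (0, 1)
    return x_orig * gate + x_orig                         # excitation + skip connection
```

Because the gate lies in (0, 1), each output channel is scaled between one and two times the original input, so relevant features are amplified while irrelevant ones pass through almost unchanged via the skip path.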
In this manuscript, we explore the behaviour of the spectrum of the Lax matrices of various relevant integrable systems
when the number of degrees of freedom {{formula:21a2c3ff-1745-44e2-92a4-b76894750b31}} , and
the initial data is sampled according to a properly chosen Gibbs measure.
Our main result is that we can compute the density of states for the
Lax matrix of the exponential Toda lattice and the Volterra lattice.
This is done through a one-to-one correspondence with the Laguerre
{{formula:4bfd923c-7390-45cd-b2e3-5640c35ea7dc}} -ensemble at high temperature and with the antisymmetric Gaussian
{{formula:4da44f82-78d7-4b06-a943-312940644cb4}} -ensemble at high temperature, respectively. These are two
known classes of random matrix ensembles, see {{cite:7c02ae37a5b693c2b2923318740c87b15d5c6ca8}}
and {{cite:90644f29f946a87a1ad2fc7331aaf1953f8fd6aa}}, {{cite:a3a7966455455f662cd8721d7b971acf9bff2e09}} respectively. We consider other
relevant cases of integrable systems, namely the focusing Ablowitz–Ladik lattice {{cite:fb95546af80d8351d7cfda419d9961aa9b44992f}}, {{cite:9faa59615867b2cbd5adc099d9e7c66f8ce29032}}, the focusing Schur flow, and a class of integrable generalizations of the Volterra lattice
to short-range interactions, called the Itoh–Narita–Bogoyavleskii (INB) additive and multiplicative lattices {{cite:75c999cbb27fecf6d990509b354b309e91e9fad6}}. In these cases the corresponding random
Lax matrices are neither symmetric nor self-adjoint, and
we derive their density of states numerically; its support lies in the
complex plane. Interesting patterns in the density of states emerge as we
vary the parameters of the system. Finally, for all the
integrable systems analysed in this manuscript, we are able to compute the density of states in the low-temperature limit, namely in the ground state.
The second harmonic, {{formula:a8a74aaa-5ba1-41d5-ad02-9a7ff906010b}} , typically referred to as elliptic flow,
is the most thoroughly investigated one; for a review, see
{{cite:9cca0f7cb5d1313cbf0914df865a18e00495b61d}} and references therein. The reason is obvious:
it directly relates the anisotropic shape of the overlap
region of the colliding nuclei to the corresponding anisotropy of the
outgoing momentum distribution. At relatively low transverse momenta,
{{formula:95f669f5-7165-4ee6-9f15-dc293dd0568a}} {{formula:12728f1d-c41b-4e6c-b452-559369f8ae63}} , the azimuthal anisotropy results from a
pressure-driven anisotropic expansion of the created matter, with more
particles emitted in the direction of the largest pressure gradients
{{cite:a222fd0abb148d4c1a591bf21f241538aef7b9a6}}. At higher transverse momenta, this anisotropy
is understood to arise from the path-length dependent
energy loss of partonic jets as they traverse the matter, with more jet
particles emitted in the direction of shortest path-length
{{cite:8f0ad04b204549e228b891cd2eb7ccc58a471399}}. The correlations between soft and hard
contributions to anisotropic flow have attracted a lot of attention; see,
e.g., {{cite:1b9e363eeb6f17a5583152e3deecd6007ff9a3e7}}, {{cite:cd5a281a8ed556822ce61b9fdf9291b19485f71b}}, {{cite:95d9f47bf0e387af3745ce46a3e467a26b8fd88a}} and references therein.
Known decoding algorithms for MRD codes can generally be classified into two approaches: syndrome-based decoding as in {{cite:fafc35a86a20af55fec70cf8a03cf31b1ef17d10}}, {{cite:ae1dcb64e926b9ffb564112f1bc167ba6fed98c7}}, {{cite:a275b538d357f617787ad3f72a1e4cf79cae228b}}, {{cite:9d1fd2b798eb34da3bc0a45f351cff6343600ac4}} and interpolation-based decoding as in {{cite:da447e524339c94cfe01fefb3129c01643372d4a}}, {{cite:74f7270061f3f451531918d0477860ebe769c5d7}}, {{cite:be605dda09e9947b3ab74f9181034bbf919f5d40}}, {{cite:94620f02807ed9d8bd98b018f69d8e9de077d031}}, {{cite:ac7a85d87ea847c0a7a745eb5e1b6e2faf73647a}}, {{cite:4f47c30f42699547269f9eb6812a0325b75055b0}}.
Gabidulin in {{cite:fafc35a86a20af55fec70cf8a03cf31b1ef17d10}} solves the key equation in the decoding process by employing a linearized version of the extended Euclidean (LEE) algorithm, while in {{cite:9d1fd2b798eb34da3bc0a45f351cff6343600ac4}} the key equation is solved by a linearized version of the Berlekamp–Massey (BM) algorithm. The error values in both decoding algorithms {{cite:fafc35a86a20af55fec70cf8a03cf31b1ef17d10}} and {{cite:9d1fd2b798eb34da3bc0a45f351cff6343600ac4}} are computed by the so-called Gabidulin algorithm. Loidreau in {{cite:da447e524339c94cfe01fefb3129c01643372d4a}} proposed the first interpolation-based decoding approach for MRD codes, considering an analogue of the Welch–Berlekamp (WB) algorithm originally used to decode Reed–Solomon codes {{cite:694f95c8f9abcad063a1fb798224bda0b2aaf505}}. That algorithm directly yields the code's interpolation polynomial, so computing the error vector is not required in the decoding process.
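For concreteness, the rank metric underlying MRD codes measures the distance between two codewords (viewed as matrices over the base field) by the rank of their difference. A small sketch over GF(2), using Gaussian elimination mod 2 — an illustration of the metric itself, not part of any cited decoder:

```python
import numpy as np

def gf2_rank(m):
    """Rank of a binary matrix over GF(2) via Gaussian elimination mod 2."""
    a = (np.array(m) % 2).astype(np.uint8).copy()
    rank = 0
    rows, cols = a.shape
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if a[r, col]), None)
        if pivot is None:
            continue                      # no pivot in this column
        a[[rank, pivot]] = a[[pivot, rank]]
        for r in range(rows):
            if r != rank and a[r, col]:
                a[r] ^= a[rank]           # eliminate (addition = XOR over GF(2))
        rank += 1
    return rank

def rank_distance(c1, c2):
    """Rank distance between two codewords given as binary matrices."""
    return gf2_rank(np.array(c1) ^ np.array(c2))
```

A decoder for an MRD code with minimum rank distance d can then correct any error matrix of rank at most (d-1)/2.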
The parameter estimation in Param relies on a computationally efficient method based on MPLE {{cite:6279dad4429d75d473067fc9bc47bc7ed432c397}} in Eq.(REF ). AgraSSt takes longer to compute, mainly due to the computation of graph kernels, e.g. the Weisfeiler-Lehman kernel {{cite:7c33e522a27d28503581c7262937a6cd3c281105}}.
We note that for implicit models, the estimation step in AgraSSt relies on generating samples from the model, so that the computational advantage of the Stein-based test over graphical goodness-of-fit tests is reduced compared to gKSS. (These results on gKSS are shown in Supplementary Material D in {{cite:922ff4d1a24e0c9208ed23989c2e97f4cff30f4f}}; the graphical test {{cite:f2bc5371f1a2301973f73a74aaee6062f0c36a72}} is computed by generating a large number of samples from the null distribution.)
MDdeg is computationally expensive due to the estimation of an inverse covariance matrix.
While providing fast computation and estimation, Deg and Param sacrifice test power through a large variance of their test statistics. Estimating the full degree distribution, the total variation distance method TV_deg, based only on degrees, is competitive with AgraSSt; we recall that in our simulation results from subsec:simresults TV_deg was less powerful than AgraSSt.
Here MDdeg is outperformed by the other test statistics.
{{table:165ced35-2b0e-421d-9ca9-eac7a2bb5be8}}
The plasma in question, the quark-gluon plasma (QGP), is created by colliding lead and gold nuclei at high energies in the Large Hadron Collider (LHC) at CERN and the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory (BNL). Before the first results from RHIC in 2000, it was expected that at high energy densities QCD asymptotic freedom would result in a weakly-coupled system exhibiting gas-like behaviour. However, the experimental results {{cite:ca1662a7876483fbfedd5363423e9f90daab1a57}} indicated that the produced QGP (a deconfined state of matter consisting of quarks and gluons) does not expand isotropically and behaves, in fact, as a strongly-coupled medium {{cite:91cd1a399ee80debe5be9815e333d48cc9146f9a}}, {{cite:4228f9bd3a4a35db718096ddd5865d6e186922cb}}. The thermal plasma expands anisotropically in the azimuthal direction. The momentum anisotropy of the measured particles is known as elliptic flow. (For two comprehensive reviews on discoveries relating to the hydrodynamic description of relativistic
heavy-ion collisions, specifically collective flow and viscosity, see {{cite:5ebe713447ec8fcd9104e114831a1ca671a18c3d}}, {{cite:9ea9aa20c80e5cd00d30d16f6694cc0ba8bb1f20}}.) This discovery did not definitively settle the question of whether the QGP is weakly- or strongly-coupled; there is evidence to support both. Weak-coupling techniques from perturbative quantum chromodynamics (pQCD) have been successful in predicting the distributions of high transverse momentum observables {{cite:fe7c97b1a758c5493c529259a5d33f4dc95986ce}}, {{cite:bba4a108efdf234cc0ba8e8fd3b9813810566d8c}}, {{cite:e5aedc05bee768b54a461e5e1ed5ac5e1ab29fea}}, while low transverse momentum observables, described by near-ideal relativistic hydrodynamics {{cite:3cb47a7334aa6e3d6e5a52a75b44e0fc9d89937d}}, {{cite:d98208e4582f9b92c65cb8749aef89b63a461b5e}}, {{cite:24368f3f2e3efa2d422203869e8e95544f1d20d7}}, {{cite:89b3c6f8147089893d939ca9c96e7a08f6bc537e}}, can be understood within a strong-coupling paradigm {{cite:1979703d97ae278ebfa5f33324a29da8628f503b}}, {{cite:2c462f04d8c6179fc178c6090865b49177a92e9c}}. Further, jet suppression {{cite:4ca03e58bf945f84b16d77fb069ae20d7ad20886}}, {{cite:872cd31c5f959fee99be51d433a5fce56ab284b8}} and heavy-quark energy-loss studies {{cite:89dbd3cd9566ac7656925ffb4cc16734546b7c76}} support the theory of a strongly-coupled thermal plasma.
We start with a `toy' Hamiltonian, {{formula:5ecbfdb4-be25-441a-9a50-b6cff78fde44}} , for which the relationship
{{formula:512c04b1-7dec-46b7-a02a-e84303a2ac4b}} is known
analytically (in this paper we refer to variables such as the
angles and actions in the toy Hamiltonian as {{formula:113a2527-8339-4f86-b57a-9083167c1f92}} , and to those in
the target Hamiltonian as {{formula:a19fc051-3381-479b-a6df-9a3b34485b6b}} ; this notation differs from that of
{{cite:8de624d4438f97d8c38d9e9b9f78a02bda2a54dd}} and {{cite:3bb10ea6c02cde4f8ba273df147e67b35ee742f6}}, in which the toy angles and actions
are referred to as {{formula:5dc1eadd-3a12-4787-a96e-098019e4e274}} , and those in the target potential as
{{formula:43a8acb1-fa07-4dbf-8ec0-76598989cd18}} ; we make this change in notation as our focus is primarily
on the application of this machinery, rather than on how it works),
namely that of a generalised effective isochrone potential
{{formula:047a22ec-36ce-4416-a8c5-8c7ab755a1e5}}
(1)
Scalability of annealers.
Annealers have worked well in many practical applications, but the challenge is how to scale up the problem size without excessive resource consumption. Although the minor-embedding method and the Chimera graph topology help, problem sizes remain limited. The time-division multiplexing architecture seems promising, but more experiments are needed. For many large, complex problems, the process of finding an optimal embedding graph is itself NP-hard, which severely restricts generalization and miniaturization. Heuristic algorithms may help find an efficient embedding {{cite:f869034091e6b2436cf56c5da157e9df8a9110b5}}, but at the expense of reduced accuracy. Hence, it is important to find more efficient embedding algorithms to scale up and generalize annealers.
(2)
Limitation of CIMs from materials.
Photonic computers require sufficient space to generate the optical pulses on which coupling is based. Methods such as time multiplexing and measurement-and-feedback do little to miniaturize optical pulse generation. Ising machines based on LC oscillators seem suitable for miniaturized solvers, but face the same scalability limitation. As for new nonlinear oscillators, how to couple spins remains an obstacle, as does the coupling performance.
(3)
The reduction of MILP problems.
The BILP scheme can be reduced to the Ising model directly. However, beyond BILP, other MILP problems cannot be embedded into an Ising machine straightforwardly, since the Ising model cannot represent continuous variables. Although it is possible to split or convert MILP problems into BILP problems, this requires extra effort and diminishes the quantum advantage. A potential direction is to reduce the MILP problem to the XY model, whose Hamiltonian is given as
{{formula:2df0cefd-4dd8-4d89-b553-90a5ca470cb1}}
The MILP problem can then be converted into finding the minimum value of a mixture of the XY Hamiltonian and the Ising Hamiltonian. Nevertheless, solvers on the XY model, such as Born machines, are still at the theoretical stage {{cite:cd26e8d67301ef6ec5830477a790557ebc3bf5c5}}. For further studies, the reduction of MILP remains a challenge.
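As an illustration, assuming the standard XY Hamiltonian H = −Σ_{i&lt;j} J_ij cos(θ_i − θ_j) (the exact form in the displayed equation may differ), the energy of a spin configuration and its Ising limit (θ restricted to {0, π}, i.e. σ = ±1) can be evaluated directly:

```python
import numpy as np

def xy_energy(theta, J):
    """Energy of the XY model: H = -sum_{i<j} J_ij * cos(theta_i - theta_j).
    theta: (N,) continuous spin angles; J: (N, N) coupling matrix."""
    diff = theta[:, None] - theta[None, :]
    # sum over unordered pairs only (strict upper triangle)
    return -np.sum(np.triu(J * np.cos(diff), k=1))

def ising_energy(sigma, J):
    """Ising limit: angles restricted to {0, pi}, i.e. sigma_i = ±1."""
    return -np.sum(np.triu(J * np.outer(sigma, sigma), k=1))
```

Aligned spins with ferromagnetic coupling (J_ij > 0) minimize both energies, which is what a mixed XY/Ising solver would search for.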
(4)
More possible solvers.
An alternative to a physical solver is simulation software. The working principles of FPGAs and LC oscillators can be simulated on classical computers, meaning it is possible to design and implement a virtual Ising machine on a classical computer; the evolution can be sped up by CPUs, GPUs, and TPUs. Another miniaturization paradigm is the Nuclear Magnetic Resonance (NMR) quantum computer, which uses particles in resonance excitation in a magnetic field to produce the excited state {{formula:94fb22cd-bad5-4500-ba7a-dfda643cbd9a}} and ground state {{formula:a3a01778-ab77-41b0-bfa0-58600924f778}} respectively, solving MILP problems in the general quantum computing paradigm. General quantum computers, such as superconducting and optical computers, require harsh physical environments that are currently available only in the laboratory, while NMR computers can run at room temperature. A prototype desktop quantum computer has been realized by SpinQ, though with no more than 3 qubits {{cite:1e8613a3e13989c17450eb503f029fd52860aa34}}, {{cite:7c20ab41e2c95dd899873379055c2e4ba46411ae}}. Clearly, more effort is needed before it can be used in practical applications.
Predicting on learned predictions.
We observe that the highest mAP score in the Kaggle competition was about 85%, with over thirty teams achieving 83%.
The state-of-the-art mAP on the next largest video classification task, ActivityNet, is comparatively poor at 77.6%, even when model training is assisted with the YouTube-8M dataset {{cite:418156b90a065ee931d6e7928400aba08711b859}}.
ActivityNet {{cite:19532f5fd9561b90950581c106695eaeb67898d1}} is a comparatively easier task, with only 203 classes.
A central goal of heavy-ion reaction experiments over a broad beam energy range from the Fermi energy all the way to LHC energies is to investigate the equation of state (EOS) of dense matter formed in these reactions. In realizing this goal,
comparisons of hydrodynamics and/or transport model predictions with the experimental data of various components and/or forms of nuclear collective flow have been found very fruitful {{cite:07f586528d91f544dc332a999fcd69e89c69c0b3}}, {{cite:b7753112a4d20f10ec6cbdd40e34344e765a0c2d}}, {{cite:7d6dc7c7d39183de631ae24f8d1dd25e5c80b043}}. In particular, analyses of single-particle azimuthal angle {{formula:fbfcc626-9f80-40a6-8ddc-eafac800f4df}} distribution {{formula:d53ba63c-8fe0-4f58-aacf-7afac9e14048}} with respect to the reaction plane have played an important role. Usually, a Fourier decomposition of the {{formula:72e012f5-6d6e-4c97-82c5-3e6521abf456}} is performed according to
{{formula:a4ce1467-78ec-4d05-9a68-ae122e4be8c8}}
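In this Fourier decomposition the coefficients v_n are commonly estimated as event averages v_n = ⟨cos n(φ − Ψ_RP)⟩. A numerical sketch, with the reaction-plane angle set to zero and a rejection-sampling scheme that is purely illustrative:

```python
import numpy as np

def sample_phi(v2, n_samples, rng):
    """Sample azimuthal angles from dN/dphi ∝ 1 + 2*v2*cos(2*phi)
    (reaction-plane angle fixed at zero) by rejection sampling."""
    out = []
    while len(out) < n_samples:
        phi = rng.uniform(0.0, 2.0 * np.pi, n_samples)
        accept = rng.uniform(0.0, 1.0, n_samples) < \
            (1 + 2 * v2 * np.cos(2 * phi)) / (1 + 2 * v2)
        out.extend(phi[accept])
    return np.array(out[:n_samples])

def flow_coefficient(phi, n):
    """Estimate v_n = <cos(n * phi)> with respect to the reaction plane."""
    return np.mean(np.cos(n * phi))
```

For a distribution containing only the second harmonic, the estimator recovers v_2 while all other harmonics average to zero, reflecting the orthogonality of the Fourier modes.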
In fact, as we further illustrate in Section A.1 of the Appendix, a large number of causal inference methods can be written in the form of an M-estimator {{formula:5d07b80f-6ec2-45b6-81e5-e84372bad68a}} which solves {{formula:2a2a2948-b212-479e-a0b1-ad4a5a31a565}} {{cite:b7d96a7e8b2d233bae1a60bc7c7f57fb870834aa}}.
M-estimation was first introduced in {{cite:77184e81ed1b4c119b51f318f0239737f9e0466d}}, {{cite:e0e47a0d6a0bdaf193f6d69ebf37440495252ff4}} and {{cite:843af3f75c3ff65540a1f8fb0d4bb148f33cc968}}, and has been extended to the longitudinal setting as generalized estimating equations (GEE) by {{cite:c2a4ded2aea8525267d15d80d6eba12daedab547}}. A key advantage of formulating causal inference methods as M-estimators is that it allows for convenient computation and inference via a unified large-sample approximation. In particular, under suitable regularity conditions, the M-estimator {{formula:42d692c6-615d-4a9b-9bdf-c02bf87e4971}} is consistent and asymptotically normal {{cite:77184e81ed1b4c119b51f318f0239737f9e0466d}}, {{cite:e0e47a0d6a0bdaf193f6d69ebf37440495252ff4}}, {{cite:843af3f75c3ff65540a1f8fb0d4bb148f33cc968}}
{{formula:6f1a7b7c-4aff-45d8-95e3-f93da4e5745d}}
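As a toy illustration of this general recipe (not a specific method from the paper), take the simplest estimating function ψ(X; θ) = X − θ, whose root is the sample mean; the sandwich variance A⁻¹ B A⁻ᵀ / n can then be computed directly:

```python
import numpy as np

def m_estimate_mean(x):
    """M-estimator for psi(x; theta) = x - theta: solve sum_i psi(x_i; theta) = 0.
    Returns the estimate and its sandwich-variance estimate."""
    theta = np.mean(x)            # closed-form root of the estimating equation
    n = len(x)
    # Sandwich components: A = -E[d psi / d theta] = 1, B = E[psi^2]
    A = 1.0
    B = np.mean((x - theta) ** 2)
    var = B / (A * A * n)         # asymptotic variance of theta_hat
    return theta, var
```

For this ψ the sandwich formula reduces to the familiar variance of the sample mean, but the same A⁻¹ B A⁻ᵀ construction applies to any smooth estimating equation, which is what makes the unified inference convenient.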
Gravitational wave (GW) astronomy has been a rapidly growing subject, with the development of new GW detectors providing new probes for understanding the fundamental properties of gravity from both an astrophysical and cosmological perspective {{cite:e342269df0b01820705f56521137e8dc6b4561f4}}. Gravitational waves are the propagation of spacetime distortions generated by massive objects as they accelerate through the Universe. GW astronomy provides a new probe for observing astrophysical phenomena in our Universe which, in parallel with electromagnetic observations, opens a new avenue of answering some of the most compelling mysteries about our Universe such as the Hubble tension {{cite:d890352758753815a3b14db07afd5991fd537787}}, black hole spacetime dynamics {{cite:7688ca3a7ce1e2fd2d58c01825394678e40936bc}}, {{cite:64d4ab0de43a748c680ccec336a6ba04c42fd3b3}}, and inflation models {{cite:834c4d5a5b90a18906c6d8024d7d9e821e3decbe}}, just to name a few. The current and future ground-based GW detectors, such as LIGO {{cite:f79d388b67491e62396e8e1f802496a8458bf191}}, VIRGO {{cite:90dd3dcaf0179fc4be0ef1045f9231fe2c1691c2}}, KAGRA {{cite:b8dbe1bc1384e832b7cf31754f5e878c3323a02a}}, and Cosmic Explorer {{cite:f79d388b67491e62396e8e1f802496a8458bf191}} can probe the {{formula:424378b7-e0ba-4253-82eb-1c8417511d68}} Hz frequency range, corresponding to emission of GWs from compact binary inspirals {{cite:6b57f0dbfa18cd01d374a2ecb925171cca211b2d}}. The next generation of detectors, such as LISA {{cite:e342269df0b01820705f56521137e8dc6b4561f4}}, will open the possibility to observe in the lower frequency range ({{formula:64c51434-1f5f-40ba-a042-2e5b770b0a8f}} Hz), sourced by inspirals near supermassive black holes and the stochastic GW background {{cite:035d2d87228c958548bca4a955992c6007984d08}}. Thus, in order to extract physical information in this lower frequency range, we will need analytic/computational methods for constructing the appropriate waveforms in this regime.
A core theme of evolutionary game theory is to explore which strategies sustain cooperation in repeated social dilemmas {{cite:8eb3fdad2c88fba7df3b942930f6886ffb89f0b3}}, {{cite:fbcd86b3a47073e75cbcf8527a482db25da52b99}}, {{cite:9006a55f8dc6ddf9c38ea77ce5a6ee79e63d49fd}}, {{cite:72dafb9c04a9ac870b30a2b43bc63fa6902f61f3}}, {{cite:47a05359f9efd490df4bf18b0ec0097f8ff47e6f}}, {{cite:5b0f56ce570cad138c208d1f5f539e303e531d92}}, {{cite:0df30c196a452208a783f3d4267283479d9fe2a7}}, {{cite:2ea46ca37ef3372ec62e325257f673d2be37929b}}, {{cite:794b1d8755cf3b3d7723305371e9ab0347488e32}}, {{cite:97cbacaf125d80488126e2079e7ebb34d6a7b0d4}}, {{cite:0e23108e3a70d7340c8a550354921395e2e04961}}, {{cite:c7b0fd0f547bd658fca5f9268c15fcbbcf10dd47}}, {{cite:8629f343151efd89fc43a2242613ec2993480b68}}, {{cite:75c629bec05b92963b3721f02688523aed5714df}}. Given this extensive effort, it seems quite remarkable that very little is known about how such strategies with desirable properties can be learned most effectively. Instead, existing work tends to take the way in which people learn as given. While details vary between studies {{cite:2a2c6953c067b14cdb72f7162a135113a0590cfb}}, {{cite:f64f92960662270651ca222e3fdf9dde01102177}}, {{cite:928c9145b1498a781be7bdd00e49691a9c2eb0d7}}, {{cite:39d36f42e5a0687b60784ef53d9859178ce5cf2b}}, {{cite:13df00141ad388aaa09a11496feabf001468d04b}}, {{cite:21a5a49c5105b228542d57b8d5624407684743b8}}, {{cite:ab8214f027d6b3c999e8f05c62b6b9c94b30a221}}, {{cite:a49d34e4c425dc452a75de47d835ba04ebdf568c}}, most often it is assumed that individuals adopt strategies that enhance their payoff, and that they abandon strategies that are personally disadvantageous. This modeling assumption could be justified on theoretical grounds if selfish payoff maximization were indeed an optimal learning policy. Here, we have thus explored under which conditions selfish learning can be expected to succeed. 
We show that selfish learning performs well when individuals wish to optimize their short run performance. However, if individuals are motivated by how well they fare eventually, selfish learning can be detrimental, even in populations that otherwise consist entirely of selfish learners.
{{cite:5196ccd2c728c1964e0106ba748bedc15692fb85}} (hereafter Paper I) presented the spectroscopy of a
large fraction of the objects selected by G20, which,
together with results taken from the literature, covered {{formula:bc0e0052-9a8f-4a6f-8005-5393bd7019c1}} per cent
of that sample. Paper I is the first publication of the project entitled
“The spectra of
IceCube Neutrino (SIN) candidate sources” whose aim is threefold:
(1) determine the nature of the sources; (2) model their SEDs using all
available multi-wavelength data and subsequently predict the expected neutrino
emission from each blazar; (3) determine the likelihood of a connection
between the neutrino and the blazar using a physical model for the
blazar multi-messenger emissions, as done, for example, by
some of us in {{cite:845db04a4c2f0acf23b4cf4daf48b1c44f28eb0f}}, {{cite:86a445d56c8980bbfe08fe5a356abc54ac3a083e}}.
To address RQ 1a, we will look at convolutional neural
networks (CNNs) and recurrent neural networks (RNNs), which are both
highly prominent approaches in the recent natural language processing
(NLP) literature (see Chapter ). A recent development is the emergence of deep
residual networks (ResNets), a building block for CNNs (see Section ).
In short, ResNets consist of several stacked residual units, which can be thought of as
a collection of convolutional layers coupled with a shortcut which aids the propagation of the signal in a neural network. This
allows for the construction of much deeper networks, since keeping a
relatively clean information path in the network facilitates optimisation
{{cite:220ce27d603fccdbbba932034f061f5c3e9ff710}}. ResNets have recently shown state-of-the-art
performance for image classification tasks {{cite:a4413de8fc4f4da8b111c46e0c1124a9a99361c7}}, {{cite:220ce27d603fccdbbba932034f061f5c3e9ff710}}, and have also seen some recent use
in NLP {{cite:a376ede326a84ee08236ba66e889a512e32ceb0b}}, {{cite:da595ccc1cb3ccfa8331fdf90410d6386f925823}}, {{cite:8c11f10d26eb55a410f89df88c8e107373acc0c5}}, {{cite:f8c37c8561db9287452a641e392facbb90ce943b}}.
However, no previous work has attempted to apply ResNets to NLP tagging tasks.
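The core of a residual unit can be sketched in a few lines: the output is the input plus a learned residual function, so the identity path keeps a clean route for the signal through the stack (weight shapes and the ReLU choice below are illustrative):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def residual_unit(x, w1, w2):
    """y = x + F(x), with F a small two-layer transformation.
    x: (d,) input; w1: (h, d) and w2: (d, h) weight matrices."""
    return x + w2 @ relu(w1 @ x)

def resnet(x, layers):
    """Stack residual units; the identity path propagates the signal
    unchanged, so with zero weights the network computes the identity."""
    for w1, w2 in layers:
        x = residual_unit(x, w1, w2)
    return x
```

This additive structure is exactly why very deep stacks remain optimizable: each unit only needs to learn a correction to the identity rather than a full transformation.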
A variety of tuning-free methods have been proposed to tackle high-dimensional linear regression. The seminal work {{cite:5cf448ae633613977fc8d2ba6cf2929c7d12da1d}}
proposes the square-root Lasso estimator which does not rely on knowing the size of the noise and is also statistically optimal. {{cite:8a0e00a548deb1ff9f90bf292762fdf48e4fe0fc}} proposes an equivalent method named scaled sparse linear regression, which originates from concomitant scale estimation {{cite:af7e63ad7778ce9817e6cbe9bf8a6b7249072b4f}}, {{cite:de1d48ccd2bd8f048ee149754b131fa51fb41aab}}. {{cite:e44900e80043c58fea618ed27234b5cd5bb2775f}} proposes TREX, a method similar to the square-root Lasso that
is completely parameter-free. {{cite:f76a3941b55d38c6fa5d565ba7ab14733eda4dc8}} borrows ideas from non-parametric statistics and proposes Rank Lasso, whose optimal tuning parameter can be simulated easily in the
case with unknown variance of the noise. See {{cite:6cb3f07fd31fc295afc92b16e8f74b59ba1e3107}}
for a survey on the selection of tuning-parameters for high-dimensional
regression and {{cite:4dc4d0c1be8370630def255803f18331158dde7b}} for a survey on regression with
unknown variance of noise.
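The scaled sparse regression idea can be sketched as alternating between a noise-level estimate and a Lasso fit with a rescaled penalty. The sketch below is a simplified illustration under stated assumptions (plain proximal-gradient steps, a universal penalty level lam0), not the cited authors' algorithms:

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=500):
    """Minimize ||y - Xb||^2 / (2n) + lam * ||b||_1 by proximal gradient (ISTA)."""
    n, p = X.shape
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n)   # 1 / Lipschitz constant
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n
        b = soft_threshold(b - step * grad, step * lam)
    return b

def scaled_lasso(X, y, lam0, n_outer=10):
    """Alternate sigma_hat = ||y - Xb|| / sqrt(n) with a Lasso fit whose
    penalty is lam0 * sigma_hat, so no noise level needs to be known."""
    n, _ = X.shape
    sigma = np.std(y)                               # crude initial noise estimate
    for _ in range(n_outer):
        b = lasso_ista(X, y, lam0 * sigma)
        sigma = np.linalg.norm(y - X @ b) / np.sqrt(n)
    return b, sigma
```

A common universal choice is lam0 = sqrt(2 log p / n); the iteration makes the effective penalty self-scaling with the estimated noise, which is the sense in which such estimators are tuning-free.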
To achieve better accuracy and stability, our system is based on generalized-coordinates algorithms instead of these maximal-coordinates methods.
The idea of generalized coordinates originated in analytical mechanics {{cite:e3c3084af448ee732179e0417fc7f89e014ff57b}}.
Robotics and graphics have developed a variety of algorithms based on generalized coordinates {{cite:a94712d011e16ba56390f26be67ff2d283e759b3}}, {{cite:b14675f989e07c47c8b772c83d1ee0e38cddffaa}}.
Accordingly, the mathematical formulations of rigid body dynamics also vary {{cite:f9d7e9ffde8fe023fab1256ae68346ad017829cb}}.
Our approach is rigid multibody dynamics based on spatial algebra using 6-D vectors {{cite:637c04ca9c3271ad45f6c4f17956f04e2cccd9fc}}. Compared with traditional coordinate systems, our method is more concise. We apply the dynamics algorithms to the simulation of a roadheader robot; over the same simulation, our system is more accurate and stable than a commercial game engine.
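In spatial algebra, a rigid-body velocity is a single 6-D vector v = (ω; v_lin), and operations such as the velocity cross product become 6×6 matrix operators. A sketch of the standard operators under the usual convention (an illustration, not the paper's code):

```python
import numpy as np

def skew(w):
    """3x3 skew-symmetric matrix such that skew(w) @ x == cross(w, x)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def spatial_cross_motion(v):
    """6x6 operator for the spatial cross product v x, acting on motion vectors.
    v = (omega, v_lin) stacked as a 6-vector."""
    w, vl = v[:3], v[3:]
    top = np.hstack([skew(w), np.zeros((3, 3))])
    bot = np.hstack([skew(vl), skew(w)])
    return np.vstack([top, bot])
```

As in ordinary vector algebra, v ×ₘ v = 0, and packing angular and linear parts into one 6-D quantity is what makes the spatial-algebra formulation more concise than separate 3-D treatments.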
A framework for low-resource video domain adaptation using a supervised disentangled learning strategy that is particularly well-suited to keystroke inference attacks.
Background
Keystroke Inference Attacks
Much of the early work on vision-based keystroke inference attacks focused on direct line of sight and reflective surfaces
{{cite:58e9c491566d0cbd66680413a9b58810af419991}}, {{cite:695719ad5e8acb5ca9d683288a9f6a11506e7cf8}}, {{cite:29dea2b37ff9239451c667994490921662d175dc}}, {{cite:06ead70b00f1f8c08536f590843433915f401748}}, {{cite:2f57c5eff603a1d805195ffcb6b2eb07dcddd5c0}} to infer sensitive data.
These data-driven approaches are necessary because the attacker cannot recover the text using off-the-shelf optical character recognition (OCR) software at low resolutions {{cite:06ead70b00f1f8c08536f590843433915f401748}}.
Loosely speaking, attackers train models that account for various capture angles by aligning the user's mobile phone to a template keyboard.
Collectively, these works showed that attackers are able to successfully recover PINs and, sometimes, even full sentences.
In this paper, we advance the state-of-the-art under the direct line of sight model wherein the attacker uses a mobile camera to record a victim's mobile phone usage. Most germane is the work of
Lim et al. {{cite:2f57c5eff603a1d805195ffcb6b2eb07dcddd5c0}} that created a simulator that generates synthetic data for keystroke inference attacks. The authors showed that training with both synthetic and real data, in a supervised domain adaptation framework, yielded a CNN that generalized to a real-life test set. Unfortunately, that work is limited in scope due to the restricted threat scenario they target: analyzing single keypresses.
By contrast, we assess the ability of an attacker (armed with deep-learning techniques) to recover complete sequences in more demanding settings.
Style and Content Disentanglement in Videos
Tenenbaum and Freeman ({{cite:c76fed8fb03ac956b257d318084223879b3a28a0}}, {{cite:c9a06698f1663f3b038f8e29c4b85c3831614181}}) observe that by learning to factor observations of data into two independent factors of variation, style and content, models learn separate representations that can extrapolate style to novel content, classify content in different styles, and translate new content into new styles.
Others have disentangled videos into a time-dependent style representation and time-independent content with adversarial training {{cite:fd3db8b5b0690942d01aaab60785160052c86367}} or with variational autoencoders {{cite:b4753e0334d0e2f4e5f4d2c1361b05769ab28608}}, {{cite:96297d5dd3ac015ee466fa191013bf106a6be03b}}. These methods are all unsupervised methods to disentangle style and content.
In our setting, style and content are both time-dependent.
Style encapsulates the trajectory of the finger in between keys or speed of the user typing.
The difference in texture on a per-frame basis is also encapsulated by style.
Content represents the entire trajectory as that determines the sentence that was typed.
Since we have labels, we take heed of the findings of Locatello et al. {{cite:ae2c46e057d975fd42e9683c7b0c3ebb19f28c1a}}: learning disentangled representations is impossible without supervision, and unsupervised methods using temporal inductive biases do not lead to improved disentangled representations.
Low Resource Domain Adaptation
We operate in a low resource setting in which we have abundant labels in the source domain and have very few, albeit labeled, data points in the target domain.
{{cite:aca64c26a471e4596cba06219f8a8dea8a8e63a1}} extend the CyCada {{cite:e7cf5e857d5387248a7c2e584dad92ab43c2ea15}} and CycleGAN {{cite:f697e29d8bb994f0d7dc45cff9d90676a80f7f3b}} frameworks to the low resource domain adaptation setting by adding a semantic consistency loss.
{{cite:70bad19423ef12aa1b27698817b621b5cce13bf4}} addresses this problem by learning a feature space that is domain invariant, but is semantically aligned across both domains by introducing a pairing process that augments the datapoints in the target domain.
Video Domain Adaptation
Domain adaptation for videos has been underexplored relative to images, with nearly all methods limited to human action recognition {{cite:c988f8bc7d279b241000dd83214e95052e5ae0b4}}, {{cite:a0b7c5d80e89fc097d645f2ea3577b7a8ab6ab39}}.
Domain adaptation techniques used for action recognition do not transfer easily to our setting, because action recognition methods typically need only a small fraction of frames from the entire video {{cite:886e73114295b92d42c7c23a320ecc0f126691d0}}.
In our setting, however, we must process every frame in order to predict the entire sequence that a user typed.
Video translation methods such as {{cite:ba8ef026f9987dfcb7bf50e0138de2cd2e44b0d5}}, {{cite:e254741f813b2292a924a8b14d8487ae26406d0b}}, and {{cite:67cc307948da4f92e5277b854ad8ddc4755dc94d}} show some potential for video domain adaptation tasks, but these methods require that the two domains (RGB images to semantic labels, for example) have the same temporal dynamics.
Our Approach
{{figure:e230576d-d489-4f63-a1d2-25ca2a34d713}}We first provide a brief introduction to keystroke inference attacks and then describe our framework to disentangle the style and content latent spaces to train on all style-content pairs.
An overview of our method is shown in Figure REF .
Keystroke Inference Attacks
We model the keystroke inference attack as a Seq2Seq {{cite:a5c8e52c6c8a7653309a61def3e1e79c30b28790}} problem where the input {{formula:eddbbaf6-b64d-4b1a-93d5-03d4a66b59df}} is a video with {{formula:50732136-aae8-4d6a-85b7-93e3f4169e81}} frames and {{formula:8b155401-437c-4cc9-ae43-ce0ef938ab0b}} is a sequence of {{formula:452d6c76-e8a1-4ecc-a607-2a3f833b56a1}} characters.
The videos, which show users typing on their mobile phones, are cropped and aligned to a template image.
The tokens are a sequence of characters of the sentence the user typed.
We do not use any paired data (i.e. the synthetic and real-life datasets do not contain the same sentences), and do not have access to any auxiliary labels such as the exact frame in which a key was pressed.
Our goal is to learn the parameters of a model that maximizes the conditional probability of {{formula:4a8d8abe-a454-4e65-9256-adcf63582f4d}} given {{formula:c45e09cd-d85d-43a2-88ee-3971e0082534}} .
We use a Transformer {{cite:602bcf9b3064c917c0c298ee9f0b9b498b7c6891}} encoder-decoder as our model.
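As a concrete sketch, the Seq2Seq formulation above can be instantiated with PyTorch's `nn.Transformer`, using the 4-layer, 4-head, 128/256-dimensional configuration reported later in the Network Architectures section; the module and variable names are ours, and the frame features here are random stand-ins for the preprocessed video features.

```python
import torch
import torch.nn as nn

VOCAB = 30        # 26 letters + START, STOP, SPACE, PAD (from the paper)
D_MODEL = 128     # embedding size reported in the experiments

class KeystrokeSeq2Seq(nn.Module):
    """Transformer encoder-decoder: per-frame video features -> characters."""
    def __init__(self):
        super().__init__()
        self.tok_emb = nn.Embedding(VOCAB, D_MODEL)
        self.transformer = nn.Transformer(
            d_model=D_MODEL, nhead=4,
            num_encoder_layers=4, num_decoder_layers=4,
            dim_feedforward=256, batch_first=True)
        self.out = nn.Linear(D_MODEL, VOCAB)

    def forward(self, frames, tokens):
        # frames: (B, T, 128) frame features; tokens: (B, L) character ids
        tgt = self.tok_emb(tokens)
        # causal mask so each position only attends to earlier characters
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        h = self.transformer(frames, tgt, tgt_mask=mask)
        return self.out(h)   # (B, L, VOCAB) logits

model = KeystrokeSeq2Seq()
logits = model(torch.randn(2, 50, D_MODEL), torch.randint(0, VOCAB, (2, 7)))
```

Training then maximizes the conditional probability of the token sequence given the video via cross-entropy on these logits.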
In our setting, we have a dataset of synthetic videos, {{formula:9b362cee-072c-4d48-890d-d39ba8b08e57}} , and a dataset of real-life videos {{formula:e5b26ad3-4b23-4b7a-a842-282b9c2249b0}} , where the number of real-life videos is significantly less than the synthetic.
While a large synthetic dataset can be easily generated, there is a distribution shift between the two domains (Figure REF ).
Moreover, when the amount of labeled data is scarce, it can be challenging to train neural networks that generalize to samples outside of the training set.
Disentangling Style and Content
To address the lack of real-life data, we train on combinations of style and content representation pairs from the synthetic and real domains. Additionally,
we introduce auxiliary losses to enforce disentanglement of style and content, ensuring that the style latent space does not contain any information about the content, and vice versa.
Our training framework consists of a Content Encoder, a Style Encoder, a Decoder, a Feature Aggregation Module, a Style Discriminator, a Content Discriminator, and a Domain-Class Discriminator (see Fig REF ). In what follows, we only discuss the intuition and higher level details necessary for understanding how our method works. The loss functions and low-level training specifics are given in the Appendix.
Pretraining Synthetic Model
We first pretrain an Encoder-Decoder Transformer on synthetic data only.
We train this network with a multi-class cross entropy loss where the goal is to predict the correct sentence for a given video.
Then the Content Encoder, Style Encoder, and Content Discriminator are initialized with the weights of the pretrained Encoder, and the Decoder is initialized with the weights of the pretrained Decoder.
Style Disentanglement
Style disentanglement ensures that style information is removed from the content latent space.
The content latent space is defined as the output of the content encoder given a synthetic or real video.
The content encoder is trained to produce content feature representations that are domain invariant.
For example, the encodings of synthetic and real videos of the sentence “hello, how are you?” should be close together in the feature space since they carry the same semantic information.
To achieve this, we train this network in an adversarial fashion {{cite:94d89526ad18dd65698e156b04adbd4dd3168498}}.
Specifically, the Style Discriminator is trained to classify whether a content embedding is real or synthetic, and the Content Encoder is trained to spoof the Style Discriminator.
Further information can be found in Section REF of the Appendix.
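A minimal sketch of this adversarial step, with linear layers standing in for the actual Transformer content encoder and Style Discriminator (the real modules and loss weights are in the Appendix); the label-flipping objective is the standard GAN-style training the text describes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-ins: content_encoder maps a video to a (B, D) embedding,
# style_disc maps that embedding to a real-vs-synthetic logit.
D = 128
content_encoder = nn.Linear(D, D)
style_disc = nn.Linear(D, 1)

def disc_loss(z_syn, z_real):
    # Style Discriminator: classify content embeddings as synthetic (0) or real (1)
    logits = torch.cat([style_disc(z_syn.detach()), style_disc(z_real.detach())])
    labels = torch.cat([torch.zeros(len(z_syn), 1), torch.ones(len(z_real), 1)])
    return F.binary_cross_entropy_with_logits(logits, labels)

def enc_loss(z_syn, z_real):
    # Content Encoder spoofs the discriminator: same loss with flipped labels
    logits = torch.cat([style_disc(z_syn), style_disc(z_real)])
    labels = torch.cat([torch.ones(len(z_syn), 1), torch.zeros(len(z_real), 1)])
    return F.binary_cross_entropy_with_logits(logits, labels)

z_s = content_encoder(torch.randn(4, D))
z_r = content_encoder(torch.randn(4, D))
ld, le = disc_loss(z_s, z_r), enc_loss(z_s, z_r)
```

The `detach()` in the discriminator step keeps its gradients from flowing back into the Content Encoder, so each player updates only its own parameters.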
Content Disentanglement
Content disentanglement ensures that content information is removed from the style latent space.
The style latent space is defined as the output of the Style Encoder given a real or synthetic video.
The Content Discriminator is a Transformer Decoder that is trained to predict the correct sentence given the input style representation.
The Style Encoder is trained to spoof the Content Discriminator.
It does so by producing a style feature representation from which the Content Discriminator cannot predict the correct sentence.
We achieve this by maximizing the entropy, {{formula:73c91b00-e875-4918-b608-054438c308ad}} , of the predictions of the Content Discriminator.
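The entropy term can be written directly. This sketch assumes per-character logits from the Content Discriminator and computes the mean Shannon entropy that the Style Encoder is trained to maximize (it is bounded above by the log of the vocabulary size, reached at a uniform prediction).

```python
import numpy as np

def prediction_entropy(logits):
    """Mean entropy H(p) of the Content Discriminator's per-character
    predictions; the Style Encoder is updated to maximize this value
    (equivalently, minimize its negative)."""
    z = logits - logits.max(axis=-1, keepdims=True)        # stable softmax
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return float(-(p * np.log(p + 1e-12)).sum(axis=-1).mean())

rng = np.random.default_rng(0)
logits = rng.normal(size=(2, 7, 30))   # (batch, sequence, vocab) logits
h = prediction_entropy(logits)         # bounded above by log(30)
```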
Feature Aggregation
A Feature Aggregation Module combines the disentangled representations from the previous two steps.
The aggregation module combines any given pair of style and content embeddings to produce one embedding.
For the experiments that follow, we use the LayerNorm {{cite:01b71bc413c59d13bfec779157837e8f09bbc4a8}} operation with learnable parameters as our feature aggregation module.
There are four different possible pairs that can be the input to our model, since there are two factors of variation (style and content) and two domains (synthetic and real-life).
For any given input pair, the output feature representation can be thought of as the content in the style of the specified domain.
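A sketch of one plausible reading of the aggregation module: the text only states that LayerNorm with learnable parameters is used, so the way the style embedding modulates the normalized content (an AdaIN-style scale and shift here) is our assumption, not the paper's specification.

```python
import torch
import torch.nn as nn

class FeatureAggregation(nn.Module):
    """Combine a (style, content) pair into one embedding.

    Assumption: normalize the content embedding with a learnable LayerNorm
    and let the style embedding supply a multiplicative scale and additive
    shift; the paper specifies only the use of LayerNorm."""
    def __init__(self, dim=128):
        super().__init__()
        self.norm = nn.LayerNorm(dim)        # learnable gamma/beta
        self.to_scale = nn.Linear(dim, dim)  # style -> scale (assumption)
        self.to_shift = nn.Linear(dim, dim)  # style -> shift (assumption)

    def forward(self, style, content):
        return self.norm(content) * self.to_scale(style) + self.to_shift(style)

agg = FeatureAggregation()
z = agg(torch.randn(4, 128), torch.randn(4, 128))  # one embedding per pair
```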
Prediction
The Decoder takes in the output of the feature aggregation module and outputs the predicted sentence, and is trained with cross-entropy loss.
At test time, the model outputs the most likely sentence given a real-life video.
Semantic Alignment
Lastly, to further encourage style and content separation, we extend the framework of {{cite:70bad19423ef12aa1b27698817b621b5cce13bf4}} to create training pairs to compensate for limited data in one domain.
We create four pairs {{formula:4ead394b-fac5-413b-bd88-1423217c0342}} .
{{formula:ef5032ae-805a-4fff-bfba-a509ba946bab}} and {{formula:797686cd-01d7-414d-8e0e-ec99b76cb615}} are outputs of {{formula:f3e26f8f-cb96-4805-84b6-873803a1ec26}} that share synthetic content: (Synthetic Style, Synthetic Content) and (Real Style, Synthetic Content).
{{formula:4ac6b7be-2fff-4aa6-af16-b35815230b23}} and {{formula:cedad7f7-483e-4920-bd7d-c1a11bd5bed9}} share real content: (Synthetic Style, Real Content) and (Real Style, Real Content).
A multi-class discriminator is trained to correctly identify which group every output of {{formula:b3b0536f-544e-4155-8adf-59f746842862}} belongs to.
The Content Encoder, Style Encoder, and Feature Aggregation module are trained adversarially such that the multi-class discriminator cannot distinguish outputs of the feature aggregation module that are in {{formula:d4ec333e-0db2-4b2c-a98a-125df175575a}} and {{formula:7bc5c709-bdee-4233-9d69-1073c853cf82}} and outputs of {{formula:8c2ac8a5-3cfd-41f3-81aa-15a5f40c561a}} that are in {{formula:d35d6b82-a45d-4683-9808-bd70cdaa586d}} and {{formula:e57196f3-b479-407f-a415-f023bf30f261}} .
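The semantic alignment objective can be sketched as a four-way discriminator with flipped targets; the linear discriminator and the exact flipping scheme (swapping the style bit within each content group) are illustrative assumptions on our part.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Pair labels: 0=(syn style, syn content), 1=(real style, syn content),
#              2=(syn style, real content), 3=(real style, real content).
D = 128
pair_disc = nn.Linear(D, 4)   # stand-in for the multi-class discriminator

def disc_step(z, pair_label):
    # Discriminator: identify which of the four groups each output is from.
    return F.cross_entropy(pair_disc(z.detach()), pair_label)

def align_step(z, pair_label):
    # Encoders/aggregator: target the other style within the same content
    # group (0<->1 share synthetic content, 2<->3 share real content), so
    # the discriminator cannot tell the two styles of one content apart.
    return F.cross_entropy(pair_disc(z), pair_label ^ 1)

z = torch.randn(8, D)
labels = torch.randint(0, 4, (8,))
l_disc, l_align = disc_step(z, labels), align_step(z, labels)
```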
The high-level overview is given in Algorithm REF. The revised loss function used to train our model is given in the Appendix.
Content Encoder, Style Encoder, Feature Aggregation Module, Style Discriminator, Content Discriminator, Decoder, Multi-Class Discriminator, Synthetic Dataset, Real-life Dataset.
Not Converged
sample mini-batch of b synthetic samples, {{{formula:710f478b-ac9a-462d-ad36-ea66f22bc4d0}} , {{formula:852bbc96-a3f8-4615-b939-d0f6d10c67df}} , {{formula:c4d2c59e-1d1a-488b-9eee-3fe66275db4b}} } from the synthetic dataset.
sample mini-batch of b real-life samples, {{{formula:719794cb-1337-4fc7-8270-fd9bff7c14f1}} , {{formula:f8405fae-ff41-41d1-a4da-e2fe73180a8f}} , {{formula:7b00355e-a3e9-4dc8-8efd-5834d87b682e}} } from the real-life dataset.
Style Disentanglement: Remove Style information from the Content Space
update the Style Discriminator and Content Encoder
Content Disentanglement: Remove Content information from the Style Space
update Content Decoder
update Style Encoder
Sequence Prediction
update Content Encoder, Style Encoder, Decoder, and Feature Aggregation Module
Semantic Alignment
update Multi Class Discriminator
update Feature Aggregation Module, Content Encoder, and Style Encoder
Learning Algorithm for Disentangling Style and Content.
Experiments
Next, we describe the datasets we used, the motivation and interpretation of our chosen evaluation metrics, and our experimental results. To support reproducible research, additional details regarding our data collection methodology and network architectures are given in the Appendix.
{{figure:95c121c2-3332-4602-8956-6e3262323f0a}}
Datasets
Figure REF shows different statistics for the synthetic and real datasets.
We set aside 10% of the training set as a validation set.
The real-life dataset was collected by recording participants typing sentences into a mobile phone.
Three participants were asked to type conversational text messages into their mobile devices while we recorded them in both indoor and outdoor settings, with the screen brightness varying according to the environment.
We asked the participants to type only with their right thumb.
We used a mobile camera and captured from a distance of 3 meters.
The synthetic data was generated using a simulator {{cite:2f57c5eff603a1d805195ffcb6b2eb07dcddd5c0}} we built.
The simulator outputs aligned videos of a synthetic thumb typing a given set of sentences.
We generated sentences from the “A Million News Headlines” dataset (https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/SYBGZL).
We added a START and STOP token to the beginning and end of a sentence, respectively.
In total, there are 30 tokens that the decoder can predict: 26 letters and 4 special tokens (START, STOP, SPACE, PAD).
In both settings, we used a QWERTY keyboard layout.
Evaluation Metrics
Because there is no single, agreed-upon metric for keystroke inference attacks, we use a variety of metrics to quantify the amount of information the attacker is able to recover from the user.
First, we postprocess the outputs of our model with a language model, similar to that done elsewhere {{cite:695719ad5e8acb5ca9d683288a9f6a11506e7cf8}}, {{cite:29dea2b37ff9239451c667994490921662d175dc}}, {{cite:c7570319e014d78d84529f510c1961a7f4d9b3af}}, {{cite:f7389d612d63ea7f39282d6cf11e1bd006495084}}.
Appropriate metrics for this scenario are Bleu-n {{cite:def8e9961713321677d04514ec7ac8944eb0ef9e}}, ROUGE {{cite:ab9009031e663b3284a02f0d08fd1c86623fa38a}}, and METEOR {{cite:4605fb3ee97176cfc2ba881f5a4c61f18b2d8255}}.
Bleu-n measures n-gram precision, i.e., the fraction of n-grams in the predicted sentence that also appear in the ground truth sentence.
ROUGE measures n-gram recall, i.e., the fraction of n-grams in the ground truth that also appear in the predicted sentence.
METEOR scores the harmonic mean of unigram precision and recall and was developed to address some of the drawbacks of ROUGE and Bleu-n.
METEOR scores range from 0 to 1.
Scores above 0.5 reflect understandable translations and scores above 0.7 reflect fluent ones {{cite:9641f2ab2b9947c892ab182088e41dac50fa0d79}}.
While these scores have merit in the context of keystroke inference attacks, they are not without shortcomings.
These scores are especially harsh for predictions that contain slight typographical errors (e.g., “hello” vs. “hellp”), and there is no guarantee that the previously mentioned postprocessing steps will address every error.
Also, there are settings in which the applicability of these metrics does not make sense — e.g., recovering alphanumeric passwords.
Thus, we also need evaluation metrics for the raw outputs of our model.
Two appropriate metrics are Translation Edit Rate (TER) {{cite:402ee24c96dfbe0e10df42d1cf809626454c5aa8}} and a QWERTY-keyboard-based edit distance.
Both metrics measure the number of edits required for a hypothesis sentence to be translated to the ground truth.
The latter is a form of the Damerau–Levenshtein (DL) distance {{cite:53e5f28dc24cee62dbe7fd440e142f995e9bf473}} that penalizes the edit operations (i.e., insertions, deletions, substitutions, and character swaps) conditioned on the QWERTY keyboard layout.
For example, if "hello" was the ground truth word, "hellp" should be less penalized than "hellv" as the former is a more likely output than the latter given the assumed keyboard layout.
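A QWERTY-conditioned DL distance along these lines can be sketched as follows. The exact penalty scheme is not specified in the text, so we assume substitutions between physically adjacent keys cost 0.5 while all other edits cost 1.

```python
# Keyboard layout rows; POS maps each letter to its (row, column) position.
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
POS = {c: (r, i) for r, row in enumerate(ROWS) for i, c in enumerate(row)}

def sub_cost(a, b):
    """Substitution cost: 0.5 for adjacent keys (assumed), else 1."""
    if a == b:
        return 0.0
    (ra, ca), (rb, cb) = POS[a], POS[b]
    return 0.5 if abs(ra - rb) <= 1 and abs(ca - cb) <= 1 else 1.0

def qwerty_dl(s, t):
    """Damerau-Levenshtein distance with QWERTY-weighted substitutions."""
    m, n = len(s), len(t)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = float(i)
    for j in range(n + 1):
        d[0][j] = float(j)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1,                       # deletion
                          d[i][j - 1] + 1,                       # insertion
                          d[i - 1][j - 1] + sub_cost(s[i - 1], t[j - 1]))
            if i > 1 and j > 1 and s[i - 1] == t[j - 2] and s[i - 2] == t[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)      # swap
    return d[m][n]
```

Under this scheme, `qwerty_dl("hello", "hellp")` is 0.5 while `qwerty_dl("hello", "hellv")` is 1.0, matching the intuition in the example above.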
Network Architectures
For all experiments, the Encoders ({{formula:bc046465-2c03-455a-8afb-0330ddfcb00f}} , {{formula:0ba1a574-b5de-4cee-ac73-fe573c1d6fc2}} ) and Decoders ({{formula:78384659-f9ac-4fce-9726-066f1bf87a5c}} , {{formula:cb9040dd-d556-4582-ba36-b68c4a207964}} ) are both Transformers with 4 layers, 4 attention heads, an embedding size of 128 and a hidden size of 256.
{{formula:1341eb50-6aff-40b4-be35-c3f5978adfa0}} and {{formula:74d97017-2786-40af-9ab9-d0a112e46186}} are both 1-layer fully connected layers.
Since the output of the Encoder is a sequence of {{formula:28639965-e458-4b96-b108-ee5b4ddab6a8}} continuous representations, where {{formula:091d20d5-fae5-44f4-aad9-346601b7c405}} is the input sequence length, we do a max pooling operation along the temporal dimension so that we have a fixed vector representation.
These fixed vector representations are the direct inputs to {{formula:b691b64c-3010-4c80-9066-d3c80420992a}} and {{formula:34963846-7dc9-4e2c-bd6e-ac44604c45e2}} .
The max sequence length is set at 300, and the max phrase length is set at 70.
If an input sequence has more than 300 frames, we randomly sample 300 frames at each epoch.
If a video in the testing or validation set has more than 300 frames, we fix the indices of the sampled frames to remove any randomness for evaluation.
For input sequences that are shorter than 300 frames, we zero-pad the remaining sequence.
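The sampling and padding rules above can be sketched as follows; the evenly spaced evaluation indices are one deterministic choice on our part, since the text only says the sampled indices are fixed at evaluation time.

```python
import numpy as np

MAX_FRAMES = 300  # max sequence length from the paper

def prepare_frames(feats, training):
    """feats: (T, D) per-frame features. Sample or zero-pad to MAX_FRAMES.

    Long videos: random subsample per epoch during training, fixed
    (evenly spaced, by assumption) indices at evaluation time.
    Short videos: zero-pad the remaining positions."""
    t, d = feats.shape
    if t > MAX_FRAMES:
        if training:
            idx = np.sort(np.random.choice(t, MAX_FRAMES, replace=False))
        else:
            idx = np.linspace(0, t - 1, MAX_FRAMES).astype(int)
        return feats[idx]
    out = np.zeros((MAX_FRAMES, d), dtype=feats.dtype)
    out[:t] = feats
    return out
```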
Table REF shows the results for a model trained and tested on synthetic data.
The model performs very well on the synthetic test set across all proposed evaluation metrics.
To lessen the compute cost of processing over 45k raw videos, we extract a fixed 128-dimensional feature representation as a preprocessing step by training a CNN for single key press classification.
We use the simulator to generate single key press images and train a CNN to predict the correct key.
{{table:51a09ee4-19c1-490c-87d2-932872d78cf3}}
Baselines
To evaluate our approach, we compare against several alternative ideas: finetuning, ADDA, {{cite:7d9d1414fd217f72f393050ff76d473172fc9f4b}}, CyCADA {{cite:e7cf5e857d5387248a7c2e584dad92ab43c2ea15}}, and Vid2Vid {{cite:ba8ef026f9987dfcb7bf50e0138de2cd2e44b0d5}}.
All methods are evaluated on the real-life test set and use the model trained on synthetic data.
Finetuning. We finetune a model trained only on synthetic with the real-life training set.
ADDA. We use ADDA to generate Encoder output feature representations that are domain invariant to a Discriminator, but are also discriminative for the Decoder.
CyCADA.
We learn a pixel-wise transformation that transforms data from one domain to another.
We apply this transformation to map every real frame to a synthetic one.
Then, we finetune the synthetic model with the transformed real training set and test on the transformed real test set.
Finetuning is needed because {{cite:e7cf5e857d5387248a7c2e584dad92ab43c2ea15}} do not address the temporal shift as the transformations are conducted on a per-frame basis.
Vid2Vid
We leverage a video translation framework that learns to map videos from one domain to another (e.g., labels to RGB images).
Our aim is to translate the real-life videos in our training set into synthetic versions, finetune the model trained only on synthetic data, translate the real-life videos in our testing set into real-life versions, and test on the transformed real-life test set.
Findings
Although we successfully applied ADDA to the task of single key press classification {{cite:2f57c5eff603a1d805195ffcb6b2eb07dcddd5c0}} when labeled data is scarce,
we found that simply applying ADDA to our sequence prediction task leads to severe overfitting due to the limited real-life data, indicating that this task is more challenging than single keypress classification.
While CyCADA and ADDA are common approaches for visual domain adaptation, we did not find them suitable for our sequence prediction problem.
This is because these approaches are tailored to domains in which the domain shift is limited to textures.
Recall that in our case, we are facing both a kinematic and texture domain shift.
This is especially true for CyCADA and other pixel-wise transformation methods.
We carried out numerous experiments to tune our baselines and maximize their performance, but despite an extensive search of hyperparameters, the models still overfit.
We report the best results in Table REF .
Another important fact is that Vid2Vid is trained with pairs that are composed of a video in one domain (RGB) paired with the same video in another domain (Semantic Labels). This provides both global and local supervision.
By global supervision, we mean that the video for the two domains are of the same event.
By local supervision, we mean that there is supervision on a per-frame basis.
For example, every RGB frame corresponds to a semantic label frame.
While such supervision exists for the datasets ({{cite:47be85587aed1d2ffdc1bb4dc482dd657471da72}}, {{cite:84379f38216bfd30a6804f0fdcef331f4327d7c3}}, {{cite:10a9c2e495347827678642199d7975acb699e0e7}}) used by Wang et al. {{cite:ba8ef026f9987dfcb7bf50e0138de2cd2e44b0d5}}, we are unable to simulate such supervision.
We can generate synthetic versions of any real-life video (global supervision), but we are unable to simulate the synthetic thumb trajectories such that the temporal dynamics are the same (local supervision).
Despite this, we still attempt to generate realistic videos with only global supervision.
Our aim is to transform every real-life video into a synthetic one.
For Vid2Vid, we used the official implementation provided by the authors (https://github.com/NVIDIA/vid2vid).
First, we generate a synthetic video for each of the real-life videos in our training and test sets.
Next, we clip the lengths of the videos so that the number of frames is equal for a given (synthetic, real) video pair.
Then, we train using this set of real and synthetic pairs using the default hyperparameters used in {{cite:ba8ef026f9987dfcb7bf50e0138de2cd2e44b0d5}}.
Once the generator is trained, we transform each real-life training video to a synthetic version.
We finetune the pretrained synthetic model using the transformed real-life training set.
Finally, we test on transformed real-life videos. Even so, this method failed to yield any results, as the generator was unable to generate plausible-looking synthetic videos, underscoring the need for a new approach like ours.
Adapting to Real-Life Videos
{{table:bb1074b7-5f5c-4d31-b1cc-53d1387e1305}}{{figure:82680bb9-af5d-494f-a828-3f2fb3d3162d}}Our method, unlike the above baselines, does not overfit to the real-life training set.
Our results show that training with our pairing mechanism with disentangled representations across domains is an effective form of data augmentation.
We outperform the baselines in both raw output evaluations and post-processed evaluations as shown in Table REF .
We found that our training was not sensitive to the hyperparameters and weightings of the loss terms (in Equation REF in the Appendix), and use the same hyperparameters for all experiments.
While a direct comparison to the state of the art in direct line of sight attacks is difficult due to the differences in datasets, it is worth noting how our model performs relative to others.
Raguram et al. {{cite:695719ad5e8acb5ca9d683288a9f6a11506e7cf8}} achieve a METEOR score of 0.89 whereas Xu et al. {{cite:29dea2b37ff9239451c667994490921662d175dc}} achieve a score of 0.71, albeit with recordings taken from much farther distances.
To measure an attacker's ability to recover passwords, Raguram et al. {{cite:695719ad5e8acb5ca9d683288a9f6a11506e7cf8}} report precision and recall for individual word units and characters.
They achieve word-level precision and recall of 75% and 78%, respectively, and character-level scores of 94% and 98%.
We achieve a word-level precision and recall of 78% and 79%, respectively, and a precision and recall of 96% and 95%, respectively, for characters.
{{cite:695719ad5e8acb5ca9d683288a9f6a11506e7cf8}} does not report METEOR scores for this scenario.
In their experiments, Raguram et al. {{cite:695719ad5e8acb5ca9d683288a9f6a11506e7cf8}} use three different cameras in their experiments: Canon VIXIA HG21 Camcorder, Kodak PlayTouch
and Sanyo VPC-CG20.
Xu et al. {{cite:29dea2b37ff9239451c667994490921662d175dc}}, on the other hand, use a Canon 60D DSLR with 400mm lens and a Canon VIXIA HG21 Camcorder.
{{figure:8f9a5a34-83f6-4c55-876a-334fbb1d019c}}
Feature Visualization
For pedagogical purposes, the t-SNE {{cite:7cacdeba65a40ad2869885cf0b5a1b3254931534}} plots in Figure REF show the feature representations of {{formula:c4d30910-57d5-4abf-a705-8ffc686faa01}} , {{formula:c995d3c5-4bc1-422c-a8f6-46731d6d01b3}} , and {{formula:7d53b7ea-c0de-4f78-8455-057c61c8582a}} on synthetic and real test data. Notice that sentences with different styles show a noticeable separation, whereas the content representations are intertwined.
The last figure on the right shows outputs of our feature aggregation module, {{formula:16a58772-3d54-424c-81fb-80ec83a4c4af}} , and shows the transfer of styles in the feature space.
Notice the clear separation between styles, while the datapoints within one style cluster are mixed.
To obtain inputs suitable for t-SNE, we perform a max pooling operation along the temporal dimension of the outputs of the networks.
Ablation
Lastly, we conducted a series of ablation studies to explore the effectiveness of our proposed framework.
We introduce six different models:
I is our base method without the use of any adversarial losses, just the pairing mechanism.
II uses style disentanglement, i.e., I {{formula:7faa53d7-f08b-4423-b5b2-db6c1c15eba3}} style disentanglement.
III uses content disentanglement, i.e., I {{formula:84ec7844-6fba-4dc3-a9fc-19fc00146b8e}} content disentanglement.
IV uses both style and content disentanglement.
V is the base method with the modified semantic alignment loss.
VI uses style and content disentanglement, along with the semantic alignment. This is our proposed method trained with Algorithm REF , i.e., IV {{formula:34e99fb2-c866-4c9e-b912-504dd7aafd4a}} semantic alignment.
First, we find that our base model (I) achieves competitive results without any losses on the latent spaces.
This indicates that training on paired representations across domains is an effective method for data augmentation.
Second, we find that adding auxiliary losses on the latent spaces to enforce style and content disentanglement improves performance.
The performance for models II and III shows the base model is benefiting from the added loss terms.
The results for Model IV align with our hypothesis that explicitly disentangling style and content allows us to overcome the lack of training data in the target domain by training with all combinations of the factors of variation.
Finally, we trained model V to apply the semantic alignment step on our paired outputs without any additional adversarial losses.
This is quite competitive with IV, but we find the greatest performance boost when training model VI using both semantic alignment and disentanglement.
A closer look into the distribution of the scores in Figure REF shows that the distribution of scores for Model VI (Full) indicates higher overall performance compared to Model I (Base).
Our results show that explicitly disentangling style and content by adding adversarial losses on the latent spaces supplements the pairing mechanism to achieve the highest performance under the evaluation metrics.
Implications
Taken as a whole, our results show an adversary seeking to deploy keystroke inference attacks can leverage deep learning methods, despite the difficulties in curating training data.
While we are unable to directly show whether such an adversary would outperform those in prior attempts, we show that we can train a deep learning model using the same amount of real-life data used in previous studies.
Thus, the settings for vision-based keystroke inference attacks should be revisited, as the realism and threat capacity of these attacks are most likely greater than initially thought. In particular, the lack of training data was a significant impediment to many earlier proposals; a synthetic-to-real domain adaptation framework like ours, which augments the limited real data, would likely lend itself to a more capable adversary.
Furthermore, research over the past few years has shown that deep learning methods have outperformed shallow methods in most computer vision tasks. Arguably, if we hold all the parameters of a threat scenario constant (distance, camera model, phone model), an attacker using a deep learning method should outperform one with a shallow method. Similarly, our style and content disentanglement techniques can be used to help an attacker thwart certain defenses.
For example, {{cite:c7570319e014d78d84529f510c1961a7f4d9b3af}} proposed the use of random device perturbations as a way to mitigate an attacker's ability to map backside perturbations to keystrokes. While this is an effective defense, an attacker can undermine it by training a model to disentangle the fake perturbations from the real backside perturbations.
Conclusion
Our work provides the important initial step needed to formulate defenses for keystroke inference attacks in the age of deep learning.
Specifically, we provide the first assessment of an attacker's ability to recover sentence level information using deep learning systems.
We address the problem of limited training data by introducing a framework for low resource video domain adaptation that disentangles the style and content across both domains and creates representations from all pairs of style and content combinations.
Our results indicate that training with these pairs, along with auxiliary losses to explicitly disentangle style and content, serves as an effective form of data augmentation that prevents overfitting.
We evaluate our method using a number of metrics to quantify the amount of information the attacker is able to recover, and our results show that an attacker armed with a deep learning system is able to recover enough information to pose a significant threat to unsuspecting victims.
Our framework can also be used to assess other keystroke inference attacks such as those that focus on device perturbations {{cite:c7570319e014d78d84529f510c1961a7f4d9b3af}} or eye gaze {{cite:f7389d612d63ea7f39282d6cf11e1bd006495084}}.
Availability
The code and data used in this paper can be found at https://github.com/jlim13/keystroke-inference-attack-deep-learning.
Additional Training Details
Synthetic Single Key Press Classifier
We train a CNN, {{formula:4a3a9285-9743-4c72-add4-421913da1a94}} , for the task of single key press classification in order to learn a {{formula:e48a9a7d-970c-4455-8591-1733354b5764}} dimensional ({{formula:ceb056d0-2141-431c-9338-7cddd99657b0}} ) feature extractor.
Once this network is fully trained for the task of single key press classification, we can extract the features of each video on a per-frame basis. We use the simulator by {{cite:2f57c5eff603a1d805195ffcb6b2eb07dcddd5c0}} to generate 70,000 single key press images.
These images contain the synthetic thumb over one of 27 keys on the QWERTY keyboard (26 letters {{formula:1b877737-ff32-4b4d-aaa0-22611f9dd0ce}} the space bar).
Once generated, these images are preprocessed in a similar fashion to the synthetic video dataset.
We resize the images to 200 × 100 pixels and crop the phone such that only the keyboard is showing.
We use 50,000 images for training and 10,000 images for testing and validation, respectively. We use a CNN where each layer consists of a Convolution Layer, ReLU activation, and MaxPool operation.
We use 3 layers and 2 fully connected layers.
The network achieves 95% accuracy on a held-out test set without much hyperparameter or architecture search, as this is a fairly simple 27-way classification task.
We use this final model to preprocess every frame in our synthetic video dataset.
Every video is now a sequence of these {{formula:66f957a7-02fc-4849-a3d1-f8d5b656adbd}} dimensional feature representations.
We use the Adam optimizer with a learning rate of 0.0001.
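The classifier described above can be sketched as follows; the channel widths are our assumptions, since the text specifies only three Conv-ReLU-MaxPool blocks, two fully connected layers, 200 × 100 inputs, a 27-way output, and a 128-dimensional penultimate feature.

```python
import torch
import torch.nn as nn

class KeyPressCNN(nn.Module):
    """3 conv blocks (Conv-ReLU-MaxPool) + 2 FC layers, 27-way classifier.
    Channel widths (32/64/128) are assumptions; the 128-d penultimate
    activation serves as the per-frame feature representation."""
    def __init__(self):
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                                 nn.ReLU(), nn.MaxPool2d(2))
        self.features = nn.Sequential(block(3, 32), block(32, 64), block(64, 128))
        # 200x100 input halves three times -> 25x12 spatial map
        self.fc1 = nn.Linear(128 * 25 * 12, 128)   # 128-d frame feature
        self.fc2 = nn.Linear(128, 27)              # 26 letters + space bar

    def forward(self, x):
        h = self.features(x).flatten(1)
        feat = torch.relu(self.fc1(h))
        return self.fc2(feat), feat

net = KeyPressCNN()
logits, feat = net(torch.randn(2, 3, 200, 100))
```

After training, only `feat` is kept: each video frame is replaced by its 128-dimensional feature vector as described above.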
Real-Life Single Key Press Classifier
When extracting the visual features for the real-life videos, we cannot use a feature extractor that was trained only on synthetic data.
There is a distribution shift between the synthetic and real-life data, so the features we extract would not be informative.
Instead of using {{formula:6ace5284-29b9-44ba-8a85-b219687e8317}} that was trained for single keypress classification on just synthetic data, we train {{formula:15372339-8ff0-4b79-81af-592c9e1d0ba7}} with a combination of synthetic and real-life data.
Specifically, we adopt the ADDA {{cite:7d9d1414fd217f72f393050ff76d473172fc9f4b}} framework for unsupervised domain adaptation to train {{formula:fedc0fd1-055d-4be7-8e45-ed700e06188b}} .
We treat the individual frames for all of the videos in our real-life training set as unlabeled data.
Even though we do not have labels for individual keypresses for real-life data, we can leverage the fact that we have abundant labels for synthetic data by adopting the unsupervised domain adaptation technique ADDA.
We use the CNN for single key press classification on synthetic data as our pretrained network.
The Discriminator is a single 128-dimensional fully connected layer followed by a sigmoid.
We follow the same guidelines to train ADDA as the original paper {{cite:7d9d1414fd217f72f393050ff76d473172fc9f4b}}, and refer the reader to this work for the full description of their training process.
We use the Adam optimizer and a learning rate of 0.0001 for both the Discriminator and CNN.
Loss functions
Style Disentanglement
{{formula:c3ba04b1-f4da-4e6d-8d1f-390ea736b841}} is trained using Equation REF .
{{formula:4e2be48f-bf28-40fc-b5fa-1c507d960010}} is trained using the same equation, but the labels are flipped and {{formula:a3d3d65c-11ab-4742-8e4d-735db7f55a3d}} is not updated.
{{formula:d36435e9-d887-48ad-9dd4-258a9308a400}}
Content Disentanglement
{{formula:d883b429-cdb1-469d-85fb-d13bed5bf896}} is trained by minimizing Equation REF .
{{formula:c0901c3c-dc8f-4a46-85f3-5a271f513792}} is trained by maximizing Equation REF with the weights of {{formula:27aea783-fe0f-4c82-bc2c-f066ed18a8a0}} kept frozen.
{{formula:39d69d5d-23f6-46a8-98fd-024f21e31608}}
{{formula:ae545b26-017b-40b6-a3fe-2a51e5743e9d}}
Feature Aggregation
A Feature Aggregation Module, {{formula:e2a6128d-70d3-422e-85ba-2fd2f33a54e1}} , combines the disentangled representations from the previous two steps.
For any given pair of style and content representations we have:
{{formula:133c2d6a-0b52-4cf3-b598-596457a53df7}}
In Equation REF , {{formula:b5e2d078-5fca-4f21-aeca-8082e27b907f}} is the LayerNorm operation {{cite:01b71bc413c59d13bfec779157837e8f09bbc4a8}}, {{formula:381854d5-6de4-4504-bbed-93ed0729ecc8}} and {{formula:236f3185-9840-4b4c-851d-28513593d322}} .
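The paper's exact fusion rule is given in Equation REF and is not reproduced here; the sketch below therefore assumes a plausible aggregation (concatenating the style and content vectors, applying a linear map, then LayerNorm) purely for illustration:

```python
import numpy as np

def layer_norm(x: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """LayerNorm over a 1-D vector (no learnable gain/bias for brevity)."""
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def feature_aggregation(style, content, W, b):
    """Hypothetical FAM: concatenation + linear map + LayerNorm.
    The paper's actual Equation REF may combine the inputs differently."""
    fused = W @ np.concatenate([style, content]) + b
    return layer_norm(fused)

rng = np.random.default_rng(0)
d = 128  # embedding size used in the paper
W = rng.normal(size=(d, 2 * d))
b = np.zeros(d)
out = feature_aggregation(rng.normal(size=d), rng.normal(size=d), W, b)
```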
Prediction
The Decoder, {{formula:b018dad2-8a97-4d60-a600-688562c649d8}} , takes in the output of {{formula:d58c0efb-12a4-4d0d-ab7d-b58befe5520d}} , {{formula:91ae04e5-4074-4f22-bf7c-738598e4ca7b}} , and outputs the predicted sentence {{formula:96f56cfb-bc4e-4186-b6ae-183b32f6a608}} , and is trained with the cross-entropy loss using the labels {{formula:bc9be72e-0990-4db3-93f6-4e1c8fe6cae9}} . The objective is:
{{formula:cb02836f-4b58-4107-89d0-822e3a0873d0}}
At test time, the model outputs the most likely sentence given a real-life video:
{{formula:8b53b6cb-3b2a-4adc-bae2-b2691e638097}}
Semantic Alignment
We create four pairs {{formula:4df8e7e5-93f9-4718-a647-67f4d2e676ed}} .
{{formula:abacbed5-c08d-40e2-bf5e-67653a5f7681}} and {{formula:6af5648d-9b8d-4110-99ca-208fa2dbb93f}} are outputs of {{formula:080cf450-2349-449e-9d07-0c8bfbf9e063}} that share synthetic content: (Synthetic Style, Synthetic Content) and (Real Style, Synthetic Content).
{{formula:1398ecd0-c085-4775-9701-29a979c1b419}} and {{formula:7f0db0d0-7b87-4d83-8738-2b24396d767b}} share real content: (Synthetic Style, Real Content) and (Real Style, Real Content).
A multi-class discriminator, {{formula:74065743-523b-4037-a022-55d45b5b5c07}} , is trained using Equation REF to correctly identify which group every output of {{formula:a214cc1e-18f2-4696-94bb-5dc8db3de487}} belongs to.
{{formula:91c29138-a299-44ba-a353-7d851201b159}} is the corresponding label for a given {{formula:4df0f7d8-74eb-4965-a074-b12da2f281aa}} .
{{formula:118c7099-0969-453b-9829-d2eb21b08add}} , {{formula:69bc9775-22db-4914-9cc5-39490b6f0a33}} , and {{formula:c7f9fa74-7d2d-4c6b-8bc3-20e8aba13909}} are updated with Equation REF such that {{formula:38d08840-77f7-4964-a27c-72e95d4c93e8}} can't distinguish outputs of {{formula:598cd63c-df72-4ce1-a5b3-703a9699a988}} that are in {{formula:1b92c07f-325a-48c3-a9ae-fba619226c1f}} and {{formula:fcca76ca-bb8c-44bc-b957-1869e5694d11}} and outputs of {{formula:7d753b38-d3ca-4a78-a964-6373ca08863f}} that are in {{formula:2e5193ca-86d3-4b53-9dd8-d566dd800599}} and {{formula:bff69d91-716e-4be4-b5cc-ffa72290555a}} .
{{formula:67f46e9c-b93d-4218-bac2-97d8f5cdfcf5}}
{{formula:7499a3b5-8f31-4b9d-aeec-226e06db0300}}
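The multi-class discriminator above is trained with a cross-entropy over the four (style, content) groups; a minimal numerically stable sketch of that per-sample loss (our own illustration, not the paper's implementation):

```python
import numpy as np

def softmax_cross_entropy(logits: np.ndarray, label: int) -> float:
    """Numerically stable cross-entropy for one sample over the four groups."""
    z = logits - logits.max()              # shift for stability
    log_probs = z - np.log(np.exp(z).sum())
    return float(-log_probs[label])

# With uninformative (uniform) logits the loss equals log(4), the point at
# which the discriminator cannot distinguish the four groups.
loss = softmax_cross_entropy(np.zeros(4), label=2)
```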
The final loss function to train our model is shown in Equation REF , where the weightings for each term are tuned using a validation set.
An overview of the training procedure is shown in Algorithm REF .
{{formula:bb400108-a60d-4db2-8321-9082db0524bf}}
Hyperparameters
We use the Adam optimizer with a learning rate of 0.0001 for all of our networks.
We train for 60k iterations with a batch size of 8, and use a dropout value of 0.15.
We use the validation set to tune the different weightings for our loss function.
We set {{formula:edaa5478-1905-456b-96f4-5760e35be2ce}} and {{formula:39e71ced-4b06-4fc3-afc2-c0a6f122495b}} .
| i | e0c66943c32de265482c10685689a858 |
The RF-based models (RF and the RF kernel) performed as well as or slightly better than the XGB-based methods (XGB and the XGB kernel) in our experiments. However, there is no free lunch in statistical learning, and consequently no universally optimal kernel {{cite:2672f560b8b306f40e7e3ad3cf7437f74f407e59}}, {{cite:823d6d3163ae009984cfae4b6d62ad3b0b60cc89}}, {{cite:21320ae66389286f3d955f1603dce46c096a6829}}. The success of a particular kernel algorithm depends on how well it adapts to the data geometry {{cite:b72f9293137a8f8ef3df88fd32e3373adc3dffb7}}, i.e. how well it captures the inherent kernel function of a given problem {{cite:10773ba723843fc9d77446f7980e2ad9c8ef2e43}}. RF/XGB, and accordingly the RF/XGB kernels, should be competitive in situations where the data generating mechanism is conducive to recursive partitioning {{cite:bb7f9c9f0627fae13cb99b8a587c1fc8c9f8bc0c}}, e.g. in the presence of feature interactions, as frequently found in biomedical applications {{cite:0de9248cd27403877e19b302deeda8c5e2d149dd}}. Other recent examples where the RF kernel has shown promise are studies of image classification in hyperspectral imaging {{cite:f0b3b2d29b2e765db4ddf49f92de6b6d1fc5fd0e}} and face alignment from imaging data {{cite:ea18dd0cc859e9833a4ba6d81cb600f5286e4011}}. Moreover, in a large benchmarking study of general-purpose classification algorithms {{cite:21320ae66389286f3d955f1603dce46c096a6829}}, RF was found superior to other competitors. Interestingly, kernel methods that used the Gaussian kernel also performed well and were only slightly inferior to RF. Similarly, in a very recent benchmarking study {{cite:cc288a0dfe00cd1bd0e458b394d17d6764ac965b}}, XGB and RF have been found competitive in classification.
These results suggest that, across a broader spectrum of real-life problems, RF and XGB classifiers adapt well to the underlying data structure {{cite:b72f9293137a8f8ef3df88fd32e3373adc3dffb7}} and in many cases perform better than classifiers based on conventional kernels such as the Gaussian kernel.
It would be of interest to conduct more research into how the results from {{cite:21320ae66389286f3d955f1603dce46c096a6829}} and {{cite:cc288a0dfe00cd1bd0e458b394d17d6764ac965b}} extend from classification to regression and potentially survival and what implications they have for the tree ensemble based kernels.
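The RF kernel discussed above is commonly defined as the fraction of trees in which two samples fall into the same leaf (in scikit-learn, the leaf-index matrix would come from `RandomForestClassifier.apply`); a self-contained sketch starting from such a matrix:

```python
import numpy as np

def rf_kernel(leaf_idx: np.ndarray) -> np.ndarray:
    """leaf_idx has shape (n_samples, n_trees); entry [i, t] is the leaf
    that sample i reaches in tree t.  The kernel value K[i, j] is the
    fraction of trees in which i and j land in the same leaf."""
    n, T = leaf_idx.shape
    K = np.zeros((n, n))
    for t in range(T):
        same = leaf_idx[:, t, None] == leaf_idx[None, :, t]
        K += same
    return K / T

# Toy "forest" with 2 trees and 3 samples.
leaves = np.array([[0, 1],
                   [0, 2],
                   [3, 2]])
K = rf_kernel(leaves)  # K[0, 1] = 0.5: samples 0 and 1 co-land in tree 0 only
```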
| d | 7f278d083c6cfe67b9bf34bbd0eca229 |
Second, we analyze the user-product interaction journey using supervised and unsupervised methods, identifying five distinctive clusters that represent specific purchasing behaviors. Next, we use the TPOT AutoML package {{cite:716cf175dd4eb52d899daf2fd4fbb257899d224c}} to fit the best classifiers for predicting a purchasing journey. Our analysis shows that for journey-level predictions, product-level features such as variations in product cost, brand, etc. represented in the electronics data set are significant for purchase predictions ({{formula:2f6683aa-0813-4ad8-b0ad-1efe3e9f0107}} /{{formula:a53abbe3-45bb-427f-a5d9-8bb8ee050b7c}} of 99/98%). We also observe that purchase prediction can vary significantly across clusters. Thus, purchase predictability per customer cluster plays a key role in designing effective strategic marketing campaigns.
| d | 5ad1cbb9b64ebf300799b483e6a622b7 |
Machine learning, especially generative modeling, has made impressive advances in domains ranging from computer vision {{cite:ca0a6206d17e453d341fa4f3832c88825ee70578}}, {{cite:29f819a7e76e1fe3ff2b19161303db24797ecca2}}, {{cite:4718fda26344fa40335f6cec2f3dbf064cd07d10}} to natural language processing {{cite:86537743e64f6fc37568027e2a7846a6a9019093}}, {{cite:749ea2e0f76d9c6cfab51f3197dfd9e6c6e48595}}, {{cite:7fa5f9db1fd7b20e3101f1f137091309aaaf5a7f}}. In recent years, such models have also become more important for scientific endeavours in physics and chemistry, for example, for molecule generation {{cite:7a97d509400f1d72a5c3e9cdefa70931aeffe57d}}, {{cite:ef8f2ddea1f0b77b489af3795e4d00fb2c56bcbb}} or conformer generation {{cite:c13125312117bd0bcf47e1f67c34288175931e7b}}, {{cite:de1057f35893731f02d487bba6263d1ace16078d}}, {{cite:15809c10b48b8ebeb2f92079555d6b9bdf14c078}}. One of the most interesting model classes for the physics domain is the normalizing flow {{cite:0240998fb505f7b73f026f6c65f922a7aed41207}}, {{cite:eff03076cb796ac4297a02be98fbb2045e044064}}, {{cite:00f5d87f004b5440cae7f6769c7f0961559a38b6}}, {{cite:1829c925b51bbbde89719c5e9a79a3833f381c0c}}, {{cite:832ebb117a174fdfe114e8fc39be27b59643f2c3}}, which starts with a prior that is simple to evaluate and sample from, and then transforms this prior into the desired distribution using a coordinate transformation parameterized by a neural network. The big advantage of normalizing flows is that they combine the expressiveness of neural networks with the ability to sample and to evaluate the likelihood of samples exactly and efficiently. This was used, for example, in {{cite:6172bc42acab38dcc8734792f8dc3243897c9ebc}} to generate equilibrium states of many-body systems, scaling to systems as large as proteins.
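The change-of-variables bookkeeping that gives normalizing flows exact and efficient likelihoods can be illustrated with a single element-wise affine transform (a toy sketch, far simpler than the cited architectures):

```python
import numpy as np

def affine_flow_forward(z, log_s, b):
    """One affine flow step x = z * exp(log_s) + b with the exact
    log|det J| = sum(log_s) of the transformation."""
    x = z * np.exp(log_s) + b
    return x, np.sum(log_s)

def log_prob(x, log_s, b):
    """Exact log-density of x under the pushforward of a standard normal."""
    z = (x - b) * np.exp(-log_s)                       # inverse transform
    log_pz = -0.5 * np.sum(z**2) - 0.5 * len(z) * np.log(2 * np.pi)
    return log_pz - np.sum(log_s)                      # change of variables

log_s, b = np.array([0.3, -0.1]), np.array([1.0, 2.0])
x, ldj = affine_flow_forward(np.array([0.5, -0.5]), log_s, b)
lp = log_prob(x, log_s, b)
```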
| i | 3e120accd7b4dc744b420e0912389f9e |
In 1941, Turán {{cite:fe64b5728d20e4afa5a65bf0c903e57dbd92d1a1}} posed the natural question of determining
{{formula:cc1a9475-f0d9-40c2-8be2-3f6e8cbfafff}} for {{formula:d251c8e9-9df5-490b-bff7-0688da863066}} .
Let {{formula:3024e3e9-346b-48f3-84d3-24b158e13287}} denote the complete {{formula:95571907-a6c4-4c9e-b3b5-f483bf575d9e}} -partite graph on {{formula:ef5ea5cf-bd41-4a7d-9c87-2d29e4e213fa}} vertices where
its part sizes are as equal as possible.
Turán {{cite:fe64b5728d20e4afa5a65bf0c903e57dbd92d1a1}}
(also see {{cite:21f4b1fc84876fe2795ad3daf8cb9460baa47bb3}}) extended a result of Mantel {{cite:bce5314893be028567c07028fa017e95300b8c74}}
and obtained that if {{formula:20c26a3d-ab65-4c37-8f71-f91eec68a60e}} is an {{formula:59960a1c-4c16-4172-9ef8-53bd8824f0cf}} -vertex graph containing no {{formula:61a11c14-7883-4648-93ec-ccd393c06c2a}} ,
then {{formula:be929d9d-37b2-4a63-9023-7add6b904ecc}} , with equality if and only if {{formula:20564335-c487-470f-90b6-8bc22ed2feaf}} .
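Turán's bound can be checked numerically by counting the edges of the balanced complete r-partite graph (a small sketch; the function names are ours):

```python
def turan_edges(n: int, r: int) -> int:
    """Number of edges of the Turán graph T(n, r): the complete r-partite
    graph on n vertices with part sizes as equal as possible."""
    sizes = [n // r + (1 if i < n % r else 0) for i in range(r)]
    # Every pair of vertices is joined except pairs inside the same part.
    return (n * n - sum(s * s for s in sizes)) // 2

# Mantel's theorem (r = 2): a triangle-free graph on 6 vertices has at most
# floor(6^2 / 4) = 9 edges, attained by T(6, 2) = K_{3,3}.
m = turan_edges(6, 2)   # 9
t = turan_edges(7, 3)   # parts 3, 2, 2: (49 - 9 - 4 - 4) / 2 = 16
```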
There are many extensions and generalizations of Turán's result.
The problem of determining {{formula:2173cbe8-b6ff-4a80-b4ea-e107d90f5e46}} is usually called the
Turán-type extremal problem.
The most celebrated extension is arguably the result of
Erdős, Stone and Simonovits {{cite:9ffa87c8e3fea91f1ee2c6c830f052687b2c7e20}}, {{cite:4f0ff03417d4e71253db8fe7a0ca80240ac9fec7}}, which states that
{{formula:dce5fae1-6c32-4891-9c68-0b5f429c50fc}}
| i | 7885dede7928c39709a707a05d3771c3 |
The ML community has devised several techniques to overcome the limitations of expensive pointillistic labeling, such as intelligently choosing the most informative training samples to label {{cite:c6c0072cb128196f702f8538bd2edca92323f4cd}}, combining both labeled and unlabeled data {{cite:565ab96cd31947c1d49e4473b10799578e34dbc8}} and harnessing the power of crowds {{cite:ecea677ebe10fcd7b817a68851cf4b5a3ab71c51}}, {{cite:c42b32a2eeab1f293ee271ad64cb6b80a3b288e5}}. While semi-supervised learning has been successfully applied to improve arrhythmia detection models without patient-specific data {{cite:2840b28824a88b8a3ff98da92d638adbdbf415fc}}, these methods still rely on a significant proportion of labeled training data to start with. On the other hand, crowdsourcing has shown promise in generating ground truth for e.g. medical imaging, but prior research {{cite:ecea677ebe10fcd7b817a68851cf4b5a3ab71c51}} found several limitations such as the lack of trustworthiness, the inability of non-expert workers to annotate fine-grained categories and ethical concerns around patient privacy. Active learning, however, has by far been the most commonly utilized technique in settings where annotating large quantities of data en masse is prohibitively expensive {{cite:bf91e5607e159d3a252861ae6dbb3398503d4c9d}}.
| d | fd8d22c77591983211e60898aacd3725 |
{{formula:675098d0-ba6f-4dec-8589-ac5b67880a04}} Affordance Grounding Results.
We compare our method with the state-of-the-art methods on the OPRA {{cite:7ba7cb0582e3379b8ac7437b19baf78875d25cd8}} and EPIC {{cite:b126be848c038f1faabf88bc213f5231e8e44dda}} datasets; the results are summarized in the left part of Table REF . Our method surpasses all other weakly supervised methods in all metrics and is close to the supervised Demo2vec {{cite:7ba7cb0582e3379b8ac7437b19baf78875d25cd8}} and Img2heatmap {{cite:2f944d00070cda43b4e2ee84d5ee6d80ce813cd9}}. This demonstrates that, by utilizing the affordance cues provided by hand position and action, our method can achieve promising results. We also visualize the heatmaps generated by different methods in Fig. REF . Our method generates heatmaps that are closer to the ground truth than those of the Hotspot and saliency detection models, and there is no large response on object parts that are unrelated to actions. The results on some objects are even better than those of the supervised Img2heatmap {{cite:2f944d00070cda43b4e2ee84d5ee6d80ce813cd9}} and Demo2vec {{cite:7ba7cb0582e3379b8ac7437b19baf78875d25cd8}} methods. This shows that our method transfers the affordance cues from the hand to the static object, which makes the network pay more attention to the regions related to the affordance while suppressing the regions unrelated to the action.
| r | fa19107961fb7ee1e0e72d505d2c81e2 |
In this subsection, we present the consensus-based AC method. This method is particularly useful in the setting of the MMDP with large state and action spaces that we introduced in Section REF . The method exploits the idea that the team-average advantage function {{formula:352b4df2-caa2-426f-ad84-c8a31906c8f0}} can be estimated by every agent, even though it cannot be directly sampled. We establish an estimator of the team-average objective function by taking inspiration from Algorithm 2 in {{cite:ae81a96b8db81eb7d58387aa1ae243483b658cdf}}, which employs approximations of the team-average reward function and critic. We make the following assumption about these approximations.
| m | d8585bd1a663cdbd39d51e66000354b9 |
While realistic traffic flows can be simulated via a variety of simulators {{cite:b6b32557451cf924e1c7e44562668074f6e3110c}}, {{cite:0f50f0f7eebe3b55f159a34b51b7f91e3844b1ce}}, {{cite:54c9224e76afa7bdc1e42ffa6f8a3737a7c5569e}}, {{cite:300214eeb5bdc5e527dcd2475124a014b1c853fa}}, editing or imposing specific space-time constraints on vehicles is difficult. Meanwhile, the capability of interactively editing vehicle trajectories is needed when traffic scenes must be simulated with specific vehicles controlled or showing a pre-defined driving behavior. As a result, traffic simulation editing currently has to rely on labor-intensive manual tuning of simulation parameters and exhaustive trial-and-error runs of simulators. Some attempts have been made to address this limitation, e.g. by allowing users to manually generate desired trajectories or rare traffic events observed less frequently in previous methods or datasets {{cite:630d1dbcf2293c7d84443109219167d900a07a5e}}. However, they do not address the spatio-temporal nature of the editing constraints, e.g. requiring a certain vehicle to arrive at a certain position at a pre-defined moment. Recently, traffic reconstruction methods {{cite:05fc519f5dd94d1c24cd99dede16a09ac320b576}}, {{cite:ea43ec80f0419b7f70fa33c59dc482cf114a55f2}} have provided a potential solution via optimization with respect to the space-time constraints, but they deteriorate trajectory quality, producing discontinuous and implausible trajectories, and incur large computational costs, which renders them unscalable.
| i | 5017ee7ee628f1e95eb5973a61b69062 |
In Table REF , we comprehensively compare the performance of MaskSpec in various downstream tasks with other self-supervised and supervised methods.
Compared with another self-supervised method (SSAST {{cite:5e4f616af01904b554db6ce172224b6b85ff7a05}}), our proposed method shows stronger generalization on all downstream tasks, except that it performs slightly worse than SSAST {{cite:5e4f616af01904b554db6ce172224b6b85ff7a05}} on SCV2. This is because SSAST uses the extra LibriSpeech corpus, a purely speech-based dataset, for pre-training.
The proposed method performs worse than AST {{cite:42186e37d55df9fa9679b024ede9bf672bf9459b}} on AudioSet-20K, which uses extra image data for pre-training.
Moreover, by fine-tuning on AudioSet before applying the model to the downstream tasks, better performance can be obtained on all of them.
Compared with other supervised methods {{cite:4dd211c059039656b506d4c0a2b061c82c3ff4df}}, {{cite:1b2ad446c7a9129cfe57439477ff7b3a8c2c3c1d}}, {{cite:42186e37d55df9fa9679b024ede9bf672bf9459b}}, {{cite:1667ad4cfabf6a586d13c78998f7d514f0c33b7a}}, we find that MaskSpec beats them on the downstream tasks without using extra data, indicating that the proposed MaskSpec brings better robustness and generalization.
Among the results achieved by the different-scaled models, we observe an interesting phenomenon: PaSST-Small achieves excellent results in all tasks, sometimes even better than PaSST.
Thanks to such a self-supervised learning method, a relatively small model can also perform well.
| r | 7ade9941ae148fa31978ec7e88faf86a |
Following the proof of Dirac {{cite:86e076b96453a3ce4df9bf44aed78a83407bb275}}, a hamiltonian cycle can be constructed in polynomial time in {{formula:b931a800-8370-4049-836e-ac7b6a77c825}}
if {{formula:a4016059-5931-4e4f-a4e9-0947a6074d06}} . In fact, there is a polynomial time algorithm that constructs the closure of a graph {{formula:00f6feed-07dd-478e-97c3-4cee42bf072c}} and finds a hamiltonian cycle of {{formula:223746a9-621c-448d-a7a4-8cbc609dfa06}}
if its closure is a complete graph (see {{cite:1f1d934d5eb041832f728b697f8588b1f7b6fd91}}).
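The closure construction mentioned above, repeatedly joining nonadjacent vertices whose degree sum is at least n (in the spirit of the Bondy-Chvátal closure), can be sketched as:

```python
def closure(n, edges):
    """Closure of a graph on vertices 0..n-1: repeatedly add an edge between
    nonadjacent u, v with deg(u) + deg(v) >= n until no such pair remains."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    changed = True
    while changed:
        changed = False
        for u in range(n):
            for v in range(u + 1, n):
                if v not in adj[u] and len(adj[u]) + len(adj[v]) >= n:
                    adj[u].add(v); adj[v].add(u)
                    changed = True
    return adj

# The closure of the 4-cycle C4 is complete, certifying a hamiltonian cycle.
adj4 = closure(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
complete = all(len(adj4[v]) == 3 for v in range(4))
# C5 is unchanged: every nonadjacent pair has degree sum 2 + 2 < 5.
adj5 = closure(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])
```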
| r | f2cf1f042a9ecec31dcb0f5902ce3ad8 |
Stability and passivity are two critical concepts used extensively in various control system design tasks {{cite:3dc0e1c9e541672b6677474abfe4c6e28a254f28}}, {{cite:9fb20be8be1727a88276843aa425a81f1893d1d4}}, {{cite:36c3e4be300334ae7226bf99b592119c5faa42e2}}. In particular, quantitative measures of stability and passivity such as the {{formula:b60c417b-241c-4573-bce5-5b189c9d84e1}} -gain (L2G), input-feedforward passivity index (IFP) and output feedback passivity index (OFP) provide convenient avenues for control systems design {{cite:af3ca48d619de91270ae0649021866745ca0d899}}, {{cite:2f78d95d5d63af5ba805cba1d5793e4e2aadb628}}. This is because such quantitative measures can adequately characterize a system in lieu of an accurate theoretical model {{cite:8ae740d48c56e5c91a7e8321e97ea8822105ba86}}. Moreover, with the increasing complexity of systems, the problem of identifying an accurate system model becomes extremely challenging {{cite:5816ae37945a31b5358fdb081bc8cc1103c17599}}.
Thus, for such instances, designing control solutions based on estimated quantitative measures like L2G, IFP and OFP (henceforth, collectively referred to as the “system indices”) is more suitable than designing control solutions based on estimated system models {{cite:0b584927863c212612532778742383d1dcef2ac2}}.
Along the same lines, having an accurate and on-line estimate of such system indices paves the way to devising intelligent controller reconfiguration and fault-tolerant control techniques.
Therefore, this paper focuses on the problem of estimating system indices on-line using input-output data of the system.
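One simple data-driven route to such an index is a lower-bound estimate of the L2-gain as the largest observed output-to-input energy ratio over recorded experiments; the sketch below is a hypothetical illustration of this idea, not the estimator proposed in this paper:

```python
import numpy as np

def l2_gain_estimate(experiments):
    """Lower-bound estimate of the L2-gain from (input, output) trajectory
    pairs: the largest observed energy ratio ||y|| / ||u||."""
    best = 0.0
    for u, y in experiments:
        nu = np.linalg.norm(u)
        if nu > 0:
            best = max(best, np.linalg.norm(y) / nu)
    return best

# A static system y = 0.5 * u has true L2-gain 0.5; any input recovers it.
u1 = np.array([1.0, -2.0, 3.0])
u2 = np.array([0.5, 0.5, 0.0])
gamma = l2_gain_estimate([(u1, 0.5 * u1), (u2, 0.5 * u2)])
```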
| i | a2baceb532fa12bd6a1a58c8b8e18390 |
We now discuss the consequences of the presence of a large permanent dipole in CdSe nanoplatelets. Such an important permanent dipole moment will affect the colloidal interactions between NPL and strongly impact their colloidal stability in suspension. {{cite:94cdff2129793d587e04e516709c630fd8f5dc29}} With the particle dimensions and the values of the components of the dipole moment that we derived above, an order of magnitude of this dipolar interaction energy can be estimated for different relative orientations of two platelets (keeping in mind that the finite size of the dipoles may not be neglected in front of their separation). The largest attraction energy, of about -3 kT, is found for two stacked platelets at contact (i.e. at 4 nm separation) when the {{formula:8e8821ce-3908-4e78-8714-787c7f0447e2}} components are in line and the {{formula:4ccd97cb-f532-4c65-b5f7-a487c83fa640}} components are anti-parallel. However, at room temperature, thermal averaging of the relative orientations of the dipoles should also be considered, a process leading to the Keesom interactions for freely-rotating point-like dipoles. Thermal fluctuations will also induce deviations from the ideal stacked configuration, resulting in an increase of the average separation between platelets. This thermal averaging will sharply decrease the magnitude of the interaction energy since the potential strongly depends on the platelet separation. This reasoning may qualitatively explain the marginal colloidal stability of CdSe nanoplatelets in hexane but a more rigorous statistical physics treatment of this question is required to reach a more quantitative description.
| d | ab3b19f01b508e9b65d02b6affde4bda |
Our objective is to train an embodied agent to learn both action and perception by moving around in a physical environment. We assume the agent is given access to a perception (object detection and instance segmentation) model, {{formula:90453824-1ff2-4208-89ee-39dbd9e69d89}} , such as a MaskRCNN {{cite:c686675eb75ecb825016ce62e4d63156a20e835c}} pretrained on static Internet data. The agent needs to learn a policy to move in the environment and use the experience to learn embodied perception in a completely self-supervised manner without having access to the ground-truth semantic annotations or map information.
| m | 9a3d6071e0f3ad01d61d347ed6bda7a1 |
So as to avoid confusion with the vertices of {{formula:cf7b94a7-6dc1-4389-ae63-5269a6c2f806}} , we say that the elements of {{formula:2f978730-369a-444f-a01c-bb7a41dc7f01}} are the nodes of {{formula:543d477f-0571-4e11-b189-a81b03ab933f}} . For a node {{formula:6369f011-65d8-4a9e-bcfd-2d9c56317b76}} we say that the corresponding set {{formula:cfe8fb34-5904-4ff1-b4de-c832666e3678}} is the bag of {{formula:bfbbba7f-7f76-4623-afdf-302fe187f193}} . The width of the tree decomposition {{formula:ce9a1666-984b-480b-b42c-52689b5a125a}}
is {{formula:083ade2e-4805-4cf9-b2bb-0ee2efabe3e1}} . The treewidth of {{formula:fb76e3c0-9054-4cd4-93f1-5cf81384a9f9}}, which we defined in the introduction, can be equivalently defined as the smallest width of a tree decomposition of {{formula:b87b5a08-b0bf-4273-b64d-5e72814728df}} (see e.g. {{cite:0001846ce3933891ae89fa25e5425aa6d9da1d2c}}).
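The width definition can be made concrete in a few lines (the bag indexing is ours):

```python
def decomposition_width(bags):
    """Width of a tree decomposition: the largest bag size minus one."""
    return max(len(bag) for bag in bags.values()) - 1

# A path a-b-c-d admits the decomposition with bags {a,b}, {b,c}, {c,d},
# witnessing that paths (and trees in general) have treewidth 1.
bags = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"c", "d"}}
w = decomposition_width(bags)  # 1
```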
| r | 6c05ba6a5eefe3c988113da2224dbd1b |
Our method vs. patchmatch stereo. Our method shares high-level ideas with traditional patchmatch stereo works {{cite:b96c5bbd005a1efa816f3225b6afddadab1c6a17}}, {{cite:ec95b82d876faaf40c7f28b1aa8e59fef9944497}} which aim to estimate a slanted plane for each pixel on the stereo reconstruction problem. However, our method differs from them in several aspects. (i) They perform patch matching around a pixel within a squared support window, where the patch size needs to be carefully set, and is thus not flexible and adaptive across various real-world cases. Instead of explicitly defining a patch, we associate and match the multi-view deep features. This is based on the observation that a pixel's receptive field on the feature map is far beyond itself because of stacked CNNs. The model can automatically learn the appropriate field for matching local features with end-to-end training. (ii) These methods usually first initialize pixels with random slanted plane hypotheses, then undergo sophisticated, multi-stage schemes with iterative optimizations. In contrast, we generate more reliable slanted plane hypotheses based on a data-driven approach (i.e., analyzing the groundtruth plane distribution), and learn the pixel-wise plane parameters in an end-to-end manner, which is much easier to optimize. (iii) They usually adopt the photometric pixel dissimilarity as the matching cost function, which is sensitive to illumination changes and motion blurs across views. In contrast, we apply a feature-metric matching strategy, which is more robust to potential noises compared with applying photometric distance.
| d | d6fcb7afa6082c45cfc210bf5713a246 |
Please note that Formula (REF ) refers to the general case of uncorrelated degree sequences {{cite:ba6c928131a586727ed9b6488acdae88672041f8}}, {{cite:a4dad8ee2321925714c65c66b60c06bcb1555778}}. Networks generally exhibit degree-degree correlations, which can be positive in the case of social networks or negative, for instance, in networks related to technological and innovation research projects. In such a context, the average degree of a neighbour of node {{formula:6f4c45c9-e1d1-4c7f-8421-d3920b424a5b}} is computed as {{cite:6f9b4b9e28419630ef833a9b940db8c6d49a9b5c}}:
{{formula:da31c152-9544-4a18-9ee3-c30239859355}}
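In the uncorrelated case the average neighbour degree reduces to the ratio of the second to the first degree moment; the star-graph toy below illustrates how a correlated (here, disassortative) structure deviates from that value (a didactic sketch, not from the paper):

```python
def avg_neighbor_degree(adj, node):
    """Mean degree over the neighbours of `node`."""
    return sum(len(adj[u]) for u in adj[node]) / len(adj[node])

# Star on 4 vertices: centre 0 joined to leaves 1, 2, 3.
adj = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
degrees = [len(adj[v]) for v in adj]
uncorrelated = sum(d * d for d in degrees) / sum(degrees)  # <k^2>/<k> = 2.0
centre = avg_neighbor_degree(adj, 0)  # 1.0: high-degree node, low-degree neighbours
leaf = avg_neighbor_degree(adj, 1)    # 3.0: low-degree node, high-degree neighbour
```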
| m | f257361d7fbc92db3925711fdacb6258 |
Action Genome. We additionally report recall, mean recall, and harmonic recall values on the Action Genome dataset {{cite:ae5d8d5f7501c8827ae0917a5806dfffe650b4e6}} in Table REF . Similar to the observations on Visual Genome, we observe {{formula:c1d75687-c5df-4175-b297-aae26feb88ad}} higher R@20 and {{formula:d6d4e78c-b55a-4409-ac52-81590ec69192}} higher mR@20 (marker 8) when compared to the closest baseline in {{cite:c12f1415ded5d85894595830fbacc58b2a9c4de9}} (marker 7), despite using an inferior backbone {{cite:23d094b767df7f38b8f4bce2b608b4b1f797998d}}. Furthermore, by effectively biasing our model towards the tail classes, we are able to achieve {{formula:b1dca57e-30a7-48e2-9ac7-55fe635b8c92}} better mR@20 while having better R@20/50 values (marker 10) compared to (marker 7).
| m | 27da1440c8bccfbf985ed3fb50cb72d6 |
The directed power graph {{formula:29a583b3-7f7a-4ad5-808a-e301db181932}} of a group {{formula:74fc3f16-a41c-4c33-af6c-b6b422116f11}} is the digraph with vertex set {{formula:99e0165e-f540-415d-a5c5-f81204aa91b1}} , in which there is an arc from {{formula:821cfe6e-be41-426c-af45-606b244bfddf}} to {{formula:2a8542cf-f49e-4aeb-84e3-8f9e2ea4ea24}} if and only if {{formula:ab8839a2-3099-4f9f-b4c7-8069b9f03783}} and {{formula:8526e242-bb38-4ecf-b68c-127506e2e942}} is a power of {{formula:b8f119ef-8c2d-491f-9521-375fed3e3629}} ; or equivalently {{formula:888f9450-f91a-417b-b6ae-9aec74171c6d}} . The (undirected) power graph {{formula:259e913e-5036-4aad-b151-8ef6ecff7dd2}} of {{formula:7e8947ce-886e-4716-a013-1a8f4c03b0c3}} is the underlying graph of {{formula:b8068956-f164-45ac-bc48-6e17e549117a}} . Kelarev and Quinn {{cite:4a4f069040e30e217cc8013bf2b5bdc1c59f184e}}, {{cite:2f47e2e1889cd22b7ed48835c0bc2baed3f880b1}} introduced the directed power graph of a semigroup. Chakrabarty et al. {{cite:e59aacfa326bea19518c4bb1a811726a39907efc}} defined the power graph of a semigroup. The power graph has been studied extensively by many authors; see for instance {{cite:6e8ffacbd4064f35e20f0ef705a65a85c1a96dcf}}, {{cite:b3fc526490a4e579d93ad0f9c188406a72316fe9}}, {{cite:dd4d4fdace9388b7b506cbbe12f7f5c7ed0a3d88}}, {{cite:5f67526cb501540653899221a95efa0c4711cd52}}, {{cite:18bb36fd99d284c270fca4966c3fece1c67f85f9}}, {{cite:b7a6833c37c580a18b3698b6bf521a34a045c122}}, {{cite:bbed31c4ecc02a8e99b0b97526b3f0c59b441a55}} and a survey {{cite:33e000363a0b81eded5909d101c0e623b746a26a}}. The computation of the automorphism group of the power graph of a cyclic group was initiated by Alireza et al. {{cite:6e8ffacbd4064f35e20f0ef705a65a85c1a96dcf}} and settled by Mehranian et al. {{cite:bbed31c4ecc02a8e99b0b97526b3f0c59b441a55}}. The automorphism group of the power graph of dihedral group was also computed in {{cite:bbed31c4ecc02a8e99b0b97526b3f0c59b441a55}}. Min Feng et al. 
{{cite:9efa317ac9d4a5b9b43f0ba38bb48230acc657de}} described the full automorphism groups of {{formula:02414e5e-4d4e-403b-a861-8474a856670e}} and {{formula:611a3cc8-0d27-4f14-b36e-e1abe26b8c29}} for a finite group {{formula:5ecb3c6f-4a76-48c7-b316-e4b7b31d68d0}} . Using these descriptions, the automorphism groups of {{formula:cb935662-b6c7-4c2c-b521-9be33a9cc168}} and {{formula:b3c228d6-84cb-4c0f-94b4-23ad6ce27aa4}} have been computed when {{formula:0eb3d546-95a4-4ba4-a629-39396239d3de}} is a cyclic, elementary abelian, dihedral
or generalized quaternion group. Ali Reza Ashrafi et al. {{cite:701a914b6b3ef7fc0cd658ffa27df2480da692cf}} computed the
automorphism group of the power graph of several more classes of finite groups.
| i | 1ea0c843cbed6dddc6c948b2ab9f9eb4 |
Notice that, at the initial time {{formula:98bdb663-0cba-4ca3-82ac-3c58c7777ebd}} chosen in our problem for the causal scenario, the maximum spatial PMF scale {{formula:691a279e-2c7d-4f1b-ba14-f0b11970d3eb}} turns out to be too small even after its subsequent growth until the present time. It grows only up to the scale {{formula:9009bf85-1ddf-487d-b314-35b63dd87165}} , since during the cooling the horizon expands much faster, {{formula:398f4a72-09a4-43ff-b956-4ad0466e2ce9}} , than the correlation length {{formula:37fda7cd-8361-4f6a-9849-d1a22e72f538}} . However, the inverse cascade in relativistic MHD, accounting for non-linear terms in the Navier-Stokes equation (we do not touch here the full MHD approach), can rearrange the Fourier MHD spectra in such a way that {{formula:731dda0b-4af2-4318-b2f0-46921f29449c}} , as a measure of the coherence length of the magnetic field, can increase in coherence by five orders of magnitude {{cite:7c42d832f242c3702ce12c511ff91c36406279f4}}. This fact allows us to get closer to the present scale {{formula:42d8c650-f5cc-46cf-ae50-c8eee1c66f33}} for the PMF bounded in Ref. {{cite:3ea1d9b8775b5a7a0c75629fa93e128d265dc83b}}. Another problem is that, around the time of recombination, the
photon diffusion becomes very large, and the so-called Silk mechanism could destroy the PMF characteristics. This danger was refuted in Ref. {{cite:2956cc8a51c443929417222e520b13a0bc20cb20}}, where nonlinear effects were shown most likely to prevent this problem from happening.
| r | d0dfb3f535eb6e5c5a1432050788404b |
Throughout the paper we use the notation {{formula:cfde96aa-ce96-43d5-95e6-42cd29679b51}} from {{cite:c7486aa6c5846e7bcd7f7e36876de54d0a14b1ec}}, {{cite:7450a448921b5ec2a9f59bd2321e86ab650ea4b6}} to denote a tensor formed by contraction on some indexes of tensors {{formula:3eb4e4f2-a472-4a39-a056-8943cf1e6e52}} and {{formula:305cc49b-e772-4553-a520-dd18bec91b47}} using the coefficients of the metric tensor {{formula:b7c377d9-fccd-4397-86ea-46adcd167f87}} if {{formula:86af2544-2d98-4ccd-807c-c71a525e1208}} and {{formula:f72dd982-af20-45e7-a9d2-f23afd76ab5c}} are defined on the boundary {{formula:404c54ca-20a8-40cc-941e-5595c7a2c5c5}} . We also use the convention that {{formula:d5bde4a9-6b02-4369-a35f-8241155a9514}} denotes contraction of some indexes of tensors {{formula:18d49bd9-a102-4020-9e57-bfa07d9d90f1}} and {{formula:7bcd79e9-7193-4f6c-aa77-494f449fa5f4}} for any {{formula:6a575561-ef1c-4e37-b2de-3fb05da01d5e}} and {{formula:28b3e01b-bfc2-4851-84fc-1769ebcbe7ca}} . In other words, we include also the lower order covariant derivatives.
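The contraction notation can be illustrated for rank-1 tensors with numpy.einsum, using the inverse metric to pair the indices (a toy sketch; the paper's tensors are higher-rank and defined on the boundary):

```python
import numpy as np

def metric_contract(g_inv, A, B):
    """Contract the index of A with the index of B via the inverse metric:
    <A, B>_g = g^{ij} A_i B_j  (rank-1 case only, for illustration)."""
    return np.einsum("ij,i,j->", g_inv, A, B)

A = np.array([1.0, 2.0])
B = np.array([3.0, -1.0])
euclid = metric_contract(np.eye(2), A, B)            # identity metric: 1*3 + 2*(-1) = 1.0
scaled = metric_contract(np.diag([2.0, 0.5]), A, B)  # 2*1*3 + 0.5*2*(-1) = 5.0
```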
| r | 06724845fdcec827ec0250f8ce0e7a01 |
Although RRs are a rising research method in the medical domain, they are barely known to the SE community. To the best of our knowledge, our recent work on RRs {{cite:b2582603d2f3fca88adf71b78f2d96b52f7d5f50}} is the only one reporting experiences of applying RRs in SE. In that study, we observed that practitioners were very supportive of the use of RRs. The importance of exploring the perceptions of practitioners, as we have done recently {{cite:b2582603d2f3fca88adf71b78f2d96b52f7d5f50}}, is easy to understand, since practitioners are the target audience of RRs. But the perceptions of researchers should certainly not be neglected, since, according to Rogers, the perception of all individuals involved in an initiative is one of the main predictors of its adoption {{cite:e854db526dee1d0f8fd06283032cb3b751d46ee8}}.
| i | 294deb33875d43c2ffb12e52bdabbab2 |
Comparing model sizes, the proposed DTCN outperforms the larger convolutional Conv-TasNet model {{cite:65f34f3adac907ce7b24707eee13ef9752e58eef}} and the recurrent SkiM-KS8 model {{cite:e5749a9972aae12e13ba53aa5d44ed3903e38485}}. When DM is used in training, the DTCN outperforms the much larger convolutional SuDoRM-RF 1.0x++ model {{cite:4da87ab0ba630de8badff3de19b4071a4a5195f4}}. Using SW reduces the model size by two thirds, yet the network still gives performance comparable to the similarly sized SuDoRM-RF 0.5x model, and much improved performance when DM is also used.
| r | 317d544a12067489a4e4cbef0c5804fa |
In this section, we characterize the complexity imposed by the EQPO algorithm presented in Alg. REF and evaluate its heuristic accuracy versus the complexity invested. Note that since we had no quantum computer at our disposal, the simulations of the QSAs were carried out on a classical cluster. Explicitly, since the quantum oracle gate {{formula:db3e285f-691c-4c21-996d-3e0dc9301a0e}} {{cite:fc04f44c5937e85848db6a6e5879cbc653e416a5}} calculates in parallel the UF vectors of all the legitimate routes in the QD, these vectors were pre-calculated. We note that this results in an actual complexity higher than that of the full-search method; therefore, running the quantum algorithms on a quantum computer is essential for observing a complexity reduction as a benefit of the QP. Hence, in our simulations, we assume that a quantum computer is employed and count the total number of {{formula:80ba2ffb-70f9-48e5-8c1c-4f02a27daaf3}} activations to quantify the EQPO's complexity. This number would be the same for both classical and quantum implementations. Note that in the following analysis we use the notation {{formula:41511a1f-b48e-4d8e-b27b-ca438b8bfa98}} , where {{formula:bfa990a4-7631-4c50-8025-206e46f1a585}} corresponds to the cardinality of the set {{formula:207c099e-6aa5-4cb9-87a9-a1cd58ccfb7c}} .
| d | 7492126bff933594823bd9a5f90187d6 |
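As a generic illustration of why counting oracle-gate activations is a meaningful complexity measure, the following toy sketch (our own example of Grover search on a small database, not the paper's EQPO algorithm; the size `N` and marked index are arbitrary) simulates the state vector classically while counting every oracle application, which scales as O(sqrt(N)) rather than the N lookups of a full search.

```python
import numpy as np

# Toy Grover search: count oracle activations in a classical simulation.
N = 16        # database size (illustrative)
marked = 7    # index of the marked item (illustrative)

state = np.full(N, 1.0 / np.sqrt(N))  # uniform superposition |s>
oracle_calls = 0

num_iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))  # ~ O(sqrt(N))
for _ in range(num_iterations):
    # Oracle: flip the sign of the marked amplitude (one activation).
    state[marked] *= -1.0
    oracle_calls += 1
    # Diffusion operator 2|s><s| - I: inversion about the mean amplitude.
    state = 2.0 * state.mean() - state

success_probability = state[marked] ** 2
print(oracle_calls, round(success_probability, 3))
```

With N = 16 the simulation uses only 3 oracle activations yet measures the marked item with probability above 0.9, which is the kind of count-based comparison the passage describes.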
The observation of a large number of unequal overcontact binaries challenged the argument of {{cite:ad3f22c7ba3604ec9b4cbc752be929e16cfe4ed7}}, who stated that overcontact stars cannot be stable. However, {{cite:5fc71898b69c85be1417f20e02e87f2d927680e5}} showed that this argument does not apply to the convective envelopes of W UMa binaries, and constructed an approximate model of a low-mass overcontact binary.
{{cite:bc290da6ecba20a652126a9be2a0c66aee867672}} expanded on these models and argued that there must be significant energy transport in the innermost overcontact layers between the two components.
However, the theory of overcontact models of massive stars with radiative envelopes is not yet well developed.
| i | 34ffba1bd7bba1c78dbe34e7670c48f2 |
We adopt the DA scheme used in the DailyDialog dataset {{cite:a881ee555af277b52e5873ac407cb4858027b475}}, though it should be noted that a wide range of fine-grained DAs have been proposed {{cite:69beeb1eeeb1d6e59d9cf678ed372539b20f8568}}, {{cite:2df45b6ec6c9422941f534ec8efedeba08ce4341}}. Besides semantic meaning, human conversations also convey pragmatic meaning and implications that require commonsense reasoning {{cite:716756c1fcf015387b1f635d8d42d9e5908e11c3}}, {{cite:fd9e171204651b11dbf0bdd535b32bdbf213f75c}}. These messages go beyond the token level and even beyond the semantic level. The promising results in our experiments may encourage future research to explore such higher-level explainability, e.g., explicit understanding and planning aided by pragmatic and commonsense reasoning.
| d | 2a63a1b42cc78a569b69c7b734380e3a |
In order to verify the performance of the proposed method, we conduct experiments at three dose levels: 2.5%, 5%, and 10%. We compare against several state-of-the-art LDCT restoration methods, namely BM3D {{cite:9a5ed4ca94cb2dfbff0c4bee4b929e667aced06a}}, DnCNN {{cite:2e5d7eac73c201b6e3f415fb6dc6fab0fbf72946}}, MAP-NN {{cite:48b03774d3ebd4aae5c0f22ba673e950144ba62b}}, and RED-CNN {{cite:2358676514adbadb1f32e0347d3804a66025152c}}. All the models except RED-CNN are implemented according to the original papers and employ the original loss functions; for RED-CNN, we employ the same loss as MANAS. The quantitative results on the whole testing set are listed in Table REF . The networks searched using our method only achieve middle positions in terms of both PSNR and SSIM at all dose levels. However, according to current studies {{cite:0e211c98e27b03421b0c9f15e10846f1741ed9e7}}, {{cite:ff6afb9e88a7f023419771408e4e7f9696b41449}}, {{cite:e27df2e987efd3a2850b29b595228267fef8607c}}, PSNR and SSIM cannot always judge image quality well. Therefore, we add the perceptual loss, termed PL, as a metric of image detail preservation. MANAS has the lowest perceptual loss, showing that it preserves more image details. To further illustrate the performance of MANAS and the visual quality of detail restoration, three slices reconstructed by the different methods at the 10%, 5%, and 2.5% dose levels are given in Fig. REF , respectively. As the dose level decreases, the artifacts and noise become more serious and most details are obscured. All the methods suppress the artifacts and noise to a certain degree. In the 3rd row of Fig. REF , there are obvious streak artifacts near the femurs in the results of BM3D. In the results of DnCNN and MAP-NN, the details are blurred, especially in the 1st and 2nd rows of Fig. REF .
Some contrast-enhanced vessels in the liver, indicated by arrows, are smoothed out. RED-CNN comes closest to our performance, but it still suffers from perceptible over-smoothing, which leads to a loss of spatial contrast. Overall, the proposed model achieves the best visual results in both artifact reduction and detail preservation.
| m | 0949ee4ab47f9bbad2751b5f9e06886d |
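To make the metric discussion above concrete, here is a minimal sketch (our own illustration on synthetic images, not the paper's evaluation code) of PSNR and a simplified single-window SSIM; it also shows why a heavily over-smoothed image, which has lost all detail, can still score a finite PSNR, which is the motivation for adding a perceptual metric.

```python
import numpy as np

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def global_ssim(ref, img, data_range=1.0):
    """Simplified single-window SSIM (the standard metric averages many
    local windows; this global variant just illustrates the formula)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), img.mean()
    var_x, var_y = ref.var(), img.var()
    cov = np.mean((ref - mu_x) * (img - mu_y))
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2) /
            ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))

rng = np.random.default_rng(0)
clean = rng.random((64, 64))
noisy = np.clip(clean + 0.05 * rng.standard_normal((64, 64)), 0, 1)
smoothed = np.full_like(clean, clean.mean())  # over-smoothed: details gone

print(round(psnr(clean, noisy), 2), round(global_ssim(clean, noisy), 3))
```

A production evaluation would use windowed SSIM and a learned feature distance for the perceptual term; this sketch only fixes the arithmetic of the two pixel-level metrics.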
where {{formula:73d4c2fd-dcfb-468e-9424-ea7cbbcc34fa}} is a three dimensional vector field.
Since {{formula:d643e457-1b21-41f2-b42e-e6a526cf48d5}} , each Beltrami flow gives a special solution to the stationary Euler system. We refer the reader to {{cite:e0ce54f83de908d7d286dfb54e65d28c5ec0df3c}} for the basic properties of Beltrami flows and to {{cite:21fc940efd96b509b54a2b9895d80218d4ed30bf}}, {{cite:88577c203556c155a0bbe22d839dc44a69274d3e}}, {{cite:fc23da7611e161ced6b97e96bbc581498972e951}}, {{cite:4bffcd1407da2a208385c9ca4e8ff9edc9ec73d8}}, {{cite:053515b53345c728445228792f92c2cfbe79c6fe}} for some recent results. Here we mention that the Beltrami flows are also called force-free magnetic fields in magnetohydrodynamics since the term {{formula:9739dc94-b948-44a9-aff8-210f1cd98a70}} models the Lorentz force when {{formula:11610946-b30c-452a-9cfc-199483cd7b66}} represents the magnetic field, see {{cite:e2ce99d0b9fabd70279e880f571a0bd3ab71073e}}, {{cite:6ba0bebe51ed97fbec372dda954f2c45ca0c8fbd}}, {{cite:9658c90eeac04fbbfe514f3714057b64fc77c274}}.
| i | 4293115c10281a2bfc780746a70ec8e1 |
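A classical concrete instance of the Beltrami condition above is the Arnold-Beltrami-Childress (ABC) field, which satisfies curl(u) = u exactly. The sketch below (our own numerical sanity check, not taken from the cited works) verifies this with a central finite-difference curl.

```python
import numpy as np

# ABC field with illustrative coefficients.
A, B, C = 1.0, 0.7, 0.3

def u(p):
    x, y, z = p
    return np.array([A * np.sin(z) + C * np.cos(y),
                     B * np.sin(x) + A * np.cos(z),
                     C * np.sin(y) + B * np.cos(x)])

def curl(f, p, h=1e-5):
    """Central finite-difference curl of a vector field f at point p."""
    def d(i, j):  # partial derivative d f_i / d x_j
        e = np.zeros(3)
        e[j] = h
        return (f(p + e)[i] - f(p - e)[i]) / (2 * h)
    return np.array([d(2, 1) - d(1, 2),
                     d(0, 2) - d(2, 0),
                     d(1, 0) - d(0, 1)])

p = np.array([0.4, -1.1, 2.3])  # arbitrary test point
print(np.allclose(curl(u, p), u(p), atol=1e-8))
```

Since curl(u) = u, the nonlinear term u x curl(u) vanishes identically, which is exactly why each Beltrami flow solves the stationary Euler system.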
The conventional double averaging corresponds to the secular approximation at first order in the magnitude of the perturbation, indicating that the approximation works well for systems with high hierarchy (i.e., systems with weak perturbation). Physical hierarchy is measured by the separability among degrees of freedom of the dynamical system, or by the differences between the periods of the so-called “short-period” and “long-period” variables {{cite:756b43dbb390863fcc6ee397d756b9c1298f519b}}, so that high hierarchy requires a small semimajor axis ratio. However, if a planetary three-body system is not highly hierarchical (the perturbation is then so strong that the separation into “short-period” and “long-period” variables becomes blurred), the short-period effects within the orbital periods of the outer and/or inner planets need to be taken into account in the secular approximation {{cite:1d40090b8550dedd3754266d2939abbfa9b0f3a0}}, {{cite:85031611c143f2548947705bfc14bdb2b5c9877d}}, {{cite:e655c55f2ad1c41c388bda5862cc348ab2ca4fcd}}, {{cite:0fce7868992c4a095805898267659260c88b601b}}, {{cite:8e0386effe1b07c79af39b9d0549ab99727b1d02}}, {{cite:4b8a0f5ab865149dc87dc210c4511fd0111a27dd}}, or a high-order secular Hamiltonian needs to be formulated to predict long-term behaviours {{cite:756b43dbb390863fcc6ee397d756b9c1298f519b}}.
| i | 1f3dd7dc3df55247ec1880343a5348ba |
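As a rough numerical illustration of the hierarchy measure discussed above (our own toy numbers, not from the cited works): by Kepler's third law the ratio of outer to inner orbital periods grows as the 3/2 power of the semimajor axis ratio, so a small axis ratio directly implies well-separated "short-period" timescales.

```python
import numpy as np

def period(a_au, m_total_msun):
    """Kepler's third law in units with G = 4 pi^2 (AU, years, M_sun):
    P = sqrt(a^3 / M_total)."""
    return np.sqrt(a_au ** 3 / m_total_msun)

# Illustrative system: a solar-mass star with two Jupiter-mass planets.
m_star, m_in, m_out = 1.0, 1e-3, 1e-3
a_in, a_out = 1.0, 10.0  # semimajor axes in AU (axis ratio 0.1)

p_in = period(a_in, m_star + m_in)
p_out = period(a_out, m_star + m_in + m_out)
print(round(p_out / p_in, 1))  # period ratio ~ (a_out/a_in)**1.5
```

An axis ratio of 0.1 already gives a period ratio above 30, i.e. a clear separation between the short-period and long-period degrees of freedom that the double-averaging approximation relies on.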
Studying data corruption in ML dates back to the eighties {{cite:06459415f69a1441f7f30e14efcf5d3a6ac6fa2b}}. Remarkably, the first twist models assumed very strong corruption, possibly coming from an adversary with unbounded computational resources, but the data at hand was supposed to be binary. Hence, the feature space was as "complex" as the class space, and twist models lacked the unparalleled data complexity that we now face. Getting such twist models to scale with real-world data has been a major problem in ML over the past decade for a number of reasons, not all of which are borne out of bad intent. Robustness inevitably comes to mind {{cite:eeb51ca6c9ed548b21660e9139ef14ee50fa54c4}}, {{cite:b4ac2c84fa9c7c8009d28019813970f19d936de7}}, {{cite:9592fbec02b131345859d6bc0e7f79759412410b}}. Data augmentation techniques also come to mind, with Vicinal Risk Minimization standing as a pioneer {{cite:e655b9d075862ad6b14ae9dbfff727edf2fc501d}}, {{cite:62a64a6cd4aeae4b39204bb76abc68ebb799ab43}}. Data poisoning techniques can be much more sophisticated {{cite:c6b91d36e07f03e573155b9a8ac2e29f245bafa4}}. Privacy techniques like differential privacy can also alter data with the objective of obfuscating specific information {{cite:fbf289a05ac500561b3406e01d9d5b1e2b14cc4d}}, {{cite:58e50228b93b73a20ad6d7cf13b182a6eff1d8df}}. Invariant risk minimisation aims at finding data representations that yield good classifiers but are also invariant to "environment changes" {{cite:a0097e2442471e4f76b691a71ff6558cbc4e0c53}}. Quantization can reduce the coding size of data to lower the computational cost of ML {{cite:97f695abd70432267508df193c1f1fe3f1587b3b}}.
| d | e2263250e0750fd0b1cf394ede62be9d |
The first part of ass:intersample assumes that {{formula:0dc4670c-091f-4a0e-8589-dc94e478784c}} such that all training inputs {{formula:5e69f2e4-284b-4aaa-bc13-1a42b02782fa}} satisfy {{formula:c3aa6a63-e92e-46c9-9687-7f4193de94e7}} .
Note that this equality is standard in some kernel machine algorithms {{cite:cf4c182e2d12e655f66458367d23d5df7c09ba3d}}, {{cite:eba97bff165e84915144818194b0e92155f01ee3}}, {{cite:e5e1d1b344db21122b7a4aa77b14283e273f9cb5}} and is usually achieved by replacing {{formula:c72baba1-37be-4cb2-b8e3-cc8916490541}} with {{formula:9444a23a-6c4c-4dae-815e-1d86b7d53e1d}} .
In the NTK literature, this equality is achieved without changing the kernel by normalizing the samples of {{formula:63e2b626-1d72-4176-a018-eb7f95a63ef7}} such that they lie on the hypersphere; this input preprocessing was used in {{cite:162143977888b47d8410dafffd4e8b8f3e38912b}}.
This is theoretically grounded: for example, the NTK {{formula:7e3e0a76-ae48-4c5c-be1d-9de517fec955}} of an architecture with an initial fully connected layer depends only on {{formula:fc73eb07-fe38-454e-824e-40cfa583bfef}} {{cite:4a856691859efdc20220f5a81750c8b1602aa849}}. Thus, when all samples from {{formula:c46902d6-995b-4e95-a576-50d354e5ac8a}} are preprocessed to have the same norm, the value of {{formula:80d16a64-ed0e-4ecd-a055-7f1fb1b3fad7}} does not depend on {{formula:a5476fea-247f-47c6-8e2e-599f037cc896}} ; we denote by {{formula:c1a89730-d6c6-499a-bf91-e62b8b493030}} the corresponding value.
| d | 83d4338123fd8ecbc93aa8fa48dd62a6 |
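The hypersphere preprocessing described above amounts to rescaling each training input to a common Euclidean norm. A minimal sketch (random data for illustration, not the paper's pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 8))  # illustrative dataset of 100 inputs

# Rescale every row to the same norm so that x^T x is constant across
# the dataset, without changing the kernel itself.
target_norm = 1.0
X_sphere = target_norm * X / np.linalg.norm(X, axis=1, keepdims=True)

print(np.allclose(np.linalg.norm(X_sphere, axis=1), target_norm))
```

After this step every sample lies on the hypersphere of radius `target_norm`, so any kernel that depends on the input norms takes a single value for that dependence.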
The goal of this study was to investigate the potential of explainable deep learning algorithms to identify and differentiate KD from acute febrile diseases.
We therefore selected several well-known deep learning algorithms (VGG19, Xception, ResNet50, ResNext50, SE-ResNet50, and SE-ResNext50) to distinguish incomplete KD from other acute febrile diseases.
We selected pneumonia as a representative of other acute febrile diseases because it is the most common febrile disease in children. KD and pneumonia show similar fever patterns before the occurrence of respiratory symptoms in pneumonia.
Despite the small training dataset, the results of our study demonstrate that the deep learning algorithms show excellent performance in identifying KD.
Nevertheless, as the performance of a deep learning algorithm depends on the quantity of training data {{cite:9ae22d5285dd05eb5e5d41af39d44785863bf511}}, {{cite:8e7a17dad35ccf8ff61307fd4d82d4ea01f47573}}, the deep learning algorithm for KD diagnosis should be extended with more training data.
| d | 12e709fb5237e925bf8608ad0dd22c5b |
Quantum decoherence has been a subject of active research for over forty years, and various methods have been proposed to interpret its mechanisms quantitatively, for example, einselection {{cite:69396d78797065f797121e15b8d85520e1d144a3}} and the semigroup approach {{cite:23b00da666a4a089a20f1b89a7552216fb91212d}}. In this note we quantify the decoherence process by means of quantitative coherence resource measures. Our results are in agreement with the decoherence picture and with the quantum coherence resource theory.
| d | 6371fd7539d63405c8700c53157c57e4 |
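To make the idea of quantifying decoherence with a coherence resource measure concrete, here is a toy sketch (our own example, not from the note above): the l1-norm coherence of a single qubit, C(rho) = sum of |off-diagonal entries|, decays monotonically under a dephasing channel of strength p, which shrinks the off-diagonals by (1 - p).

```python
import numpy as np

def l1_coherence(rho):
    """l1-norm coherence: sum of absolute off-diagonal entries."""
    return np.abs(rho - np.diag(np.diag(rho))).sum()

def dephase(rho, p):
    """Dephasing channel: off-diagonals shrink by a factor (1 - p)."""
    off = rho - np.diag(np.diag(rho))
    return np.diag(np.diag(rho)) + (1 - p) * off

plus = np.array([1.0, 1.0]) / np.sqrt(2)  # |+> state
rho = np.outer(plus, plus.conj())         # maximally coherent qubit

coherences = [l1_coherence(dephase(rho, p)) for p in (0.0, 0.5, 1.0)]
print([round(c, 2) for c in coherences])
```

The measure starts at its maximal value for a qubit and reaches zero at full dephasing, so it tracks the decoherence process quantitatively, as the resource-theoretic viewpoint suggests.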
Therefore, a first grouping of style-oriented paraphrasing methods is
determined by the types of corpora which are available to develop a
style transfer system: a handful of studies relied on
parallel corpora {{cite:3c36b070d1a5beff102efc087d277a90cfceb2ba}}, {{cite:775d06e95233e21cd7ec7c8e13d0033e2ad091ad}} or performed data augmentation to create
them {{cite:6eb427e739625a2f088ef8b20740e1a272ecb953}}, while others aimed at
making use of mono-style datasets {{cite:a96d56178d62a4651bfeb73ac2cf10a94b5f8911}}, {{cite:54e2854af49f349c99e03d11d563b9577404e181}}, {{cite:9ad584b28b0a6a6bc20984da176464a872d5e690}}. As illustrated
in Figure REF , which is an adaptation of the taxonomy of
methods presented in {{cite:0b1743e564769aed081daa559c9defe4a67fe4ba}}, these groups are further
divided into sub-categories with respect to their training technique.
| m | 1e53aa453a95a757754e9080d21162e6 |
One of the most important experiments is the CNN backbone selection for SAR-APD, since the choice of feature extractor is a fundamental aspect of solving any object detection problem {{cite:f18aad29d3283a8f9422735c44fd9deee0c7b20b}}, {{cite:9dccd89d9f8e06fc66f88c358e61d418a6ace23a}}, {{cite:4283525c44d6cadf4c68539e55739da1c75a4e40}}. Therefore, we fine-tune eight different ImageNet-pretrained backbone architectures on HERIDAL; the results are shown in tab:backbone-selection. Here, we report the PRC, RCL and AP metrics computed using the VOC2012 evaluation scheme. As is evident, ResNet152 outperforms all the other candidates on the test set, and it is therefore selected as the CNN backbone for later experiments. Surprisingly, the simplest network, VGG16, yields results comparable to ResNet152, while the last four, most sophisticated, models in tab:backbone-selection (according to ImageNet classification score {{cite:4283525c44d6cadf4c68539e55739da1c75a4e40}}) perform relatively poorly on HERIDAL.
{{table:0282dbd7-d7a6-4af1-8350-7ea7e8d1c6f9}} | r | fdc46baff51d2ba3d879c5e8633f6fbc |
We use the NCEP/NCAR dataset {{cite:70015f02b86e343be112a1134b06d42b11168934}} of monthly mean records of near-surface air temperature, considering the world grid cells ({{formula:525f5716-4578-4a87-ae81-8f5ab2fa53a1}} latitude {{formula:f84e35f0-2ec8-4e2d-bd4f-5878719195f0}} {{formula:8056f3f5-82c4-4de1-9be4-8050fb579355}} longitude) from 1948 to 2016. The resulting data comprise 10512 time series with 816 values each. Next, we removed the seasonal component using an additive decomposition by moving averages {{cite:f5eac4fbfcb249ab10719662aeb8e388fad7f10c}}. We considered an additive model because the seasonal fluctuations are relatively constant over time. We remove this trivial effect in order to focus on finding similarities in other components, such as trends and longer cyclical features. Finally, we normalized all time series between 0 and 1.
| m | baf4a50a63debfaa44fd731e2e71e32b |
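The preprocessing pipeline above can be sketched as follows (a simplified illustration on a synthetic monthly series, not the authors' exact code; in particular the moving average here is a plain un-centred 12-month window): additive deseasonalisation by moving averages, then min-max normalisation to [0, 1].

```python
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(816)  # 68 years of monthly values, as in the dataset
series = (0.002 * months                          # slow trend
          + 2.0 * np.sin(2 * np.pi * months / 12)  # seasonal cycle
          + 0.1 * rng.standard_normal(816))        # noise

# 1) Trend estimate: simplified 12-month moving average.
kernel = np.ones(12) / 12
trend = np.convolve(series, kernel, mode="same")

# 2) Seasonal component: mean detrended value for each calendar month,
#    subtracted from the original series (additive model).
detrended = series - trend
seasonal = np.array([detrended[m::12].mean() for m in range(12)])
deseason = series - np.tile(seasonal, 816 // 12)

# 3) Min-max normalisation to [0, 1].
normed = (deseason - deseason.min()) / (deseason.max() - deseason.min())
print(round(normed.min(), 2), round(normed.max(), 2))
```

A production version would use a centred 2x12 moving average at step 1; the additive structure and the [0, 1] rescaling are the essential parts of the pipeline described in the text.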
Recently, the problems outlined in earlier works by {{cite:42533c3fd0e3567921bfb88a636fcebf606d9b06}} and {{cite:e98dc7261f7d4acc59daa61f195f090d83482dd5}} have been ascribed to a “bias” in the Civ{{formula:c71ab450-2a43-4b3b-8eb3-f239032a5353}} 1549 {{formula:f2d9a067-ef85-4d4d-bb21-2be1bc65a5f5}} estimates {{cite:53929d13d0e630dce285ace29a4d510fc73b6f30}}. The Civ{{formula:f1115207-41ad-46ca-b217-f09571131924}} 1549 {{formula:4fb42879-834d-4360-bd67-49cce98e3275}} bias is dependent on the location in the 4DE1 quasar MS: Fig. REF and REF clearly show the different behavior for Pop. A and B. By the same token, an {{formula:b0c3b86a-4799-43c8-8bcb-32a8d20ab41d}} – dependent correction is in principle a valid approach, as {{formula:9603bbf6-2d98-4492-979b-439b7b8fec13}} is probably one of the main drivers of the MS {{cite:21dbedb99ae36be4ba1c3ff72a11a178f189e4d6}}, {{cite:32b3b77d192559f4124cf7b6a7d973cef3495970}}, {{cite:61ac7a969a128ed8554feaeb5ea026b83e3a0a27}}. Unfortunately, several recent works still ignore 4DE1-related effects (or, in other words, MS trends). For instance, scaling laws derived from the pairing of the virial products for all sources with reverberation mapping data should be viewed with care {{cite:4d8cd891683f159732d0aa1981d7f89b0633ed11}}.
| d | 75b1b0e3c3160b1ee7cdf6541500878d |
The development of BERT-based models for entity retrieval faces a major challenge: there is limited training data for the entity retrieval task, and fine-tuning BERT in data-constrained regimes leads to instabilities in model performance {{cite:4619e7505e5c79ca20fea02b4de89772109a7611}}, {{cite:da3e8507e7c41d6a921ad456c4659ec39f38943f}}, {{cite:50f58acec917a85055c6c92c7aa20162779b941b}}. More specifically, the official entity retrieval dataset, DBpedia-Entity v2 {{cite:20380c90be77463a766db3e5ab8ae6ff13b07ded}}, contains 467 queries, which is a small dataset for fine-tuning a large neural language model. When fine-tuning BERT (especially its large variant) on a target task, the training data should be large enough that the model parameters get close to the ideal setting for the target task; otherwise the model forgets what it has already learned.
| i | 662c8faac7535f9b56aa256c95ef8152 |
Fully convolutional-early fusion (FC-EF) is considered as the supervised change detection method on the OSCD dataset.
In this method, the bi-temporal image pair is stacked together as the input.
The architecture of FC-EF is based on U-Net {{cite:7b634078d429d0c06fdf7bfa26e8ad367ebecfca}}, where the skip connections between encoder and decoder help to localize the spatial information more precisely and yield clear change boundaries.
FC-EF-Res is an extension of FC-EF with residual blocks that improve the accuracy of the change results.
In addition, it is worth noting that the first dataset (OSCD_S2S2) has previously been used extensively in other works.
Hence, we also compare our results with those of some conventional methods {{cite:8e09c7961060eb4de226cb1bb9faef57d843647e}} (Log-ratio, GLRT, and Image difference), an unsupervised deep learning method (ACGAN {{cite:82a79e47290d94097caf87fcd4dea71785960b2c}}), and supervised deep learning techniques (FC-Siam-conc and FC-Siam-diff {{cite:8e09c7961060eb4de226cb1bb9faef57d843647e}}) reported in previous papers.
| m | b6fd78f3c7f41b7ec0f08bf5cb046190 |
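The early-fusion input described above is simply the two acquisition dates concatenated along the channel axis, so the network sees a single stacked tensor. A minimal sketch (shapes are illustrative; 13 channels corresponds to, e.g., the Sentinel-2 bands):

```python
import numpy as np

bands, h, w = 13, 64, 64
rng = np.random.default_rng(0)
image_t1 = rng.random((bands, h, w))  # first acquisition (C, H, W)
image_t2 = rng.random((bands, h, w))  # second acquisition (C, H, W)

# Early fusion: stack the bi-temporal pair along the channel axis,
# yielding a (2C, H, W) input for the U-Net-style encoder.
early_fusion_input = np.concatenate([image_t1, image_t2], axis=0)
print(early_fusion_input.shape)
```

This contrasts with the Siamese variants (FC-Siam-conc/diff), which instead encode each date separately and fuse the feature maps inside the network.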
In contrast, the treatment of unequal binary configurations of extreme Kerr-Newman (KN) BHs {{cite:1a81fbf6effc5c5f27fcfd26f60a4e5d14fe495a}} has been a considerably more complicated problem that remained beyond reach, mainly because the axis conditions are not enough to properly define KN BHs; it is necessary to impose an extra condition in order to eliminate both magnetic charges, since otherwise Dirac strings attached to the KN BHs appear {{cite:5eb46cce113caef8779d105996c0c443480aa937}}, {{cite:4bc9e03ee4927a9e4b223bdb8e4872a614cd3167}}, {{cite:c39dfbf758f4ac0ccd02e2089b947c40cb94334e}}. The main purpose of this paper is to derive a five-parametric exact solution that completely describes binary co- and counter-rotating extreme KN BHs separated by a massless strut in a unified manner. To accomplish this goal, we build on the recent results of {{cite:d6ad421079c8565a2748a95de4495959af341a0a}}, where a complete derivation of the metric and thermodynamical properties of non-extreme KN BHs was achieved. Hence, the Ernst potentials and metric functions will be expressed in terms of physical Komar parameters {{cite:66ac686db43bcd561945483c30c53dbcfbe346ca}}: the masses {{formula:560c7fda-f774-44c9-b57e-8bf7ab660e06}} , the electric charges {{formula:a7cc4447-8119-42b7-bcf5-b9b949bf1756}} , and a coordinate distance {{formula:a2c1078d-0e32-448e-861f-7357c938d5b2}} . In this scheme, the five arbitrary parameters satisfy an algebraic equation that defines a dynamical law for interacting BHs with struts, which reduces to some previously studied cases {{cite:c251aaf1f2176ba4dd26b320290b19c6fc684c7e}}, {{cite:09cdbb32f0b6217fa57c1a9105668c69583a05cd}}. At the same time, the metric is concisely given in terms of Perjes' factor structure {{cite:6bfa0841bfb5a02e573c41e945a2ec65809ef787}}.
Since the physical limits of both rotating charged models are well identified, we then turn our attention exclusively to the corotating binary BH setup and derive quite simple formulas for the horizon area and the interaction force in the merger limit. In addition, a deformed metric for near-horizon extreme binary KN BHs is also given.
| i | 62d96bde53e5a46d60ffc9a1742109c4 |
Single-parameter PH. When {{formula:22c8af84-f3c7-4f84-a4b1-87cfbe31626d}} is totally ordered, e.g., when {{formula:17c4b877-c841-4339-8cb3-ae7f68f998ae}} ,
then applying the homology functor {{formula:7ebd4fdb-a769-4dc3-ba9b-6b4fb58587c3}} for a field {{formula:d8493146-36e4-4e18-8441-0b109b432927}} to a (single-parameter) filtration
results in a sequence of vector spaces
connected by linear transformations. This sequence is called a single-parameter persistence module and has been studied extensively
in the TDA literature {{cite:a6e786d84dcd50ba32ec19e4a87bb3e20fab075a}}, {{cite:d1708ae79210fda717a163674edd62c019114f31}}, {{cite:9d4cb03d405264f387efbec9512b5b77bdcfcc0d}}, {{cite:6587c0686bac1c43265c94a76ba6358c5fbc5183}}.
Notably, one can show that such
persistence modules can always be decomposed into a direct sum of simple summands,
which intuitively represent the appearances (birth) and disappearances (death) of topological structures detected by
homology as the index increases. Moreover, single-parameter persistence modules can be efficiently represented in a compact descriptor
called the persistence barcode, and several vectorization methods, as well as kernels and machine learning classifiers, have been
proposed for such barcodes in the literature {{cite:f381376373d6eee312538c0c06258e82ee56569f}}, {{cite:c82e464a73e685e1e76c08d7c3a6c8e4d6b1c7f0}}, {{cite:cbecf7bd0ca2acb6300fad2836442fcb93230807}}, {{cite:0559b1b80d9165c1bda404d0082a8157d59175f5}}, {{cite:ea1167a6221d7b5cad6e97c19a67408854d11b76}}. As a consequence, most applications of
TDA use single-parameter persistence modules, and often use the sublevel sets of, e.g., the data set scale,
as the corresponding single-parameter filtration.
| i | ab95af6a9788f620c046831d55f2b7cb |
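The birth/death picture described above can be computed directly in dimension 0, where persistence reduces to tracking connected components with a union-find structure: as the filtration parameter grows, each edge that merges two components kills the younger one and closes a bar. A self-contained sketch (our own illustration, not a TDA library):

```python
def h0_barcode(n_points, weighted_edges):
    """0-dimensional persistence barcode of a weighted graph filtration.
    weighted_edges: iterable of (length, i, j); all points are born at 0."""
    parent = list(range(n_points))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    bars = []
    for length, i, j in sorted(weighted_edges):
        ri, rj = find(i), find(j)
        if ri != rj:                  # two components merge: one bar dies
            parent[ri] = rj
            bars.append((0.0, length))
    bars.append((0.0, float("inf")))  # one component survives forever
    return bars

# Three points on a line at positions 0, 1, 3 (pairwise distances 1, 2, 3).
edges = [(1.0, 0, 1), (2.0, 1, 2), (3.0, 0, 2)]
print(h0_barcode(3, edges))
```

Libraries such as GUDHI or Ripser generalize this to higher homology dimensions; the union-find mechanics above is the core of the 0-dimensional case.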
In contrast to the intense study of deep reinforcement learning (RL) in games and simulations {{cite:c456d305df6df8135c143aad767a91567535c061}}, employing deep RL on real-world robots remains challenging, especially in high-risk scenarios. Though there has been some progress in RL-based control of real robots {{cite:5022cb865d33c2b24799ed8b251745ba44a1e0f8}}, {{cite:fa77099867de26def94149536745099cecb2ecd2}}, {{cite:a2c616c4a0659d40f1e5d417af1b122136f927a8}}, {{cite:e0c587dad877438bd0f2eb2230128ec67bd7b672}}, most of the previous work does not specifically address the safety concerns of the RL training process. For the majority of high-risk scenarios in the real world, deep RL still suffers from bottlenecks in both cost and safety. As an example, collisions are extremely dangerous for UAVs, while RL training requires thousands of collisions. Other works contribute to building simulation environments and bridging the gap between reality and simulation {{cite:a2c616c4a0659d40f1e5d417af1b122136f927a8}}, {{cite:e0c587dad877438bd0f2eb2230128ec67bd7b672}}. However, building such simulation environments is arduous, and the gap cannot be fully closed.
| i | 6afd5906649b532117e96e26d257b3b8 |
Full-gradient Representations of Neural Networks. Srinivas and Fleuret {{cite:ffb892b998958622f2cb77d66c8aa3ea8f9d972e}} attempt to reconcile local and global perspectives on variable importance and introduce full-gradient (FullGrad) saliency maps which assign importance to both input variables and feature detectors (intermediate units) within a neural network. The framework is motivated by two relaxed notions of local and global attribution: weak dependence and completeness.
| m | 537e7ada84745b57fcfe978a53211acc |
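The completeness property mentioned above can be illustrated on a toy ReLU network (our own sketch, not the authors' implementation): within a fixed activation pattern the output decomposes exactly into an input-gradient term plus bias-gradient terms, so both inputs and feature detectors receive well-defined importance.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((5, 3)), rng.standard_normal(5)
w2, b2 = rng.standard_normal(5), 0.3
x = rng.standard_normal(3)

pre = W1 @ x + b1
mask = (pre > 0).astype(float)    # ReLU activation pattern at x
f = w2 @ (mask * pre) + b2        # scalar network output

input_grad = W1.T @ (w2 * mask)   # d f / d x
bias_grad1 = w2 * mask            # d f / d b1 (d f / d b2 is 1)

input_term = input_grad @ x       # attribution to the input variables
bias_term = bias_grad1 @ b1 + b2  # attribution to the feature detectors

# Completeness: the two attribution terms reconstruct the output exactly.
print(np.isclose(f, input_term + bias_term))
```

The FullGrad saliency map aggregates exactly these per-input and per-bias contributions; the decomposition here is the identity that makes that aggregation "complete".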
Black holes have fundamental importance in quantum gravity. In fact, it is a general conviction, arising from an intuition of Bekenstein {{cite:bd07640f06456be9673fa05908df38f6df0612b6}}, that they should play the same role in quantum gravity that atoms played in the nascent quantum mechanics. Bekenstein {{cite:616251feaa95c65d9a779c38bc65ef4a35d9c623}} and Hawking {{cite:899f18a6604567c21a9f6315dc45724d0f3742b0}} also found fundamental connections between gravity and thermodynamics in the framework of black hole physics. Remarkably, some of the most important thermodynamic quantities like entropy and temperature may be associated with the black hole horizon. Bekenstein also found an upper limit on the entropy which can be contained within a given finite region of space with a finite amount of energy {{cite:bb06ad02206963bfdaefa60dc00b7cfc6881f1ba}}. The Bekenstein entropy bound implies that the information necessary to describe a physical system, must be finite in a finite region of space having finite energy. In 1995 Jacobson {{cite:e561d882f75c49e621eba925fc26ac6bbbcbb155}} found an intriguing confirmation of the connection between gravity and thermodynamics by deriving the field equations of general relativity through the assumption that the Bekenstein bound and the laws of thermodynamics should be correct.
| d | 0b667bd4e7e8303c1c1ad7cd6f4bff65 |
Our study has investigated a novel task in animal sound recognition,
approaching it via two polyphonic sound recognition methodologies related to those previously studied in environmental and bird sound.
Overall evaluation figures are comparable with the state of the art in these neighbouring tasks {{cite:340e5945bb02967ff1ce1075f4c522cd9cebda2d}}, {{cite:252ae3e46b087ea375c152259e4a573c9710a6fb}}.
The details of the timelines recovered (Figures REF , REF , REF )
show that across all conditions, further development is needed before this paradigm can be deployed for fully automatic analysis of animal behaviour patterns from audio data.
Of the two recognition systems studied, the classifier-based system consistently led to stronger results,
including a better match to the temporal characteristics of the true annotations (Figure REF );
however,
the PLCA-based system has an advantage of directly outputting a high-resolution (frame-by-frame) annotation, which may be particularly desirable in some applications,
such as investigating the short-time vocal interactions between individuals.
| d | 2c30ca614171497271374244514ad9c8 |
A final innovation concerns loss smoothing for non–regular minimization problems. This practice is often employed when optimizing non–smooth risk functions and consists of replacing an originally non–regular loss with a tilted one, which is almost equal to the original except that it is uniformly differentiable over its whole domain.
Some examples in the quantile regression literature can be found, e.g., in the works of {{cite:efef271d0ace5d29c13875b300a958caaf97d7b1}}, {{cite:7e8537df02e06e25ae3ac07b451278a17a5da5bb}}, {{cite:4fd6dc2c0b523a6dcad7f8cfe60aec999c1601ad}} and {{cite:84796ac1399adcdd613136550697a1be92d3d364}}. Similar approaches have been considered for support vector machines, see, among others, {{cite:8726110b19c9f8642bebd34f85da2c451cbc4d54}}.
As emerges from Theorem and Figures (support vector classification), (support vector regression), (quantile regression), the variational loss averaging defined in formula () indeed provides a new recipe for constructing smooth majorizing objective functions starting from non–differentiable loss functions.
However, differently from other existing smoothing methods, which are typically derived from geometric considerations, our strategy is based on a statistical argument with a straightforward probabilistic interpretation. Moreover, our proposal comes with a practical rule for determining the local degree of smoothing induced by the approximation, which is in turn determined by the posterior variance of the {{formula:1dedb754-4f21-4fd4-9364-2d9d8ef1d404}} –th linear predictor. This leads to a different smoothing factor for each observation, allowing for adaptive calibration of the new loss function.
| d | 4c4ee164875dcfff0e34270f47143dc3 |
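As a generic illustration of a smooth majorizing surrogate (our own sketch, not the paper's variational construction): the non-differentiable pinball loss of quantile regression, rho_tau(u) = max(tau*u, (tau-1)*u), is majorized by the log-sum-exp smoothing h*log(exp(tau*u/h) + exp((tau-1)*u/h)), and the gap shrinks to zero as the bandwidth h decreases, so a per-observation h plays the role of an adaptive smoothing factor.

```python
import numpy as np

def pinball(u, tau):
    """Non-differentiable check (pinball) loss of quantile regression."""
    return np.maximum(tau * u, (tau - 1.0) * u)

def smooth_pinball(u, tau, h):
    """Log-sum-exp majorant of the pinball loss with bandwidth h > 0;
    uniformly differentiable, and >= pinball everywhere."""
    return h * np.logaddexp(tau * u / h, (tau - 1.0) * u / h)

u = np.linspace(-3, 3, 7)
tau = 0.8
loss_coarse = smooth_pinball(u, tau, 0.5)   # heavy smoothing
loss_fine = smooth_pinball(u, tau, 0.01)    # nearly the original loss

print(np.all(loss_coarse >= pinball(u, tau)),
      np.allclose(loss_fine, pinball(u, tau), atol=0.01))
```

Since log-sum-exp always dominates the max, the surrogate is a valid majorizer, and the worst-case gap is exactly h*log(2) (attained at u = 0), which makes the per-observation choice of h easy to interpret.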
In this work, we show that the Einstein-Podolsky-Rosen (EPR) steering inequality {{cite:5e8013c8088460dd91cccfbd5934df56092fb4cd}}, {{cite:340e788e477b6f28bbbb74757f0a258eff21eecb}}, {{cite:ec4e24117e08305cad50459beae6d5f7c5643fb3}}, {{cite:e1a281e9c1d4900472a6e8a99f53954dc1526bc9}} can be used to obtain a witness for the non-Gaussianity of pure bipartite CV quantum states. While the scenario requires pure states, we show its broad relevance by reporting the observation of the non-Gaussianity of the CV two-photon state generated in the process of spontaneous parametric down-conversion (SPDC) {{cite:ac2b634b3f2a67c543a07d91e5cfa280584ea2ff}}, {{cite:8a19b7d5e16ae63d0d020778ceb2f556fac4b51f}}, {{cite:806762c0ba9347873dde24dcdf4d664813c8d11d}}, {{cite:5bca9b16357caada9f1a03fe54d84d00746d5099}}. SPDC is to date the most widely used source for experimental investigations in the field of quantum information, and our work highlights the simplicity of using this source for new applications in CV-QI. The generated down-converted photons are correlated in their transverse momenta and can be used to test the EPR paradox {{cite:5e50aa03858390b819953e170f142e8d202277da}}. The observed non-Gaussianity is due only to the intrinsic phase-matching conditions of the SPDC process {{cite:167665d1066b0f34a9ddd34f9a603b3c22e7aa9c}}, {{cite:e1b97be3f1007bc702d9a0bc3fc965546c547ce0}}, again highlighting the simplicity of using SPDC sources for new applications of CV-QI.
| i | e8dc668c4d0b9f593e315178f5d3fd00 |
The state of a highly deformable object cannot be captured by only its position and orientation, since the object deforms during motion. Hence, we represent objects with point clouds as observed through depth cameras.
Recent neural network architectures, such as PointNet++ {{cite:1ee176102fd32ec40d0317bf7d53d8c46468c81a}} and MeteorNet {{cite:e8981bc5174b61b5a52144543f2d4cf122c1f1bb}}, are well suited for learning to process point clouds. As we will show, they can yield inverse models that offer a viable solution to the challenging task of real-to-sim for deformables from point clouds. Nonetheless, data collection and training for these can be computationally demanding.
{{figure:c46eaf8d-b38c-47ee-9b2b-409cc9c56575}} | i | 564d885e345bae1bdbc963c9b2f21793 |
Another common mitigation strategy is to split the training over
multiple computational nodes {{cite:2cbd82f7ef4565208ab019d7626b73c3cd5df527}}. However, this incurs significant message-passing overheads and requires costly hardware with low-latency interconnects. This strategy can also be wasteful if the peak memory consumption is only slightly larger than the capacity of a single compute node.
| i | 04f41474db0698615f7dfa777c83b0d2 |
{{formula:5a92b67a-4e6d-40da-b606-68a7e106599b}} -BPS states have recently played a crucial role in analyzing the duality between Type-IIB string theory in AdS{{formula:d7f2af16-1276-4710-a938-b2195050dd62}} and 4D {{formula:dc454560-9041-4f24-a2aa-ea6ea7f83f07}} super-Yang Mills {{cite:2764b9c31c149484136481f30419aaa23386b20a}}, {{cite:009a7103b50780a2c8111c662b4ed74993bb8418}}, {{cite:808cb58c174f49cf1200045254414ceb84942805}}, {{cite:f47e9a554ccd942b48b2159d7390670f0e5f8a82}}, {{cite:a2da10f107ea0cfb7b58f6c343d6e495abb31368}}, {{cite:3fbb083b01d8b67e31b7b2f62b2b92c54ccc7db6}}, {{cite:de222fd0f2536151c0304f7323a2461dbea163d3}}, {{cite:ff6d24b6e56428f5c6b53bf01118fd59bbfbeb0e}}, {{cite:834ba902cda8d0adf5349f1431f96dcb1fd6af76}}, {{cite:56e307ec152a3d638e94b6767554b5a83041ce3f}}, {{cite:dd4a3df19986545a3a8cbfb875d1481686985a21}}, {{cite:3f4e02e407fe3ee53b1e65badf83e49231fb4cea}}, {{cite:2f0fad67e7b60a659a9914d52939e300e43228ea}}, {{cite:a00bbf970365c8f42f808080a9d9b8a5ff797c30}}, {{cite:422d2df5f898b838c02d61b1619abc20bedf4c88}}, {{cite:91d0c23c57b469fdab80b5c76a421e620410047b}}, {{cite:1726eebc180ebf97e9ec3c774891da0eaaacf818}}, {{cite:54b0125c8af3103aaffe139b4e0053ffeab22fda}}, {{cite:9d44352d7bb05df4f657a7841f12cec9984b1346}}, {{cite:b7256abff2a93a95d61e3dad493a842bbdcb30f4}}, {{cite:0e48dd7ece93de1b0a55938fb66cb809d956c749}}, {{cite:b11b6df93a43a68276e6296ae55f1748f75e6598}}, {{cite:f70f1df6c111d26bccf51b3e147399eccf283d20}}. Such states can be accurately counted by computing the superconformal index, a grand canonical partition function with multiple chemical potentials for the black hole angular momenta and {{formula:1342c67d-f50b-4b06-9caf-f8d4e768a398}} -charges turned on. On the boundary side the superconformal index can be obtained exactly, since it is independent of the coupling {{cite:5cd3d0362d541de29219eb0e7b51fb8b617c7a80}}, {{cite:ebdeecc825a6936f278881f83c4e32411aa6ad79}}. 
Expanding the exact answer in the large {{formula:72d616c6-f1bf-4ba1-ba78-c217d1f65124}} limit, most contributions to the superconformal index were matched with corresponding Euclidean gravity saddles, the dominant contributions being given by well-known supersymmetric black hole solutions {{cite:f70f1df6c111d26bccf51b3e147399eccf283d20}}. The computation of the superconformal index, on both the boundary and the bulk sides, has provided a detailed check of holography and a detailed counting of black hole microstates.
| i | 8889df2105d818048f3f13e2332ff717 |
We have explored the time evolution of CBPs orbiting low-mass short-period binary stars to assess how tidal effects and stellar evolution impact the habitability of CBPs. We first revisited the STEEP process with the CTL tidal model and an evolving mass concentration, and found that it predicts a greater expansion of {{formula:bf6ad922-ee14-4026-baa7-21238d54eb88}} than {{cite:73544aa751dcc620d1fd762648db9849c6e4a27c}} found, suggesting their results understated the threat to CBPs posed by coupled stellar-tidal evolution. We reexamined the transitional stellar binary Kepler-47 {{cite:35ec5c381150a82b20f9af9ee78119647ec167b5}} and found that its planetary system's stability likely required the initial binary eccentricity to be below 0.2. We presented an example of a stable and potentially habitable CBP orbiting a short-period binary to show that instellation can change dramatically over time, in at least some cases, due to the tidally driven orbital evolution of the central binary. Finally, we reanalyzed the stability of the equal-mass circumbinary HZ and found that CBPs may be ejected from the HZ for stellar masses {{formula:e6f3715e-b23a-4981-a751-4005a3d73127}}{{formula:f0dc8826-30d9-4b94-81c5-c8498f2769f3}} . Taken together, these results clarify where habitable CBPs can exist and how they might evolve around short-period stellar binaries.
| d | 74688716e26078a327693bf8e19363a5 |
The velocity auto-correlation functions (VACF) also confirm the effectiveness of this simple threshold splitting (Figure REF A and B). Indeed, slow and fast endosomes have very different VACFs. Ensemble-time averaged VACFs (E-TVACFs) of fast endosomes (Figure REF B) are positive as expected for super-diffusive motion. In contrast, E-TVACFs of slow endosomes have negative dips at {{formula:d5ceb59f-af37-4b60-b418-69b603268c85}} and approach zero from negative values (Figure REF A). Such behaviour is characteristic of FBM and the generalized Langevin equation, but cannot be reproduced by the CTRW model {{cite:c36a5849c297f470cb47f65f3e5186d09fa80251}}.
{{figure:61c27c7d-3141-438f-9904-b5101331d730}} | r | e636f9594efe67dcd3214cd05a9339f5 |
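The E-TVACF diagnostic above amounts to averaging dot products of velocities separated by a lag; a minimal single-trajectory estimator (our own function name and normalization, not the paper's code) is:

```python
import numpy as np

def vacf(v, max_lag):
    """Time-averaged velocity auto-correlation C(tau) = <v(t) . v(t + tau)>.

    v: (T, d) array of velocity vectors along one trajectory.
    Returns C normalized so that C(0) = 1.
    """
    c = np.array([np.mean(np.sum(v[:len(v) - lag] * v[lag:], axis=1))
                  for lag in range(max_lag + 1)])
    return c / c[0]
```

As a sanity check, a strictly anti-persistent toy trajectory (velocity flipping sign each step) yields the negative dip at one lag that is characteristic of FBM-like anti-correlated motion, while even lags recover positive correlation.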
Vector Quantization (VQ) {{cite:4d52fcb248812ff9cb9f2dbf25686f8b79761014}}, {{cite:faf5d0b55bacb48e236f6738619766cbd09c715a}} is a topic of vast interest, so we briefly review only the literature most closely related to the topic of this publication.
Suffice it to say that VQ is widely adopted in speech coding {{cite:77ee5fdbb4bec8d435817da11d978aa7fa75fb96}}, {{cite:3e7e4cfd05950f5a81ca143038feb4cc075a7b7f}}, {{cite:7f066e2ad38eb57dc8b79e0e7246d172fbad76c4}}, image coding {{cite:75bd6a422f178479a76d4cfcbfe4c1a7cef2219a}}, {{cite:d46aed7419cc9046f1630be7315377566f912c4a}}, {{cite:3c0205f7721a2ffcbb2c2bd68a508c88c6efb173}}, and video coding {{cite:dad14976f6790a3e60edf7503221a15a07660310}}, {{cite:f91cac5cc791afb3c55d2a1afc6d94f39c6cc34b}}.
VQ has been successfully used for speaker identification {{cite:7ba1e84917efc25f1111aec37933810001991eba}}, {{cite:ceee3dc3316cc761cb42925af8d8103d40f00108}}, digital watermarking {{cite:78b8e00d8cca0d96684c9a737b7cabc4dcc4b16c}}, {{cite:cd10455955ce95f93df969a9440faa34f8151c54}}, and clustering {{cite:7c3a057ea93bd0255fc56f7595c76f0b710afa99}}, {{cite:121de88ba6b410f2c91537a818b37a8028d7b746}}.
| r | 3df4c1ac2c0314877b1fdc704090a0c1 |
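At its core, a vector quantizer maps each input vector to the index of its nearest codeword in a codebook. A minimal nearest-neighbor encoder/decoder pair (illustrative only; practical coders also train the codebook, e.g. via Lloyd/k-means, and entropy-code the indices) might look like:

```python
import numpy as np

def vq_encode(x, codebook):
    """Map each row of x to the index of its nearest codeword (L2 distance)."""
    d = np.linalg.norm(x[:, None, :] - codebook[None, :, :], axis=2)
    return np.argmin(d, axis=1)

def vq_decode(idx, codebook):
    """Reconstruct vectors by looking up codewords."""
    return codebook[idx]
```

The encoder transmits only integer indices, which is the source of VQ's compression in the speech, image, and video coding applications cited above.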
Some generalizations were already put forward in the original work {{cite:dd75c203c11c971c433b2b643007e2a418432a73}}. These come under the name of q-complexities and arise from choosing inner products of the type {{formula:1c6dddbd-cb34-47b3-a11e-ec4fd8527b2a}} , for arbitrary operators {{formula:4d08c6db-e623-4632-9d3f-69513717f5a2}} and {{formula:b065a2c9-5d86-440f-8922-4b6bd4843733}} . In fact, the authors argued that Krylov complexity bounds the growth of the OTOC. Another interesting generalization, studied in {{cite:2c6e5a280bbac5d84ef1e9fa83c35289e197aff7}}, focuses on a certain fixed energy bandwidth. Below, we discuss more such new directions that are natural from the perspective of CFTs and holography.
| d | 93545142c67d368870c561ec029ede00 |
Regarding the density environments, most of the bent FRs in our sample are found at densities of 1/Mpc{{formula:c965c391-2498-4eaf-a712-2be23bb468d9}} (Fig. REF ), with a handful above densities of 10/Mpc{{formula:3e8627e4-a5a2-43dc-a489-eda4917e1022}} . The mild relation we find between FR type and BA is interesting and puzzling at the same time, given that FRIs, which are on average slightly more bent than FRIIs, are found in less dense environments than the other FR types at 3 GHz VLA-COSMOS. The latter result is also reported in {{cite:2738469ae661f8e3f17065e5aff173991e4d4d6b}}, showing that FRIs reside in regions where environmental density, the number of galaxies per Mpc{{formula:32d9171a-ec8c-4f5c-946b-e75629d9425d}} , is lower than for the other FR types. The smaller bent angles of FRIs could be caused either by movement of the jets through a denser ambient medium, or by movement of the objects through the ICM, in which case ram pressure bends the jets backwards {{cite:bf51bcc27fcedad917cf03841ecab314dea81a40}}. Given the large scatter of BA values in the FRI sample, and given that the average environmental density of FRIs is lower than that of FRIIs with an overlap in the distributions, we believe this calls for an object-to-object investigation. The bent angle is likely affected by more than one parameter.
| d | 693153a485f3356034c20775cbdd28c6 |
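For concreteness, a bent angle can be computed as the angle at the core between the two lobe directions; note this is one common convention (180 degrees for a straight source, smaller when more bent) and the survey's exact definition may differ:

```python
import numpy as np

def bending_angle_deg(core, lobe1, lobe2):
    """Angle at the core between the two jet/lobe directions, in degrees.

    180 deg corresponds to a perfectly straight double source;
    smaller values indicate a more strongly bent morphology.
    """
    u = np.asarray(lobe1, dtype=float) - np.asarray(core, dtype=float)
    w = np.asarray(lobe2, dtype=float) - np.asarray(core, dtype=float)
    cosang = np.dot(u, w) / (np.linalg.norm(u) * np.linalg.norm(w))
    # Clip guards against round-off pushing |cos| slightly above 1.
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
```

Collinear, oppositely directed lobes give 180 degrees; perpendicular lobe directions give 90 degrees.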
Usually, {{formula:b51ef667-a7ff-46d4-9331-6a1714b34c79}} symmetry in the SM is considered to be explicitly violated through complex Yukawa couplings {{cite:574075338fe3f006a927459b4c951dca54ab450f}}. Alternatively, however, {{formula:fd9cd819-9392-442e-a6ba-839059f24f2a}} can be regarded as an exact symmetry of the model that is spontaneously broken at some scale, generating the complex phases of the Yukawa couplings at low energies. Introducing singlet scalars makes such a scenario possible {{cite:44de39f902de107182d8302cb7a88fe69a191021}}. If they couple nonminimally with the Ricci scalar, they could drive slow-roll inflation successfully, just as in Higgs inflation {{cite:ee6a2568c9c64dd9bf6987da2e3bf2f972b9e68f}}, {{cite:1f0a1ef952edb0e960551c0f27e508e6810f5a77}}, {{cite:fb6bc537fde35cd105c3918913f526333e00da9c}}, {{cite:050d51583fff4132b348e59594ab74e4f2586fcf}}, {{cite:4cc570fa3adfc14f1e2061f37851eb0214986111}}, {{cite:0f1a879deb3146c60ebe96d8aa92f79fea9646fc}}, {{cite:feddc35c32fec84dbebf32791265b015ec1ad6d3}}, {{cite:9fb0e7f3856c609217f09cfbede9ae52799c8d63}}, {{cite:4a7ef661585fe01e2f9636cde6b68bd82f1352e6}}.
One important difference from Higgs inflation is that their nonminimal couplings can take values of {{formula:428f6879-b41c-4aec-b716-e46943c0f6ae}} consistently with the CMB data.
In this paper, we study the possibility of singlet scalars as an inflaton candidate, which could be relevant to the origin of the {{formula:5501d4cd-ad7c-4c5b-b1f6-b4243128cd46}} violation in the SM.
| i | 4d292fa3d7d6f350109e94d4918e0020 |
where {{formula:53b9387a-94ba-499e-be2a-e8afbca8d8d6}} is a real constant. This form is more convenient for our consideration and, for the case of {{formula:e55543d2-8849-4c08-b74e-096ad445ae10}} , it can be obtained from (REF ) via gauge, Galilean, and scale transformations. We adopt the Kadomtsev-Petviashvili (KP) hierarchy reduction method. This is a very powerful method for constructing explicit solutions of integrable equations and has found extensive applications in the derivation of rogue wave and soliton solutions of various integrable equations {{cite:76482b3e0faa6e1e4bb83e8966f5484ccfffd0ee}}, {{cite:8a46d9ea5c0bb2f1caadf40d94eaeda1957579aa}}, {{cite:baf3ba465e9b3ca70016a8acd0db2fe98180eca9}}, {{cite:3aad09e715eed19eeba221681578992be505723e}}, {{cite:83658b5e218395e9f573070a11849ece44c5036c}}, {{cite:58584e41a84844054d531e106733b9a0f1492f61}}. However, several difficulties are encountered when applying this method to derive rogue wave solutions of the Sasa-Satsuma equation due to its complexity.
Unlike the NLS equation, which belongs to the AKP hierarchy, the SS equation belongs to the CKP hierarchy, which is a symmetry reduction and sub-hierarchy of the AKP hierarchy {{cite:ac12a7bf90cdb708b6a1d0d3f150de95ba716948}}. This forces us to start with a {{formula:dd0b3095-2048-4baf-a568-8adadeed51cc}} matrix kernel, which is one of the novelties of the present paper. On the other hand, the bilinear equations of the Sasa-Satsuma equation correspond to eleven bilinear equations in the KP hierarchy. Therefore, as explained in subsequent sections, the reduction procedure and the rogue wave solutions are much more complicated.
| i | e80875c80cb8dacc7f197c3d5c3175b7 |
The poleward enhancement of the {{formula:c3ad6447-3f25-4937-9a4c-7cce67f21bf4}} -ray flux as {{formula:cd468007-2de4-435d-ae36-f7e234c72a8a}}
is indeed unaltered if we adopt different BH masses.
This is because the magnetic field lines concentrate
toward the rotation axis as {{formula:bf6a20b2-bcf8-45e5-a275-95d839973592}} ,
irrespective of the BH mass
{{cite:c76d7a8d6e1c8a3169158457a28e95364b796ff9}}.
However, as the BH mass increases,
the IC process dominates the curvature process,
leading to a poleward enhancement of VHE radiation for SMBHs,
which is to be investigated in a separate paper.
| d | 632ffd440f97b6c1f06cf33ac9afe8dd |
In Figure REF , we examine three deconvolution methods, including an {{formula:b19cb725-5ca3-4fdc-9728-1c656b07ce1e}} -penalized method with a soft threshold {{cite:c04f567a1a7f90ef604a50b60455ef34a5f1f415}}, {{cite:c18667ae8bd9a159bda85cd391cc402426493a91}}, a method with a hard threshold (i.e., positive minimal spike size “{{formula:2e80b5b2-2475-498d-a2f1-aa227d3cd038}} ”) {{cite:ca22333d0c195a2137e389c93f7c8f9103225831}}, and an {{formula:3b041258-4cc3-48c4-8ebb-efea7c467bd1}} -penalized method {{cite:fb8e3939dd27a455d4652501a7caf1f0f6cb9e9a}}, {{cite:dd08954c1a0d6dda87f4dff309a3a06178fb81f0}}. Each method has a free parameter that controls the sparsity of the inferred responses; varying this parameter leads to corresponding changes in the histograms of the deconvolved responses {{formula:b1012d9d-584e-4d2b-ac95-e3980b3acbc9}} , with more or less probability mass assigned to {{formula:fb861cb6-19e6-4307-ae5c-9fd5fd370e6b}} . Over a range of parameters, the ZIG model provides a good fit to the output histogram for all three of the algorithms examined here. We also found that the ZIG model provides a good fit to the output of deconvolution applied to data generated from an AR(2) model as well as the more biophysically detailed model from {{cite:39f5b800877a335c46bb08384d3bb78fa00daba3}} (see SI Figure REF ).
| m | 3ee291e647c53a585ea676c74f5be855 |
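The soft and hard thresholds that produce the sparsity (and hence the probability mass at zero fit by the ZIG model) act element-wise on the raw deconvolved responses; the full deconvolution algorithms are of course more involved, but the two operators can be sketched directly:

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the l1 penalty: shrink magnitudes by lam, zeroing
    anything smaller. Larger lam -> sparser output."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def hard_threshold(x, s_min):
    """Minimal-spike-size rule: keep entries at full size only when their
    magnitude reaches s_min; otherwise set them to zero."""
    return np.where(np.abs(x) >= s_min, x, 0.0)
```

Note the qualitative difference: the soft threshold biases surviving responses toward zero (3.0 becomes 2.0 at lam = 1), while the hard threshold leaves them untouched, which shifts where probability mass lands in the deconvolved histograms.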
The motivation for studying the dilaton field comes from the fact that
this field appears in the low-energy limit of string theory, where
the Einstein action is supplemented with other fields such as the axion,
gauge fields, and the scalar dilaton field. The dilaton field changes
the causal structure of the black hole and leads to curvature
singularities at finite radii. In the absence of a dilaton
potential, exact solutions of charged dilaton black holes have
been constructed by many authors {{cite:e3439d226ea3f5760859167f561cae57c0e05b21}}, {{cite:45da30bf923b08572c6e92f892d18992694b8fe5}}. These black
holes are all asymptotically flat. It was shown that the presence
of a dilaton potential, which can be regarded as a
generalization of the cosmological constant, can change the
asymptotic behavior of the solutions to be neither flat nor (A)dS.
Indeed, no dilaton (A)dS black hole solution
exists in the presence of only one Liouville-type dilaton
potential {{cite:b8c4f6c9bc00c2ca61f76fa1538a48dfeaa105e7}}. In the presence of one or two Liouville-type
potentials, black hole spacetimes which are neither asymptotically
flat nor (A)dS have been investigated (see e.g.
{{cite:49846e015c233f5271248fd1362303c686273805}}, {{cite:90b925f78ebc3cd79947dda733da0450b181687b}}, {{cite:7f51adc95789bd61f0bdd9b0be2727a4921d214f}}, {{cite:d67b39ee0739c43a1770fd780d59871279ce6ff5}}, {{cite:9ff4ef7a1af4f8b53d425ed982da7e70f43f53bc}}). These studies were also
generalized to dilaton black holes with nonlinear electrodynamics.
Physical properties, thermodynamics, and thermal stability of
dilaton black objects in the presence of Born-Infeld (BI)
nonlinear electrodynamics have been investigated in
{{cite:452aa459be6a4ea7683d55c24ae61a2eecb8cefc}}, {{cite:8be69529335bc423046147e5074e224e6856bd5e}}, {{cite:0df89d3ace9712d5d5b3b2272e2aa7250e20545a}}, {{cite:220d04451f2dda92371b6b7a27de741446504d9e}}, {{cite:99572ce7ea4056fb424ac40990c159f78020c0c1}}, {{cite:74e534fc1c27c4aaf772c03ebaf7c0c5266a81a3}}, {{cite:e0a9696cc014ae0f3144132c008f69df2295fa3e}}, {{cite:db30e1b336fac7f046483dd68e04ff96819ab217}}, {{cite:1dd95579b4c59fe88eebc5d40a5d85cd4e47b440}}. When the
gauge field takes the form of exponential, logarithmic, or
power-Maxwell nonlinear electrodynamics, dilaton black holes which
are neither asymptotically flat nor (A)dS have been constructed in
{{cite:7670215b87cf5d0c60d4896ab12a346d06238e95}}, {{cite:cfa899e9923c6efc05063fcfc9728407df788294}}, {{cite:e00c77db388338245af5dc00b3f002fddbbb8abe}}.
| i | 4b77383fd4d76463f20e5f6aa32be69b |
Remark 1.1
The initial data of form (REF ) are essentially the “short pulse data" first introduced by D. Christodoulou
in {{cite:122c73e491e5dcf26f973a4689bd660828da562d}}, where it is shown that the formation of black holes in vacuum spacetime is due to the condensation of
gravitational waves for the 3D Einstein equations in general relativity (see also {{cite:741e7173d2d41435dfe0fdfb99c5de2ac8e06e21}}). Note that
properties (REF )-(REF ) hold true for a given smooth function {{formula:92af0167-fdf5-4fb9-b238-25bee3334859}} and the choice of {{formula:01ec4ff5-9a5e-4476-a409-afb38dc1c702}} .
In addition, a large class of short pulse initial data with the properties (REF )-(REF )
can be found for general second order quasilinear wave equations (see {{cite:e9f44270d718172ba795810dec82599cf9611e9d}}). Roughly speaking,
the “short pulse data" can be regarded as suitable extensions of a class of “large" symmetric data, for
which smallness restrictions are imposed in the angular directions and along the “good" direction tangent
to the outgoing light cone {{formula:e5c7c91a-1c5e-4e82-bcee-9590b043dff2}} , while largeness is kept at least for the second order “bad" directional derivatives
{{formula:dd9b0d7a-c80e-4c44-9bd9-f47d235b8ffc}} . This provides a powerful framework to effectively study the blowup or global existence of
smooth solutions to multi-dimensional hyperbolic systems or second order quasilinear wave equations
with short pulse data by virtue of the corresponding knowledge from the 1D case,
see {{cite:122c73e491e5dcf26f973a4689bd660828da562d}}, {{cite:546fe816ea9b5d554dcf151f82a30ead22745260}}, {{cite:741e7173d2d41435dfe0fdfb99c5de2ac8e06e21}}, {{cite:dfc9f916724941e3632ed5d4e8f7b78feaea13dc}}, {{cite:e9f44270d718172ba795810dec82599cf9611e9d}}, {{cite:5a5da465901ce63f3607f3f647d248767581dd34}}, {{cite:9d135813e4180c906b890b5a0fe93299f4497307}}, {{cite:039e288252c58aa83f9a227293dc119f878b94e4}}, {{cite:610223065e1ae92220cb836b25d1694b6c6b4857}}, {{cite:76a9b4b500b0e2fb9bbedafcf453ef07b25be205}}, {{cite:be58c48beb661d72d07212dfda299abb1a66f8ee}}, {{cite:b22df7d0b6207312064dcd8ad20be4fc260ddbf3}}, {{cite:2f0cbdf944824855552f03a1c3adefebd3271b82}} and the references therein.
It is noted that for short pulse data (REF ) and sufficiently small {{formula:4ca0682a-aef2-4e9e-a009-a10b0517a2e7}} , although both {{formula:b2af524e-5e9c-448e-b87e-887ae358eb83}}
and {{formula:9f2a5424-cf7d-45ee-9c7d-1d6f6404ceae}} are small at {{formula:f6596648-474f-451e-833a-49fa39ca2dc6}} , yet the initial data are still regarded as “large" in the sense
that {{formula:ace2f5df-c171-4cc3-8ea5-61ae6d15aa3b}} may be large, and in fact,
{{formula:a955c668-1bbd-4ca3-ab45-9fc8c26927bc}}
| r | 25f03b3fbcd8644ed6f28f61357563eb |
We define effects that contrast potential infection outcomes when the infection history is either held fixed at a deterministic value (controlled) or integrated (marginalized). One advantage of this approach is that it permits definition of causal estimands in which the infection history of others (or its distribution) is held constant in a contrast of individual treatments. In this way, the susceptibility effect of Definition REF avoids the problem of differential exposure to infection between treated and untreated individuals which arises in the definition of the “direct effect” proposed by {{cite:8fbb929e8892822ed705d797bed4a75f1a37baef}}. Simulation results show that while crude contrasts like {{formula:4924348a-f409-493c-9ca2-f5d62d9b9bc7}} may sometimes have the same sign as the true effect when the susceptibility effect is different from zero, they may be biased under certain randomization designs or when the infectiousness effect is strong.
| d | 0f650b47addfffdad1b5d212ace96ee2 |
{{formula:132c6a2f-c6e1-4749-adfe-c61939ebc2ae}} , for {{formula:bcd5a44e-e2b9-448e-907a-56d238f01335}} 1e4, and {{formula:5cddd6c0-b3f0-4a60-8a1a-8c73b9a26ed2}} for the Ikeda systems and the Mackey-Glass systems with {{formula:a2356880-098e-4bcb-af57-d36040917fc8}} or {{formula:1f7ef60f-c30f-4663-90ca-d0851e37973a}} for the Mackey-Glass systems with {{formula:3f3fbe2c-e7dc-4f2e-bd46-810f940c4144}} . These embedding dimensions were chosen because they yielded less than 1% false nearest neighbors under the false nearest neighbor criterion {{cite:ebc6c7f2bb53c5b3c6d6b644734926d0492e52e6}} with thresholds {{formula:002ba5d8-b120-409d-9062-9998baa8a339}} and {{formula:5b8bedda-e918-4b7b-a337-dcab95cb9638}} , and so we expect them to be sufficiently large to unfold the underlying attractors. Thus, for each simulation a point cloud consisting of approximately 1e4 points in either {{formula:dcb3eda8-4f60-40f9-b516-0485eb6e3da1}} or {{formula:163cb6e6-bef6-4a82-b220-1f85235f7be7}} was constructed, ostensibly capturing a sampling of a diffeomorphic copy of the underlying attractor.
| r | b3d9ebd25748a7d7d4f5490c682b6fea |
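The point clouds above come from a standard time-delay embedding of a scalar series; a minimal Takens-style sketch (function name and interface are our own) is:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time-delay embedding of a scalar series.

    Maps x to the points (x[t], x[t+tau], ..., x[t+(dim-1)*tau]) in R^dim,
    which, for suitable dim and tau, trace out a diffeomorphic copy of the
    underlying attractor.
    """
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])
```

For a series of length T the embedding has T - (dim - 1) * tau rows; each row is one reconstructed state vector, and the collection of rows is the point cloud fed to the downstream analysis.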
Several approaches, such as {{cite:045c5eb5370a832231a7d3c849cfd36eb59ab74d}}, {{cite:b3e275f5c7fe6f24e31b4f8eb55224436a461ed0}}, {{cite:31dcd117fa7ef72f6c88b5804db51b03e537a524}}, {{cite:41facabf582dc96eb7cfbd4d9b6fcd5835f8560a}}, {{cite:e8b298681bb31aefaf9499fcf0d93a834a1539ec}}, {{cite:d7bcab9a213faf1511c92f6ec4b4e2c41b8fb630}}, {{cite:fbc267b3d0a6067ca02835d4760e8afad1eeff04}}, {{cite:886a1c4ad0e024e8cb232e7730f453742708df59}}, {{cite:66033184e291faa90b648aafc5d21496be4a7150}}, {{cite:f2f98108e4c70efbe9a9b68718ff9d8a1b5a62e5}},
have explored the ability of deep neural networks to learn motion patterns of moving objects and to produce binary motion masks distinguishing whether or not a pixel belongs to a moving object. Most approaches propose a two-stream architecture {{cite:b3e275f5c7fe6f24e31b4f8eb55224436a461ed0}}, {{cite:31dcd117fa7ef72f6c88b5804db51b03e537a524}}, {{cite:e8b298681bb31aefaf9499fcf0d93a834a1539ec}} to process motion and appearance separately.
| m | 30851689415a404600d29eb3344a71e4 |
Following our previous work, we solve the TF dynamics model (REF -REF ) using the method of lines. The spatial derivatives are discretized using collocation at second-kind Chebyshev points ({{cite:5e3b5981e7d2c8d7e6161e12d2fb5161713b8f7c}}, {{cite:ad6be3176048482a2c1e06594aa4be411c2d9584}}). We enforce symmetry at the origin to avoid singularities in the axisymmetric case; this is achieved by expanding all operators in {{formula:b2b7d93c-7259-434b-a755-ab4c87e5d334}} and dropping odd derivatives. The resulting system of differential algebraic equations for the dependent variables at the grid points
is solved using ode15s in MATLAB (MathWorks, Natick, MA, USA). For the optimization, we use a nonlinear least squares minimization implemented by lsqnonlin in MATLAB with the trust-region reflective algorithm ({{cite:b2530a9c86745fcab0c253a5a28630674ada8d0d}}) and we added a second order finite difference approximation of the Jacobian ({{cite:0d1d9db06c9f46642137b40b6bcdab2c5ee53e0c}}), which improved performance. In this work, we found that the Levenberg-Marquardt and trust-region reflective algorithms produce similar optimal values, but the latter is often preferable for its reduced average computation time. For the mixed-mechanism fits we report, the computation times for the optimizations varied from 1.5 minutes to 111 minutes.
| m | 87f8061f7ecee7b884ad6480e8976911 |
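Collocation at second-kind Chebyshev points rests on a standard differentiation matrix (the construction below follows Trefethen's well-known recipe); this Python sketch is ours, not the authors' MATLAB code:

```python
import numpy as np

def cheb(N):
    """Differentiation matrix D and second-kind Chebyshev points x on [-1, 1].

    D @ f(x) approximates f'(x) at the collocation points, and is exact for
    polynomials of degree <= N. Uses the negative-sum trick for the diagonal.
    """
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)            # Chebyshev extreme points
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))     # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                         # diagonal via row sums
    return D, x
```

Applying D to a cubic reproduces its derivative to machine precision, which is the spectral-accuracy property the method-of-lines discretization exploits.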
All calculations were performed within density functional theory (DFT)
as implemented in the Vienna ab initio simulation package (VASP)
{{cite:8ab0c61903fd0b0b95588bce88c7f18bbca4c5b7}}. The interactions between electrons and ions were described
by the projector augmented wave (PAW) potentials {{cite:cfee7b00b68b53cc8aee58eb566cf806719ed6a9}},
and the electronic wave functions were expanded with a cutoff energy
of 500 eV. To study the various strained configurations, the choice
of the exchange-correlation functional for the lattice and electronic
band structure was optimized with respect to the experimental
lattice constant and energy gap, respectively, of the unstrained configuration. For
the ionic relaxation, phonon, and elastic calculations, the local density
approximation (LDA) of Ceperley and Alder was adopted {{cite:e722ebac717b9c1e962f13e6013513c7a160f4b3}}.
For the structural relaxation, a force convergence criterion of {{formula:1b47bcba-0942-4008-bbd7-22e19b08e693}}
eV/{{formula:c103eec6-a521-46df-af20-140c89e8465c}} was used with the Brillouin zone (BZ) sampled by a
{{formula:2cb5bccd-44d5-4dea-a4da-07011f7629cf}} {{formula:174e27e5-c812-4992-af91-04ae533d6ab6}} -centered {{formula:724da41a-24ec-436a-8449-521274ef2214}} -scheme. Phonon dispersions
were calculated self-consistently on the basis of density functional
perturbation theory (DFPT) and with the use of the PHONOPY package
{{cite:c84ef2a202ee04499d7a5f2872fbe1857a1c96d4}}, while the stiffness matrix was calculated by a finite
difference method as implemented in VASP.
| m | 1f0d6d31352cde2ac34f97b680a447dc |
We visualize the embedding space (projected to 2D using t-SNE {{cite:a6db3366da70ea9e777df5c84c87513151425a69}}) in Figure REF , with just ten randomly chosen classes out of 69 for clarity. On the left (a), we see that constant bandwidth CQFB features ({{formula:f4358f5a-65b9-4c96-9e98-d068fa4cbe96}} ) fail to recover the underlying data structure. In the middle (b), variable bandwidth CQFB features form clear class-based clusters, but they are fairly diffuse, and it is easy to see how a simple classifier could confuse similar classes near their boundaries. On the right (c), BoostMetric induces more compact clusters with low intra-class variance but high inter-class separation. Classes are much more separable in Fig. REF .
| r | 3cfd5b63c22f00f94719654bb7792d58 |
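The qualitative claim of "compact clusters with low intra-class variance but high inter-class separation" can be quantified with a simple pair of statistics; this is our own illustrative metric, not one used in the paper:

```python
import numpy as np

def cluster_quality(emb, labels):
    """Mean intra-class spread vs mean inter-class centroid distance.

    emb: (N, d) embeddings; labels: (N,) class ids.
    Low intra and high inter indicate well-separated, compact clusters.
    """
    classes = np.unique(labels)
    cent = np.array([emb[labels == c].mean(axis=0) for c in classes])
    intra = np.mean([np.linalg.norm(emb[labels == c] - cent[i], axis=1).mean()
                     for i, c in enumerate(classes)])
    inter = np.mean([np.linalg.norm(cent[i] - cent[j])
                     for i in range(len(classes))
                     for j in range(i + 1, len(classes))])
    return intra, inter
```

On embeddings like those in panel (c), the ratio inter/intra should be markedly larger than for the diffuse clusters of panel (b).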
Considering {{formula:75941b55-a3f0-4cf8-86e7-9e61e34eb918}} in Corollary 3.10 of {{cite:a60b508789669504c0895ba7d77f90ed604ca8a4}}, recalling that {{formula:c3012d80-4c84-4dc5-83d7-a7d2a1bb8f04}} satisfies the uniform exterior cone condition and {{formula:6fa8ca2b-31e5-43f2-9a3a-e332e48d96a8}} is continuous in {{formula:83bbfabb-de7d-4ec2-8ee5-35bf88d07bf8}} , by (REF ) we have that {{formula:d08b1b39-5b31-4891-98de-5ced53368525}} for all {{formula:80def186-107e-443f-b4df-454791f3148b}} , and therefore {{formula:1bf21f5f-ac34-4162-b07d-3bec8297a019}} (using, by Lemma 2.5 in {{cite:a60b508789669504c0895ba7d77f90ed604ca8a4}}, the fact that the unique {{formula:88ccbc1f-7b6c-4314-8964-46a08c48e82b}} -strong solution of (REF ) coincides with our {{formula:00d2341f-257e-4fae-9003-2be389ec8bd2}} -viscosity solution). This implies that {{formula:58bc55a3-97f8-485a-996c-12ec9d8deb39}} , and by Morrey's inequality (Theorem 4 in Section 5.6.2 of {{cite:8b0a29f3024c2ed8c0b52e4379edf0904e88a5bd}}), we have that {{formula:e77f7b84-329e-4407-8f19-61ef1aba74bd}} , for some {{formula:8cdbe93a-09ed-4140-97c1-5b84e63cd447}} . Therefore, by Theorem 1.2 in {{cite:2d9612a212cd7768602140977886f3477f32e5fa}}, the function {{formula:c729376d-69dd-4829-b731-7fe0ad3b9506}} is in {{formula:80f81107-f064-48b4-af83-333d46122364}} , and this implies that {{formula:bd8907e1-f584-47c7-be4f-366c20c71c3e}} is a classical solution of (REF ).
| r | 3c624de50d1d238241f531bf7abc1f91 |
However, two problems have been persistently neglected. On the one hand, most methods model spatial patterns and temporal patterns separately without considering their interactions, which greatly restricts the representational ability of the models. On the other hand, neural networks generally perform better with more stacked layers, while GNNs benefit little from depth. On the contrary, the best results are achieved with two-layer graph neural networks, and more layers may lead to inferior performance in practice {{cite:c11ede07e9ce5116b7e71cbfeadc4ecd17019dc0}}, {{cite:7ec67416d6357977d7530c87f9d7e424b5a533d2}}. Ordinary GNNs have been shown to suffer from the over-smoothing problem, i.e., all node representations converge to the same value as layers are added. Such drawbacks severely limit the depth of GNNs and make it difficult to obtain deeper and richer spatial features. However, to the best of our knowledge, few works consider network depth in spatial-temporal forecasting, which is of great importance for capturing long-range dependencies.
| i | e21655dbba7c2d5c8c31c7d544083539 |
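The over-smoothing phenomenon is easy to reproduce: repeated mean-neighbor aggregation (the propagation core of a GNN layer, with the learned weights and nonlinearities stripped out) drives all node representations toward a common value. A toy sketch:

```python
import numpy as np

def smooth(adj, feats, layers):
    """Apply row-normalized mean aggregation A_hat @ X repeatedly.

    adj: (N, N) adjacency matrix (self-loops recommended);
    feats: (N, d) node features; layers: number of aggregation steps.
    """
    a_hat = adj / adj.sum(axis=1, keepdims=True)  # row-stochastic propagation
    x = feats
    for _ in range(layers):
        x = a_hat @ x
    return x

def node_spread(x):
    """Average distance of node representations from their mean; this
    shrinks toward zero as over-smoothing sets in."""
    return np.linalg.norm(x - x.mean(axis=0), axis=1).mean()
```

On any connected graph with self-loops the propagation matrix is row-stochastic and primitive, so its powers converge to a rank-one matrix and the spread across nodes decays geometrically, which is why stacking many plain aggregation layers destroys node-level information.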
The second group of datasets contains images with possibly simpler foreground objects but more challenging scene configurations or background parts. Visual scenes in the CLEVR6 dataset contain various objects, often with partial occlusions and truncations.
Following the evaluation protocol in IODINE and Slot-attention, we use the first 70K samples from CLEVR {{cite:21e9bb547e1d16f0dcf8960287031ce48214c994}} and filter the samples for scenes with at most 6 objects for training and evaluation, i.e., CLEVR6. The tmds dataset is a variant of Multi-dSprites {{cite:a6eb1333e74e4cf7be95d08d10995c6facfd94f1}} but has strongly confounding backgrounds borrowed from Textured MNIST {{cite:999b8a75c660a4c4f1687ad8a2f68b849da41c65}}. We generate 20K samples for the experiments. Similar to {{cite:2a91d2d95a186f0b634b045bde5111f07415ec82}} and {{cite:fb41a38bab7dec4085c4dbcfbde10398813c0b62}}, we evaluate on a subset containing 1K samples for testing.
Note that IODINE and Slot-attention are designed for segmenting complex multi-object scenes using slot-based object representations. Ideally, the output of these models consists of masks for each individual object, while the background is viewed as a virtual “object” as well. In practice, however, it is possible that the model distributes the background over all the slots as mentioned in {{cite:fb41a38bab7dec4085c4dbcfbde10398813c0b62}}. We therefore propose two corresponding approaches (see the supplementary material for more details) to convert the output object masks into a foreground-background partition and report the best results of these two options for IODINE and Slot-attention in tab:extresults.
| r | 8fee93131de8e5e660a3873d68264c59 |
Influence of Molecule Fragmentation.
Here we show that proper molecule fragmentation methods are vital for motif-based pre-training. Given different molecule fragmentation methods, different motif vocabularies of varying sizes are generated. Other than the fragmentation method introduced in this paper, we also try other fragmentation schemes with different granularities {{cite:ca8a3eb373b3a2b7389a8936be39f6a0884e5d1e}}, {{cite:301b3b717c4e98ea534d3c93eeb75900e48edbda}}. BRICS alone {{cite:301b3b717c4e98ea534d3c93eeb75900e48edbda}} tends to generate motifs with large numbers of atoms. Due to the combinatorial explosion, its generated motif vocabulary has a size over 100k, while more than 90{{formula:f083ff82-d287-4737-8147-03c6d35d2445}} of the motifs have frequencies below 5. On the other hand, JT-VAE {{cite:ca8a3eb373b3a2b7389a8936be39f6a0884e5d1e}} fragments molecules into rings and bonds and has a motif vocabulary size of less than 800. Our method generates around 12k distinct motifs. By combining different fragmentation strategies, we are able to fragment molecules at intermediate granularities. In Figure REF , we show the influence of the size of the motif vocabulary on 5 benchmark datasets. We observe that the pre-trained models achieve optimal performance with the motif vocabulary generated by our method. This may be explained as follows: 1) When the motif segmentation is too coarse and the motif vocabulary is too large, the generated motif trees have fewer nodes, making it harder for GNNs to capture the structural information of motifs. Moreover, the generated motifs have low occurrence frequencies, which prevents GNNs from learning general semantic information about motifs that generalizes to downstream tasks. 2) When the motif segmentation is too fine, many generated motifs are single atoms or bonds, which inhibits GNNs from learning higher-level semantic information through motif generation tasks.
{{table:301601f9-685e-4c44-a76f-e385f227e297}} | r | eb958b3ef60fce7aa16226ace6c9b361 |
Understanding user questions.
The framework is modular, enabling different ways of communicating a question to the AI system. Depending on the user interface, a question could be given in different forms, for example speech, visual gestures, or text input. Different technologies can be used to translate the question into a formal question, such as speech recognition, Natural Language Processing methods, or human body tracking. Context can play a crucial role in understanding the question that the user asks. {{cite:abe759feefce8015de1c94c95fa298d8b148fbae}} showed an example of a question requiring two different explanations depending on the context in which it was asked. In each case, improperly interpreting the question led to an unsatisfying explanation. One promising direction for addressing this challenge is the use of argumentation {{cite:71844c5b8b3a5e9a29edbfff29aba3a3f561bbe5}}.
Formally categorising the set of questions that can be answered with contrastive explanations.
Although some philosophers, such as {{cite:69498be4ae6d1e021784a4bd09ff6b01a64b00cd}}, noted that “why”-questions can be implicitly or explicitly understood as “why is A better than some alternative?”, there might be questions in the planning space for which contrastive explanations are not well-suited. For example, if the user is simply trying to understand the conditions or requirements for various actions in the plan, or the causal or temporal structure of the plan, a contrastive explanation may not be appropriate. Instead, it might be more appropriate to highlight the causal structure in an abstracted version of the plan. An important issue for future work is the development of a formal taxonomy of the types of questions that should be addressed using contrastive explanations {{cite:b7f44f57149ecb8f29e03493c8bf546bb4558d91}}.
Expressing constraints on actions and plan structure.
A contrastive question requires creating a hypothetical planning model, which is often characterised by constraints on what actions are permissible in the plan and how they are arranged. For example, the question “Why did you use action A rather than action B for achieving P?” requires planning with the hypothetical model where B is required to be in the causal support for achieving P, but A is not in that causal support. This is substantially more difficult than just universally excluding A from the plan and forcing B into the plan because A or B might be required or prohibited elsewhere in the plan. Currently we do not have a good language for expressing these kinds of constraints. PDDL 3 allows the expression of simple constraints on the order in which goals are achieved, but does not have the ability to express constraints on action inclusion, exclusion, or ordering, and does not allow us to place more complex constraints on how something is achieved or on plan structure. We would like to be able to say something simple like “{{formula:32f545a5-ade2-4770-9f02-e60265578a68}} ”. LTL will likely play a key role in defining the semantics of any such language, but additional concepts concerning plan structure are needed, such as the ability to specify that an action is part of the causal support for a goal or subgoal.
Compiling constraints into the HModel. We showed examples of how a constraint derived from a user question could be compiled to form an HModel. However, providing compilations for more general constraints (like the one above) and ensuring their correctness is an important open issue. Additionally, the compilation can produce plans that differ from the original plan in ways unrelated to the user question. We believe that the work on planning with preferences {{cite:9b495af802b2895845f2988f33a79b4c19da6819}} and state-trajectory constraints (Baier et al. 2009) is an important first step, but it does not yet address the full range of constraints needed.
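To make the compilation issue concrete, here is a deliberately naive sketch of the universal include/exclude compilation that the discussion above argues is too coarse for causal-support constraints. All type and function names are hypothetical, not from any planning system.

```python
from dataclasses import dataclass, field

@dataclass
class Constraint:
    kind: str    # "include" or "exclude"; ordering/causal-support kinds omitted
    action: str

@dataclass
class Model:
    actions: set
    constraints: list = field(default_factory=list)

def compile_hmodel(model, constraints):
    """Build a hypothetical model (HModel) from a user question's constraints.
    Exclusions remove the action from the model entirely; inclusions are
    recorded for the planner to enforce. This is exactly the blunt
    'exclude A everywhere, force B somewhere' compilation that fails when
    A or B is needed elsewhere in the plan."""
    hmodel = Model(actions=set(model.actions),
                   constraints=list(model.constraints))
    for c in constraints:
        if c.kind == "exclude":
            hmodel.actions.discard(c.action)
        elif c.kind == "include":
            hmodel.constraints.append(c)
    return hmodel
```

A real compilation for "B, not A, in the causal support of P" would instead need constraints scoped to the achievement of P, which is precisely the language gap the section identifies.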
Forming and presenting contrastive explanations.
The form of contrastive explanation we provide, as discussed in Section 3.4 and shown in the GUI in Fig. REF , is a very simple one that presents the original plan and HPlan and highlights the action differences between them. It is also possible to obtain hierarchical contrastive explanations by asking consecutive questions. However, this does not show the causality of the plans, or the differences in their causal structure. Fig. REF shows a possible composite causal representation of both the original plan and the HPlan, with the differences shown in different colors. This way of visualising the explanation can help to elucidate how the two plans achieve, or fail to achieve, the (sub)goals of the problem with respect to specific actions in the domain. However, for larger and more complex plans we expect that some form of abstraction will be necessary in order to effectively compare and contrast plans; the user might want to see the important differences between two plans, not all the details. What counts as details remains an open research question, but is likely related to action costs and to the ease and importance of achieving various subgoals. {{cite:99e46ce932ddf054347e420e5d8ea65f233e6fc3}} have done some initial work in this area, considering milestones as important abstractions for purposes of explanation. The issues of what constitutes a good explanation, and how to visualise or present it, remain intertwined. Some synergy between researchers in planning, data visualisation (e.g., {{cite:c0f97dd30cd69e66a3d1a4de315c23d096f36243}} or {{cite:0a9dd8f3b569771bff9e734a7a28b0258c3e5382}}), and the social sciences {{cite:921114ee774011bb7df3a847e0d65174551aafd7}} would be fruitful.
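The simple "highlight the action differences" presentation described above can be sketched as a set comparison between the two plans. The plan representation (lists of grounded action names) and all names are illustrative, not the paper's GUI code.

```python
def plan_diff(plan, hplan):
    """Highlight action differences between an original plan and an HPlan.
    Plans are ordered lists of grounded action names (illustrative format).
    Returns actions only in the original, only in the HPlan, and shared."""
    orig, hyp = set(plan), set(hplan)
    return {
        "only_original": sorted(orig - hyp),
        "only_hplan": sorted(hyp - orig),
        "shared": sorted(orig & hyp),
    }

diff = plan_diff(
    ["load(c1)", "move(a,b)", "unload(c1)"],
    ["load(c1)", "fly(a,b)", "unload(c1)"],
)
# The two differing actions would be highlighted in different colors in a GUI.
```

Note that this set-based view deliberately ignores ordering and causal structure, which is exactly the limitation the paragraph raises.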
{{figure:a8c7d710-921f-47cb-9321-a5a1e7ae11ae}}{{figure:387f9fcf-7cde-4610-b600-fe042dd8d580}}
Providing explanations for complex questions. In the presented approach, a user is able to iteratively ask questions to refine the explanation. If the explanation does not satisfy the user, or the question they have is more complex, this approach can provide the user with a deeper understanding. However, this process could be automated by analysing a more complex question the user might have, and decomposing it into several formal questions. In this case new constraints can be added to the HModel automatically until the explanation addresses the intended question and potentially the context it was asked within.
Assessing the effectiveness of explanations.
We believe it is crucial to be able to acquire evidence of the effectiveness of an explanation. In particular, if engendering trust is the motivation for Explainable Planning and XAI in general, then we should look at the actual experience of the users and check whether they gain confidence in the planner or not.
For this, a vital step for planning researchers is to include user studies to assess the effectiveness of the explanations they are providing.
| d | 6009a2cc169a88c768bc93c404f2cd45 |
Here, {{formula:859acb75-0562-4321-848a-b2f1dae1e79b}} is the instantaneous eigenstate of the dynamical invariant {{formula:f80ce897-0940-4652-82ae-9c8280993b4f}} corresponding to the time-independent eigenvalue {{formula:c788c2f9-6d84-4fb7-a95c-4276315688c1}} , and {{formula:5ae24d6e-ee18-4c99-9484-189f3b190d6a}}
the Lewis-Riesenfeld phase {{cite:4e9dcfcddc2513d045659847bfb995c132437874}}. With such an exact solution, the quantum evolution of the driven system can be engineered by designing a proper time-dependent Hamiltonian. The invariant method thus provides an effective approach to implementing fast quantum control {{cite:4e9dcfcddc2513d045659847bfb995c132437874}}, {{cite:3fdab4aac2691effb417b42b77e1b130623c9762}}, {{cite:120fab7d81c8ab5418884c84f9244df702999315}}, {{cite:2bc30a17c14178e3d69cee18cfa719395b8255cb}}, {{cite:cdc3f57adf0a4bee49cd837927364254c72f2c03}}, {{cite:f1678337a7f3d988bcf2079a830da331aeff1353}}, {{cite:3c3cdad212833711837274404501e6400da5c381}}, and can be applied to design compact optical waveguide devices by using quantum-optical analogies {{cite:3a8fcaa4da66a8718c03a28504de4efbca68869d}}.
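For reference, the standard Lewis–Riesenfeld construction alluded to here can be summarized as follows; the symbols are the usual textbook conventions and need not match the manuscript's formula placeholders.

```latex
% A dynamical invariant I(t) satisfies
\frac{dI}{dt} \;=\; \frac{\partial I}{\partial t} + \frac{1}{i\hbar}\,[I, H(t)] \;=\; 0 ,
% so the general solution is a superposition of its instantaneous eigenstates,
|\psi(t)\rangle \;=\; \sum_n c_n\, e^{i\alpha_n(t)}\, |\phi_n(t)\rangle ,
\qquad I(t)\,|\phi_n(t)\rangle \;=\; \lambda_n\, |\phi_n(t)\rangle ,
% with time-independent eigenvalues \lambda_n and the Lewis--Riesenfeld phase
\alpha_n(t) \;=\; \frac{1}{\hbar} \int_0^t
\langle \phi_n(t')\,|\; i\hbar\,\partial_{t'} - H(t') \;|\,\phi_n(t')\rangle \, dt' .
```

Designing $H(t)$ so that the $|\phi_n(t)\rangle$ interpolate between the desired initial and final states is what enables the fast control protocols cited above.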
| i | dd6d04a65e0e9b77e3d50120d1915fe9 |
In this section, we carry out an experimental analysis comparing the utility for histogram queries of the shuffle model with a {{formula:b6e4cd44-59fa-4659-982b-e41b2aea85b6}}-RR local randomizer against the optimal Gaussian mechanism, using synthetic data sampled from {{formula:d4a422b1-b78b-43fb-863b-64a9271dc724}}. We use Diffprivlib {{cite:f72ffd7bd75d14d2a6f0043bc4e10b58cb7e7408}}, a general-purpose differential privacy library, to implement the Gaussian mechanism. Specifically, we use GaussianAnalytic {{cite:305dff955621d7db819756ff5a9aa938785df639}} in Diffprivlib, which improves utility and enables the implementation of the optimal Gaussian mechanism. We conducted experiments in two categories: (i) trend analysis of {{formula:91e33374-ab7f-4a2c-81f1-66c08ded70b4}} providing the tight ADP guarantee for {{formula:3039e5ee-98f5-48a4-8c76-d6978735c607}}, and
(ii) utility comparison between {{formula:181db7db-a626-4dfe-91cf-bc83ff2210c5}} and {{formula:6cf5f439-53f9-4f14-828b-9e78c8922831}} under the same level of differential privacy.
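The k-ary randomized response (k-RR) local randomizer mentioned above can be sketched as follows, together with the standard unbiased histogram estimator; parameter names are ours, and the Diffprivlib Gaussian baseline is omitted.

```python
import math, random

def krr(value, k, eps, rng=random):
    """k-ary randomized response: keep the true value with probability
    p = e^eps / (e^eps + k - 1); otherwise report a uniformly random
    *other* value. Values are integers in [0, k)."""
    p = math.exp(eps) / (math.exp(eps) + k - 1)
    if rng.random() < p:
        return value
    other = rng.randrange(k - 1)       # pick among the k-1 other values
    return other if other < value else other + 1

def debias_histogram(reports, k, eps):
    """Unbiased histogram estimate from n k-RR reports: each specific wrong
    value is reported with probability q = 1/(e^eps + k - 1), so the raw
    count c_v is debiased as (c_v - n*q) / (p - q)."""
    n = len(reports)
    p = math.exp(eps) / (math.exp(eps) + k - 1)
    q = 1.0 / (math.exp(eps) + k - 1)
    counts = [0] * k
    for r in reports:
        counts[r] += 1
    return [(c - n * q) / (p - q) for c in counts]
```

Since p + (k-1)q = 1, the debiased estimates always sum to exactly n, even though individual bins are noisy.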
| r | 27d74f78c3ba6d792aa65ea6e6874b2e |
We start by revealing a discrepancy between token-based pre-training (e.g., BEIT {{cite:f31983dc446fa3734e7625e042fe246b75403db2}}, MAE {{cite:5fc35f306bb3db2a330547ce532e5b3d17e7d111}}, and our work) and image-based pre-training (in particular contrastive learning, e.g., MoCo-v3 {{cite:02d92285c2f8ce8e2d627d9da1fce1853861c1b6}} and DINO {{cite:5d57a1282d21a06dface098b83ef5928f39f3360}}) methods. From Table REF, one can observe comparable performance on ImageNet when the pre-trained backbone is allowed to be fine-tuned. However, in the linear probing test (i.e., with the backbone frozen), MoCo-v3 {{cite:02d92285c2f8ce8e2d627d9da1fce1853861c1b6}} and DINO {{cite:5d57a1282d21a06dface098b83ef5928f39f3360}} achieve high accuracy ({{formula:1dbec886-8cdc-4a73-af86-b15dc044dbd0}} and {{formula:9315b281-30f6-4376-ac63-68a5afedad7e}}), while BEIT {{cite:f31983dc446fa3734e7625e042fe246b75403db2}} and MAE {{cite:5fc35f306bb3db2a330547ce532e5b3d17e7d111}} report much lower numbers ({{formula:7977440c-9953-444f-bc1d-1022c90b8ce4}} and {{formula:e5703221-7b68-46d2-a91c-587349528304}}). Our best model (i.e., integrating MIM with zooming-in) reports {{formula:cf83d028-770c-4be2-b804-f8406668d03a}}, higher than BEIT and MAE but still much lower than MoCo-v3 and DINO. In other words, the token-based pre-training methods rely largely on fine-tuning the backbone weights to achieve high recognition accuracy.
| d | 783bd99dd407b5660ac7cf2384dd8cb1 |
for some matrix {{formula:562a5ba8-7335-4f9e-b4d1-8d5fc54f028d}} and vectors {{formula:47265402-18c8-4fab-8bee-a92229f90b98}} . By taking the inner product with {{formula:96d0f579-feef-4024-9fab-cc5c7846b6e8}} in the above equation, we obtain
{{formula:22bda47a-5f24-48b7-a02f-72f41e00e75d}} and then derive {{formula:44455ca1-47ce-43e1-96f7-41270eebc1a2}} . Hence the scheme is easy to implement and very efficient. We also
refer to {{cite:24e87e0081c91d08050ce7e6ef90dc443499d2b7}}, {{cite:6bbc68d5d18b46ba9c0448a7dfbd165e7f3e816e}} for more detailed information.
| m | 87ce9c453e0b954f8ff2096ec7035d5b |
Lemma 2.5 ({{cite:1538f30d7c5521ed241324576908bdeb92ba9ab4}})
Let {{formula:6ec07d99-593f-4972-bded-9f433f2433b8}} , {{formula:98ab4aa9-d6bc-4ca3-bebe-8db928872f33}} be arbitrary, and {{formula:8026c375-4d21-4f0b-ac5d-94ee019889d5}} .
The zero solution of the equation
{{formula:d88273f7-f302-4658-9e6a-44f46f985e44}}
| r | 7794a3276dc95c1ecd33da2f4efc03a7 |
On the other hand, Active-iNAS {{cite:6af90f979978bf97c242a70d141664761a16e8cd}} notices that most previous DAL methods {{cite:feec1d2d6c354197370fc32b1a03d2b253cf78de}}, {{cite:7cd6020eb8f3e86e38163034ed08fc0e2388584b}}, {{cite:129ebc2ba29b9c46d6dd925471c85e4579c5cfa5}} assume that a suitable DL model has been designed for the current task, meaning that their primary focus is on how to design an effective query mechanism; however, the existing DL model is not necessarily optimal for the current DAL task. Active-iNAS {{cite:6af90f979978bf97c242a70d141664761a16e8cd}} accordingly challenges this assumption and uses neural architecture search (NAS) {{cite:1a306d63696f33823a78fa94a085caaae1ecfae3}} technology to dynamically search for the most effective model architectures while conducting active learning.
There is also some work devoted to providing a convenient performance comparison platform for DAL; for example, {{cite:0186621b61a8aa26d5cdf2b5c399e135edbc787b}} studies the robustness and reproducibility of DAL methods in detail and presents many useful suggestions.
| m | 10080da76e0c72cf056f9a1e945cd2cb |
It is valid for all {{formula:9b99355c-6eb9-440b-818f-6590d584c955}} even and fails for {{formula:9e43c4b4-b585-411e-b31c-2baeeb58e0fd}} odd.
In the course of proving several conjectures of Chen and Sun, DeSalvo and Pak {{cite:f29eed49a0ecd25a28340ddc8878fa4a8a6f7bf8}}
reproved this result. They remarked that due to the
Hardy–Ramanujan asymptotic formula
{{formula:9b8e842f-90d6-4259-89c3-57408e5d2aeb}}
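The Hardy–Ramanujan asymptotic can be checked numerically against exact partition numbers. The sketch below uses Euler's pentagonal-number recurrence for the exact values and the standard leading-order formula p(n) ~ exp(π√(2n/3)) / (4n√3); this is textbook material, not code from the cited works.

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def partitions(n):
    """Exact partition numbers p(n) via Euler's pentagonal number theorem:
    p(n) = sum_{k>=1} (-1)^(k-1) [p(n - k(3k-1)/2) + p(n - k(3k+1)/2)]."""
    if n < 0:
        return 0
    if n == 0:
        return 1
    total, k = 0, 1
    while True:
        g1 = k * (3 * k - 1) // 2   # generalized pentagonal numbers
        g2 = k * (3 * k + 1) // 2
        if g1 > n:                   # both offsets exceed n: done
            break
        sign = -1 if k % 2 == 0 else 1
        total += sign * (partitions(n - g1) + partitions(n - g2))
        k += 1
    return total

def hardy_ramanujan(n):
    """Leading-order Hardy-Ramanujan asymptotic for p(n)."""
    return math.exp(math.pi * math.sqrt(2 * n / 3)) / (4 * n * math.sqrt(3))
```

Already at n = 100 the asymptotic overestimates p(100) = 190569292 by only a few percent, consistent with its use in the error estimates discussed above.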
| r | e565bb976204a4a9820668522d07c0f4 |
To show the efficacy and efficiency of the proposed approach, we extensively evaluate our model on a variety of datasets and architectures.
In class-incremental and task-incremental sequential classification settings, EFTs achieve significant performance gains on CIFAR-100 {{cite:267d8ea75e3821d93e74f999b8e7a5c9b0cb0125}} and ImageNet {{cite:aa217c0547e84870c76d370ea9a476e2f2337817}} with only a minor growth in parameter count and computation.
We also evaluate our approach for continual generative modeling, demonstrating a {{formula:3fad1eba-0411-4ecf-897b-297b52f951a6}} relative improvement in FID {{cite:c92c771062f3c02c69c8673a94da40893b781d4f}} on the LSUN {{cite:081af6e20e96722d8076720fd9e0b3e28c0e291f}}, CUB-200 {{cite:6b025d2b4bb9138a3a9d7d7678640344f3a73bb2}}, and ImageNet {{cite:aa217c0547e84870c76d370ea9a476e2f2337817}} cat datasets compared to recent state-of-the-art models.
| i | 930a483e69200cb6ab4d83033bb177f0 |
An analysis tool for multigrid methods is the Local Fourier Analysis (LFA), introduced in {{cite:e4ee9df5fa77294916cb03748251030385360cc0}}. It can be used to study smoothers and two-grid algorithms. The technique is based on assuming periodic boundary conditions and transforming the given problem into the frequency domain using a discrete Fourier transform. Thus, the LFA can be used as a predictor for asymptotic convergence rates when considering problems with non-periodic boundary conditions {{cite:7c1b101aa89545a71d7c19a54be75b5e00a7355a}}. The smoothing and asymptotic convergence are both related to the eigenvalues of the operators for the smoother and the two-grid algorithm. These operators are very large for space-time discretizations, since the effective dimension becomes {{formula:e94e96c7-f61b-4371-9560-33a06919e2a4}} for a {{formula:f0e21ef8-76ae-4b2e-8937-4f76dbe08dba}} -dimensional problem. It is therefore not feasible to calculate the eigenvalues. When performing an LFA, the operators are of block diagonal form in the Fourier space, which reduces the problem to an analysis of so-called Fourier symbols. These are of much smaller size and make calculations feasible. Multigrid solvers have been analyzed in the DG context with block smoothers for convection-diffusion problems in {{cite:b4df51d6056ebe54c08ba83407e9f405e1035bf2}}, {{cite:8cd2f95067fb9aa4f909e0bdf3c32648fe05af98}}, {{cite:ae49f834466748bdbaddff26194043c2b7bc85b4}} and for elliptic problems in {{cite:47da8016830d18c813337e04cc8a420090f8e93b}}, {{cite:0cb9c63ede54d296b6a58e401b264fadfae36cc9}}. Space-time MG methods have been analyzed mostly for parabolic problems {{cite:1f538decb61ca279ca7d3ce5c60c816f89d5c324}}, {{cite:7c1b101aa89545a71d7c19a54be75b5e00a7355a}}, {{cite:bfb236345e85af72857cd3be854b2f3b7e52cc8e}}. 
Analysis of space-time MG algorithms for DG discretizations of advection dominated flows has been quite limited but can be found for the advection-diffusion equation or linearized versions of the compressible Euler equations {{cite:180772102bec830b3e66af68203311beeaee8a12}}, {{cite:0b879e4e01d09903e76f3db1da3d2bd4274cfcb0}} and for generalized diffusion problems {{cite:7c1b101aa89545a71d7c19a54be75b5e00a7355a}}.
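As a minimal illustration of how an LFA reduces the analysis to scalar Fourier symbols, the following sketch computes the smoothing factor of damped Jacobi for the 1D Poisson stencil. This is the classical textbook case, not the DG space-time operators analyzed in the cited works; all names are ours.

```python
import math

def jacobi_symbol(theta, omega):
    """Fourier symbol of damped Jacobi for the 1D stencil (1/h^2)[-1, 2, -1]:
    S(theta) = 1 - omega * (1 - cos(theta))."""
    return 1.0 - omega * (1.0 - math.cos(theta))

def smoothing_factor(omega, samples=100001):
    """LFA smoothing factor: max |S(theta)| over the high frequencies
    theta in [pi/2, pi], i.e. the modes not representable on the coarse grid."""
    step = (math.pi / 2) / (samples - 1)
    return max(abs(jacobi_symbol(math.pi / 2 + i * step, omega))
               for i in range(samples))

# Classical result: the optimal damping omega = 2/3 gives smoothing factor 1/3.
mu = smoothing_factor(2.0 / 3.0)
```

The point made in the text is that for space-time DG discretizations the symbol is a small block matrix rather than a scalar, but the reduction from the full operator to per-frequency symbols is the same.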
| i | 14956447d17a89728036a374628c750c |
Since the FDA of the present paper gives the minimal model of Chiral HiSGRA, it contains the same information as the BV-BRST formulation of this theory. Therefore, one can address various questions (actions, counterterms, anomalies, etc.). For example, Chiral Theory was shown to be one-loop finite {{cite:825bca954467a9abff0d889534e19d774340d2d9}}, {{cite:983d2649be0c6059f47e393641f5818350125b24}}, {{cite:637bbcabc62319b23899bee38f2f814b46a8951e}}, but extending these results to higher orders is challenging in the light-cone gauge. It would also be interesting to construct exact solutions, which should follow more easily from a twistor formulation of Chiral HiSGRA that is yet to be found; see {{cite:e92755ac0219558c640b9892510c1dd060ccb57a}} for first steps in this direction. The results of the present paper should also be helpful in looking for the twistor action of Chiral Theory. Lastly, the generalization of the results of this paper to anti-de Sitter space is straightforward (in this regard, it is worth stressing that, contrary to the old higher-spin folklore that there cannot be a smooth limit, Chiral Theory has a smooth deformation to {{formula:b2f8ccee-be51-4e7b-8966-92f2ac3702df}} or, equivalently, a smooth flat limit {{cite:26e3a81d06c6ebb14114ba03a96b28297d51e778}}, {{cite:0abe6120d4961527d114d3f38bd07716ed87c607}}), but not worthwhile, since the twistor action would automatically lead to Minkowski and anti-de Sitter versions of Chiral Theory depending on the infinity twistor chosen.
| d | 4b2b040197db10a40e011d9d3ad49465 |