| text | label | id_ |
|---|---|---|
We have investigated the gravitational lensing by the Bronnikov-Kim wormhole under the weak-field approximation
and in the strong deflection limit. The Bronnikov-Kim wormhole metric is the same as
that of a wormhole filled with massless and neutral fermions in Einstein-Dirac-Maxwell theory in a simple case {{cite:ce873ce5b9fbc33d634fef5a695a9db874ecd746}}, {{cite:eaaa66add092a6047ece9447b91553c95a785bcd}}.
The metric becomes the one of an extreme charged Reissner-Nordström black hole in a limit {{formula:60b66398-1713-4495-a9df-2ec88288f51f}}
and the one of a spatial Schwarzschild wormhole {{cite:dbd11cccf59d866d3d96384c43f23139d23c4b67}}
in an ultrastatic limit {{formula:689c37a4-fb6d-45cb-a0f3-b1eee27ef030}} under a constant ADM mass {{formula:36349fe1-139c-4137-8bfe-c774adbb18c5}} .
The parameter {{formula:377fb86e-8904-4be5-bfb4-dca78c8d5df9}} of the Bronnikov-Kim wormhole has been calculated partly numerically, while
{{formula:a987e914-232e-4e1b-b306-b161271f602b}} of the extreme charged Reissner-Nordström black hole {{cite:302247798a86f4b96966326f674979d839ccb943}}, {{cite:59cefb74df16cce14bc00da7ce4979222494223d}}
and the spatial Schwarzschild wormhole {{cite:896014fb4fe8a7a750c1b0699f181aa206791bb0}}
are obtained analytically.
Interestingly, in both cases of the extreme charged Reissner-Nordström black hole and the spatial Schwarzschild wormhole,
we obtain exactly the same parameters {{formula:8a3bf512-c304-4dba-bc36-4e114c1d0793}}
and {{formula:edbef7fa-e134-42eb-8760-4e97bb7da2b6}} .
| d | f9e9ebf48fbca769e6953cf080e5f3f6 |
This paper explores how representations learnt for object detection change with various privacy-preserving augmentations of the training dataset. We augment the COCO dataset {{cite:324378e02461a44707c8acccac135d58c2715847}} by face-blurring and face-swapping using an adaptation of StarGAN {{cite:595087573b5340d8d698eec244b5bdd3dcaaff75}}.
We use these, as well as the raw COCO dataset, to train Faster-RCNN {{cite:a78333170509e41b3d64ca1fbedf505e1ace2c0f}}.
We measure the resulting model's object detection performance on both transformed and original versions.
Finally, we attempt to measure the bias of the representations of all fine-tuned ResNet50 backbones via the Image Embedding Association Test (iEAT) {{cite:ca7512ff35db5330dfaab80b20b3f2886dca7d19}}.
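To make the augmentation concrete, here is a minimal face-blurring sketch (our own illustration with an assumed file name and a Haar-cascade detector as a stand-in; the paper's pipeline instead uses a StarGAN adaptation for face-swapping):

```python
import cv2

# Hedged sketch: blur detected faces in a training image before fine-tuning.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("coco_image.jpg")                 # hypothetical file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
    img[y:y + h, x:x + w] = cv2.GaussianBlur(
        img[y:y + h, x:x + w], (51, 51), 0)        # blur the face region only
cv2.imwrite("coco_image_blurred.jpg", img)
```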
| m | 4a7adaca0dc1a0e16f07a29944e7992f |
The strong solutions of the steady compressible Navier-Stokes equations with homogeneous
boundary conditions have been studied in
{{cite:c4ca4c52e6aecf5b9967c549dd621660a1782a33}}, {{cite:027714e97550777e4040b8b2f5f24756bfdbc74b}}, {{cite:3dc6c8a443536186d7b9265cc1872b8b31f901b4}}, {{cite:8c459dc7ab87364350681d5dec7d1c785b9221ac}}, {{cite:9bec94b94675730515aaf81db07303cfd1884c90}}. As to the existence of strong
solutions to the stationary Navier-Stokes system with the inhomogeneous boundary
conditions, the authors in {{cite:89e7a32a6a04e8f1db23beec0755bcc915011377}} proved the existence of strong solutions to
the stationary problems with an inflow boundary condition for the density and
Dirichlet boundary condition for the velocity in a smooth two dimension domain
{{formula:94e24cdd-7152-4b73-a86b-4cdc97382fff}} under the assumption that the Reynolds number is small. A number of
studies have shown that the regularity of the strong solutions is restricted by the
geometry of the boundary {{cite:11a89caeb035f715e247ab8df23d69858aa94df5}}. In {{cite:5e97e1b734907a2149b3b9f2fcb0ddee7f8a99bb}}, {{cite:59d4f63ef3dca0b1ac11f56ccba529c84b734de1}}, {{cite:8ca756312165bafb71449140ba50693e74e1a901}}, the authors
studied the existence and regularity of solutions to an inflow boundary value
problem under the assumption that the viscosity coefficient {{formula:e40ee422-d89d-47e4-ad68-3d4b5ef8f753}} is large enough on
a polygonal domain. In {{cite:f4c2e51b41f627d456d6ce0ff1fa48491f4da0ac}}, T. Piasecki proved the existence of a strong
solution around a constant equilibrium with an inflow boundary condition for the
density and
the full Navier boundary conditions for the velocity field under the assumption
that the friction coefficient {{formula:addd29ec-677a-4bd5-9304-0e698d7aac71}} is large enough. Later, the authors obtained
similar results in a cylindrical domain {{cite:d92bf229ba5bd33e9ee155fcb2ab353f668cabb0}}. It is worth mentioning that a
similar result for a compressible perturbation of a Poiseuille-type flow under the same
boundary conditions as before is also obtained in {{cite:1924c9591046cc9815e2371acc29c782e1e559a4}}. In contrast to the
boundary conditions prescribed above, the authors established the existence of
strong solutions near the constant state {{formula:9326150e-e1f4-4b56-99c5-cffef38ed8c0}} with Dirichlet boundary
condition on {{formula:ee97052c-691c-45b3-8c1e-04ecea7010cd}} and {{formula:57d5a7d9-64dd-44ed-8dec-a0f59e103c0a}} while slip without friction boundary
conditions on the wall {{formula:17200a07-9bfd-43dd-b415-40b306989f8e}} in {{cite:2f66c7b3df6356715fe22f7368afa6d4a9f81172}}. Recently, the authors in
{{cite:be463f4e48283a112d3ee7f03005b31fadbeb2f6}} studied the
existence of weak solutions to the stationary compressible Navier–Stokes system for
arbitrarily
large boundary data under additional physical hypotheses called molecular hypothesis
and positive compressibility in 2D or 3D domains. One can refer to
{{cite:ec406f272ca632bd58ec70b7a69bc581f3b2c8bc}}, {{cite:4b265071fbccfea66de59edfc4f390c2fb516339}} for more results concerning the existence of strong
solutions with inhomogeneous boundary conditions.
| i | f865e048d72b752b902806606c1836fd |
Much of the interest in (milli-)lensing flux anomalies arises because
they may be caused by the elusive substructure generically predicted by
the hierarchical structure formation in the cold dark matter (CDM) cosmology
(e.g. {{cite:7079210d77d1ad11cf523142cfa2ae1e20489efb}}, {{cite:de786d1258da55bf62452141acbad19daeb6df4c}}, {{cite:112934c9ea6189f23239b25710af75bbdbbb2657}}, {{cite:5cd81cc06904319a0f65357b9cee1e90238300b6}}, {{cite:52bf69dcc4eb8eb48cc01eaf1dd8b7b03dd5be8f}}a,b; {{cite:fad06aa9ed59c58b9f2cf4824399b53c7e071220}}). In this model, large structures
form via merging and accretion of smaller structures. The cores of
these small structures often survive tidal destruction and manifest
themselves as subhaloes (substructure). Recent high-resolution
simulations predict many thousands of subhaloes (down to {{formula:f04eeff3-3050-4008-9b03-8136d5c4d63d}} , or to circular velocity of {{formula:4dbb8ac6-d890-4792-8f9f-c44dfe761da2}} ;
e.g. {{cite:81bdadc391d0cf40f504a51f3d5c39702a63e4f6}}, {{cite:9941cf070dfc2aa6d349056b5524de97b765e61f}}), at least two orders
of magnitude more than the number of observed satellite galaxies in
the Milky Way, even after accounting for the newly discovered faint
satellite galaxies from the Sloan Digital Sky Survey
({{cite:0f14470011548c83833ac72475a5b5d68d180aac}}). A possible solution is that star formation
may be strongly suppressed in the vast majority of the low-mass
subhaloes (e.g. {{cite:f606a0dd116812b967a000f66ef3a52e04385888}}, {{cite:7079210d77d1ad11cf523142cfa2ae1e20489efb}}, {{cite:e689f7ff39724a18cc98e00f5894b92382148b2c}}, {{cite:c40fc6ba553e1c3f5562e05e9c7ddf5a16fcc0ff}}, {{cite:bcd199fc4395a2c57d04aac6760b026123dd70f4}}, {{cite:df50ceeac8a113f564e343b3577b00df4704a5dd}}), and thus they remain dark and
difficult to detect through light-based methods. If this is the case,
then gravitational lensing can potentially probe this population since
it depends only on the mass but not on whether the lenses are luminous
or dark.
| i | 3919b362b2971c62cc5afedb02bc6139 |
The results obtained in this paper on phases and amplitudes of SSA components of NHSI and SHSI allow one to prefer Laplace’s formulation. This formulation allows one to understand the presence of the harmonic series (1, 1/2, 1/3, 1/4, 1/5, 1/6, 1/7, 1/8 yr) in Figure REF . In a schematic way, variations in sea-ice extent are the infinitesimal (incremental) expression of oceanic motions, a topic that requires the use of the theory of fluid mechanics of turbulent flow. There is still no solution of the problem in the spherical case {{cite:8d0cf91f4cbefa0e091f0f395b64382d2d293b68}}, {{cite:8ae9e1d554da9d1145a42a20e1e9b4b9b7a76219}}, {{cite:2d1dc7dfc53b2f40c2b0fac75993a97213e33b37}}, {{cite:6ee06d37814d6e797942380d8cfd23790b5075d7}}, {{cite:e112d6467b47d963cfa5ede4d6c3a428621aebfa}}, {{cite:3c10b21e757f0c2b728f8780493fff0cbc052f86}}, {{cite:e5b3b44fd822f4a32b3bf8215edaf0ab8d246c2b}}. But there is one for a flow between two coaxial cylinders {{cite:582d64278f3bfc9e72792ad8db20c9a61b914379}}, {{cite:9204596c2bf94727e5a46b3adedd96a3b798c227}}, {{cite:a4172917bd628c365a451f92e88754f04d11ece0}}, {{cite:0ecd815adddebf240fa2273ad552229ef5528bcc}}. The perturbation in flow velocity {{formula:418bceea-6e92-4c4c-9c0b-c925b0693519}} can be written as:
{{formula:af354f35-2139-4929-afda-511d728d8706}}
| d | 7bcf76ef163458c447b325ecf74579fe |
We categorised projects into treatment and control groups based on whether they were successfully funded or not, then estimated the effect of each crowd feature on the fundraising outcome (i.e. fully funded or not), while controlling for project features. We do so using the traditional CEM estimate of the Sample Average Treatment Effect on the Treated (SATT): {{formula:b68ad1b7-00e2-4a22-bf38-c9cb7e5da72f}} where {{formula:421eb9dc-ed74-4835-8b91-456122478775}} is the outcome variable (funded ({{formula:dd4d9d7d-d35d-458e-9c52-7e4f15efa3e6}} ) or not ({{formula:b917d325-abb9-4906-975d-509a9e1c88d9}} )), {{formula:ce7a4dbb-c3e9-4c05-b999-bcd18571d406}} is the set of crowd treatments ({{formula:80298817-bd6b-4bba-a969-ec6a982363a2}} =Appeal, {{formula:361fafa6-0d73-4cc2-86a3-78cc9a441bdc}} =Momentum, {{formula:207dc44a-d958-4025-87dd-bf98cc298d2c}} =Variation, {{formula:4932f923-20d9-4d44-a1fe-0702b2dc2cda}} =Latency, {{formula:039d0b83-b559-419f-8635-f214374686ea}} =Engagement), and {{formula:9fcb2f99-b962-489e-8623-3911d3605759}} is the number of crowd treatment effects, i.e., five. We thus compute the sample average treatment effect of each crowd feature on fundraising success as the difference between two possible outcomes. For each project, the fundraising outcome under crowd treatment condition {{formula:813f1be3-a08d-40f4-8fd8-7414b5bd3452}} is always observed. However, the counterfactual condition {{formula:ac2b33ab-a9e2-4be0-845d-ec20d8fe6b31}} , i.e. the fundraising outcome under no treatment condition (e.g. no crowd appeal, momentum, variation, etc.), is always unobserved and is imputed via simulation using a logit model. Once the unobserved outcomes are imputed, each crowd feature's sample average treatment effect is estimated by simply averaging the differences over all observations and the imputed outcomes for the counterfactuals {{formula:e06da819-6835-4cf1-aa7a-b90bb34c672d}} . The SATT therefore follows the Rubin causal model (RCM), an approach to the statistical analysis of cause and effect based on the framework of potential outcomes {{cite:6b7a96ff21c82f0a6466b1cdb1d8221796a28755}}. Based on the RCM, the causal effect of each crowd feature is therefore the difference in fundraising outcome between the observed and counterfactual conditions.
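To make the imputation step concrete, the following is a minimal sketch of SATT with logit-imputed counterfactuals on synthetic data (our own illustration, not the study's implementation; covariates and coefficients are made up):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Synthetic matched sample: X = project covariates, t = crowd treatment,
# y = funded (1) or not (0).
n = 1000
X = sm.add_constant(rng.normal(size=(n, 3)))
t = rng.integers(0, 2, size=n)
y = (X @ [0.2, 0.5, -0.3, 0.4] + 0.8 * t + rng.logistic(size=n) > 0).astype(int)

# Impute the unobserved untreated outcome for treated projects with a
# logit model fit on the control group only.
logit = sm.Logit(y[t == 0], X[t == 0]).fit(disp=0)
y0_imputed = logit.predict(X[t == 1])

# SATT: average difference between observed treated outcomes and their
# imputed counterfactuals.
satt = np.mean(y[t == 1] - y0_imputed)
print(f"SATT estimate: {satt:.3f}")
```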
| m | df8a0a6950c962162d443bbc06fe1fe9 |
Finally, it must be emphasized that {{cite:16a895e9c05e3006be5d76468b11e5ebb76ccf1f}} was intended as a demonstration; the results presented here should not be considered a verification of the Boström/Sernelius theory {{cite:b20003d4906c8d7f1c6a023756f75fb752520fbf}}, or evidence against the Plasma model, but rather the discovery of a systematic effect that brings the experimental results into agreement with the theory described in {{cite:b20003d4906c8d7f1c6a023756f75fb752520fbf}}. It is unclear whether additional systematic effects exist; however, it had always been my impression that my experimental result was likely contaminated by additional, possibly large systematics {{cite:7ecd88b19d42ba7c3860314355573275637e388d}}. I have never considered the results of this experiment as suitable for constraining possible new long range forces; had I felt such was meaningful I would have produced those limits in the context of {{cite:16a895e9c05e3006be5d76468b11e5ebb76ccf1f}}. Here I have presented what I consider a very likely systematic effect. We now know to pay careful attention to position fluctuations in our ongoing work.
| d | e74009a24d1d3b59d76fb81fd664985a |
The idea that {{formula:0615ab7e-9aab-4934-a7f2-9b177a630b1c}} (the Newtonian gravitational coupling)
has probably experienced diverse values during the cosmic
evolution has many motivations. It began with Dirac's proposal {{cite:aba48b799f53fd71ebe0d1cc38d2e94eeaf76371}}, {{cite:cd987770ceb42a895b8f9c56efeae9e787190b84}}, {{cite:39fd055bec5f79281e11335d7e7f09b6ae9287fa}}, which states that the ubiquity of certain large dimensionless numbers (LDNs), arising in combinations of physical constants and cosmological quantities {{cite:552b7eb89563157689cc5a3625b0e5bac8a03970}}, was not a coincidence but the outcome of an underlying relationship between them {{cite:3c1b35092533cc20813bbffb81cd3b972c4f220f}}. In his proposal, Dirac pointed out that the electrical force between the proton and the electron within a hydrogen atom, i.e., {{formula:88056f8b-ce9e-4756-bd2c-d7f6391c9655}}, is a large number, being 40 orders of magnitude greater than their gravitational force {{formula:973e436c-0f6f-4390-8ad8-4e6eb91dc78e}} , i.e.,
{{formula:417c0cc4-f551-45a9-a9d4-c492b70be496}}
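A quick back-of-the-envelope check of this ratio with standard constants (our own illustrative arithmetic; the separation cancels, so only fundamental constants enter):

```python
import math

# Force ratio F_e / F_g for the electron-proton pair in hydrogen.
e = 1.602176634e-19        # elementary charge (C)
eps0 = 8.8541878128e-12    # vacuum permittivity (F/m)
G = 6.67430e-11            # Newtonian constant (m^3 kg^-1 s^-2)
m_p = 1.67262192e-27       # proton mass (kg)
m_e = 9.1093837e-31        # electron mass (kg)

ratio = (e**2 / (4 * math.pi * eps0)) / (G * m_p * m_e)
print(f"F_e / F_g ~ {ratio:.2e}")   # ~2.3e39, about 40 orders of magnitude
```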
| i | 598645accc25115e58e4f9bb984dfe9c |
The spin-orbit coupling (SOC) in the presence of an electric field can control the electron spin, and it can resolve
the device-design issues that arise from the inclusion of a local magnetic field.
Many fascinating phenomena, including spin-momentum locking, spin-orbit torque, and topologically non-trivial spin textures arising
from the antisymmetric magnetic exchange interaction, are accomplished by the presence of strong SOC in materials where inversion
symmetry is broken.
In the presence of spin-orbit coupling and electric field, electron spin and momentum can be coupled in non-magnetic semiconductors,
which gives rise to a rapidly emerging field known as spin-orbitronics {{cite:19209f6210838d95605aa7cb135197e242a09df4}}, {{cite:dac00a7837e3324dfc58509379ca13c3abd113e1}}.
The most effective method of creating a spin current inside non-magnetic materials is spin-charge interconversion.
The electrically induced regulation of the spin dynamics through spin-orbit coupling is the most practical and desirable
method of doing the interconversion {{cite:01f228ca8ca3f28a785d742ba20af0da1701d5d6}}, {{cite:4e68666ae74f47dfb1444580d939251d5decc097}}.
SOC enables the production and manipulation of spin-polarized electrons mainly by an electric field, as the field acts
on a moving charge carrier as an effective magnetic field {{cite:214da0e180046d2f4fade3f20fb630f057234899}}.
Dresselhaus and Rashba were the first to observe the bulk inversion asymmetry (BIA) in non-centrosymmetric zinc-blende or wurtzite semiconductors,
where SO coupling turns odd in the electron's momentum {{formula:054d10d8-e81f-40c9-b8dc-74bf514d8266}} {{cite:abe16e8fbdf3f8043b6ca1c4b32563f70a5b307f}}, {{cite:540a2aa66dcbeee8b5411a6a4a4135b1440a3d79}}.
Following this, the Rashba-Bychkov effect was discovered, which allows the control of the spin degree of freedom for a variety
of nonmagnetic two-dimensional materials where structural inversion asymmetry (SIA) is present {{cite:306d4b14d843c5486f1303b3047a5b4021bde29b}}.
| i | e25fbf55473ada8601b088880cf8f654 |
Here {{formula:bf91c790-c2f2-4a14-a87b-6f70a12ef195}} is the Gamma function. When the graph size goes to infinity, the probability that a new vertex forms a self-loop tends to zero, and thus it is easy to check that (REF ) and (REF ) still hold for our model without self-loops, following the proof proposed in {{cite:647c6c7ec71cfbeb80d793bb3afff4d720c1b5b0}}.
| r | 7058d2e9d18c142445e7e1b925aa082a |
Our setup can easily accommodate a massless physical particle described by some Effective Field Theory (EFT) at low energies.
This would have several interesting applications. The simplest one is for particle {{formula:8ddaa4f5-e991-4f8e-a8c3-85947c306919}} to be a Goldstone boson of spontaneous symmetry breaking. For example, particle {{formula:bfc09647-13e3-445c-9e9f-95da4583f485}} could describe massless pions like in {{cite:ecbb6fd0d64a29b97b4b25fa73548c6f7f5a5b00}}.
More challenging would be to take particle {{formula:0dd8a65d-daf0-4e01-a7f2-a1488e8faac5}} to be a photon. This would allow us to ask the question: what is the minimum {{formula:4cb25399-be01-401f-abf6-4ad0121bbfcb}} -anomaly of the UV CFT that can give rise to a photon in the IR?
Our method creates a non-perturbative bridge between low-energy EFTs and their UV completions (within the realm of QFT). Unfortunately, such applications will be very challenging numerically for the current methods. The main challenge is the extrapolation {{formula:67ed10a9-73b1-4e01-ba84-bcda74acef51}} in gapless theories (see for instance {{cite:d1e652e415e380691e4ee7a6db948923f381eeaa}}).
| d | d74e546130a40c98bb25fa3c95e846ec |
Krylov subspace methods are often applied to compute the spectrum of {{formula:770c0ef9-55e8-4694-a28d-700db8d8a6fc}} , {{formula:be4ffbd4-afb3-4686-bffa-c2f74f39c8a1}} , or {{formula:a4aea7ee-13f5-4a2b-b1e5-834f245facaf}} for a given large and sparse {{formula:2126ff00-0ded-4080-b13b-886589ff9ad4}} matrix {{formula:30b0189e-9fdd-404f-b7a2-8895c3433f20}} , vector {{formula:3b03dae9-1ca3-4245-8767-3f15f3b70d16}} and function {{formula:a465e20f-ec95-4d54-bc88-da479b9f65f5}} {{cite:123afab7bd09c3d7400c090f1f66e104beb16c1d}}, {{cite:2f1242cd2e2b61ef7f336e140f33d9df9c8d84d9}}, {{cite:292590c5569ccc8033fa9726e0182caedd8ed242}}, {{cite:1e16b5d4edfba188c8664d0223b823575aca3ec1}}, {{cite:a4108d58e4613496a814a43a6c6bad4c03cfe074}}.
The theoretical extensions of Krylov subspace methods for linear operators in infinite dimensional Hilbert spaces are explored in {{cite:689c49a61d6e772f46b941791a79bfb6429958c1}}, {{cite:3971d9b735ac6aa358adce33595a57daa948452f}}, {{cite:95487a83a2f504ebcef04c698526a294f746e777}}, {{cite:85793b5477531e6f83871db61c610971487ee14b}} to deal with matrices that are finite dimensional approximations of infinite dimensional linear operators.
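As a concrete instance of the finite-dimensional construction, a bare-bones Arnoldi iteration (a standard Krylov method; our own illustrative sketch, not code from the cited works) builds a small Hessenberg matrix whose eigenvalues, the Ritz values, approximate part of the spectrum:

```python
import numpy as np

def arnoldi(A_mv, b, m):
    """Build an orthonormal Krylov basis Q and an (m+1) x m Hessenberg H,
    accessing A only through the matrix-vector product A_mv."""
    n = b.size
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A_mv(Q[:, j])
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w = w - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:                # invariant subspace found
            return Q[:, :j + 1], H[:j + 1, :j + 1]
        Q[:, j + 1] = w / H[j + 1, j]
    return Q, H

A = np.diag(np.linspace(1.0, 100.0, 100))      # toy symmetric test matrix
Q, H = arnoldi(lambda v: A @ v, np.ones(100), m=20)
ritz = np.linalg.eigvalsh(H[:20, :20])         # approximates extremal eigenvalues
```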
| m | f7bad71f4e9bb62b1dbb3761cf1f98c2 |
As shown in the first row of Fig. REF , baseline methods fail to preserve the striped pattern after warping, especially around highly non-rigid body parts like the forearm and waist. The second row shows the results from the side view, where baseline methods are not able to deal with large deformations in poses and lead to blurry or incorrect fitting results. In comparison, our proposed method is capable of extracting accurate structural and textural information and performing reasonable warping even when there exists a huge discrepancy (e.g. large poses, or long sleeves in the target clothes but short sleeves in the reference image). The last two columns show that parser-based methods like CP-VITON+ {{cite:486722b26583b3c6037935c8c43d419a53d1768b}} and ACGPN {{cite:1c5a463efa9a3b32c0a575e6af0c55e33a1ebc34}} are sensitive to segmentation errors while learning warping flows. They do not always produce reasonable results for areas like necklines and lower-body parts. Even though PFAPN is a parser-free framework with less distortion in clothes warping, it cannot preserve or generate the body parts well, which results in blurry arms and shoulders. Unlike them, our method clearly preserves the characteristics of both the target clothes and the body parts, benefiting from the proposed self- and cross-DAFlows.
| r | 583fc789a9d2b6626f2fd7c365ed6eb8 |
Fairness notions “without commons”.
In Section we further explore “without commons” fairness, since its connection to EFX for chores seems to indicate it is the “right” notion to study for copies, and since disregarding common items is a natural way to limit envy among agents. We define notions similar to {{formula:7b53c146-a0b6-4938-9a2b-ac0fa883d178}} for other solution concepts beyond EFX, in particular {{formula:ba1440dc-46c6-4ccc-b28c-be64bcafb148}} and {{formula:d24d81b0-9e4a-43f0-a3f7-06301f0a0303}} . EFL (the basis for {{formula:93c65ace-0195-47f7-b746-c949b0f128e7}} ) is due to {{cite:a047a97507d9b01be2148657ff2155d3d0b2c22e}}.
We study how these new concepts expand the hierarchy of envy-based fairness notions. Expansion of the hierarchy is important for two reasons: First, intermediate “without commons” fairness notions can serve as a technical stepping stone for making progress on the elusive existence of more standard notions. Second, as we show, “without commons” envy-freeness has good share-based fairness guarantees (as measured by their approximation of MMS, the maximin share of every agent {{cite:4114c706fbdf71628f79b2934654ea0a819a6017}}),
especially in comparison to standard envy-freeness concepts like EF1. We thus believe such concepts to be of independent interest.
Our results in this section are summarized in Figure REF .
{{figure:1b8bcdec-3722-41f6-b1be-cb628017fcf2}} | r | 9382c5ea964c55bb7e6251f1c018ba29 |
We have shown that MURD yields superior performance over DRIT++ {{cite:1d0ed29b40c1ff20215792689154e1825c8a9fc9}} and StarGAN-v2 {{cite:6ab0a3b03a6d90ad7b50ba32395573f3d06b9806}}.
For every pair of sites, DRIT++ embeds images in a site-invariant content space capturing information shared across sites and a site-specific style space. The encoded content features extracted from an image of one site are combined with style features from another site to synthesize the corresponding harmonized image. Learning is unsupervised and hence paired data is not required. However, DRIT++ is not scalable due to the need to learn all mappings for all site pairs.
DRIT++ is also less effective because it cannot fully utilize the entire training data: it can only learn from two sites at a time, causing it to miss global features that could be learned from images of all sites.
Failure to fully utilize training data likely limits the quality of generated images.
Unlike DRIT++, StarGAN-v2 {{cite:6ab0a3b03a6d90ad7b50ba32395573f3d06b9806}} is scalable and performs image-to-image translations for multiple sites using only a single model. It has been applied to the problem of MRI harmonization {{cite:10b7833c5f82703e60d562647d1985ecc4cc969a}} with promising results.
In addition to greater scalability, StarGAN-v2 generates images of higher visual quality owing to its ability to jointly consider the information offered by images from all sites.
StarGAN-v2, however, does not explicitly disentangle images into structural and appearance information. This introduces the possibility of altering anatomical details during harmonization via style transfer.
In contrast, MURD enforces explicit disentanglement of content and style features by jointly considering images from all sites, allowing it to produce harmonized images with diverse appearances with significantly better preservation of anatomical details (see Supplementary Figures 1–3).
Disentanglement safeguards harmonization against altering image anatomical contents and allows gradual and controllable harmonization via interpolation of style features.
| d | b52406cb914204d372b8bfd7d5e1d6ca |
State-of-the-art (SOTA) methods for image reconstruction tasks such as
super-resolution, MRI reconstruction, or inpainting, leverage the power of deep neural networks
(DNNs) to directly learn a map from the measurements to the corresponding
reconstructed signal {{cite:46c46bf0cb1e5f10f150ecb5c4f50a2b7ae27b73}}, {{cite:8079cc6e60a7782a2c22a87ef62896973d08d174}}, {{cite:7ac5883dbd923e030049a4220b4f9e5cddbdd958}}, {{cite:083bc0a6f9b694510ebb4335d939cacd3801a8a1}}. Despite their extraordinary performance, such end-to-end
schemes are unreliable, failing, for example, to detect small, unusual signals
absent in the training data {{cite:8dd2da5dba09d4759bd1a5566a91d6a4fed88e1f}}. Part of the reason is that during
deployment they ignore the forward measurement model, that is, {{formula:9e9256b7-9cb2-4ca5-8f63-18dd61ec7f0e}} , even when the model is
known {{cite:2ce96059d2db108a33bf28d4fb2ce25925540f43}}, {{cite:9d5e8288098d58b08197e0407f159e1ce210ab97}}. Moreover, end-to-end DNNs for image
reconstruction require retraining whenever the measurement operator
{{formula:fa298c23-9b6a-4b3f-b013-8bc25ec09acf}} changes. Both shortcomings are absent in classical
optimization-based methods, albeit at the cost of poorer performance.
| i | 55fa9e3cd9133666c87e89db2f3ae972 |
The first difference is that the expression given by the unitarity cut method is written using the spinor formalism, while the results in this paper use traditional Lorentz-invariant contractions.
The second difference is that in the unitarity cut method, we have assumed the
external momenta to be in pure {{formula:418f3ab6-d4a5-475a-abbc-932b840a087f}} dimensions and only the loop momentum to be in general {{formula:fd5e7ade-7bfe-4a25-8908-53f2199156e2}} dimensions. For our new method there is no such constraint, and the external momenta can be in {{formula:d7dff141-30c0-4493-aa42-f005ef3ad2fe}} or in {{formula:11735afa-96a4-41d7-b6ee-a817939ff92f}} dimensions.
The third difference is that the results in this paper are defined in an iterated way, while the expressions given by the unitarity cut method form just one equation (although the differentiation has the spirit of iteration).
The fourth difference is that the expressions from the unitarity cut method use input of arbitrary form, while the one in this paper uses the standard input given in (REF ). This difference has a potentially huge impact on computational efficiency. The reason is that, with the development of the on-shell program, it is well known that tree-level amplitudes are significantly simplified if we use spinor variables with spurious poles, such as those given by the recursion relation {{cite:b46e3eac01fa9e205fac32c9764a0f11fb301cd7}}, {{cite:c45c2f1248354e144f403daab519a84880cd92aa}}. Thus it would be desirable to incorporate these advantages of the unitarity cut method into our current new strategy.
| d | 2f68355cf7e418eb33d0b6a35fe6aa5a |
The models specified here differ from previous Bayesian brain models in a few ways.
First, this study did not involve hidden layers or the learning of any generative models.
For many, generative variational models that learn to represent unobserved “causes” are the focal point of the Bayesian brain hypothesis {{cite:8898d77bdb81106781d2f7b8559bf5e66c534295}}, {{cite:1d95e95659fd153afe493ff5829d22fae1394d95}}.
Such models originated with the Boltzmann machine {{cite:f2e6a5daec1986472c611734c0d07d04b4cb5597}}, the Helmholtz machine {{cite:7d5b1f33f86b2bcb8b1fe48eea05cb60ae3f0e49}}, and the variational autoencoder {{cite:4e788fd628499381b0bd7805a907040da5175cbb}}, and have been generalized in ideas like predictive coding {{cite:be1fe688f36147928121fe4149d48ac2dfc234d8}} and free energy {{cite:4950891da79cd975030a3b8e0050c1ff1e0c1d6c}}, {{cite:1d95e95659fd153afe493ff5829d22fae1394d95}}.
Our probabilistic interpretation is not mutually exclusive with these model architectures, but remains to be synthesized with them.
By focusing on layers representative of two observed variables, we were able to derive and demonstrate Bayesian behavior under tractable first principles that allow for clear comparison with the true data-generating model.
In theory, the probabilistic relationships between input and output layer studied here should generalize to any two consecutive layers within a more complex, multi-layer network.
| d | 9a68e248e28afca5778fb0ad7f5e454c |
We now discuss the choice of tuning parameters and various algorithmic considerations.
In the above description, the choice of {{formula:432314a9-0fa0-46f3-8622-d5437b90825d}} could be level-dependent but optimizing
these tuning parameters is outside the scope of this work.
Following the discussion in {{cite:4e6636655e3d49dc344d0d8deb69a96c2af69e37}} and the empirical findings in {{cite:7f651928c12e2b16a19472db386d8c68147b9b66}},
we will scale the number of particles {{formula:2ac2f566-e761-40f6-8ea0-3e7772fd0faf}} linearly with the number of observations {{formula:1f3857be-6aa0-4d39-aaee-f0de21da91af}} .
Although the variance of {{formula:4fdd6db1-cde4-4bea-9fc4-20ff2da771e5}} decreases as we increase the burn-in
parameter {{formula:b93ecea6-8ef6-4dd0-83b8-b744465dc36d}} , setting {{formula:cb0f4f2e-3dd5-4c62-a3a8-55476739c16a}} too large would be inefficient.
{{cite:7f651928c12e2b16a19472db386d8c68147b9b66}}, {{cite:fcf8d2de07026d0442c93065d22b01d2542d9120}} proposed choosing {{formula:920b2f7d-4065-43ce-a698-e60b08208146}} according to the distribution of the meeting time.
In our context, as the stopping time {{formula:e7efab22-df67-4c29-ad48-6f58450a3bac}} typically decreases as the level {{formula:8d88b5ee-83f2-4b4b-a498-ecc36c58fae6}} increases,
a conservative strategy is to select {{formula:cf62ea0a-357a-4997-90e4-175a8f7e56e7}} based on the stopping time of a low discretization level,
which can be simulated by running ML-CPF and 4-CCPF as in Step 3.
We will illustrate this numerically in Section and experiment with
various choices of {{formula:d2d86376-681e-4c2e-816b-307ef6432064}} .
After selecting {{formula:f8c54967-842c-4188-a265-c34ae9d9d4cc}} , one can choose the number of iterations {{formula:6b953a3b-bb56-4c64-9659-3d4103fedfab}} to further reduce the variance of
{{formula:cde1977f-4d23-49c9-a5c6-683a5959e446}} , and hence that of {{formula:7b9d6806-ed7e-4c77-9317-96f863a85552}} , at a cost (REF )
that grows linearly with {{formula:70b4cacd-3a73-464e-8bdc-2cd68a5667ba}} .
On the other hand, when employing score estimators within a stochastic gradient method,
taking large values of {{formula:6abd29a0-846a-4f22-a0bd-286cda6fd490}} to obtain low variance gradient estimators would be inefficient.
Choosing the tuning parameters {{formula:8ac77721-6ecf-4a4b-a37d-4a7ad6090289}} to maximize the efficiency of the resulting
stochastic gradient method is a highly non-trivial problem, and could be the topic of future work.
| m | 3695fd8dccac765eb8b49112b9b9e5e8 |
First, we study the effect of several options for the proposed pipeline: (1) beat or downbeat alignment for input audio; (2) distance metric for the learned features; (3) miner and loss for metric learning. For (3), we use the proposed MultiSimilarity approach and TripletMargin miner and loss {{cite:f446c9829f155193fe84b5f1604ed0017c3c75df}}; we also test Contrastive loss {{cite:43ea26e746d21077703297324fd0cc803bafba9e}} with a BaseMiner, which samples pairs uniformly. Each version of the feature embedding is trained and tested on the Harmonix Set using 4-fold cross-validation.
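A minimal sketch of option (3) with the pytorch-metric-learning package (assumed here for illustration; the hyperparameter values are placeholders, not the paper's settings):

```python
import torch
from pytorch_metric_learning import losses, miners

embeddings = torch.randn(32, 128)        # batch of beat-aligned features
labels = torch.randint(0, 8, (32,))      # hypothetical segment identities

# MultiSimilarity miner + loss, as in the proposed approach
ms_miner = miners.MultiSimilarityMiner(epsilon=0.1)
ms_loss = losses.MultiSimilarityLoss(alpha=2, beta=50, base=0.5)
loss = ms_loss(embeddings, labels, ms_miner(embeddings, labels))

# Alternatives compared above: TripletMargin, or Contrastive with uniform pairs
tm_loss = losses.TripletMarginLoss(margin=0.2)
contrastive = losses.ContrastiveLoss(pos_margin=0, neg_margin=1)
```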
| d | 9b73692d0381d37710b2d61ad926b2de |
The overall framework is summarized in Fig. REF .
Given an image, we first learn to predict part masks using MaskRCNN {{cite:1d8525079a7c0f5d4543318aa2bdbddf60ad26df}}, a well-established object instance segmentation approach (Sec REF ). In Sec REF and Sec REF , we describe how to predict the direction and size of the oriented bounding box for each part. Finally, in Sec REF , we describe how to assemble the predicted boxes into a complete shape.
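As an illustration of the first stage, part masks can be obtained along the following lines (a sketch in which a pretrained torchvision Mask R-CNN stands in for our fine-tuned part-segmentation model):

```python
import torch
import torchvision

# Pretrained Mask R-CNN as a stand-in; the paper fine-tunes on part labels.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torch.rand(3, 480, 640)        # hypothetical input image tensor
with torch.no_grad():
    out = model([image])[0]            # dict: boxes, labels, scores, masks

keep = out["scores"] > 0.5             # confidence-filtered detections
part_masks = out["masks"][keep] > 0.5  # (N, 1, H, W) boolean part masks
```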
| m | de37397d5e0727116db9a5f38eb9296d |
with {{formula:83360d08-bf73-446e-978c-904d5862a34b}} {{cite:c2e8f1fbe0bed7e9d769aeb61ecc56ab45c7aa51}}. Having a non-trivial Lifshitz scaling exponent seems to play an important role in the phenomenology of cuprates {{cite:dee07792615010ecbeb98ba88153a35d5d544553}}. Its holographic incarnation is manifested in the so-called Lifshitz spacetime {{cite:9b453c68ce5ff9d061489224aca54ce2f2e5eec7}}, which has opened a completely new avenue for the study of strongly coupled non-relativistic field theories using the holographic duality. The whole dictionary for gravity duals with Lifshitz asymptotics has been recently formalized in {{cite:c1fff6d725f3c5811c2fb9bae1b98c1cb4fff690}} and it presents substantial differences with the standard procedure in AAdS bulk spacetimes. Given these facts, it is worth to consider more general and complicated gravitational models. In this direction, Horndeski theory {{cite:b5a080f48a44f52ff9edfc58b1372525089abddf}} is a promising framework since it constitutes a natural extension of the aforementioned holographic axion model, and it has indeed already been considered before {{cite:a994d216b4bac9e5ee4657ff28bc4af560232927}}, {{cite:4d6d2dbc4248ad485882ac8a7ce7634da8098021}}, {{cite:fe076b1368d763b7d69c30d57254485a031543c6}}, {{cite:cce57e7344c7fd8ce187c2a84cc55f92817ace45}}, {{cite:a351417fb4fc8021210e1707bf0e82b17f57f8ae}}, {{cite:c73ab6064f8f8b96520f46f215310bcc3a6ffae9}}, {{cite:04f3066452d16513f563108dba3673af8055add3}}, {{cite:642b2df340852988fef69289134783627e5ccb73}}, {{cite:c5f75b6d484e9196c41b3c3618b148fc64d2db01}}, {{cite:fdc956cfd442d5b86f2945b7e62c4ab884d06bcc}}.
| i | 4e80a3e7e070c5451a320a5317564c7b |
About the study of compressible immiscible two-phase flow, most of the works focused on isentropic compressible problems.
Feireisl-Petzeltov{{formula:9be74e85-1772-4b25-b319-b56d7de12c03}} -Rocca-Schimperna {{cite:abf8c2b8c45ec5f9214ff733664dbef67a9ba1e1}} established the global existence of finite energy weak solutions in 3-D by using the framework which was introduced by Lions {{cite:ddc96f98e7253ed492db44df245cfc44438a8a0b}}. Ding-Li-Lou {{cite:a788f80779b657c80356565a2ddf0a3ee1fb6115}} proved the global existence of the strong solutions in 1-D with large initial data.
Chen-Guo {{cite:47dc80dca157cae6a44e780785c0088a38f61ce2}} generalized the result of {{cite:a788f80779b657c80356565a2ddf0a3ee1fb6115}} to the case that the initial vacuum is allowed.
Abels-Liu {{cite:4a50ca9d871764eb6bd5138a1e5016862c74fe1c}} proved the convergence of the solutions for the incompressible Stokes/Allen-Cahn system to solutions of a sharp interface model for sufficiently small times.
Witterstein {{cite:ab134f96aeaf2f9fe44087246a3fbb5bc39bfb8d}} showed that the sharp-interface limit of the isentropic phase-field model is the standard two-phase compressible Navier-Stokes equations by the method of asymptotic analysis.
Wang-Wang {{cite:2740ca952824941f6ca1309f0e81ad7e28f3ea5b}}, Xu-Di-Yu {{cite:4d5959e2b3dbbcdd3869d310d5a977c8be79ff65}} investigated the sharp-interface limits of the incompressible phase-field model with a generalized Navier slip boundary condition.
There is not much work on the non-isentropic case. Kotschote {{cite:ab4cddfe256c254587e4eee16b5b3414e67d198c}} obtained a local existence and uniqueness result for strong solutions in 3-D for the compressible non-isothermal phase-field model.
| i | 75766cb6997e3c83c7e73a2b1181a311 |
Software developers spend about 19% of their development time in searching for relevant code snippets (e.g., API usage examples) on the web {{cite:956313bb0fdbf6b55ac26f9ddd16b2c482967a44}}. Although open source software repositories (e.g., GitHub, SourceForge) are a great source of such code snippets, retrieving them is a major challenge {{cite:44b4e3c7469bc548fde4cf3a70f99c2371efc732}}.
Developers often use code search engines (e.g., Krugle, GitHub native search) to collect code snippets from such repositories using generic natural language queries {{cite:087cebcf55fecb9e4c5d1af3e47faff5ca37db45}}.
Unfortunately, such queries hardly lead to any relevant results (i.e., only 12% {{cite:087cebcf55fecb9e4c5d1af3e47faff5ca37db45}}) due to vocabulary mismatch issues {{cite:1ed31544be189b8d0cfea35f639689b6547eff52}}, {{cite:c55d9273020f6878396b3eb13805e8e92fd56df8}}.
Hence, the developers frequently reformulate their queries by removing irrelevant keywords and by adding more appropriate keywords.
Studies {{cite:71804432ed3f50384e132a324afdff52cdf843f5}}, {{cite:5f24fc39f381ebae42db88e0889fb080eb9bc5f8}}, {{cite:087cebcf55fecb9e4c5d1af3e47faff5ca37db45}} have shown that 33%–73% of all the queries are
incrementally reformulated by the developers.
These manual reformulations involve numerous trials and errors, and often cost significant development time and efforts {{cite:71804432ed3f50384e132a324afdff52cdf843f5}}.
One way to help the developers overcome this challenge is to automatically reformulate their
generic queries (which are often poorly designed {{cite:71804432ed3f50384e132a324afdff52cdf843f5}}, {{cite:c55d9273020f6878396b3eb13805e8e92fd56df8}}) using appropriate query keywords such as relevant API classes.
Our work in the paper addresses this particular research problem – query reformulation targeting code search.
| i | cf1cf207e90ceda03dfbbe000f58dd96 |
We also conduct experiments on the real-world dataset Real-billiard {{cite:abaee0db442e47aa5a7b7194b6c26530ca3f1eef}} with our supplemented question-answer pairs.
Note that the billiard table is a chaotic system, and highly accurate long-term prediction is intractable.
Fig. REF shows an example of the ground-truth video and our simulated prediction based on the perceptually grounded physics model. It can be seen that the predicted collision events and trajectories are of good quality.
Tab. REF evaluates the prediction errors under two different rollout timesteps and QA accuracy with 5 competitors: VIN {{cite:53b3c8d1a67f57b3809e51909cc26682b313f9b4}}, OM {{cite:81e7b4671e1e4c77a03367d4490bebeef36e3060}}, CVP {{cite:e2099c9bf974480b4958f2c1c8aeef212b666713}}, IN {{cite:4c8e6aab6e3aa675ebbfdb4191a8d76deb73f3b2}}, and CIN {{cite:abaee0db442e47aa5a7b7194b6c26530ca3f1eef}}.
For the prediction task, the rollout timesteps are chosen to be the same (S1{{formula:eea9a90d-012c-4640-951a-4aa0b3dcc4ba}} ) and twice (S2{{formula:6b7a7733-d5fc-4124-9721-93fa962e77e1}} ) as the training time, where the training time {{formula:cf4f2e65-da12-4bc9-8a69-d4b11e560bea}} .
We refer interested readers to CIN {{cite:abaee0db442e47aa5a7b7194b6c26530ca3f1eef}} for more details. We find that VRDP is superior to these methods on both prediction and question answering tasks. Moreover, VRDP works well in long-term prediction. It reduces the S2 error on CIN {{cite:abaee0db442e47aa5a7b7194b6c26530ca3f1eef}} by 62.4%.
| r | 2e8c184305ff1aed69a5aa05c35e2126 |
In order to overcome the complexity of rational decision making, an alternative approach has been developed. The underlying assumption of this alternative approach is that individuals follow a particular heuristic when revising their opinions. The most prominent such approach is the one proposed by DeGroot {{cite:13f86fe0349c54068d689a9a4aa53a4128cdfed7}} and brought into the economics literature by DeMarzo et al. {{cite:1218cf2d3a16f9e552a5486b2ecb515d13e3f025}}. For a related work that studies belief exchange in networks see {{cite:d64b5940a4d59c05a9a9dab20b7ab6c95eca10cf}}.
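For concreteness, the DeGroot heuristic amounts to repeated averaging with a row-stochastic trust matrix (a minimal sketch; the weights and opinions below are made up for illustration):

```python
import numpy as np

# Row-stochastic trust matrix: W[i, j] is the weight agent i places on j.
W = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])
x = np.array([0.0, 0.5, 1.0])   # initial opinions

for _ in range(100):
    x = W @ x                    # each agent adopts a weighted average

print(x)  # converges to consensus when W is irreducible and aperiodic
```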
| i | 8b86c03f7c43f07f31097872ee9a3c0c |
Transbins:
Adabins {{cite:60d84fb35a171e7f26862665330b999b02c5cccb}} predicts adaptive bins and attention maps, and fuses the latter with the feature map from the decoder {{formula:7fe13998-69ec-4487-b2b2-a6f496b42d62}} . The motivation is to fuse the global information in the attention maps with the decoder features. We take advantage of the global information encoded in {{formula:f5070312-1942-49fe-804e-c4c9b89c0114}} via our encoder to predict only the bin widths from the full-scale ViT. To predict the distribution over the bins, we use a {{formula:6745b83c-8af7-48ad-b381-9c476b440aac}} convolution over {{formula:73a2e5a2-dee2-45cd-9b8b-fc0e3a3922a9}} followed by a softmax to predict an output of size {{formula:8ca57894-68b7-4f5b-b445-cfab1cb484e7}} .
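Schematically, such a bin head converts predicted bin widths and per-pixel bin probabilities into an expected depth (an illustrative sketch with assumed shapes, not our exact layer definitions):

```python
import torch
import torch.nn as nn

n_bins, C = 256, 128
decoder_feat = torch.randn(2, C, 120, 160)          # decoder feature map
bin_widths = torch.rand(2, n_bins).softmax(dim=1)   # normalized widths from the ViT

logits = nn.Conv2d(C, n_bins, kernel_size=1)(decoder_feat)  # 1x1 convolution
probs = logits.softmax(dim=1)                       # per-pixel bin distribution

edges = torch.cumsum(bin_widths, dim=1)             # cumulative bin edges
centers = edges - 0.5 * bin_widths                  # bin centers in [0, 1]
depth = (probs * centers[:, :, None, None]).sum(1, keepdim=True)  # expected depth
```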
| m | aeca786c0b50cb62f1d685ab55dec27a |
It is increasingly common in the natural and social sciences to amass large quantities of geo-referenced data. Researchers seek to use these data to understand phenomena and make predictions via interpretable models that quantify uncertainty taking into account the spatial and temporal dimensions. Gaussian processes (GP) are flexible tools that can be used to characterize spatial and temporal variability and quantify uncertainty, and considerable attention has been devoted to developing GP-based methods that overcome their notoriously poor scalability to large data. The literature on scaling GPs to big data is now extensive. We mention low-rank methods {{cite:bb9f138e6fc75e7af906aab50c4446768449d085}}, {{cite:e8e7a698f780a9dd2ab7a4b4180fae42ef07587c}}, {{cite:40ed537da8c2ffab72b4b4b75dbee6193f89324c}}, {{cite:e8315a24e69ff18f6e300c596461ecdb05a87fc3}}; their extensions {{cite:5157b49d47a6c9e95f1e5ac29e200ff6af81ad8d}}, {{cite:cb41fc62f4ef28094022e2b952d07e9db7557457}}, {{cite:114edfd42c6b6af9b88d145b3ae28603287131b2}}, {{cite:c58ae53a4afe7d7478f1fab89482a56367cb1d2e}}; methods that exploit special structure or simplify the representation of multidimensional inputs—for instance, a Toeplitz structure of the covariance matrix scales GPs to big time series data, and tensor products of scalable univariate kernels can be used for multidimensional inputs {{cite:e8b626b746d8354c172081ddf42fb4d9fc4dab8e}}, {{cite:ce54131c87e9f43ecb139000fe807463b720c480}}, {{cite:586f8705763d9219701a47536c61177d907af562}}. These methods may be unavailable or perform poorly in geostatistical settings, which focus on small-dimensional inputs, i.e. the spatial coordinates plus time. In these scenarios, low-rank methods oversmooth the spatial surface {{cite:16d2c20d26b24969206347e1166eacf99ad0cd1d}}, Toeplitz-like structures are typically absent, and so-called separable covariance functions obtained via tensor products poorly characterize spatial and temporal dependence. To overcome these hurdles, one can use covariance tapering and domain partitioning {{cite:33c0b4f2f61b4e6e934984fbec4405d146cab6b6}}, {{cite:4e128ffce3f69593758d6c7d79c148229d8a8384}}, {{cite:de34ae82787f9f833068d37c52be32f2fe6de0c4}}, {{cite:599b7bcc119f42164f590c30834a759435d71bee}}, {{cite:4ea1696f3448047a5747aac5ffd9940ef660a4aa}} or composite likelihood methods and sparse precision matrix approximations {{cite:51abf1482a98804f6b7e3c858e01a1a2a9336fb4}}, {{cite:8cf4b7e360f459b6add40b098e5ab607e2b72d52}}, {{cite:2fc0e2bdc6c93182d9315cf318f287e4a0a54b71}}; refer to {{cite:f6de18d7cb51c01833656a90b9e996e61a996a36}}, {{cite:915849b2f7de24aa281ba91207f6f49f1fdeeb22}}, {{cite:7bbe6b271450740068e5b100b4daaa48038f2190}} for reviews of scalable geostatistical methods.
{{figure:0d029720-a7d2-4aeb-b32e-f76af31e8c83}} | i | 43cca88602d1a1e5dd5f358039ce56a5 |
We perform DFT calculations using projector-augmented-wave (PAW)
method{{cite:5260f77ab5bf35f1d95b1fc1bdf98f867fde0242}}, {{cite:12d1b2ca0a2dd02bbe6c9b6649334ea98eab9acd}},
Perdew-Burke-Ernzerhof (PBE){{cite:6575812e59798ce1b76c43711482bcc14396c3e9}} and
hybrid exchange-correlation HSE06
functional{{cite:ddd873994bbecfde8a1445d088d83ebe73371646}}, {{cite:e36c2707549abaa2f1a42ebd2330b5942626a24c}} with default
mixing parameter value {{formula:d6796af7-cdd1-4886-ab14-ddae3d3311d6}} in the VASP
code{{cite:f5d06b9381ed97ecde32144281a68b467e8aa724}}, {{cite:e773ee9da81d29b84a4598fa878939a39cad7739}}, {{cite:5d30d82c8f3368fb8b937b092cacd4fc04ba4879}}.
Plane waves with a 550 eV kinetic-energy cutoff are used. The vacuum
distance between neighboring layers is set to 20 Å, removing
the nonphysical long-range electrostatic interaction. The ionic
Hellmann-Feynman forces on each atom and the total free energy are
converged to 10{{formula:9af62d76-80e2-4987-a6dc-d6b488b80b9f}} eV/Å and 10{{formula:c9ca6398-2d67-41eb-953e-2f00d6675346}} eV in the structure
optimization and band calculations. The Brillouin zone is sampled by
a uniform 21 {{formula:3569be1e-2353-4a75-8a41-eaaa799fd44c}} 21 {{formula:c5c25bb5-c577-4cc6-840a-e51a42f0430c}} 1 mesh. The electronic transport properties
are calculated using the electronic Boltzmann transport theory
implemented in BoltzTraP{{cite:312e72a93dbfa728593e6621adcf1fcd55884231}}.
In the phonon calculations, we used a 5 {{formula:72fecce3-45e4-425e-807f-933b4f7a6308}} 5 {{formula:b293df21-dcc0-45e4-b266-051e89d89754}} 1
supercell and 2 {{formula:4d57f927-e35e-45e9-97b8-c4de07b4282f}} 2 {{formula:69f90564-f583-4031-8262-7e5e49417de3}} 1 k-point sampling to compute the
second and third-order force constants. To solve the phonon Boltzmann
transport equation, we adopted a 101 {{formula:16b1b22f-8718-4667-b969-24c8c8df8067}} 101 {{formula:b4a9b0b4-79de-4430-884c-d27d21f7303a}} 1
{{formula:b2b702d4-58c4-49a6-8972-983dad2ce016}} -centered q-grid. We have also tested the convergence
of the lattice thermal conductivity with respect to the cutoff radius,
as shown in the Supporting Information. The linearized phonon Boltzmann
transport equation is solved by ShengBTE{{cite:de2ed14a32d2233574dc3baf5667e25a4851a1ca}} via a full
iteration.
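For orientation, the stated cutoff and k-mesh could be set up, for example, through ASE's VASP interface (an illustrative fragment under our own assumptions; the convergence thresholds above, the HSE06 step, and the transport/phonon workflow are omitted):

```python
from ase.calculators.vasp import Vasp

# Illustrative calculator consistent with the stated parameters,
# not the authors' exact input files.
calc = Vasp(
    xc="pbe",          # PBE exchange-correlation functional
    encut=550,         # plane-wave kinetic-energy cutoff (eV)
    kpts=(21, 21, 1),  # uniform Brillouin-zone mesh
    gamma=True,        # Gamma-centered sampling
)
```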
| m | 6c9cd6d68800040f9d1119473bd71dcd |
A cell-free massive MIMO system is considered with 15 APs ({{formula:583b3710-22fa-4d9f-b860-7b46c627ade7}} ) and 6 users ({{formula:45f4291b-a2f0-4a35-91db-bec8769705f6}} ) who are randomly distributed over the coverage area of size {{formula:3ead7705-527c-4ea9-bc46-f7bb3ed62859}} km. Moreover, each AP is equipped with {{formula:a318b442-3e1d-4f3d-9d90-e92b1c68f09a}} antennas and we set the total number of RTUs to {{formula:3f7d77dc-8eeb-433c-8ed9-f11b5efb2401}} , and random pilot sequences with length {{formula:d51de6b4-670e-4e93-967f-304617f7a327}} are considered.
Table REF presents the achievable SINRs of the users while the target SINR for both RTUs is fixed at 2.3. The power allocations for all users and the max-min SINR values are obtained using the proposed Algorithm REF . It can be seen from Table REF that both RTUs achieve their target SINR, while the minimum SINR of the rest of the users is maximized by using Algorithm REF (if the problem is infeasible, we set {{formula:95cbc90a-bc59-4673-a9d5-7b3179598240}} ).
Fig. REF presents the cumulative distribution
of the achievable uplink rates for the proposed Algorithm REF (the solid curves) and a scheme in which the received signals are not weighted (i.e. we set {{formula:bdae5341-1ac4-4f1d-8abd-93c03c37a947}} , {{formula:a9735b62-fabe-4609-9750-e4316a2b41dc}} and solve Problem {{formula:cf71740d-8a8f-4b87-a47c-761c3560b247}} ), which are shown by the dashed curves. As seen in Fig. REF , the median of the cumulative distribution of the minimum uplink rate of the users is significantly increased compared to the scheme with {{formula:a85ab8bc-3015-45ac-a3b0-d3c69c416665}} , {{formula:c9e410aa-292f-4c5a-97b0-b7fbfe3cc7c8}} and solving Problem {{formula:934f1aca-6561-47e3-9803-49427ade63ce}} .
As seen in Fig. REF , the performance (i.e. the {{formula:9c574816-b4fe-4725-9676-c7f33d4383e6}} outage rate) of the proposed scheme is almost twice that of the case with {{formula:806b3d1a-7d33-49fe-b0cf-a2099afaa1ad}} {{formula:18a735e8-c88c-4dc4-aec5-beca897ae14c}} . Note that the authors in {{cite:2352fe57eb98f36faefa5e2cae588bb494bc1719}} considered a max-min SINR problem defining only power coefficients and without QoS constraints for RTUs. Hence, the dashed curves in Fig. REF refer to the scheme in {{cite:2352fe57eb98f36faefa5e2cae588bb494bc1719}} along with QoS constraints. Moreover, note that the case with {{formula:e9524e74-fc3f-40e9-b43f-2c9f59618d35}} and {{formula:bc3c563a-76b8-4a56-81ae-a9533db5215b}} refers to the single-cell massive MIMO system, in which all service antennas are collocated at the center of cell. As the figure demonstrates the performance of cell-free massive MIMO is significantly better than the conventional single-cell massive MIMO system. Fig. REF demonstrates numerically the convergence of the proposed Algorithm REF with 20 APs ({{formula:a71a1a33-3a32-4cb5-8fe6-550fb635cd6f}} ) and 20
users ({{formula:299099e0-87e5-4cb1-af9c-fa50e5d7fc17}} ) and random pilot sequences with length {{formula:219d0814-c1da-4609-9cb9-d2de60133139}} . At each iteration, one of the design parameters is determined by solving the corresponding sub-problem while other design variables are fixed.
Assume that at the {{formula:89e906f3-2e63-41b8-9851-80f20c290865}} th iteration, the receiver filter coefficients {{formula:5fadf35b-87d3-4bef-9e1f-33f3fbce9c40}} are determined for a fixed power allocation {{formula:581fbf7e-b1a5-4a5c-8573-83f381e15153}} and the power allocation {{formula:143d88c9-3cbb-4253-8cbc-32b0cb7429d2}} is obtained for a given set of receiver filter coefficients {{formula:78225214-b8a7-4fd2-acab-8103a84afead}} .
The optimal power allocation {{formula:730aa1a6-46c9-4e85-81af-b43348ba598c}} obtained for a given {{formula:fc05e642-ee75-44da-bd59-fcc3811033dd}} achieves an uplink rate greater
than or equal to that of the previous iteration. As a result, the achievable uplink rate monotonically increases at
each iteration, which can be also observed from the numerical
results presented in Fig. REF .
| r | b532ac6ad2f5668aa3d739c01672a22a |
Our work complements previous extensive mathematical literature on localized deformations in a variety of non-equilibrium settings {{cite:13be7e931071da3080d1d08921fe2e33031623bb}}, {{cite:51cea4c0891221a0f0e6ed8532d0ceb64614f352}}, {{cite:cf849c7f5e4ed7c68c569fb233c2ee6081e97b33}}, {{cite:c4ad64108fd31056c06dbdf45d7fade5fc7a11d3}}, {{cite:9b7dae9bc48898163ecf4732ebe632a6161fc86b}}, {{cite:7ed4c0b0b0eb4b4d99e72a0e05aa68ad77c17266}}, {{cite:d2ce9c6e13a124649a39541a5ff7b44640ce8766}}, {{cite:1549517f3c79a243a33ab2db3f4e0e30e9acb169}}, by providing a simple physical realization of such localized deformations in equilibrium mechanical systems in a one-dimensional setting, complementing earlier work by us in two-dimensional settings {{cite:1ee542eff192660a93db4ae391332d51e1793b90}}. Our work also provides an example of how to employ multi-material 3D printing as a reproducible, rapid and easily realizable method for studying complex problems related to mechanically constrained growth at multiple scales. Due to the ability to control the deposition of material with very high resolution to create complex 3D structures, this approach could thus provide a practical research platform for investigating mechanical feedback on growth under different conditions, e.g. in the presence of gradients in material properties. By incorporating extensions to 2D systems, our approach could move us one step closer to understanding the two-way feedback between mechanics and growth/swelling kinetics, which is a defining feature of the growth of spatially extended structures in both materials science and biology.
| d | 1ad9c9e0aa0c5b6a958140589dc5d1e3 |
Current methods speed up the learning of radiance fields through different strategies {{cite:063a0f00ca8d4cda84b57cce842e79e52aeda0c7}}, {{cite:ec581c198c7a5c299b798cdd32dce92e892195c1}}, {{cite:a537229f4f677fd73258efb99c33d8af15e99e50}}, {{cite:af16d8f83335bb9d6792510238dbc138baa6a2d1}}, {{cite:1a8630172baa644ab7f67970f20a9625077ad4d9}}. For example, Plenoxels {{cite:ec581c198c7a5c299b798cdd32dce92e892195c1}} replaced neural networks with a sparse voxel model to learn radiance fields, which achieves a speedup of two orders of magnitude compared to NeRF {{cite:53d174ef6719f70a8bf38155b18c095141d34415}}. TensoRF {{cite:af16d8f83335bb9d6792510238dbc138baa6a2d1}} models the radiance field of a scene as a 4D tensor, which represents a 3D voxel grid with per-voxel multi-channel features. In contrast, Instant-ngp {{cite:a537229f4f677fd73258efb99c33d8af15e99e50}} introduced multiresolution hash encoding that permits the use of a smaller network without sacrificing quality, which also significantly reduces the training cost. Although these methods can train radiance fields fast, they require specific architectures, such as sparse voxel grids, discrete tensor coordinates or hash coding, which are not easy to adapt to improve the training efficiency of different neural radiance field variants.
| i | 0d8ab651458297bd136d9a41b8034e48 |
The computational complexity of self-attention grows quadratically with respect to the image size. To achieve computational efficiency, we leverage the advantages of CNNs and transformers and adopt the swin-transformer block {{cite:213cee50feb64ba839f89610614774ea5b86e1cd}} in our framework. The swin-transformer layer consists of two parts: local attention and global attention. Within the local attention, the calculation of self-attention is restricted to local regions, where image patches are divided into non-overlapping local windows. Cross-window attention introduces connections between neighbors by shifting the non-overlapping windows.
The structure of the swin-transformer block is presented in Fig. REF ; it is composed of MLP, Layer Norm, window-based MSA and shifted-window MSA.
The computation procedure of the swin-transformer block is represented as follows:
{{formula:b33c4fa8-33c9-4c1c-a403-1f28e493b8a2}}
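In code, the two-step update above corresponds to the following skeleton (a sketch only; `window_attn` is a placeholder for the W-MSA/SW-MSA module, whose windowing details are omitted):

```python
import torch
import torch.nn as nn

class SwinBlock(nn.Module):
    def __init__(self, dim, window_attn, mlp_ratio=4):
        super().__init__()
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.attn = window_attn                # W-MSA or SW-MSA module
        self.mlp = nn.Sequential(
            nn.Linear(dim, mlp_ratio * dim), nn.GELU(),
            nn.Linear(mlp_ratio * dim, dim))

    def forward(self, x):                      # x: (B, L, dim) window tokens
        x = x + self.attn(self.norm1(x))       # residual (shifted-)window MSA
        return x + self.mlp(self.norm2(x))     # residual MLP

# nn.Identity() stands in for the attention module just to exercise the block
out = SwinBlock(96, nn.Identity())(torch.randn(2, 49, 96))
```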
| m | 39d769d2a1b245cdf1d1333435d49827 |
ViTranZFAS: This is our final proposed framework. Essentially, we take the pre-trained vision transformer model {{cite:2ee97fc7a00e24c2f7123c9dce0befdda5608835}} and remove the final classification head. A new fully connected layer is added on top of the embedding, followed by a sigmoid layer. The network is then trained using the binary cross-entropy loss function, adapting only the final fully connected layer during training.
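A minimal sketch of this linear-probing setup (the timm package and the specific model name are our own assumptions for illustration):

```python
import timm
import torch
import torch.nn as nn

backbone = timm.create_model("vit_base_patch16_224", pretrained=True,
                             num_classes=0)      # strip the classification head
for p in backbone.parameters():
    p.requires_grad = False                      # keep the ViT frozen

head = nn.Linear(backbone.num_features, 1)       # new fully connected layer
criterion = nn.BCEWithLogitsLoss()               # sigmoid + binary cross-entropy

x = torch.randn(4, 3, 224, 224)                  # dummy face images
labels = torch.tensor([0., 1., 1., 0.])          # bona fide vs. attack
loss = criterion(head(backbone(x)).squeeze(1), labels)
```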
| m | c4c18d6be403bc52bc268890cc5c1445 |
Triggered by the development of automation and sensor technology, unmanned aerial vehicles (UAVs) have become increasingly prevalent in military, public and civil applications, such as autonomous combat, target detection, video surveillance, data collection, disaster management, network coverage extension and so on {{cite:fab3bafa18ce3ed612164f37d7b2ccfdb99c9996}}, {{cite:0f8bba0cbae20da1a75b76f8eea5616e14c642f2}}.
With the distinctive advantages of high mobility, quick deployment, cost-effectiveness and line-of-sight (LOS) or near-LOS communication channels, UAVs open a promising prospect for the future development of society and technology, and therefore have attracted tremendous interest from both academia and industry {{cite:deb09b2cddceec43098d7c19663965e0c72318d8}}, {{cite:5b49a91346994b3075f5aea748459724e099046b}}.
To support diverse applications of UAVs, reliable and effective information transmission within the UAV network and with the ground control station (GCS) is of crucial importance {{cite:4407c7888f500dacecbe31aedc1c752aeca0abd7}}. However, communication among a swarm of UAVs can become unreliable, especially when considering the high mobility of UAVs, the constrained power of the hardware, the throughput requirements of data packets, and the limited amount of radio resources {{cite:7d96ea2a8dbf02cf7523fe8aad4b6918fa016aeb}}.
| i | 0494a32ca6897d426fabd758df967e7b |
To illustrate the model performance under this subcase, we follow an example of the quadratic cost function with linear perturbation in Section 5.2 of {{cite:f451d07da366cf4a617edd2d546bf68a61da1f83}}: {{formula:d66f42ce-0b5a-4b7c-8b5b-8f643c602279}} We let {{formula:2ffdf7c4-6410-454b-8e5a-88cc85d61248}} and the decision space {{formula:322462da-b2bb-4e2f-ad5d-c7318b710200}} , and set {{formula:c25b78f9-831d-4971-a4cb-abe15a45031e}} to be known beforehand. We use some misspecified distributions with {{formula:16a19f17-fe85-4c2b-9d1c-f9d3e7967484}} but keep {{formula:89e6d5c3-67b2-4eca-8102-cab61fec9413}} , such that we have {{formula:a354677b-b132-49e2-9f4d-101583846409}} . Then we have {{formula:a97332b6-cff0-4f8a-b6bf-3d88f2b68982}} . We show through experiments that our P-DRO model (fit with a normal distribution) can also achieve zero error under a large ambiguity size {{formula:ee68aa60-282a-4032-9b3a-fc1baeeeb1dd}} (like the NP-DRO in {{cite:f451d07da366cf4a617edd2d546bf68a61da1f83}}, {{cite:84e3db5c30e4c71c5cc32cdcb1cca6c6d18c9d9e}}), outperforming the ERM loss no matter whether we use parametric or nonparametric models.
| r | 9f5f2a394d38af18506db8becd749133 |
Recent advances in deep learning suggest a new way of thinking for solving POMDP problems. However, very little work leverages deep reinforcement learning in partially observable environments. Among this work, {{cite:a30537d82aa19e9d32dc56de608582285f49db73}} adopted DQNs to solve conventional POMDP problems. A policy is obtained with a DQN that maps concatenated observation-belief vector pairs to an optimal action. Their work (we call it DBQN) is designed for model-based representations of the environment where the transition, observation and reward functions are already known. Thus, the belief can be estimated precisely with Bayes' theorem and can serve as input to the neural network. However, in most real-world POMDP problems, the environment dynamics are unknown. To address this, {{cite:73b3bf02d696b7edb543f2ce6b6c020706e969da}} adapted the fully connected structure of DQN with a recurrent network {{cite:f4cb9ba3fc431bb5bfd7c3600de1d809c30a5448}}, and called the new architecture Deep Recurrent Q-Network (DRQN). The proposed model recurrently integrates arbitrarily long histories of observations to find an optimal policy that is robust to partial observability. However, DRQNs consider only observation histories without explicitly including actions as part of the histories. This negatively impacts the performance of the approach, as demonstrated in Sec. 4. {{cite:6807f6dedbbf7be33fde2714efad5d495fb7d214}} combined DRQN with handcrafted features to jointly supervise the learning process of 3D games in partially observable environments; however, the approach suffers from the same problem as DRQN since it overlooks action histories. {{cite:8554e29303b9588388ea67962f0926968e48ca2a}} extended DRQN to handle partially observable multi-agent reinforcement learning problems by proposing deep distributed recurrent Q-networks (DDRQN). The action history is explicitly processed by an LSTM layer and fed as input to a Q-network. In DDRQN, each action is forcibly decoupled from its associated observation, despite the fact that action-observation pairs are the key to belief updating. As a result, the decoupling of actions and observations in DDRQN negatively impacts belief inference.
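For orientation, the DRQN idea reduces to an LSTM that integrates the observation history before a linear Q-value head (a minimal sketch, not the cited architectures; action-history handling as in DDRQN would additionally feed past actions into the sequence):

```python
import torch
import torch.nn as nn

class DRQN(nn.Module):
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.q_head = nn.Linear(hidden, n_actions)

    def forward(self, obs_seq, state=None):    # obs_seq: (B, T, obs_dim)
        h, state = self.lstm(obs_seq, state)   # integrate the history
        return self.q_head(h[:, -1]), state    # Q-values from the last step

q_values, _ = DRQN(obs_dim=8, n_actions=4)(torch.randn(2, 10, 8))
```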
| i | 5ecbd0e793a3b4d68a5dbf55ba2c84c6 |
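For concreteness, here is a minimal PyTorch sketch (our own; layer sizes and names are illustrative assumptions, not the architecture of any specific cited paper) of a recurrent Q-network that keeps each action paired with its observation, which is the design point criticized above.

```python
import torch
import torch.nn as nn

class ActionObsDRQN(nn.Module):
    """Recurrent Q-network over action-observation pairs (illustrative sketch)."""
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.n_actions = n_actions
        self.lstm = nn.LSTM(obs_dim + n_actions, hidden, batch_first=True)
        self.q_head = nn.Linear(hidden, n_actions)

    def forward(self, obs_seq, act_seq, state=None):
        # obs_seq: (B, T, obs_dim); act_seq: (B, T) previous discrete actions.
        act_onehot = torch.nn.functional.one_hot(act_seq, self.n_actions).float()
        x = torch.cat([obs_seq, act_onehot], dim=-1)  # keep actions paired with obs
        h, state = self.lstm(x, state)                # recurrent belief summary
        return self.q_head(h), state                  # Q-values per timestep

# Usage: Q-values for a batch of 4 histories of length 10.
net = ActionObsDRQN(obs_dim=8, n_actions=3)
q, _ = net(torch.randn(4, 10, 8), torch.randint(0, 3, (4, 10)))
print(q.shape)  # torch.Size([4, 10, 3])
```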
In the strong deflection limit,
the logarithmic behavior appears in the deflection angle of light
in the Schwarzschild spacetime
{{cite:d28f0727a0d480d3a951e579997209d030946c71}}, {{cite:e45c6d00b676357f24c578e7c33163b401c36540}}.
Later, Tsukamoto showed that such a logarithmic behavior is
a rather general feature in a static and spherically symmetric spacetime
with a photon sphere
{{cite:4ca76aa091b2eee3603d10ef83c3098f73629943}}, {{cite:0085ce0dc3df38afb198bf996f64d97a37c093dd}}.
Under certain approximations,
conventional lens equations with the logarithmic term of
the deflection angle of light in the strong deflection limit
were solved by Bozza for the Schwarzschild lens
{{cite:e45c6d00b676357f24c578e7c33163b401c36540}}
and
by Tsukamoto for static spherically symmetric spacetimes
such as Ellis wormholes
{{cite:4ca76aa091b2eee3603d10ef83c3098f73629943}}, {{cite:0085ce0dc3df38afb198bf996f64d97a37c093dd}},
where a source is assumed to be located nearly behind the lens object.
The position angle {{formula:40ba8682-b5c9-4a22-82dd-53ca956b8009}} of the lensed image can be split
into {{formula:d261aada-b322-4ccc-9757-7c5f7e05d3e2}} and a small offset angle {{formula:4ca4c461-1d6c-4335-9f9d-c8ccef948afb}} ,
where {{formula:641dddc4-a5b6-41fd-9845-9a419fa90184}} is a positive integer
(corresponding to the winding number of the light ray
orbiting around the lens),
and
{{formula:a60eeb46-e486-4638-89ff-e98e5d0b6293}} denotes a small offset angle to be determined
by solving the lens equation.
Indeed, the leading-order solution in the strong deflection limit
agrees with the numerical solution, as shown in those works (the standard expansion is recalled below).
| i | 5237401fe7418ca6369822bfa365415b |
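For reference, the strong-deflection-limit expansion underlying this discussion is commonly written in Bozza's form (a standard parametrization from the literature, not a formula quoted from the text above):

```latex
% Strong deflection limit of the bending angle: \bar{a}, \bar{b} are
% lens-dependent coefficients and b_c is the critical impact parameter
% of the photon sphere; the divergence is logarithmic as b -> b_c.
\alpha(b) = -\bar{a}\,\log\!\left(\frac{b}{b_c} - 1\right) + \bar{b}
            + O\big((b - b_c)\log(b - b_c)\big).
```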
For systems where there is a sign problem {{cite:c7d7b00cdd739f4d04e61eef11bd65377a496146}}, constraining the random walks that sample the space of auxiliary fields has led to considerable progress; these methods are called constrained-path Monte Carlo (CPMC) {{cite:ab998d58dcf025582280677463cea21622d80fc4}}, {{cite:5a95d36ca2dfb5d1df2663e3903f158766748da0}}. The sign problem arises from the combination of the Pauli principle and the use of random sampling; as the system size or inverse temperature increases, the signal-to-noise ratio vanishes exponentially. The idea is to constrain the sign or phase of the overlap of the sampled Slater determinants with a trial wave function {{cite:5a95d36ca2dfb5d1df2663e3903f158766748da0}} (a toy illustration is given below). Applications to a variety of systems have shown that the methods are very accurate, even with simple trial wave functions taken directly from mean-field calculations {{cite:38be0f4c3edad4f8169b98852a8b0d9d5beff1c4}}. Here, we mention the key features of ground-state auxiliary-field quantum Monte Carlo (AFQMC) methods that are relevant to this topic.
| m | efd753820e3dac319a5934b0fee28faa |
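The following toy sketch (ours; the "propagation" is a plain random walk, not an actual auxiliary-field update) illustrates only the constrained-path rule itself: walkers whose overlap with the trial state changes sign are discarded, eliminating the sign problem at the cost of a systematic bias.

```python
import numpy as np

rng = np.random.default_rng(1)
dim, n_walkers, n_steps = 6, 500, 100

psi_T = rng.normal(size=dim)                 # trial wave function (assumed)
walkers = np.tile(psi_T, (n_walkers, 1)) + 0.01 * rng.normal(size=(n_walkers, dim))
alive = np.ones(n_walkers, dtype=bool)

for _ in range(n_steps):
    # Random-walk update standing in for the auxiliary-field propagation.
    walkers[alive] += 0.05 * rng.normal(size=(alive.sum(), dim))
    overlap = walkers @ psi_T
    alive &= overlap > 0                     # constrained path: kill sign flips

print(f"surviving walkers: {alive.sum()} / {n_walkers}")
```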
In order to understand why the inclusion of resonances above threshold leads to an improved determination of the {{formula:f86e07ca-413f-4589-9a8a-56df065c2975}} quark mass,
we provide in Fig. REF a graphical account of the landscape of {{formula:e36ed12c-63dc-491e-8e68-8a79fef067f0}} above threshold.
The upper plot shows that the continuum alone does not describe the data in the energy range of the {{formula:0b2651cd-531a-44d0-a69b-a8b3bbfbae73}} , {{formula:02f73ebb-96b6-4afe-8126-02e466b59346}} ,
and {{formula:191ab8c6-ebdb-4605-866a-edbf3f8c7610}} resonances and the pQCD limit is reached only when {{formula:50d21446-456a-461d-ab1c-be0051de5f5f}} is above threshold by an amount
of the order of the {{formula:b6e63716-42e2-4310-b0e5-17f8b6d5f6aa}} quark mass, i.e. far above the energy range where data are available.
It is therefore not a surprise that with the continuum ansatz alone one cannot obtain stable solutions from the set of sum rules.
The second row of plots in Fig. REF shows how the global description of data for {{formula:5a7ac3f8-b6f5-4274-b78c-0df05d7e34c1}}
can be improved by the inclusion of Gamma distributions, Eq. (REF ), for the {{formula:e68d7e4d-29f6-4924-9f3d-20a63e5c1d23}} and {{formula:4f59aba4-39e1-424b-8367-b24bc7459f70}} resonances.
If we use the total decay widths {{formula:2643771d-6397-492a-9f06-0681e47289bd}} in Eq. (REF ) as given by the PDG {{cite:1d7f9189a31c75e531203973587484dba01c1db8}}
the local description of the data is still not good; however, the moments, i.e., integrals over {{formula:eb9978e8-795a-4e35-9d03-a0581e119b23}}, can be matched (the standard moment definition is recalled below).
To see this more clearly one can exploit the fact that moments do not change even if the total widths are significantly increased (which we denote by {{formula:06992db6-3f14-4bc0-b79e-815b34c300c3}} ) if one aims at a better visual representation of the local behavior of the data, as done for the right plot of the middle row of Fig. REF .
Here a good description of the data on average is clearly visible.
The lower row of plots in Fig. REF shows other possible choices, namely to add only one resonance,
the {{formula:b3d141fd-876a-449e-a212-8b85a1d54ec5}} , or three resonances, {{formula:8687b38d-c68f-4420-aa5d-3cc769428063}} , {{formula:7ef32bf9-3342-40fa-99c5-0e57f7d24908}} and {{formula:cc0c23f4-f86b-405e-911b-cdfb29e81bb2}} , on top of the continuum.
The former (latter) choice would lead to an underestimate (overestimate) of the moments in the region above threshold.
As a consequence, these choices would lead to solutions for {{formula:3b62c168-288e-4480-a63f-c81ef827f334}} from the set of sum rules in disagreement
with {{formula:570b995a-0795-4994-9f93-2c6ffd3af836}} as determined from data.
| r | 3cf8bdd888f2b69351a6a271993ed84d |
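For orientation, the moments referred to above are conventionally defined as inverse-power integrals of R(s) over the region above threshold (a standard sum-rule definition; s_0 and n are generic symbols, not taken from the text):

```latex
% Conventional moments of R(s) in quark-mass sum-rule analyses:
% s_0 is the threshold and n the moment index.
\mathcal{M}_n = \int_{s_0}^{\infty} \frac{R(s)}{s^{\,n+1}}\,\mathrm{d}s .
```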
Inference for the MSCE model is straightforward using the adaptive MCMC algorithm of {{cite:43fa0837e0cdce206f0bd4be062b7561b2bad5b0}}, and convergence of MCMC chains is relatively rapid; in practice, 10000 MCMC iterations is more than sufficient. We believe that the MSCE methodology is an interesting extension to the statistician's and met-ocean engineer's tool kits, providing a practically applicable yet statistically principled approach to quantification of conditional extremes behaviour over multiple spatial fields.
| d | 23c3da00d701c8c8a6116f82ea78eb4c |
Uncertainty for ADL. ADL can use uncertainty to select samples, but it does not mean it can totally avoid uncertainty in training process.
For most of shallow models such as SVM, Logistic Regression etc., their uncertainty is not so obvious in training. However, the uncertainty of ADL is very obvious as it is based on deep model.
Training a predictor based on neural network is always affected by stochastic factors such as data augmentation, mini-batch selection, normalization, initial labeled data, size of the selected sample set and etc. When both predictor and selector are deep model by data-driving, the uncertainty will be more difficult to eliminate.
Some studies {{cite:d309a28b934d9af76b0960fc4307ba1ced15282c}} {{cite:71126333c7c252914809243d6f6892330f7e162d}} {{cite:fe736b2c9312b1634a99c7ad7dee6b9aaf1a480d}} {{cite:fe736b2c9312b1634a99c7ad7dee6b9aaf1a480d}} {{cite:3b320145648a53e03a005b62495e64509cc2d624}} have reported that the newly designed selector may lead to worse or unsteady performances of ADL, or even perform worse than passive random selection.
The performance comparisons in many studies are not controlled by same baseline such as parameters, initial samples, optimization and etc. In reference {{cite:3b320145648a53e03a005b62495e64509cc2d624}}, it detailed addressed the problem of towards robust and reproducible ADL. This survey believe the steady and reproducible problem will be more and more important in the regime of ADL research.
| d | 01b70f6ad0511e6ae81433f5affcbf40 |
The charges of the {{formula:cd101441-cd36-45c5-9373-ba8fd0a306b5}} field are also such that a global PQ
symmetry will accidentally emerge from this local {{formula:d56db85d-6ea7-44e9-ae39-da9acf2abd56}} . Moreover
the global {{formula:2377ef91-5f0f-4653-a983-8e4abe1ad4b1}} symmetry has a color anomaly due to the
presence of the additional colored fermions and generates the {{formula:15856eec-5f4d-4616-ae98-9ef440c7b371}} term
for the axion field after the exotic quarks are integrated out.
After the QCD phase transition a potential for the axion arises from
QCD instantons {{cite:00ad5abf6f76082ef133e745a3708a5677c314ee}}.
Due to gravitational effects not respecting the global symmetry,
we expect higher dimensional operators at the Planck scale which
violate the global PQ-symmetry but conserve the gauged ({{formula:2deec5ba-185d-41bd-abf4-b4258257ee34}} ) symmetry.
Because of this extra contribution from higher-dimensional operators, the
axion minimum shifts from the value {{formula:cafe0c69-1c1f-41d9-b343-1c71cb6b8152}}. Such a shift
is restricted by the bound on the neutron electric dipole moment, and
so the range of the scalar v.e.v.s is also bounded from above.
As we will see, reaching the value of {{formula:b8a9ccd0-401d-4367-a5c6-21de23e165ae}} compatible with the full
DM density via the misalignment mechanism {{cite:45c20bf97ea126440a550473fbfff0bb01b97ac8}}, {{cite:114a2e2971e207975c4fb91917dcbd69a6163cb2}}
(the standard estimate is recalled below) is not always possible in the model, but luckily, if we assume the RH neutrino
masses to be at the GeV scale and the {{formula:59a2ba09-590f-4f52-9414-fabca709bfb5}} gauge coupling to be very small,
we can consider one of the RH neutrinos as a feebly interacting massive
particle (FIMP) DM component.
We will explore in the following the full parameter space of the model
and see that in most regions a mixed DM density arises, lowering
possible axion detection signals as the FIMP fills the gap to the
observed density.
| i | 15978c1375b3b388f1637cee2deaa1b7 |
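For orientation, a commonly quoted misalignment estimate for the QCD axion relic abundance (a standard approximation from the review literature, with theta_i the initial misalignment angle and f_a the decay constant; it is not a formula of this specific model) is:

```latex
% Commonly quoted misalignment estimate for the QCD axion relic density;
% theta_i is the initial misalignment angle, f_a the axion decay constant.
\Omega_a h^2 \;\simeq\; 0.12\;\theta_i^{2}
  \left(\frac{f_a}{9\times 10^{11}\,\mathrm{GeV}}\right)^{1.165}.
```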
We also compare our method with KL matching {{cite:d58191395fe2f4eb2a8051538f92616c18d11e74}}, a competitive baseline evaluated on large-scale image classification. MOS reduces FPR95 by 14.33% compared to KL matching. Note that for each input, KL matching needs to calculate its KL divergence to all class centers.
Therefore, the running time of KL matching increases linearly with the number of in-distribution categories, which can be computationally expensive for a very large label space. As shown in Table REF, our method achieves a 6x speedup compared to KL matching (a minimal sketch of the KL-matching score is given below).
| m | 1e35141c2abb2dca1daa41938ea372e1 |
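To make the linear-in-classes cost concrete, here is a hedged numpy sketch of a KL-matching-style score (our reading of the method; the score convention and the way class centers are obtained are assumptions, not the authors' code):

```python
import numpy as np

def kl_matching_score(probs: np.ndarray, centers: np.ndarray) -> np.ndarray:
    """For each input's softmax vector, compute the KL divergence to every
    class center and score by the minimum divergence (smaller = more ID)."""
    eps = 1e-12
    p = probs[:, None, :]            # (N, 1, C)
    q = centers[None, :, :]          # (1, K, C)
    kl = np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)  # (N, K)
    return kl.min(axis=1)

# Cost grows linearly with the number K of class centers, which is the
# overhead the paragraph above refers to.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(1000), size=8)       # 8 inputs, 1000 classes
centers = rng.dirichlet(np.ones(1000), size=1000)  # one center per class
print(kl_matching_score(probs, centers).shape)     # (8,)
```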
The underlying idea behind this approach is self-expressiveness, which posits that a data point sampled from a union of affine subspaces can be accurately reconstructed by a sparse linear combination of other data points in the same subspace. The process of computing these linear combinations is known as sparse self-representation. The key step in a sparse subspace clustering algorithm is to use the magnitudes of these sparse self-representation coefficients as edge weights for a similarity graph, which can then be used as input for spectral clustering {{cite:0fbc28906dd96f3b807a16fdd098f7421ba89d55}}; a minimal sketch is given below.
| m | 99410565bc1ed32eea1714669d7745ef |
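A minimal sketch of sparse self-representation (a simplified Lasso variant for illustration; practical SSC solvers differ in the optimization details):

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_self_representation(X: np.ndarray, alpha: float = 0.01) -> np.ndarray:
    """X: (n_features, n_samples). Each column is expressed as a sparse
    combination of the other columns; returns a symmetric affinity matrix."""
    n = X.shape[1]
    C = np.zeros((n, n))
    for i in range(n):
        others = np.delete(np.arange(n), i)        # enforce c_ii = 0
        lasso = Lasso(alpha=alpha, max_iter=5000)
        lasso.fit(X[:, others], X[:, i])
        C[others, i] = lasso.coef_
    W = np.abs(C) + np.abs(C).T                    # similarity graph weights
    return W

# The affinity W is then fed to spectral clustering, e.g.
# sklearn.cluster.SpectralClustering(affinity="precomputed").
```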
When C.1 holds, the approximation error induced by MOD factorization is negligible, since the original MOD {{formula:ce009fa8-9b8f-480f-901e-44569adadad8}} can be reconstructed exactly using {{formula:855245c1-c6ec-4b8c-ac89-4455a8df2083}} by applying the convolution formula {{cite:2b40c0ef75c9cf9ea41329ed3d758660e0583dd9}},
{{formula:83365ecb-b2e3-4a9d-89bc-b27c25dbfb96}}
| m | b358a6b11e3c067eed08705e58b4ebf1 |
In a number of CL settings, the observed accuracy can be a misleading metric for studying forgetting, particularly when compared to finetuning approaches.
Naive training with SupCon {{cite:85293604e9578a49547f5a2814ba64dcb828b294}} or SimCLR (in the unsupervised case) has advantageous properties for continual learning, particularly in longer sequences.
With LP-based evaluation, forgetting clearly decreases for wider and deeper models, which is not seen as clearly from the observed accuracy.
| i | 10fa5ab8b722aeaa1f253e9dbbf77e1e |
There is a variety of literature on distributed training methodologies, such as data parallelism {{cite:8df8e5577ae7c2109d99e0bac2b11417994d20b4}}, {{cite:c43ab76ce7d80328b935aa5a29e3ec9bb7f619bc}}, {{cite:3a88cf9bb1b4ede47ad60d5f64e9566128a16460}}, model parallelism, and pipeline parallelism {{cite:30f1e4487241d330bdbc8443a217c27ce01cb31e}}, {{cite:a5c4fddbb2c62469d708a17c91f35758ef8b80c9}}, {{cite:f94cc5c8120bfdcb01fef48950c9db5fd9a3eec9}}, {{cite:919b6eb925757d913ce65112d2ab4623d90e2024}}.
Data parallelism accelerates model training by making each device responsible for only a fraction of the input data. However, it requires each device to hold a whole model replica during training and to collaborate with the others through model synchronizations. Clearly, such redundant model storage does not resolve the per-device memory bottleneck. Model parallelism and pipeline parallelism are promising research directions. For example, Megatron-LM {{cite:a4e002fd2cabd170140ad181896612cfbec465a7}} partitions the model parameters and computation in each layer across multiple devices, but this brings significant inter-device communication of intermediate results and degrades training efficiency.
Recently, the Zero Redundancy Optimizer (ZeRO) has been proposed to eliminate these memory redundancies while retaining low communication overheads.
It only partitions the model states to reduce memory usage while retaining data-parallel computation by sharding and gathering them across the devices. There are several popular implementations, such as DeepSpeed {{cite:317d52a3ec3f938a8db0af4e7ad636104c8f62f4}} and Fully Sharded Data Parallel (FSDP) in FairScale {{cite:9edcd47e1aadc7993e033e887234e79aa0fc7542}}; the latter has been integrated into PyTorch {{cite:cf1fecb0211115c18d662ef82735e020c15e369a}} (a minimal usage sketch is given below). These systems have been successfully used to produce large pretrained models in real industrial scenarios at companies like Microsoft and Meta.
| i | 1e92f034912f648a1dd19ff1d8d5970c |
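As an illustration of the PyTorch integration mentioned above, a minimal FSDP sketch (assuming a distributed process group has already been initialized, e.g. via torchrun with a NCCL backend; the model and optimizer here are placeholders):

```python
import torch
import torch.nn as nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

def build_sharded_model() -> FSDP:
    model = nn.Sequential(
        nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1024)
    ).cuda()
    # Parameters, gradients, and optimizer states are sharded across ranks;
    # full parameters are gathered on the fly for forward/backward passes.
    return FSDP(model)

# Typical training step (per rank), assuming the process group is initialized:
# sharded = build_sharded_model()
# opt = torch.optim.AdamW(sharded.parameters(), lr=1e-4)
# loss = sharded(batch).pow(2).mean(); loss.backward(); opt.step()
```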
On the unsupervised side, {{cite:410a83449cdfc7f6e15f9bdd78c977abbb1f30bf}}
experimented with models that include losses corresponding to the
three criteria, and that could be used both for model tuning
and selection. Among such losses, many of which had been already
explored {{cite:a96d56178d62a4651bfeb73ac2cf10a94b5f8911}}, they tried to favour content
preservation with a reconstruction loss, with a cyclic consistency
loss (similar to the former, but with the transfer happening twice,
i.e., from source to target and back), and with a paraphrase loss
obtained with sentence-paraphrase pairs coming from a parallel
dataset.
| m | 1887bd0796fd1b42566e66efb052c6a3 |
By part (a) of Lemma REF , {{formula:378a3974-4040-4124-afcb-2a85d5d66b42}} can be implemented by a ReLU network {{formula:94f4c66a-207a-4330-ae9b-a9a401eb4d71}} with width {{formula:e715dd89-0da5-4649-a1bc-00c69105feea}} , depth {{formula:24f71d7d-592d-4ba6-80e8-08eee5c78523}} , size {{formula:a166dfef-0f08-4059-8fc1-a8888e03d3c5}} . By Theorem 6 in {{cite:4062652e108559238b9f2504e78d18ef14dc3a34}}, for any ReLU network {{formula:a6615a7d-09e6-4a72-be17-8acc9ed02a6c}} with depth {{formula:55baac9d-ac83-47e9-bb94-314f30ea849e}} and size {{formula:6a5a5e65-213b-4e84-9fac-6db4867a53db}} , there exists a universal constant {{formula:09ede420-2de3-493b-ab3d-5236ce6d0301}} , such that
{{formula:3f04b327-5b8b-4128-bfa5-2d7ecee4e960}}
| r | 4a2c500245763b387ae86e49f2399616 |
Proposition 2.3 ({{cite:d7a38e0a3cc4e0465b95241d1a56683a76ff4ac4}}, {{cite:814da7a17750f85be76d9b9f7694a6985294d1a0}})
Let {{formula:45fafb2b-f102-46aa-ac7d-10acc82e2ffa}} . Assume that the initial data {{formula:2e78c547-1168-429e-bbd1-af48c606721b}} {{formula:77fbb8a5-408f-49df-9ef0-23a34db045e2}} satisfy
{{formula:ffe93218-4317-4c0f-92f5-98638567cd40}} , and {{formula:c9230200-ba8c-4343-b13b-9eeb33cbbad0}} , {{formula:90f339d2-ef72-45e5-806f-a560ee3a5e15}} .
Then, there exist a
{{formula:0dacbe91-2f9e-4cc2-aed1-ee7fee1782bd}} and a unique smooth solution
{{formula:b2a1044f-ccc7-42bb-9f4f-39fa6aae6be3}} to the
incompressible MHD equations (REF )–(), and for
any {{formula:b0a47602-b6b4-4e81-948a-0277c069e7ef}} ,
{{formula:b69537c3-9f4c-4bd9-a142-d8ac06d8a1a8}}
| r | a8096ab5fcb64c61ccbd60d611948f2b |
An alternative method to achieve a sparse classifier is to use a sparse prior distribution on the model parameters and update the classification model from a Bayesian perspective. In fact, {{formula:320708fc-f3e9-4dd5-a5cd-10b652ca1b35}}-regularization is equivalent to employing a Laplacian prior distribution {{cite:7e5abce312d50215e676b503b1a80af0a91e4144}}. Compared to regularization, an advantage of using a sparse prior distribution is that one does not need to adjust the regularization parameter manually {{cite:a2f32298dfcad6a407f9e639f8d3a9b9145036ee}}. Notably, the automatic relevance determination (ARD) technique {{cite:fd13cc081caae14d5fe9166e298f081f2fdfd571}}, a hierarchical sparse prior, was introduced into the logistic regression classifier {{cite:715fd60fdbbd60292a44e58b5e85dab94286e145}} and has proved more adequate than the Laplacian prior (equivalently, {{formula:4a93cb18-aa91-4920-a0cb-323b3269c484}}-regularization) for feature selection {{cite:b9617b36c6082b66fd6ebfcf7d41ca6847cfebad}}. The ARD-based sparse logistic regression {{cite:715fd60fdbbd60292a44e58b5e85dab94286e145}} (SLR) has been widely employed for brain decoding, including EEG decoding {{cite:93bb1f32ff732985e2c838d3bc21b71ea107ca56}}, {{cite:c365cd81e4743dd31701202681153f36220b828b}}, {{cite:1633080b358ee5d1a13adf75dd32a065d946366b}}, {{cite:1da01a0ca256a6455e5feb1b1c333e5b6e2534e3}}, fMRI decoding {{cite:5489d8c9291ee63f72922f9d4e4e62cf21246b2c}}, {{cite:b95763c8d53762c774087d7d0b239b9c65516957}}, {{cite:9ebe91166832230f6561152c6d907f5b542969f8}}, {{cite:1470c33797c9269beae2370e15b25373a9106b41}}, and current source density analysis {{cite:4eda107929c6015818e500ec1d02fecb86e4a77a}}, {{cite:95fc5852a3ac6147aecd32fa3550d987d758b52e}}, {{cite:366b1ab924836bf810c4d66161e5f3e500dcafda}}. A minimal illustration of the Laplace-prior equivalence is given below.
| i | 97764e7346ffbb58da2da32f8bfc2d0b |
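To illustrate the Laplace-prior equivalence noted above (not the ARD-based SLR algorithm itself), a minimal scikit-learn sketch: MAP estimation of logistic regression under a Laplacian prior coincides with L1-penalized logistic regression, and the fitted weights are sparse.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic data with few informative features (illustrative choice).
X, y = make_classification(n_samples=300, n_features=50, n_informative=5,
                           random_state=0)

# L1 penalty == MAP with a Laplace prior on the weights.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
n_selected = np.sum(clf.coef_ != 0)
print(f"features kept by the sparse (Laplace-prior) classifier: {n_selected}/50")
```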
As mentioned above, the choice of the library in RVM is important for the performance of MEDIDA. Building an exhaustive library of training vectors and including arbitrary nonlinearities and functions is straightforward but quickly becomes computationally intractable. Any a priori knowledge of the system, such as locality, homogeneity {{cite:4d9ed033f350073977a77ad85c3ebcb522b458fc}}, Galilean invariance, and most importantly conservation properties {{cite:e6c2b3c28a565d917385b3a73ea8e26e335abce8}}, can be used to construct a more concise library. Conversely, the library can be expanded to systematically “explore the computational universe”, e.g., using gene expression programming {{cite:4725414e4a825e3753d9154bd5c10535f6291e22}}. Even further, additional constraints, for example on the stability of the corrected model, can be imposed {{cite:232d47dd6a2230a188ccdc7aa2797ae10411da4b}}. Effective strategies for selecting an adequate and concise library should be investigated in future work using more complex test cases (a toy library-construction sketch is given below).
| d | 519dc22487747e91c1af8acbf670761d |
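As a toy illustration of library construction, here is a hedged sketch of a SINDy-style candidate-term library (the terms and names are our illustrative choices, not MEDIDA's actual library):

```python
import numpy as np

def build_library(u: np.ndarray, dx: float):
    """Toy candidate-term library for sparse regression; the chosen terms
    reflect locality (low-order derivatives) and simple nonlinearities."""
    ux = np.gradient(u, dx)
    uxx = np.gradient(ux, dx)
    terms = {"u": u, "u_x": ux, "u_xx": uxx, "u*u_x": u * ux, "u^2": u**2}
    names = list(terms)
    Theta = np.stack([terms[n] for n in names], axis=1)  # (n_points, n_terms)
    return Theta, names

x = np.linspace(0, 2 * np.pi, 256)
Theta, names = build_library(np.sin(x), x[1] - x[0])
print(Theta.shape, names)   # (256, 5) ['u', 'u_x', 'u_xx', 'u*u_x', 'u^2']
```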
In this section, we numerically justify the efficiency of Transformer-MGK/MLK and empirically study the advantage of using mixture of keys on various benchmarks, including different tasks in the Long Range Arena (LRA) {{cite:ec8981527d4561f0a8c7ec391b20b5b40b5901de}} (Section REF ) and language modeling on Wikitext-103 {{cite:e7c10ae68f3cdf6402b4735ebef042aa0fd8d960}} (Section REF ). We aim to show that: (i) Transformer-MGK/MLK with half the number of heads is comparable or better than the baseline softmax and linear transformers with full the number of heads while being more efficient in both computational cost and memory footprints; (ii) Mixture of keys helps reduce the redundancy in multi-head transformers and benefits learning of the long-term dependency in long input sequences; (iii) Using the same number of heads, Transformer-MGK/MLK significantly outperforms the baseline softmax and linear transformers. Especially in the case of Transformer-MLK, it helps reduce the performance gap between softmax and linear transformers while still maintaining linear memory and computational complexities.
| r | baa109180391dd67cb8aeffc500badd6 |
Note that it is impossible to decide OV in time {{formula:0c3fa1e7-c784-4c23-afd7-9627a93a8d9e}} , so when {{formula:40ff5985-7bef-4e2e-bc09-0c8067e50651}} is polylogarithmic (as is the usual assumption), our algorithm has only polylogarithmic overhead over decision. Thus our result is able to turn the {{formula:aeab0ccc-53b2-457f-8a26-6712f8dda996}} -time algorithm of {{cite:cffc99c36c3181feec58781d5090cce03867e58d}} into an approximate counting algorithm, but Chan and Williams {{cite:b6914397056af6d199eb85ac55e4b566a82e18ff}} already gave a deterministic exact counting algorithm of similar complexity.
| r | e03bf431a56843753ad8b22ec614aae8 |
As a natural and efficient characterization for dimensionality reduction and pattern recognition, low-rankness has been widely explored for matrix data arising in a broad range of applications. The resulting matrix optimization problems with embedded low-rank structure can be found in diverse areas such as system identification {{cite:97e12f5e4801337f92c2707be6aa82fbf1aef600}}, control {{cite:5711426c9e70a93ee74a2a268b6caae7381928b8}}, signal processing {{cite:2538ae1d3115e579fbbf357b8eae6bf9309eaf5d}}, collaborative filtering {{cite:c77257de9cfcae431cbae88dd766de19f0d2c036}}, high-dimensional statistics {{cite:17c4fb16a3a97d249bc83d9236ac461e0da6df26}}, {{cite:b53982cad25beaa542d27862efccac060c0289ac}},
finance {{cite:29563ceac2b6594a697ebf9fab9e79a5e438cc23}}, and machine learning {{cite:b3e55ff2a84ddd8c89cc667d6808309c4f306af6}}, {{cite:094e9867b5b73fe266d367eefa591ac30d1f6013}}, among others.
| i | 418b872c4bdf0a749905968be51ba0db |
We evaluate our model against all the published approaches which made their code publicly available: Social Force model (SF) {{cite:8edd7885375ac8b04e0c423325e247d29227987b}}, Linear Trajectory Avoidance (LTA) {{cite:e3aa9dcae312e0a5c58936c776bbf232243bfb04}}, Vanilla LSTM and Social LSTM (S-LSTM) {{cite:4152881b9c62917c3e0a45e4bdd05f3908c710ff}}.
| r | 7fc92ab715e06e9e841321f9e5a13d8d |
First we must choose a global and static reference frame, since the
Neumann solution consists of integrals defined over the entire
radiative domain. Such a frame obviously exists, and in it the
velocity field of the fluid flow is also appropriately defined on a
grid. Next, our scheme requires the mathematical expressions for all
relevant quantities of the fluid flow (such as the emissivity,
absorption and scattering coefficients, the mass and particle number
density, temperature distribution) to be explicitly given in this
frame. These quantities are well defined in the comoving (or rest)
frame of the fluid flow, so simply through a Lorentz transformation
we can obtain their expressions in the global static frame (one
should also notice that, in order to get the values of these
quantities at an arbitrary position from the simulation data
defined on a grid, a suitable interpolation scheme must be
introduced). The Lorentz transformations for these quantities have
been extensively studied and can be found in many textbooks (e.g.,
in the appendix of {{cite:481c73bbb67740d0e71ec785d90eafb5024ed329}}); a minimal sketch is given below. With these quantities
specified, we can write down the RTE immediately and obtain the
Neumann solution as well. Then the following steps are just routines
that have been discussed in the previous sections.
| d | bbd73cd951ac4193ab96fe51ca14694c |
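A minimal sketch of the textbook transformation step (variable names are ours; only the standard invariants j_nu / nu^2 and alpha_nu * nu are used):

```python
import numpy as np

def comoving_to_static(j_nu, alpha_nu, nu, beta, mu):
    """Transform emissivity and absorption coefficient from the comoving
    frame to the global static frame. beta: flow speed in units of c;
    mu: cosine of the angle between the ray and the flow velocity
    in the static frame."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    doppler = 1.0 / (gamma * (1.0 - beta * mu))   # nu_static = D * nu_comoving
    nu_static = doppler * nu
    j_static = j_nu * doppler**2                  # j_nu / nu^2 is invariant
    alpha_static = alpha_nu / doppler             # alpha_nu * nu is invariant
    return nu_static, j_static, alpha_static

print(comoving_to_static(j_nu=1.0, alpha_nu=0.1, nu=1e15, beta=0.3, mu=0.5))
```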
First, we infer the interacting-particle systems approximating the dPMF dynamics by modifying the systems known to approximate cMF dynamics {{cite:47a3eed62bdfc74261783b9f922b7e478982d48d}}, {{cite:c542bee5f85bf112d87539d17fc5474ac26a2ff8}}.
This involves considering noisy synaptic interactions, whereby spiking updates in downstream neurons are i.i.d. following a normal law with mean and variance equal to {{formula:dbe6dfeb-c3e5-42bc-93ec-9ac0c06d4fd7}} , where {{formula:03815dab-c3db-45b6-b4a5-5014db599dd9}} denotes the number of neurons.
Conjecturing propagation of chaos {{cite:e1c887ee1e337a596721759733327b8996359d2b}} in the infinite size limit {{formula:9bf65ef9-5e27-4025-8380-30aa38f5fd3f}} allows us to justify the form of the PDE problem associated to dPMF dynamics, which is only well-posed for nonexplosive dynamics.
In order to extend this PDE characterization to explosive dPMF dynamics, we must give a weak formulation to the associated PDE problem.
Due to the mean-field nature of dPMF dynamics, this weak formulation involves considering the cumulative flux {{formula:ded11e74-6ceb-495f-aad8-8cbe5b852c51}} as an auxiliary unknown function.
| m | 17dfe6a855dc3a2fd388960225a2dc21 |
We qualitatively and quantitatively compare the sinogram results of our DP loss with the original VGG16 perceptual loss {{cite:09fedcddec155793e1a81aaf8db69ecf929e0e9c}} trained with a SIN model. The SIN model learns 1D super-resolution from a 23-angle sparse-view sinogram to a 180-angle full-view sinogram. The models used for comparison share the same architecture, data, and training procedure; thus, all differences are attributable to the different perceptual losses (a generic sketch of such a loss is given below).
| r | 670c699569c6ce17729c59085e552210 |
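For reference, a generic VGG16 perceptual loss of the family used as the baseline (the layer cut and the lack of weighting are illustrative assumptions, not the exact configuration of the cited loss):

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

# Frozen feature extractor from a pretrained VGG16 (up to an intermediate layer).
_features = vgg16(weights="IMAGENET1K_V1").features[:16].eval()
for p in _features.parameters():
    p.requires_grad_(False)

def perceptual_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """pred/target: (B, 3, H, W) images; compares deep feature maps."""
    return F.mse_loss(_features(pred), _features(target))

# Sinograms are single-channel, so a grayscale variant would repeat the
# channel, e.g. x.repeat(1, 3, 1, 1), before feature extraction.
```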
In this section we describe the methodology for analyzing JLA data to obtain
bounds on the equation-of-state parameter {{formula:24929243-41ea-4276-b028-6c29e8d88c94}} of dark energy.
There exist diverse statistical techniques for the analysis
of JLA data; some of these methods are discussed in detail in
({{cite:40b858a348d2786e13d4c52caf597e5e4256de09}}, {{cite:6823a623736839707b748e34ac1d45242dc259db}}, {{cite:55308dc69bd487c12923862a8db05d91537692c1}}, {{cite:cbe4e632a5ced6c555842d21180e68ca1bb18285}}, {{cite:bf9fe91e93d4f7c886b12087a1912d023d594e50}}).
However, we take the
{{formula:41536f85-8c8e-4068-8204-3cd0748c7a7a}} function corresponding to the JLA data, whose standard form is also recalled below, as
{{cite:82e16282390546bdc3596e484cf060833e4d0113}}, {{cite:59abfccb62424f691c55c671e11e0e43e14f37a0}}
{{formula:3fa9b26d-0d8b-4400-a952-3df1eadf0c33}}
| m | b3d1154ff34dc8ce5093bb3566dfd410 |
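For reference, the chi-square in question takes the standard quadratic form (our transcription of the usual convention; the hatted quantity is the observed distance-modulus vector and C the sample covariance matrix):

```latex
% Standard chi-square for a supernova compilation such as JLA.
\chi^2(w) = \left[\hat{\mu} - \mu_{\rm th}(z; w)\right]^{T}
            C^{-1}\left[\hat{\mu} - \mu_{\rm th}(z; w)\right].
```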
In the swampland program (see {{cite:623fee0919ac2d42209f48d7041041b132fa1888}}, {{cite:575ba2700fbb584f736a9a0d8f54ad7f7ddaa6aa}} for reviews), the absence of
global symmetries in quantum gravity (QG) plays a central
role. A recent incarnation of this is the so-called cobordism
conjecture {{cite:1e557b9d129142269dcba7eb1ae4f4deba0a0c7b}}, based on the observation
that a non-vanishing cobordism group would lead to
a global symmetry. Therefore, the conjecture says
that any physically consistent configuration in quantum gravity
must not carry any cobordism charge. (Note that cobordism groups also
play a prominent role in the computation of
Dai-Freed anomalies; see e.g. {{cite:45c2f110fef70c2d80c26fe8c811c446d2172f17}}, {{cite:218898d310c08d7e001e5e1acb6f35efaeda5f5a}}.)
| i | 7acfd82263af1f9f6bb4773ca0d854bd |
We focus on the case where {{formula:5e6a3ee6-15fc-4243-b198-67cca0de8e72}} . While the entrywise eigenvector analysis method of {{cite:a72ce7cd3be9cd275267e53ece351283a72ff74d}} allows us to handle slightly sublinear {{formula:1e953d2c-83ee-4b61-a12e-5e5772db1851}} , it does not allow us to match existing results for SDPs. When {{formula:dc8f142e-8e8a-464f-bd76-fdbddb0bed82}} , the SDP threshold matches the information-theoretic threshold with sharp constants {{cite:282d026cd389bf4890dd3e65b05f128e7a355820}}. On the other hand, when {{formula:b2190478-31e3-4a7a-8209-6d56c27fd771}} , then the SDP is order-wise suboptimal {{cite:282d026cd389bf4890dd3e65b05f128e7a355820}}. Finally, if {{formula:58718c6a-a73a-4a47-a5a0-5e5d2267e30a}} , then the SDP is suboptimal by a constant factor, though order-wise optimal {{cite:282d026cd389bf4890dd3e65b05f128e7a355820}}. It was conjectured that a spectral algorithm would require a stronger signal than the SDP algorithm. Comparing the performance of SDPs and spectral algorithms for sublinear {{formula:b69e114a-e2cc-4052-bc21-e161a0c87b6b}} is an interesting avenue for future work.
Can the optimality of spectral algorithms be generalized to related problems, such as submatrix localization? In the CPDS model, can we consider a more general censoring distribution that can model the case where the edge statuses are not missing at random?
Since the degree distribution of nodes inside {{formula:3e0c0081-7b0a-42aa-a8b6-8738c4b39d95}} and outside {{formula:9a696046-526b-42dc-a733-23068dd16985}} are different, it is natural to ask
how Belief-Propagation or approximate message passing perform. This is of special interest when {{formula:1af77335-14e8-40bb-81be-2c546af1512e}} . Some related work for the hidden-clique problem and for the block model includes {{cite:a32323bfeac034602a872273f079eaca4d48a7a0}}, {{cite:1049b42d5d7ad0737247085729e3fcc5f7f89ff5}}.
| d | ffae2d20525cab6b978d8c6612b0bf4c |
At first glance, two algorithmic ideas seem relevant for tackling the challenge of computing approximate pure Nash equilibria. The first one is to follow a sequence of deviations by players who improve their utility by a factor of more than {{formula:0b9ad585-c62b-490a-bd86-dcb9a8888829}} . The existence of a potential function guarantees that this process will eventually converge to a {{formula:a279e802-2c01-4d2d-a01d-d1774c4cf735}} -approximate equilibrium. Unfortunately, such sequence can be exponentially long, as Bhalgat et al. {{cite:983a7100003c1b0b480f24e2aaea16eaedc791f1}} have shown specifically for cut games (and for small approximation factors). The second one is to exploit an approximation algorithm for the problem of maximizing the potential function. For cut games, this could involve the celebrated algorithm of Goemans and Williamson {{cite:cd34213a812560433b352bdc768ec2349f6ae0d5}} for MAX-CUT or, more generally, excellent approximations of local maxima using the techniques of Orlin et al. {{cite:51de5ca3ca3bcda52c152853208a66ab87958645}}. Unfortunately, approximations of the potential function and approximate equilibria are unrelated notions. So, the algorithm in {{cite:983a7100003c1b0b480f24e2aaea16eaedc791f1}} exploits the structure of cut games to define restricted subgames in which sequences of player deviations are applied separately. This approach leads to 3-approximate equilibria in cut games and is also applicable to the broader class of constraint satisfaction games.
| i | 78070a36bfee5477c0cc510d25dddd42 |
A powerful tool to analyze the strong coupling behavior of {{formula:7fbb7588-21f3-4472-a6c4-015cc6dde731}} {{formula:f312f3ac-7447-4766-8f7b-5749e35ea83f}} gauge theories is given by Hanany-Witten branes setups {{cite:a67c5b63408a1b55853cc8ccf68a02f0bd77fe3c}}, which in this case involve webs of 5-branes, a.k.a. pq-webs {{cite:2314c5587a43c6acb600bb6bcc4c7470fe599260}}, {{cite:1842e2a6d636488537334f12aabd84f4feb4c205}}, {{cite:5d12db24b073a624ca44882a5348cdfbe46527b9}}. Pq-webs were used to study {{formula:ff5ea3b3-e220-4d1b-adfd-12dc28e7c4f8}} dualities in {{cite:cc1b6552359a4b4ddc1823e487ad4cca833e0e98}}, {{cite:fe5dd101c189ba00f00ec80776b5db44a3166a70}}, {{cite:297b5e0efb245938ce8002b0e981a9cd3fd486a0}}, {{cite:f5c6ddd4de63d352e179e15b25c15e77ed138316}}. Later, the pq-web technology to deal with KK theories was developed: {{cite:e5e31cd10cf08cdf3b3083e09ad3c1b34c12e153}}, {{cite:e24cc672c2ffdd79c538fc5b7cb5ea4d32558aa4}}, {{cite:17906bba0c77851d089abe9e0d6981f0bdd2af5b}}, {{cite:b2082c47fcfbb711602a221cffbfa71e1e3f6b10}}, {{cite:528946c719746524aa5b381cbb8a5a4094b87bd0}}, {{cite:d443a8a1b5022644d55fe32f8aef1ab5b25d9f0f}} discuss many examples of different {{formula:e9f10a29-100c-4f25-9c8e-6645de1e421b}} {{formula:85cbce1b-377c-4d58-bc50-1303213fd011}} quiver gauge theories with the same {{formula:8447e904-3b72-4fb8-9c24-f789bb863f6e}} SCFT in the infinite coupling limit, described by Type IIA brane systems {{cite:868397392896ddf111dacfeb10227ba1e42adf99}}, {{cite:12a5f24773dde3507bc64fca5083d085ab9beba1}}, {{cite:e6a69752281122831c99a18629bd95bbaae0326f}}.
| i | d9944748900c7ba25624ea1eea5db54a |
{{cite:b197b1912f55644e8ee54a8e84fcc50433d32f17}} have used red clump giants drawn from a wide range of Galactic longitudes sampling the outer disc to find a broadly flat longitude-averaged rotation law within R{{formula:9e8d8d32-9fb2-469f-b7ea-0c6d4347fe37}} kpc, with typical circular speed V{{formula:f43c97da-2b39-4f6f-8ce8-40996807fe5e}} km s{{formula:73be779f-ad47-49ca-a42b-05dc32b6333e}} . But over our sampled region, between R{{formula:d24b871b-3176-472b-82f8-6a6429399b02}} kpc, their inferred rotation law is quite sharply rising out of a dip at R{{formula:111fbd9d-f377-4a9c-90dd-a9dd6851369d}} kpc. Figure REF compares the {{cite:b197b1912f55644e8ee54a8e84fcc50433d32f17}} results (red line) with the rotation curve derived from the mean RV trend we obtain at {{formula:e0520b6f-7f7d-40be-891a-1589fe59264f}} adopting solar metallicity (green line). In constructing this figure, we choose the same LSR parameters as favoured by {{cite:b197b1912f55644e8ee54a8e84fcc50433d32f17}}, R{{formula:20a397ec-25a3-43ab-b2e7-3d4732de9b13}} kpc and V{{formula:904d2b99-b09c-4dfe-8e41-aa61f8e5cc22}} km s{{formula:792c6674-77db-41ac-b7d5-9e21a12319c7}} . The agreement is very good.
| r | 0e7e253958f38b7753d7a909339754dc |
To be more specific, given a class of objective functions {{formula:1c562f45-388b-4461-b9cc-2c3113fcf875}} , the aim is to determine the argument {{formula:11b70cc3-46c3-4c90-8f91-8e1b04e3a270}} that leads to the smallest objective value even for the worst-case function parametrized by {{formula:ed241b6d-87ac-4c99-bfee-8c8c883d969a}} .
Such problems were originally formulated in two-player zero-sum game theory {{cite:5b779176df678f20401dfee8255497261c08dbbb}} but now arise in many areas of mathematics, biology, the social sciences, and especially economics {{cite:8fb2e47166d88475ec6734320a9f82a094834269}}. Diverse applications may be found in engineering, operational research, biology, ecology, finance, economics, the energy industry, environmental sciences, and so on. In the last few years, minimax optimization has also attracted substantial attention from the signal processing community, due to its connection to distributed processing {{cite:efa192234451ac310568df533c06fb373c8b013b}}, robust transceiver design {{cite:e6058e05d0f910a1260debcba3a4433752adbe28}}, and communication in the presence of jammers {{cite:ded28eae54baed9e9bc4eec67013ae608606b75f}}.
Moreover, in modern machine learning, several problems are formulated as minimax optimization, such as the training of generative adversarial networks (GANs) {{cite:cf1033e646dde9867673f4710f6790097d806102}}, multi-agent reinforcement learning {{cite:17d84c3c07e9f4c968289b1dea0e33fd0498a053}}, fair machine learning {{cite:ce054f1c2fd9b89449f2f01fac91071957c9c4e4}}, and adversarial training {{cite:f18680500539012525e6263da8df6f7ddbaa431c}}.
For example, when training GANs, {{formula:39ed8b8f-e14d-444d-acbf-e41c09cf6930}} models the parameters of a generator, usually a neural network, whose aim is to generate synthetic data with the same statistics as those of a given training set, while {{formula:e24a4f43-ab92-4d88-a7b9-f1ee7db6fd03}} represents the parameters of a competing discriminator, which has to distinguish data produced by the generator from samples of the true distribution.
Relatedly, in adversarial machine learning, one aims at learning the parameters {{formula:5544b28d-6dc4-4059-bac4-c3a636b65448}} of a model in a robust manner by exposing it during training to possible adversarial attacks modeled by {{formula:9d5ae056-32d8-4dda-919d-45974ed66ced}} .
Both examples can be interpreted as a game between two neural networks trained in an adversarial manner until some kind of equilibrium is reached (the generic problem and the GAN objective are recalled below).
| i | 9c266c63841efdbd6d04976a320c407a |
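For concreteness, the generic problem and its GAN instance read (standard formulations):

```latex
% Generic minimax problem ...
\min_{x \in X}\; \max_{y \in Y}\; f(x, y),
% ... and the original GAN objective as one instance:
\min_{G}\; \max_{D}\;
  \mathbb{E}_{x \sim p_{\rm data}}\!\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_{z}}\!\left[\log\!\left(1 - D(G(z))\right)\right].
```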
In this section, we apply the proposed composite EMPC to the stand-alone IES and compare its performance with a hierarchical real-time optimization approach to demonstrate the effectiveness of the proposed method. The optimization problem is solved in Python with CasADi using the BONMIN and IPOPT solvers {{cite:7b5a50f00d5de26ef66ebf5871ee268c17e9bded}} on a machine with 16 GB RAM and a 2.60 GHz Intel Core i7-10750H (a minimal sketch of this workflow is given below).
| r | cd2ca107adce172a6b262497e2be1fc7 |
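A minimal sketch of this CasADi workflow (a toy NLP for illustration; the actual IES model, horizon, and constraints are not reproduced here):

```python
import casadi as ca

opti = ca.Opti()
u = opti.variable(10)                        # decision variables (e.g., inputs)
opti.minimize(ca.sumsqr(u - 0.3))            # placeholder economic objective
opti.subject_to(opti.bounded(-0.5, u, 0.5))  # input bounds
opti.subject_to(ca.sum1(u) == 2.0)           # a coupling constraint
opti.solver("ipopt")                         # BONMIN is used when integers appear
sol = opti.solve()
print(sol.value(u))
```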
{{formula:b6f0ba8e-338b-4972-9965-347c23e6d35d}} was measured for the first time in 1992 by COBE {{cite:37386bbd7bd4aa3b23ecd9419d4b4238c1ef54ff}}, {{cite:b11e4f1f119536a5662ffb2e21461b1498f51eed}} from the 1-year maps, and in 1996 from the 4-year maps {{cite:b9fda1a668ce34aaf314c9425341f128d25faf24}}.
The COBE data revealed small correlations in the large angular range {{formula:371bb0fd-0c50-47ce-8620-72e2eeb2fb81}} delimited by {{formula:6f9c2497-e7cc-47ad-8871-1be88e733759}} {{formula:4a70df9d-78f0-4f82-bfd2-b9a478d5c765}}, which were later confirmed with high precision by
WMAP {{cite:6ba473479687708c00ec15f72287908709c8a64c}}, {{cite:ca28cb0db6774a2ac54559cf22b649b57c69b82a}}, {{cite:9da446d77aaa76b097aa5b71ffa37d62dd084f4c}}, {{cite:9e0e3f16e9a5ef2e7300bd57238afcaecf5657bf}} and Planck {{cite:c5a22d373a2672343ba0e97f2352960012ced032}}, {{cite:cc1ec11a08a523d1f5213f2dc77411465f369a03}}, {{cite:65ea3b61bd6d971918401e3123aefcee9ddb4e45}}, {{cite:71a0e738465086b801edf030c7250b2bb9fc79f0}}, {{cite:f04b35d362551cb6b985294f0a28359af26383b4}}, {{cite:04e0b56a33a1462d64dd3406f07ff6fc21124ddf}} (the standard multipole expansion of the correlation function is recalled below).
COBE compared the observed correlation functions with a large variety of theoretical predictions within the class of FLRW (Friedmann-Lemaître-Robertson-Walker) cosmologies,
including flat and non-zero constant-curvature models with radiation, massive and massless neutrinos, baryonic matter, cold dark matter (CDM),
and a cosmological constant {{formula:6947b80b-b026-401b-be64-0096c0ec5328}} , using both adiabatic and isocurvature initial conditions, see e.g. {{cite:382ad37a25a338384d5a5108d14dbbc076fa7b88}}, {{cite:90fa723ff73f5c6a79b9e0a52d7cf1a2c77d6083}}.
From COBE observations it was concluded {{cite:37386bbd7bd4aa3b23ecd9419d4b4238c1ef54ff}} that the two-point correlations, including the observed small values of {{formula:6598fd27-0f6c-4b1d-8e97-40c06d778b27}} in the range {{formula:9d032d01-67d1-4b13-93ec-5fc7c05bab25}} ,
are in accord with scale-invariant primordial fluctuations (Harrison-Zel'dovich spectrum with spectral index {{formula:2eb31498-d76c-410d-a569-a4cdc51ea3c9}} ) and a Gaussian distribution as predicted by models of inflationary cosmology.
Thus, there was no indication that the small correlations measured in the angular range {{formula:ce9a8ee8-7ea8-4cd8-89cb-0e1cb7a58dfa}} could hint to a serious problem, or even to new physics.
The situation changed drastically with the release of the first-year WMAP observations that will be discussed below.
| i | 3b5b0bd8dac7821f589ad579db170958 |
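For reference, the two-point angular correlation function analyzed in these studies has the standard multipole expansion:

```latex
% Standard expansion of the CMB two-point angular correlation function in
% terms of the angular power spectrum C_ell and Legendre polynomials.
C(\vartheta) = \frac{1}{4\pi} \sum_{\ell} (2\ell + 1)\, C_{\ell}\,
               P_{\ell}(\cos\vartheta).
```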
In recent decades the performance of automatic speech recognition (ASR) systems has been significantly improved with the successful application of deep learning techniques. In the conventional deep neural network-hidden Markov model (DNN-HMM) based hybrid ASR systems {{cite:d617fea47b2b152cdf9a1193b2adfea2ecaf2bd6}}, {{cite:819fb985f0bc692ad8a506d2c4bd2a033a032f8c}}, artificial neural networks are used to estimate the conditional probabilities of HMM states given acoustic features. In these systems, word sequences are transformed into phoneme sequences using a pronunciation lexicon. The resulting phoneme sequences are modeled by HMMs representing phonetic units. In order to sufficiently model the long-range temporal contexts and spectral characteristics of continuous speech, advanced forms of DNNs including convolutional neural networks (CNNs) {{cite:3d66cd90bfeebdd945be24398d79a9d5f6782fc3}}, {{cite:51effc2028784931576cfae281826e71c4356a12}}, time-delay neural networks (TDNNs) {{cite:5e0faabfeb825d6d3a8e5821fa0625f21cd8e094}}, recurrent neural networks (RNNs) {{cite:90c58bfc556e95369945310cd8f144fe102e0b6f}} and their long short-term memory variants {{cite:2dd741d24150646de0304ff3bd1f1b6eb47e31f3}}, as well as self-attention {{cite:a784b862aa8007cd00989f22b4e4e94abb5801ae}}, are widely used in current hybrid DNN-HMM systems. In order to reduce the mismatch between the conventional frame-level cross-entropy (CE) loss function and the recognition error rate, normally measured at the word level over a sentence, sequence-level training criteria based on maximum mutual information (MMI) {{cite:2740453be1c8b24fb530982f69fa65478d456773}}, minimum phone error (MPE) {{cite:456f78339e9a7694c09accb3f78231f8b014406e}}, segmental minimum Bayes-risk (sMBR) {{cite:c8f2379f30e350ca1b90cdccd3aaedb6b0e7015c}}, or lattice-free MMI (LF-MMI) {{cite:a983dd579396984649c11a6ebf233310fa27df84}} can be used to further improve the performance of hybrid ASR systems.
| i | e171d227ea253f05a52d2b61cd57374a |
In this paper, we propose a novel holistic model, SuperGF, which unifies global and local features for visual localization. It works directly on the local features generated by the image-matching model and aggregates them into a global image feature, similar to BoW {{cite:6682ef65a58182a7c3cd4054277d92075b4700ab}}, {{cite:ff6fd958bcf1be9673f680cfa31f7d8b680b10e2}} or the Fisher Vector {{cite:d6781bc300ed8c75eda4c7ffc4bbabb000ef78da}}. A transformer is adopted to perform the feature aggregation, which is more accurate and resource-friendly. It integrates global contextual information and establishes global correlations between feature tokens via the self-attention mechanism; thus, semantic cues can be learned automatically from the local features of the image-matching model. As a result, the transformer module can bridge the feature-level gap between the two tasks and yield robust global features for image retrieval in an efficient manner. We experimentally evaluate the global features yielded by SuperGF on several benchmarks. We then also assess the performance of visual localization using the holistic model combined with different kinds of local features. The results show the advantage of our model compared to existing methods.
| i | a3b40701f9aea2275def006cbebee2c7 |
EF1+PO. EF is actually a demanding fairness notion, in the sense that any approximation of EF is not compatible with PO.
Instead, initiated by {{cite:a9e13479d9479e24a03806b11ad635a0fa95a7a7}}, most research is focused on its relaxation, envy-freeness up to one item (EF1), which means the envy between two agents may exist but will disappear if some item is removed.
Unfortunately, EF1 and PO are still not compatible even if all jobs are rigid and agents have unary valuations.
The good news, however, is that if all jobs have unit processing time, an EF1 and PO schedule is guaranteed to exist and can be found in polynomial time. This result continues to hold when agent valuations are weighted but identical, i.e., the jobs have different values.
It is shown in {{cite:c403bb59b349ffe0d7006e4af2111a92d6ee1e38}} that under a laminar matroid constraint an EF1 and PO allocation exists when agents have identical utilities, but no efficient algorithm is given.
We improve this result in two respects.
First, our feasibility constraints, even for unit jobs, do not necessarily correspond to a laminar matroid.
Second, our algorithm runs in polynomial time.
| r | 999446cbfc6bdec18dcfabbbbca9242f |
With the ROM basis at hand, the classical Galerkin ROM for a given dimension {{formula:f4efeb23-2615-4eeb-aedc-72b58bfa249d}} can be readily constructed and is given by (REF ) in Algorithm REF . Thanks to the discretely divergence-free condition assumed for the FEM, the pressure term {{formula:2fffffdc-dff3-4538-bfaf-e7a28fc859d3}} in (REF ) vanishes in the Galerkin ROM since {{formula:88e7d046-66fa-4929-a62b-975f64d05989}} for all {{formula:d2958f5a-65d1-430f-ba37-9d0996519bca}} in {{formula:36339073-e78d-462b-81a6-1c02673bfa55}} (see, e.g., {{cite:18f0e0fc2ea37dd782f93dfb7bac5ca69c810fa0}}, {{cite:2da65f3165d9f3af7d5d86ff5cfe3fd9a92453cd}} for alternative approaches). To distinguish it from the two-level ROMs proposed in Section , we will call the classical Galerkin ROM given by (REF ) the {{formula:1d695765-63a8-47a2-840d-4f4de01c9a05}} -dimensional one-level ROM (1L-ROM).
To distinguish it from the two-level ROM solution, we denote the one-level ROM solution by {{formula:d61e816f-dc4e-4ac7-95ff-341737fc612e}} (a generic POD-Galerkin sketch is given below).
| m | fcbf6388e5e19bc64c6e21182fc24c36 |
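A generic POD-Galerkin sketch (ours; a linear toy operator stands in for the Navier-Stokes terms, and function and variable names are illustrative):

```python
import numpy as np

def galerkin_rom(snapshots: np.ndarray, A: np.ndarray, r: int):
    """Build an r-dimensional Galerkin ROM for a linear system dx/dt = A x
    from snapshot data via POD."""
    # POD basis: leading left singular vectors of the snapshot matrix.
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    Phi = U[:, :r]                       # (n_dof, r) reduced basis
    A_r = Phi.T @ A @ Phi                # Galerkin projection of the operator
    return Phi, A_r                      # reduced dynamics: da/dt = A_r a

# Usage: x(t) ~ Phi @ a(t), with the projected initial condition
# a(0) = Phi.T @ x0 evolved in the r-dimensional space.
rng = np.random.default_rng(0)
snapshots, A = rng.normal(size=(100, 30)), rng.normal(size=(100, 100))
Phi, A_r = galerkin_rom(snapshots, A, r=5)
print(Phi.shape, A_r.shape)  # (100, 5) (5, 5)
```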
The success of ReLU-based neural networks in recent years originates from their learnability and expressiveness.
Piecewise linearity helps them avoid the exploding- and vanishing-gradient problems of early architectures and, in terms of expressive power, they have been shown to capture an exponential number of variations {{cite:7f0f96e97ccbc931843fc0538099eaa952d81418}}. However, little can be guaranteed about the validity of the learned representations.
Recent work has pointed out two troubling phenomena: adversarial examples {{cite:f9dffe57e6fa5fda5d4c3b1ba3810f1210dfb117}} and fooling examples {{cite:79bc2b932cecb250d590c9f1fd15dd34cf132688}}.
In the context of image recognition, the former are images slightly perturbed to fool a model while being semantically the same to human eyes. The latter, fooling examples, are images that do not belong to any class and yet are classified to one with high confidence.
Both the results of Szegedy et al. {{cite:f9dffe57e6fa5fda5d4c3b1ba3810f1210dfb117}} and Nguyen et al. {{cite:79bc2b932cecb250d590c9f1fd15dd34cf132688}} demonstrate that a model that is easy to train might also easily make invalid generalizations.
| i | 68773382dfca27492266f53e2f432d51 |
More recently, with the rise of deep learning, a number of methods relying on deep neural networks have been proposed to address the mass-mapping problem.
The strength of these approaches is that they provide a practical way to leverage simulations as a prior to solve the mass-mapping problem. In particular, the DeepMass method {{cite:c0a59da3a36717236ab6f38ba3f2b00e380712f8}} uses a U-net {{cite:4cac38f3e0222618bd113f149cc6dcfd748e8c4f}} to recover an estimate of the mean posterior convergence map, with a prior defined by a set of simulations. Besides, {{cite:548006f3cc3db92942fc23e450066c2200df1412}} have proposed a model based on a Generative Adversarial Network (GAN) {{cite:b3180ac7796c6a3f50fbca2269fa21e686f1bdf6}} which is able to denoise weak lensing mass maps.
| i | 2af00195edc941fe56b7c2f870ce6157 |
Analysis and control of agent-based network systems have been an active
research topic in the past decades, and the consensus problems have been
extensively studied in the literature (see,
{{cite:a6f816501edca873f5e05122b3ac0de00e4d4c12}}, {{cite:93d342a8fd1fde5c6cf48488c8441995b1206441}}, {{cite:4ef3be0fd7b081bfd523b37682ffa74d08ca40e1}}, {{cite:38cc6181b93f4666e43b6c46d62bc033324cdf6b}} and the references therein).
The problem of multi-agent consensus from a graph signal processing
perspective was considered in {{cite:9f636bb5c43ea1b559433efe813478ae1c0415c1}}, where analytic solutions were provided for the optimal convergence rate as well as the corresponding control gains. In the theory of agent-based
network systems, social networks are an important and interesting
case study, since the opinions of individuals in social networks usually
remain in disagreement {{cite:dde8440051f9cbca0f2ba5e4e6d8eb46c36b4054}}.
For example, in
cooperative social networks, the disagreement of heterogeneous belief
systems was investigated in {{cite:62fb11fdb26405165e0fe34a0a5a7586c044fa19}}, where it was revealed that the
disagreement behavior of opinion dynamics is affected by the logical
interdependence structure {{cite:62fb11fdb26405165e0fe34a0a5a7586c044fa19}}; for opinion dynamics in
antagonistic social networks, the disagreement problem under a
leader-follower hierarchical framework was studied in {{cite:70203aa3037cdcea0caac29b713749cb3e65e2cc}}.
In the opinion dynamics model with biased assimilation,
how individual biases influence social equilibria was reported in {{cite:56f2d7e1ec26f608ccf717f4332d9ccac78244e3}}.
In the nonlinear opinion dynamics model, sufficient conditions guaranteeing asymptotic convergence of opinions were provided in {{cite:73ee0dcec51b8ebd35680470e01b5be8a322d1d3}}.
| i | cddd579d442e73d064918c150efbbc82 |
However, the recall of the original GAN, WSMOTE, and EMICIL is much better than that of SMOTE, while their precision and FAM scores are worse. It is worth mentioning that WSMOTE is better than EMICIL and EWSMOTE on precision, which means its decision boundary leans toward the majority-class region when generating the new samples. As can be observed, the proposed model performs well on imbalanced multiclass learning under all the assessment measures. We also plot the ROC graph in Figure REF. We used Case 1 for binary classification and Case 2 for multiclass classification. We used SMOTE {{cite:a2d564356e76b2b6c160d5b59ef0df4f43928a1b}}, GAN {{cite:0e51d75d876d6985299fea8d1f7a078f17529552}}, EMICIL {{cite:ac7822a6c1e72f3fff52c3c7b9cd8f65c3d490e6}}, CaAE {{cite:8941b064b10cac870cf7bc6536dbe463a2154f89}}, and RJAAN {{cite:bce6d188688843a84531b8e31a3d72b296fc279a}} as baselines. The results show that our proposed model performs better than the other sampling and non-sampling techniques for both the binary and multiclass examples. In the binary classification, the second-best results are for the original GAN and SMOTE, while on the multiclass dataset the second-best result goes to RJAAN, whose performance is comparable with MoGAN.
| r | 817542731d0bb094ba31224e3920b8e3 |
Monte Carlo approximations {{cite:b86a95b50f1f582c37ce0beb99cdc792194b585e}} are based on the Law of
Large Numbers (LLN) in the sense that an integral like
{{formula:05af3213-3af8-4d6f-bd03-16fd1fcfb524}}
| m | 438eae59eeb1ec9763042f90e8625bf4 |
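In standard form, the Monte Carlo estimator and its LLN limit read:

```latex
% Monte Carlo estimator of an integral/expectation: by the LLN, the empirical
% average of i.i.d. draws X_i ~ p converges to the integral as N grows.
\int f(x)\, p(x)\, \mathrm{d}x \;=\; \mathbb{E}_p[f(X)]
  \;\approx\; \frac{1}{N} \sum_{i=1}^{N} f(X_i).
```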
{{formula:ebcb8ca1-f32b-426a-897a-bf6931e56711}} designates the distance to the border of the nearest edges and {{formula:a529ccdb-7c5e-4be6-b2ce-548e5b6c22d1}} designates the distance to the border of the second-nearest edges. The LB score is defined as in {{cite:cfd1a00d7c445c6611edeb28eb79dae0eaa88f7b}}. We use a deep convolutional neural network (CNN) with two {{formula:81cf7f0a-2773-4022-b133-2e644308e6f7}} convolutions, each followed by a rectified linear unit (ReLU) and a {{formula:2f0e9fb5-352d-4579-9406-d3cb9abebbaa}} max-pooling operation with stride 2 for downsampling; a layer with an even x- and y-size is selected for each operation. For the U-Net model, we use the existing DRIVE {{cite:fd45986a26be2d7de0e1698e1273da78dcb5b451}} dataset for the training segmentation masks. Our proposed model converges at the 44th epoch, when its error rate drops below {{formula:e0fd8a85-a268-49e8-b309-ae3a3af8b411}}. The Jaccard similarity of our U-Net model is 95.59%, validated on a 20% test split of EyeNet, as shown in Figure REF. This model is robust and feasible for different retinal symptoms, as illustrated in Figure REF. The area under the ROC curve is {{formula:cb142315-daff-4883-9158-fd1159fba062}} and the area under the Precision-Recall curve is {{formula:44be5556-bf36-4cb7-acec-649704f890c7}}.
{{figure:0a2c1ae5-4502-42f7-9ce8-ede58ecabdca}} | m | 9519c471845444a83f7cea42abf971fb |
However, this choice of tangent space is never fully justified in terms of the model under consideration. In {{cite:82ddb366eb366d55608fea98c297d3df1b41ed2c}}, the authors do justify the necessity of these restrictions by noting that if {{formula:94b74c98-7d30-43ef-9f39-90a76fcb0ef0}} and {{formula:b1d8e4a4-8266-4edf-a4ae-a3989770ea33}} are differentiable with respect to {{formula:8f7805ee-a88a-4f49-be18-ce7b55ec3bc0}} within a given submodel, then we must have
{{formula:c486d1e9-6505-46e9-893b-21d044472960}}
| d | 932b0ec61878ca0925c210cf0d8ed3aa |
Calibrated equalized odds postprocessing is another method that changes output labels after classification to preserve fairness. Introduced by {{cite:a0ac4cbe2d4097ba532e73619dfe993d9e624b53}}, the method builds upon the work of {{cite:0fefc6335ba40b7b259d7cc78bb3d9e10032784c}} which introduced the equalized odds fairness measure. The calibration is added to the method of mitigating bias, by giving the possibility to ensure fairness for both protected and unprotected groups without leaving the option of incentivizing the algorithm when taking into account the sensitive feature. The method gives the practitioner the freedom of choosing the level of fairness constraint, an adjustment needed when classification accuracy suffers after the calibration.
| m | 7ba29fa4465f8660416511c06c8d3c70 |
It is known that the HMC algorithm with exactly one leapfrog step
boils down to the MALA {{cite:bc25bbe553f6c113c53ce9882dcc9edf0217b25b}} (a leapfrog sketch is given below). As mentioned
before, {{cite:673d2459baf287b8662c94a424af517fef3bba60}} establish geometric ergodicity of
HMC when the `mass matrix' in the `kinetic energy' is the identity
matrix {{cite:1dc4788a7ce16056ed13fd860a8ca30b56b55819}}. On the other hand, {{cite:703c7685926d588594111e154eb190121423ed34}} argue that a
position-dependent mass matrix in HMC may be preferred and they
develop the Riemann manifold HMC (RMHMC). As a future study, we plan
to extend the convergence results in this paper to study RMHMC
algorithms. Finally, Langevin methods have been applied to several
Bayesian models {{cite:6b088ebf966c72534623eadb966cbafbd2933624}}, {{cite:d810b5c42fa35b943b9df24c0205535424a52434}}. It would be interesting to compare the performance of
PMALA and PCMALA in the context of these examples.
| d | 91669ba320a80a864354e8c160f2a98a |
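A minimal sketch of the leapfrog integrator (identity mass matrix), which with a single step yields exactly the MALA proposal mentioned above:

```python
import numpy as np

def leapfrog(grad_log_pi, q, p, eps, n_steps):
    """Standard leapfrog integrator used in HMC. With n_steps=1, the move
    q -> q + eps*p + (eps**2/2)*grad_log_pi(q) is the MALA proposal."""
    p = p + 0.5 * eps * grad_log_pi(q)       # half step in momentum
    for _ in range(n_steps - 1):
        q = q + eps * p                      # full step in position
        p = p + eps * grad_log_pi(q)         # full step in momentum
    q = q + eps * p
    p = p + 0.5 * eps * grad_log_pi(q)       # final half step in momentum
    return q, p

# Example: one leapfrog step for a standard normal target, grad log pi = -q.
q, p = np.array([1.0]), np.array([0.5])
print(leapfrog(lambda q: -q, q, p, eps=0.1, n_steps=1))
```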
\left(\frac{1}{\hbar}\, G M \cdots t\right)^{2}
= 6.8\times 10^{-32}\,
\left(\frac{M}{10\,\mathrm{ng}}\right)^{2}
\left(\frac{\;\cdot\;}{267\,\mathrm{nm}}\right)^{-2}
\left(\frac{d}{200\,\mu\mathrm{m}}\right)^{-4}
\left(\frac{\;\cdot\;}{10\,\mu\mathrm{m}}\right)^{2}
\left(\frac{t}{20\,\mathrm{sec}}\right)^{2}.
After performing a time Fourier transformation on the probability {{formula:0a05fe0f-674a-4f32-a71c-67b7e270b53a}}
obtained from the spectroscopy experiments, the minimum
precision necessary to detect the QG effect in our proposal is approximately
{{formula:7888d299-14c4-4d40-af8e-a562d1b771b6}}, an extremely small value to distinguish with
present clock spectroscopy, whose observational uncertainty is
approximately {{formula:3e408a37-3056-46f3-9c96-d3ef28245de3}} {{cite:500d5bc5b4206e65c01a3066b6e1562ac408b337}}. The relation between our
result expressed in Eq. () and the visibility change
obtained with the setup of Carney et al. {{cite:ec89319672676a98cc2089afd3ee26d83d792bca}}
deserves discussion. Carney et al. investigated entanglement
between a massive oscillator and a source mass particle with a cat
state, and evaluated the interference visibility of the particle
state. Their estimation of the time change of the visibility due to
quantum gravitational interaction is
{{formula:b7b506c8-d383-4cf0-9fa3-c70224745a66}}
| d | dd0b43e34ad8ab701c06367ae2a4540b |
The presence of such a “topology” allows one to define a continuous mapping from a topological space to a topological class (the presence of a pseudometric makes it possible to determine the continuity of a mapping from a topological space in the standard way, using spherical neighborhoods generated by the pseudometric; however, the neighborhoods can be proper subclasses, so it is not possible to build a class from them, and the topological class defined in {{cite:83c77be75a71cbc6e36ed6881860b1bb6456bffe}} allows working in more familiar terms), and with them continuous curves, their lengths, as well as the notions of intrinsic and strictly intrinsic generalized pseudometrics. In {{cite:83c77be75a71cbc6e36ed6881860b1bb6456bffe}} it is proved that the Gromov–Hausdorff distance is intrinsic, that is, for metric spaces at a finite distance it is equal to the infimum of the lengths of the curves connecting these spaces. The question of whether this distance is strictly intrinsic remains open. Recall that in the Gromov–Hausdorff space the distance is a metric (it is always finite and positive definite) and, as shown in {{cite:b2c7cbdd4e3e4a14a4bad13c74d215ddee085bf7}}, this metric is strictly intrinsic. It is also well known {{cite:997a4b979c5d2da3c61ab2cf76e8a8dbfb4c208a}} that the Gromov–Hausdorff space is complete and separable.
| i | 0ce62c9e7bea86d2fbfa7f58735d5333 |
Perhaps the greatest utility of synthetic data exists in problems where real data is not available: problem spaces where data collection is prohibitively challenging, or where objects and events occur rarely and spontaneously in the wild. How might we increase the performance of a model trained on purely synthetic imagery? Recent work in computer vision implies that Generative Adversarial Networks {{cite:fec07a7294a5a48c96ed380fc64f057f1d31486a}} have a great deal to offer as a means of domain adaptation {{cite:4a451b1754edd475720623a20da2238e78d56fbf}}, {{cite:2c1128ecc1ac7cee5aa2193c186084b75c5351ae}}, but in a scenario with extremely scarce data, or a multimodal target distribution, training an adversarial network may present a circular problem. We do not fully understand the disparity between real and synthetic data, and extending work in image quality metrics and statistical imagery analysis may provide critical insights for advancing domain adaptation techniques.
| d | 9700a2e5d7bb21190f870266c11f2244 |
The authors in {{cite:47fe1c3e8ccffad4d664d365755ab909ba9988d8}}, {{cite:7dd0d54039fd0ae8f43a0dab9cd17e43728c48f8}} enabled HE for FL, where the server is honest-but-curious. Before training, the server randomly selects a client as a leader. The leader generates key pairs and shares them with the rest of the clients. The clients compute gradients locally and encrypt them. They then upload the ciphertexts to the server, which aggregates the encrypted gradients. The aggregator sends the encrypted model back to the clients for another FL round. However, CMs alone may not be enough to provide privacy guarantees, since the authors in {{cite:16a29bbd874c2d62b233c9cebd513ee3e0f9733f}} successfully performed inference attacks over encrypted data.
| m | c19649bb16b5d75f3e798423767eb17f |
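As a concrete illustration of the encrypted-aggregation round described above, here is a minimal sketch using the additively homomorphic Paillier scheme via the python-paillier (phe) package. Leader election, key distribution, and network transport are elided, and the variable names and toy gradients are ours.

```python
# Sketch of HE-based federated aggregation with an honest-but-curious server.
# Requires: pip install phe   (python-paillier, additively homomorphic)
from phe import paillier

# The leader client generates the key pair and shares it with the other
# clients; the server never sees the private key, only ciphertexts.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Each client encrypts its locally computed gradient (toy 3-dim example).
client_grads = [[0.1, -0.2, 0.05], [0.3, 0.1, -0.1], [-0.05, 0.0, 0.2]]
uploads = [[public_key.encrypt(g) for g in grad] for grad in client_grads]

# Server: coordinate-wise sum of ciphertexts, without ever decrypting.
encrypted_sum = [sum(coord) for coord in zip(*uploads)]

# Clients (holders of the private key) decrypt and average the aggregate.
n_clients = len(client_grads)
avg_grad = [private_key.decrypt(c) / n_clients for c in encrypted_sum]
print(avg_grad)  # approx [0.1167, -0.0333, 0.05]
```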
For SALICON {{cite:66481d2d7cd8a3ab20113df9657cc9f6b74e83c6}}, the additional results in Figure REF again confirm the benefits of exploiting objects' dissimilarity for saliency. We show results from scenes with multiple objects and from scenes consisting of a single object to demonstrate how dissimilarity affects saliency. Our method outperforms the baseline DeepGaze II {{cite:18591c46ea588236afcdded34894fdd3596ad834}} especially when a single object is present in the scene, because the object's size and appearance dissimilarity masks encode both low-level and high-level cues. Since the size dissimilarity masks of detected objects contain positive values, they help the decoder learn the overall object dissimilarity better, thus improving the saliency. This is also evident from Table 5 in the main paper, where size dissimilarity alone outperforms the baseline DeepGaze II. Furthermore, the appearance dissimilarity masks encode not only high-level but also low-level cues, which helps the decoder learn a better saliency estimate than the baseline DeepGaze II. We see an example of this in Figure REF , last row, where the saliency of the single bird is close to the ground truth, whereas the baseline DeepGaze II {{cite:18591c46ea588236afcdded34894fdd3596ad834}} overestimates the saliency of the bird.
Similarly, for MIT1003 {{cite:9e64a20d40033321dca63b9d1eaa79bb631266be}}, we show results with either multiple or single objects in Figure REF . Lastly, in Figure REF , we show results from 5 different subcategories of the CAT2000 dataset {{cite:6fe46abe88be29a289ca8cfab9ea1ac391dbc83a}}: Fractals, Affective, Cartoon, Low Resolution and Noisy. Note that the CAT2000 dataset is very diverse; learning the most salient information across different images is therefore difficult, because the number of image samples per category is quite small. However, our model learns to predict the saliency for most categories, outperforming the baseline methods.
| r | 3935507d848f5b2aa2dafe68ede7948f |
In Appendix E of Runeson et al. {{cite:e8adbf7901273d078bd052207999fcb73c73cdef}}, there is an
example of a consent information letter. It informs the interviewee
about who the researchers are and how to contact them, and it
also highlights that participation is voluntary and that the interviewee
may refuse to answer questions and withdraw from the study at any time.
They also inform the interviewee that the interview data will be
protected by law (however, this law has since been superseded).
Furthermore, the
authors claim that the interview data will be kept confidential
and only available to the research team,
“or in case external quality assessment takes place, to assessors
under the same confidentiality conditions.”
Researchers should never promise that nobody outside
of a research group will ever get access to collected data.
However, many researchers promise this
out of ignorance or because of a mix-up of important terms
{{cite:94432a7c491c3befb7ad0aa031c3f6be2230ab49}}.
| d | 89728e64ce36b52a64aed717849d5ccc |
Since the operator {{formula:698d0051-ca02-4e99-a84f-7b3beaf3bf35}} is self-adjoint, by using a weak formulation and a suitable variational framework, Servadei and Valdinoci {{cite:97c1b5f69726b92d8c62f343a5a51b86a1e3e6ef}} investigated in detail the discrete spectrum of {{formula:6c367f57-89d7-4602-a659-27bcc098dcf3}} in {{formula:a22d392b-cd09-4e94-8b44-36c5c381e474}} for any {{formula:536428ef-5d28-40cc-8de3-fbf608845c43}} . In particular, they proved that the first eigenvalue {{formula:71bcafd8-d1c9-4950-af6e-36ac06b409e2}} is positive, simple and characterized by
{{formula:4a9567ef-61de-4665-9c54-5f6797574dd3}}
| r | 49181158c4dba182c499a3f829cf5fd1 |
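For the model kernel \(K(x) = |x|^{-(n+2s)}\) (the fractional Laplacian case), the characterization takes the familiar Rayleigh-quotient form over the Gagliardo seminorm; we state it here as a reminder of the general shape, not as the cited paper's exact normalization:
\[
\lambda_1 \;=\; \min_{\substack{u \in X_0(\Omega) \\ u \neq 0}}
\frac{\displaystyle \int_{\mathbb{R}^n \times \mathbb{R}^n} \frac{|u(x)-u(y)|^2}{|x-y|^{n+2s}} \, dx \, dy}
{\displaystyle \int_{\Omega} |u(x)|^2 \, dx},
\]
with the minimum attained by an eigenfunction that is nonnegative and, by simplicity, unique up to a multiplicative constant.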
Note that our requirement on the sample size {{formula:6a8d164d-8bb2-44c6-b153-c3be2f598292}} matches the known regime for exact recovery with nuclear norm minimization {{cite:556a0069360c51dcd71894aaff5418d5b9510363}}, {{cite:480a767cf60965691e50718428ed1ba9570fe145}}.
Since we are interested in the high-dimensional regime, the extra condition {{formula:94f84f60-48cb-4e33-8e3f-421d6daf1d5c}} can be assumed without loss of generality.
| r | ae575d947125ff4e6ffeed2426f27950 |
Much of the world’s information produced daily is stored in structured formats, such as tables in databases and documents, or tables on the web. Question answering over these structured tables has generally been treated as a semantic parsing task, in which a natural language question is translated into a logical form that can be executed to produce the correct answer {{cite:0faf6b16c1d48a28a4be58a1aec6955ca6c83ba3}}, {{cite:0d728dfd922268692db55f7e82fec59655a0470e}}, {{cite:f1d2e5e9a8830bec29a8d56997feffd686600d7f}}, {{cite:7ad7e38d0ba9d3cc7da7eafd5c658002db7499a9}}, {{cite:58deb55bc9c12e635442a4137422c730c1d5c6b4}}. There have been many efforts to build semantic parsers from supervised training datasets, such as WikiSQL {{cite:0d728dfd922268692db55f7e82fec59655a0470e}}, which consists of pairs of questions and structured query language (SQL) queries, and the Spider dataset {{cite:6d6546f91bbbb772ecbfb8be3ae2e6c983123e14}}, which targets the task of converting text to SQL. However, creating such data is expensive, and generating logical forms poses its own challenges. In recent years, a few studies have attempted question answering over tables without generating logical forms {{cite:666309c6816add05adb4d040d5963e8a943ec5f6}}, {{cite:09f609561bcd5db41ec6adfa9ff4585821026c07}}, {{cite:f4236303511c2a51bbe0ce0abe9c3a49b80f958a}}, {{cite:adfa3b6fe851887179fb5aef885a3ed0223b776d}}. They introduce pre-trained language models based on BERT {{cite:f46b4e52e47144626f1298fa6d0b91deace7a6b7}} that jointly learn representations of natural language sentences and structured tables by extending embeddings, and these models achieve strong performance on semantic parsing datasets.
| i | eed1d1d4df48b965b0a7a01be6b0fe6d |
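To illustrate the logical-form-free approach in practice, a BERT-based table QA model can be queried in a few lines. The sketch below uses the TAPAS model from the transformers library; the checkpoint name, toy table, and query are illustrative, and we assume the checkpoint is available on the Hugging Face hub.

```python
# Table QA without logical forms: answers are selected cells, not SQL.
# Requires: pip install transformers torch pandas
import pandas as pd
from transformers import TapasForQuestionAnswering, TapasTokenizer

model_name = "google/tapas-base-finetuned-wtq"  # WikiTableQuestions fine-tune
tokenizer = TapasTokenizer.from_pretrained(model_name)
model = TapasForQuestionAnswering.from_pretrained(model_name)

# The TAPAS tokenizer expects every table cell to be a string.
table = pd.DataFrame(
    {"City": ["Paris", "Berlin", "Madrid"],
     "Population": ["2.1M", "3.6M", "3.3M"]}
)
queries = ["Which city has the largest population?"]

inputs = tokenizer(table=table, queries=queries,
                   padding="max_length", return_tensors="pt")
outputs = model(**inputs)

# Map logits back to table coordinates; no intermediate logical form.
coords, _ = tokenizer.convert_logits_to_predictions(
    inputs, outputs.logits.detach(), outputs.logits_aggregation.detach()
)
print([[table.iat[row, col] for row, col in cells] for cells in coords])
```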
Recall from Lemma 2 in {{cite:831b9fcad75bf233a7456a827f0ef54e51c30291}} that the process {{formula:814a1463-5e62-44dd-98ac-c6646f9236ec}} is exponentially {{formula:fa2960c2-60fd-4be5-abaf-b206b0437ef9}} -mixing, which implies that {{formula:7517c645-ec60-4227-93e5-d8440b0fcd49}} , where {{formula:3d7d968b-b872-4c8e-a542-d13efaa1bc39}} is the {{formula:de035353-f71f-4dae-9a25-e3ca3496e5d3}} -mixing coefficient defined in Section 1.3.2 of {{cite:0411ff2b0d4a2f2d7e8a3d81c64918db74d10cc9}}. It follows that, for any {{formula:9ca0e16a-2d38-4928-a361-8a0cd365d388}} , {{formula:8a6fc0ef-2ca3-43eb-81f6-1e7c7f7c3d50}} . Thus, by Proposition 10 of {{cite:a4f7c91693b422fb1986a4b37cb59354f9cb5fc4}}, inequalities (REF ) and (REF ) and the integrability of the {{formula:fd036103-8bde-47d1-8354-e92a0c90db9d}} -mixing coefficient imply WCL2. Therefore, we are left to show (REF ) and (REF ).
We start by showing (REF ). Integrating by parts and using Lemma REF yields
{{formula:8d36a755-87c0-4679-8214-e4e137d6e130}}
| r | fadabb77c5f32bc71ed570b936936962 |
The extension of a quadtree to {{formula:87dcd279-f152-4d0d-94bc-29c213bbff26}} leads to an exponential dependence on {{formula:7faa3aaa-6697-4285-b462-1d57ad91a39d}} , because the {{formula:9bb63241-c164-4959-9560-aa52a0e87c0b}} -dimensional box is subdivided into {{formula:b9c5109a-af0f-43ff-8dc6-858659235934}} smaller boxes.
The first attempt to overcome this curse of dimensionality was the {{formula:b7f04180-0b35-42df-a326-f584d38c203a}} -tree {{cite:2e4d80c26f39e9561dfd87736d9931b327df3328}}, which subdivides a subset of {{formula:acb66805-57e7-4573-8b91-823c87ebad2c}} at every level into two subsets instead of {{formula:e8c61001-7791-4082-a2ec-964d23e1b21f}} subsets.
Nearest-neighbor search algorithms have positively impacted many related problems: minimum spanning trees {{cite:173e3cd8d3e91bb2212af3dc72935cf2a223530a}}, range search {{cite:e2df15eed244aa9121cc984965eb2b26de1f3823}}, k-means clustering {{cite:e2df15eed244aa9121cc984965eb2b26de1f3823}} and ray tracing {{cite:af93d8e583bd9d1f172c8fc15bd653ea41051a40}}.
The single-tree structures for finding nearest neighbors are, in chronological order, the {{formula:88694d66-5d89-4168-8be9-aae25aae1098}} -means tree {{cite:8751992f15f3b831a0172d50207e211ab5ec0c4c}}, {{formula:2b2160d3-5f4e-4dce-baff-c4f833dab11f}} tree {{cite:8c174d2bf42d10fd1a055dcd3f90897afb06d023}}, ball tree {{cite:75967833320e924535edcf5ee1ce5a758c505277}}, {{formula:78214d0e-b970-4122-8bac-c419f958b066}} tree {{cite:8c174d2bf42d10fd1a055dcd3f90897afb06d023}}, vantage-point tree {{cite:c3b3ca8b504fbdc30eb373875d9f8989dd037ab6}}, TV trees {{cite:c3b08a838ca47c4c4c2016c79022156b4e788bee}}, X trees {{cite:4176bfa82e75f032d412b99fb932add1f2e34b10}}, principal axis tree {{cite:be1b95487fe6c2d94525533af35e77f26afa2255}}, spill tree {{cite:528e0612f60b4508d7098fef27ccf27be92eb03a}}, cover tree {{cite:95f841bf42d674a2ace99c8b2574e198e4e37b27}}, cosine tree {{cite:d2741d41f34a627c45180c98432ff7db622ab4e2}}, max-margin tree {{cite:09c94f5a0ef650123535c26c23e3ee289dabdd84}}, cone tree {{cite:e836e55e8fb26f2e9bc5fffdfae32750b13dd4bf}} and many others.
| r | 221295316ffb218c23fccb921f0a42c7 |
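To ground the discussion, single-tree nearest-neighbor search is available off the shelf; the sketch below uses SciPy's k-d tree, with random data and an arbitrary query point of our choosing.

```python
# Nearest-neighbor and range search with a k-d tree.
# Requires: pip install numpy scipy
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
points = rng.random((10_000, 3))    # 10k points in the unit cube (k = 3)

tree = cKDTree(points)              # binary splits, two children per node
query = np.array([0.5, 0.5, 0.5])

dist, idx = tree.query(query, k=5)  # 5 nearest neighbors
print(idx, dist)

# The same structure supports range search: points within radius 0.05.
print(len(tree.query_ball_point(query, r=0.05)))
```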
In {{cite:b8d635d9fba7037b732bd955db309f3a2160662c}}, the privacy-independent leading term has a dependence on {{formula:92335bae-32c3-42c0-a17e-82cff84b5740}} rather than {{formula:2cef4bb7-0a1c-4b9c-922d-afa6198497e1}} as in our result (i.e., the first term in (REF )). This is because they directly bound the transition term {{formula:1135b799-cdcb-44e9-90b6-aab8976ea2f9}} , which incurs the additional {{formula:ddd1fc44-9ae9-42a9-afa2-31ce50bd5916}} . In particular, after step (a) in (REF ), they directly bound {{formula:5b5ccfff-c9ce-42d0-8ce9-fe8d42041648}} by {{formula:af6792d4-9316-464e-81e8-780bd972fef8}} and then recursively expand the term. Note that {{formula:1a2cf763-29cb-41d4-b4cc-0ca24e455a0c}} has an additional factor {{formula:a3fdea36-3493-4ffb-bf83-784a9fa8b687}} , which directly leads to the dependence {{formula:27a44ce9-7322-42f2-a973-559dfca5303d}} in the final result. In contrast, we handle (a) in (REF ) by following the idea in {{cite:03d3eb7c8669d0758374a88f3980d38636622160}}. That is, we first extract the term {{formula:852abdb6-e2de-477b-84a3-951966cde29f}} , which can be bounded by the standard Hoeffding inequality since {{formula:d022ab37-b08e-427e-b764-1cc78f078334}} is fixed and hence no additional {{formula:59ad654b-3958-406a-826b-d78bef1d5a88}} is introduced. Due to this extraction, we obtain an additional `correction' term, i.e., {{formula:e8f8376b-2091-4dc1-ba24-4a9a759cca8c}} . To bound it, we use a Bernstein-type inequality to bound {{formula:5f668a2c-b2fc-4cca-a0bb-4698cb9504d7}} in (REF ). This allows us to obtain the final recursive formula.
In {{cite:468b5d33cc9b8da89e7b44d1c1f87462fab170c7}}, although the claimed result has the same regret bound as ours, the current analysis has gaps. First, to derive the regret decomposition in Lemma 18 therein, the private estimates were incorrectly used as the true cost and transition functions. This led to a simpler but incorrect regret decomposition, since it omits the `error' term between the private estimates and the true values. Second, even if the omitted `error' term is added back into the regret decomposition, the current analysis cannot achieve the same result as ours, for a reason similar to the one in the previous point. That is, in order to use the current confidence bound {{formula:95379e89-ff41-40f7-bf70-f079daf15d56}} while avoiding the additional {{formula:f4734e14-4130-4f14-ac52-dbb114db6e05}} factor, one needs a Bernstein-type inequality to bound the `correction' term. The authors fail to consider this because their regret decomposition lacks the `error' term, which was incorrectly omitted in Lemma 18.
| d | 15f247d5efddf000e84267631c8e8a65 |
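For concreteness, the Bernstein-type bound invoked for the `correction' term is of the following standard form (stated generically; the constants and the martingale variant used in the cited analyses may differ). For independent random variables \(X_1,\dots,X_n\) with \(|X_i| \le M\),
\[
\mathbb{P}\left( \Big| \sum_{i=1}^{n} \big(X_i - \mathbb{E}X_i\big) \Big| \ge t \right)
\;\le\; 2\exp\!\left( - \frac{t^2/2}{\sum_{i=1}^{n} \operatorname{Var}(X_i) + Mt/3} \right);
\]
its variance-sensitive denominator is what avoids the extra multiplicative factor that a plain Hoeffding bound would introduce.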
{{formula:2d24139e-8f34-4260-96fd-0fec60312d0b}} , {{formula:9cbaead7-4fd8-4d89-a5fa-c3b7dd1323f1}} , and {{formula:ba553acc-42ce-4439-9d48-8e99d8f469c1}} always agree at {{formula:36424238-ab9e-4442-abf1-1473978a8705}} [Figure REF (a)], since our protocol is exact in this case. The OTOC stays nearly constant at {{formula:6930ef2d-cfb5-47b0-b4c3-d8a6a939d6bf}} for some time before the onset of decay. The time for which it remains nearly constant, {{formula:ba6cabcf-7dc1-45bb-8939-bb090c09d62e}} , is set by the distance {{formula:6e63d043-5cf8-4c2d-9f01-650f4cb35ff4}} between {{formula:c0833c2f-0b12-4c61-9d29-362ce1deb77a}} and {{formula:eabbd14a-f98a-4b54-aaac-5836d4acc8f6}} , and the butterfly velocity {{formula:87760095-8088-4745-be8b-96e5a22250e9}} {{cite:159ffb30c6e03edd348644bac1bbcf490b57e88a}}, {{cite:81ac17619144d27ad4eac69d69d2386ef87c7350}}, {{cite:23dbcfd1810a930c0150fc227d1329ea5326dbaf}}. This is because the measurement of {{formula:75a01c53-c6e4-4bb3-b8e6-c0670e31d1b9}} at time {{formula:90467e1b-2065-4584-b39c-6786ef58a116}} is affected only by the neighborhood where the Heisenberg operator {{formula:f3cb7866-7eeb-4d87-adcf-05fdf1827121}} has sufficient support, and this neighborhood grows linearly with time.
| r | c26e6b394c73d37bda5a9fef14c97a45 |
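Schematically, the plateau described here is the usual light-cone statement (written generically, with \(F\) the OTOC and \(\operatorname{dist}(W,V)\) the separation between the two operators):
\[
F(t) \approx F(0) \quad \text{for } t \lesssim t_* \sim \frac{\operatorname{dist}(W,V)}{v_B}.
\]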
Besides those results, an important problem in random matrix theory is the asymptotic behaviour of the least eigenvalue of covariance matrices when the matrices' dimensions are equal. In the case where the entries of the matrices are normally distributed, the limiting distribution was described in Theorem 4.2 of {{cite:6e3e5c640541030b90198f20b7b089b1b8cec782}}, by directly computing the density of the smallest singular value multiplied by {{formula:d3fd587b-902c-4e0a-b1dd-335365e7ef99}} . In the general i.i.d. case, under the assumption of finite moments of sufficiently large order, the least singular value was proven in Theorem 1.3 of {{cite:3fe1e16828d195bfd18bf9d7a247efce3aedd44a}} to converge to the same law as the least singular value of a Gaussian random matrix. This phenomenon, namely that the least singular value of a matrix has the same asymptotic distribution as in the Gaussian case, will be called universality of the least singular value for that matrix. Lastly, in the more recent papers {{cite:af42886c9f9a36e34db918bf01185a3dd328762f}} and {{cite:96abd2af6de881df35605ca2cedefa6cd3de8880}}, the authors proved that universality of the least singular value holds for more general classes of matrices.
| i | 28ee75ebf0188ef29c9b7a3ba59d905f |
Another bottleneck in the previous parallel algorithm of {{cite:84d0614f0f89f24a340c6af66ba3784af5910bb7}} is solving the so-called two-respecting cut problem, for which the randomized algorithm of {{cite:84d0614f0f89f24a340c6af66ba3784af5910bb7}} requires {{formula:aa5d30a8-ac35-4c6c-b7e6-1eea4b18c5bf}} work and {{formula:62e5e712-72d8-4ee1-8313-45375204f1ec}} depth. This work bound does not match the then-best time complexity of {{formula:bf2b318c-b11a-47e9-920d-8ceaecf25072}} in the sequential setting {{cite:2a918e8cbc239200311561b8642949d9278cc38d}}. In this paper, we obtain a work-optimal algorithm for this problem. Our algorithm is deterministic and requires {{formula:610d4254-4002-4012-9926-1448142a9f00}}
work and {{formula:a0190944-d166-4816-a192-a7b5ec80507c}} depth. Its work matches that of the sequential algorithm of {{cite:57c0f12d54d7e1b3d1e1d3ab859cc008d7dcbc53}}, {{cite:a41577dd952b606a65ce37ee4b99365205570b05}}. To achieve this, we parallelize the algorithm of {{cite:57c0f12d54d7e1b3d1e1d3ab859cc008d7dcbc53}} and its simplification in {{cite:a41577dd952b606a65ce37ee4b99365205570b05}}, which exploit a connection between the 2-respecting min-cut problem and 2-dimensional orthogonal range searching.
| r | 01c72b5c3b11141fb3d2e4100cfd39ae |
Another noise-assisted variant of MD is ensemble EMD (EEMD), which leverages the dyadic filter bank property by adding Gaussian WN directly to the signal {{cite:d2704a171e06510850333b214da667e18ef45f9f}}. This procedure is performed repeatedly, with a different realization of WN used in each iteration. The output IMFs are obtained by averaging the corresponding IMFs over the whole ensemble. However, determining optimal values for the noise level and the number of ensembles is not trivial.
| i | cab04b767d42e74234a529b05a151da1 |
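The ensemble loop just described is short to write down. The sketch below assumes an inner EMD routine from the PyEMD package (pip name EMD-signal); the noise level and ensemble size, which the text flags as non-trivial to choose, are exposed as the two free parameters.

```python
# Ensemble EMD (EEMD): decompose signal + white noise repeatedly, then
# average corresponding IMFs across the ensemble.
# Requires: pip install numpy EMD-signal
import numpy as np
from PyEMD import EMD

def eemd(signal, noise_std=0.2, n_ensembles=100, seed=0):
    rng = np.random.default_rng(seed)
    emd = EMD()
    all_imfs = []
    for _ in range(n_ensembles):
        # A fresh white-noise realization for each ensemble member.
        noisy = signal + noise_std * np.std(signal) * rng.standard_normal(len(signal))
        all_imfs.append(emd(noisy))
    # Realizations may yield different IMF counts; truncate to the minimum.
    n_imfs = min(imfs.shape[0] for imfs in all_imfs)
    return np.mean([imfs[:n_imfs] for imfs in all_imfs], axis=0)

t = np.linspace(0, 1, 1000)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
print(eemd(x, noise_std=0.2, n_ensembles=50).shape)  # (n_imfs, 1000)
```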
UNet, initially proposed in {{cite:64727a450d8021bb6659c0570acea59b488aa87b}}, is the most widely applied CNN architecture for WHS {{cite:76e341acd155e61dc136eb9a2cd1373f8767ddcd}}, {{cite:c186c7b9bfed8a6352e5c6d602aec44b122ff8bb}}, {{cite:5e37d53e4ac4b0b3cb08d808dd53f043131d8140}}, {{cite:b0c4d65f21a24f20e8c4e08783cb0995ee3aed57}}, {{cite:e5f1372a3dd6f71a61d8592d0e6f4ae1274414be}}, {{cite:f1318eade26f58d9ca74ddba280fb9def3948577}}, {{cite:194ef5875279551369b35c5091233163b301a2d9}}, {{cite:718f5a48aa3edf3a87295d6159d4e58486034b65}}, {{cite:b95f6d3636d124619ea4d3dde2fef1a15f56ae73}}, and can be summarized as follows:
| m | 1cc7a9d9c0cc69b094204f21fae954d5 |
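A compact PyTorch sketch of the encoder-decoder-with-skip-connections pattern these WHS variants share; this is a generic 2-D toy version of our own, not any cited architecture, and the channel widths and class count are illustrative.

```python
# Minimal 2-D UNet: contracting path, expanding path, skip connections.
import torch
import torch.nn as nn

def double_conv(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
    )

class UNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=8, base=32):  # e.g. 7 substructures + background
        super().__init__()
        self.enc1 = double_conv(in_ch, base)
        self.enc2 = double_conv(base, base * 2)
        self.bottleneck = double_conv(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = double_conv(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = double_conv(base * 2, base)
        self.head = nn.Conv2d(base, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                                      # kept for skip 1
        e2 = self.enc2(self.pool(e1))                          # kept for skip 2
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))    # skip connection 2
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))   # skip connection 1
        return self.head(d1)

logits = UNet()(torch.randn(1, 1, 128, 128))
print(logits.shape)  # torch.Size([1, 8, 128, 128])
```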
Is the learned compression function suitable for learning a hierarchy?
Does the learned hierarchy transfer across different tasks in the same environment?
How does our HRL algorithm compare against state-of-the-art flat algorithms such as Self Imitation Learning {{cite:c9504938615cbc51403105e0ef57f05eae4a8d74}}?
| r | dc7c84be6620f04a2d8e1becdfac60c3 |