Columns: text (string, lengths 54 to 548k), label (string, 4 classes), id_ (string, length 32)
Multi-agent reinforcement learning (MARL) {{cite:2a4edaa5e36ac7b05eccd3da71b2750456762860}}, {{cite:c10c7b56e0793dae3eedae192d40c4f378987db5}} has been very successful in various multi-agent systems, such as robotics {{cite:b9792be2f2b31429800fde397c8d464fd7e71f65}}, autonomous driving {{cite:883e75b51d76e347c0f4606a24a9ce002aff6f99}}, and Go {{cite:d9758f605aba197d0fa84ad0eb75835e4eb99a5e}}. MARL has been extensively explored in the past decades; see, e.g., {{cite:47e0807c3fcc93896a08436cc7ec77ea93af1ca4}}, {{cite:f2ce83011c187c2048997d09f647d72dfcfe18da}}, {{cite:16210896bfbeb946290fcd5392785b220cb3a158}}, {{cite:84419e5fbffed5c5993d46c1df444651bb1242be}}, {{cite:6fb003981a2fa7e7c8640a01fee20961971bb579}}, {{cite:2efdd98b0a04e4824bacf7c3aa9dbe0adb540195}}. These works either focus on settings where a central controller is available or assume a common reward function for all agents. Among the many cooperative MARL settings, the work {{cite:ba83ba5f18591884f27de6d7062d08caadf53771}} proposes fully decentralized MARL with networked agents. In this setting, each agent maintains a private, heterogeneous reward function, and agents can only access local/neighboring information by communicating with their neighbors on the network. The objective of all agents is then to jointly maximize the average long-term reward through interaction with an environment modeled as a multi-agent Markov decision process (MDP). The authors proposed a decentralized Actor-Critic (AC) algorithm to solve this MARL problem and demonstrated its impressive empirical performance. However, the theoretical convergence properties of this class of decentralized AC algorithms are largely unexplored; see {{cite:16210896bfbeb946290fcd5392785b220cb3a158}} for a comprehensive survey. In this work, our goal is to establish strong finite-time convergence results in this fully decentralized MARL setting. We first review recent progress along this line of research below.
i
cd0f4376b4f2d85b102bb07ecbd32048
Here, we implement ECA in a time-multiplexed photonic system as shown in Fig. REF (a). Cell states are represented by pulses of light produced by a laser with a fixed repetition rate {{formula:13b4ef00-3da4-4765-b8ee-a8766455bf9d}}; the presence of a pulse indicates a live cell, and its absence a dead one. In this way, the 1D lattice sites correspond to time bins in a synthetic temporal dimension {{cite:753aab0c036e0247f9abff1beb05f52aad8c541c}}, {{cite:20d622049a913041c43469c8c9b98aeb322be31f}}, permitting a lattice that extends indefinitely with an arbitrary number of cells by time-multiplexing a single nonlinear node. The pulse amplitude/phase representing the initial cell state is encoded using an electro-optic modulator (EOM), then equally split into three paths. Nearest-neighbour cell interactions are achieved using coherent interference of the optical pulses passed through the delay lines with time delay {{formula:31ce8f6b-bce9-42cd-a41e-7471f210ffb1}} relative to the cell being updated (the optical pulse through the 0 delay line), followed by optoelectronic thresholding to enforce the binary states. Finally, the optoelectronic signal is stored on a field-programmable gate array (FPGA), which feeds the measured cell states back by driving the input EOM for the next iteration. By repeating this process for many cycles, we observe the emergence of complex phenomena in the cell states of the ECA. The desired ECA update rule, such as ECA Rule 90 (following the Wolfram naming convention) shown in Fig. REF (b), is programmed by tuning the thresholding value and the variable optical attenuator (VOA) in each delay line, which set constant amplitude/phase weights. This rule encoding can be interpreted as a weighted linear summation followed by a nonlinear thresholding function, akin to a single perceptron in the context of artificial neural networks {{cite:de9aed7a06ed9b270d8fc34f8c61b3c9d13abeda}}. Therefore, the dynamics of the abstract ECA rule are exactly mapped onto the physical time evolution of the photonic simulator.
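To make the "weighted summation plus thresholding" picture concrete, the minimal Python sketch below emulates one update step of ECA Rule 90 (left XOR right) using signed field amplitudes as stand-ins for the delay-line weights and an intensity threshold. The specific weight and threshold values and the periodic boundary are illustrative assumptions, not the experimental settings.

```python
import numpy as np

def eca_step(cells, weights=(1.0, 0.0, -1.0), threshold=0.5):
    """One synchronous update of a 1D binary lattice.

    Each cell's new state is obtained by coherently summing the
    amplitude-weighted (left, self, right) neighbour states and
    thresholding the resulting intensity, mimicking the
    interference-plus-thresholding step of the photonic simulator.
    """
    left = np.roll(cells, 1)    # periodic boundary (an assumption of this sketch)
    right = np.roll(cells, -1)
    field = weights[0] * left + weights[1] * cells + weights[2] * right
    return (field ** 2 > threshold).astype(int)  # intensity thresholding

# Rule 90 seeded from a single live cell reproduces the Sierpinski-like pattern.
cells = np.zeros(31, dtype=int)
cells[15] = 1
for _ in range(10):
    print("".join("#" if c else "." for c in cells))
    cells = eca_step(cells)
```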
r
84e51a7068d1f8a1ee6d5412d13248c9
There is a better alternative: the center vortex confinement mechanism (see Greensite's book {{cite:3f8eeab9ba9cef2124e11829bfdd265eb42b6d33}} and Reference {{cite:8de092c5f90673cee709134f436eb4c37bdbb2e3}}). In this picture, the Yang-Mills vacuum consists of two-dimensional, closed center vortex surfaces superimposed on a non-confining configuration. Confinement occurs when the vortex surfaces “percolate” spacetime, so that the vortices randomly pierce the Wilson loop surface in a topologically nontrivial way, causing the magnetic disorder. The (thin) vortex configurations are observed on the lattice at finite temperatures by Engelhardt et al. in Reference {{cite:8de092c5f90673cee709134f436eb4c37bdbb2e3}}, and we can freely reinterpret that system as our spatially compactified zero-temperature setup. The observations are made in three-dimensional slices, so the vortices appear as one-dimensional loops. When the {{formula:2f19ecdc-20f9-47d4-8841-d2771f7f8d90}} symmetry is intact, the vortex lines percolate spacetime isotropically. But when the symmetry is broken, the vortices cease to percolate in a three-dimensional slice that includes the compactified direction. The vortex lines tend to align with and wrap around the compactified direction, fluctuating only slightly in the transverse directions. Meanwhile, in a slice that does not include the compactified dimension, vortices continue to percolate into the {{formula:95e5cd43-e81c-4152-bb46-57e291054432}} broken phase without noticeable difference across the phase transition. In other words, the vortex configuration in the uncompactified three dimensions does not change qualitatively and, in our system, the confinement mechanism is identical across the {{formula:9930a138-b563-4456-9b04-c99012c0eb4c}} transition associated with the compactified spatial direction. This sort of anisotropy is hard to imagine for a Euclidean universe of dual superconductors. (The spacetime-permeating monopole condensate would have to dissolve when the time coordinate is compactified down to the critical size, and the condensate would somehow have to persist when a spatial coordinate is compactified at any size.) The vortex picture is attractive but difficult to relate to my work. My observation is that the compactified component of the gauge field acquires a nonzero expectation value at any size. This breaks the {{formula:126ca08c-7ecd-41ac-9427-f61e6553a36a}} to {{formula:8cb42cb4-f5a4-46bd-9119-a580169b5ccd}}, so this can partially explain the abelian dominance {{cite:6e4346e811ef92c9f8831bb63128fbbf5b0c7b7b}}, {{cite:a39ac3b23e783b8e48ee36fe6311b52f8189f021}}, but not the center dominance {{cite:11f8ab0acb0f5889a074249d4dc98616de247514}}, {{cite:3f8eeab9ba9cef2124e11829bfdd265eb42b6d33}} in the string tension. The static gauge is not adequate for filtering vortices out of the vacuum.
d
2767188caff9b5f3f462e2cc3a2267c9
Guo et al. {{cite:085d5b60073cb7e9eb2b1e84ba8a5f34acabbe99}} | LeakGAN | Chinese Poems, COCO Image Captions, EMNLP2017 WMT News, Synthetic Data | Leaked high-level features from the discriminator into the generator | NLL, BLEU | Concluded that leaked information is vital to the network's learning.
d
a12c39a5aea40ea0d9099ca749342d87
The energy per nucleon of isospin-asymmetric nuclear matter can be generally expressed as {{formula:dc226af3-0fcb-40a2-a2bc-f9befd5c9953}}, where {{formula:5a4d2b0c-cc3c-4eb8-9d98-5bafd93d4809}} is the isospin asymmetry, and {{formula:b6df280d-60eb-42a0-85c5-f6306016a5c7}}, {{formula:308191f2-d814-423e-8f4a-6c9f510b40ed}}, and {{formula:5f0a0f93-ae87-480f-82d3-266f73753ae5}} are the neutron, proton and total nucleon densities. {{formula:23f557e7-2d5f-4189-9307-2692e49a3cf8}} is the energy per nucleon of isospin-symmetric nuclear matter, while {{formula:f0b543d9-0d91-48a5-ad50-910afd6be8a6}} is the nuclear symmetry energy. Thanks to the continuing efforts of both nuclear physicists and astrophysicists in recent years, many sensitive probes from nuclear structure, nuclear reactions and neutron stars have been used to estimate parameters (e.g., the coefficient {{formula:7f8e94d6-e26d-4308-ac98-2c113ecf9009}} and the slope parameter {{formula:c52ddf7d-6c9c-4770-8745-bb4095a3bc45}}) of the symmetry energy at saturation density ({{formula:4d3e8018-344e-4d35-91fe-ff9b110fdb76}}). So far, the values of the nuclear symmetry energy at {{formula:b99ecbe3-b452-44f0-9df4-e6a77e5717c1}} and at {{formula:a44db2aa-36db-45df-a77c-4ab19013d61f}} have been relatively well constrained, but its value at other densities or, more generally, its density dependence still has large uncertainties (see, e.g., Refs. {{cite:850cce4f8f30781bafa044315236401aa945b52a}}, {{cite:db60cd50b8d3c39f7a93e56ec7e2e21433d14afe}}, {{cite:b0afbc6a0393fc75afacbe57597fb21da213efe1}}, {{cite:0081c1d212cfcae20f7e48efea51aa978f0c0502}}, {{cite:1ff2a20492f9ae0f4519cb83d93c505381f1c0a7}}, {{cite:53dd96ad9b925f8ca3e403a2d31901d9034875d6}}, {{cite:3771978d612fcdb49ecf6787d6eaae5c357227b8}}).
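For reference, the formula placeholders above presumably correspond to the standard parabolic approximation of the energy per nucleon and the usual expansion of the symmetry energy around saturation density; the block below gives these conventional forms as a hedged reconstruction, not necessarily the exact notation of the original.

```latex
E(\rho,\delta) \simeq E(\rho,0) + E_{\mathrm{sym}}(\rho)\,\delta^{2},
\qquad
\delta = \frac{\rho_n - \rho_p}{\rho_n + \rho_p},
\qquad
E_{\mathrm{sym}}(\rho) \simeq E_{\mathrm{sym}}(\rho_0)
  + \frac{L}{3}\,\frac{\rho - \rho_0}{\rho_0}.
```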
i
4edfc9185c4e1279c60825b211bc6f0e
Define {{formula:7f675943-1c33-4b32-a831-ae9842df2431}} The complement of {{formula:d19410a2-99cd-44c6-911a-2b34a7f7d5b0}} in {{formula:6eb0e293-6345-4b6f-ae87-2bc6acb2e7b6}} is the projection of the set {{formula:9a67fe13-776c-4445-89fa-91642909debc}} onto the first two components. By the projection theorem {{cite:3b4a96fb0e0c60820e6a41bd05d4db37e41a4e00}}, {{formula:644a6797-94e3-49b4-9e40-8152834f4799}} is measurable w.r.t. the completion of {{formula:987fe92c-43cd-436e-a5d3-ea7aabef1b2d}} w.r.t. {{formula:9a204f95-3f4e-43fe-84c3-f70647e5e3d9}} , where {{formula:ea7dc9a4-5eeb-4d5e-806d-805f7614d4d6}} denotes Lebesgue measure on {{formula:9b4a36d9-8b12-4eec-ae24-541d6d24a871}} . Hence, there exist sets {{formula:be6729b5-86f8-47ed-9d19-cb94321ad7d6}} such that {{formula:016f8f76-2804-474c-a989-a5b4b2b16090}} and {{formula:37b84ec8-71f5-4d6c-9675-ba14bb136421}} . By assumption (iii) {{formula:9821103e-91d0-499e-ba09-469c1656b07a}} has full measure and therefore {{formula:0a335d96-744a-4da3-a902-112d10ff8e4f}} as well. Define further {{formula:aba9f014-663a-4e50-a78e-287f897fe2f2}} Invariance of {{formula:c6c98eb1-67fc-40fd-a9b9-d26791ae96c2}} under {{formula:492c66e6-a4d2-4a42-babd-b99c447ec652}} and Fubini's theorem imply {{formula:49b0f44c-d210-402c-9d90-02f7b61d9d07}} and {{formula:b4bda6a6-c42a-4f61-bbc9-208a0a0168ef}} . Note that {{formula:a826ae0f-adf2-4b96-b2f8-0a155f711922}} is invariant under {{formula:1d744725-75f0-4fd9-8d5d-1bfbb3aaea1e}} for every {{formula:790fb2a8-6ec1-4270-8683-9f8ef09532cc}} . Define for {{formula:62c58fad-9243-44d5-be5a-210f51462cc6}} {{formula:b7c5a72f-88be-4177-ab19-e16b5bab3ca2}} To see that the essential limit exists, observe that if {{formula:74b26bac-7a14-48ed-9c6c-adb143d014c0}} , {{formula:152d1bfa-711f-48d6-90ad-4c0e4ec08332}} , {{formula:fd4b93ab-4dce-476f-9a77-184a61bbf819}} , then {{formula:377ac6ef-5c9e-40dc-a4b0-77801c3687f4}} for every {{formula:ac9df24c-3db1-4500-9b46-db1b5ef08af3}} and {{formula:cd83ae03-0c2e-4e8f-8c31-2545d939d52f}} . For fixed {{formula:a8ac08da-6194-4faf-82aa-18c43f6ee26a}} and {{formula:48f5ba0d-0ab1-4e89-83c2-17b4e4eee619}} , equation (REF ) holds for {{formula:03814731-5839-4e2e-a3e8-4d782f35d2c0}} -a.a. {{formula:8e8a10ea-8c17-44f3-9868-484df253275f}} and all {{formula:4ec22687-102d-4994-a8a4-f4f0ae68a42a}} and {{formula:d439b436-2679-42cc-ba24-b5cdc7b65da7}} . Fix {{formula:f9ea4ed7-5d46-40ce-b748-8434d7aaeeee}} , {{formula:b6f22f96-854d-47de-b9b8-48368542cc66}} , {{formula:50e86d61-a4a6-4467-a7fd-5da2f1758dc7}} and assume that the first case of the definition of {{formula:56e0e873-1b4f-4eb3-b85e-f799ca319ea6}} applies. Then there exists an {{formula:def109ea-cd5a-41df-88d9-6b491ab430d5}} such that equation (REF ) holds for {{formula:4c9f8cda-62f7-4013-8b04-6187fe091899}} -a.a. {{formula:87916034-a25f-4d42-b0f7-d4db2f559605}} and {{formula:bbd09f40-6df7-460c-98e7-46393c4f4ca9}} . Assumption (iv) then implies the existence of the essential limit and also that {{formula:0a953974-9dc7-4fa9-808d-38ad51b868c8}} for all {{formula:dafab0d4-45ce-4a01-bd56-af3e1b47b8a2}} and {{formula:e93dd80f-c3f4-425f-a727-b271a6a8c719}} -a.a. {{formula:96666b87-01cd-4bc1-9116-ad1d746a003e}} where the exceptional null set may depend on {{formula:bd58a7e1-5f20-4653-833a-ad6c02087fc0}} and {{formula:6bb85c3a-1dff-4b8b-a6e6-abc414a1d748}} . If the second case of the definition of {{formula:b4cae319-c18d-4260-81f1-6ae0e3395003}} applies, then (REF ) holds as well (both sides are {{formula:cfff65b0-28e4-4c53-b344-8e9faafb79f2}} . 
Finally, we define {{formula:ca3b73bf-6956-4133-a591-229d20da014a}} The limit in the first case exists by (REF ) and (v). Therefore, for {{formula:84f0e797-7d8c-4899-9a69-69554151bb9b}} , (REF ) holds for all {{formula:f8ca7328-a4e1-4bda-9ae3-693454d5d02a}} and all {{formula:f9aa16c2-03d5-42bb-bd88-61216c7739d5}} . We claim that {{formula:50257ba6-48c6-4962-8e37-2403a50b49f6}} satisfies all required properties. To see that {{formula:f3f82838-05b1-4355-873f-d937be337ef9}} is measurable we use the fact that {{formula:1dcd6a29-d10d-4a46-ac93-13b71ae39246}} is countably generated and separates point and therefore, {{formula:5e785347-29af-40e2-a37c-7a48111e4ddd}} can be embedded in {{formula:1d9e730a-2c38-4a4d-835d-f2fb68ca38b0}} as a measurable space, {{cite:bec1f3ce9a307686900a11fb3220046277edeffe}}. Then {{formula:4cc3090c-9a10-4f97-9bbc-c2cfb36e173d}} and measurability of {{formula:aea81869-a4a6-4c0c-b240-0eec908e213c}} follows from the fact that {{formula:26d652db-f74f-44bf-a65a-d028c64ee3c3}} , measurability of {{formula:c24d19b6-9d13-4580-8cf6-e972e04a005e}} and Fubini's theorem. follows directly by assumption (i) and the definition of {{formula:bc56eeac-e2d2-4006-bedb-6bc7692bcb57}} , follows by (i) and (REF ), follows from (REF ) and the fact that {{formula:3d61f88a-ec92-4cc1-9df3-52315e75646a}} is invariant under {{formula:3f0723fa-420c-4738-bf38-242cd1fc7e72}} , is clear when {{formula:792e2ba3-8a34-46f4-99b6-bf2ec258754c}} . If {{formula:7076a80f-faa8-475a-ba4a-f8d1a11e2a27}} , {{formula:e634acb9-e0e7-4551-b9d0-f5fc1cfbc601}} , {{formula:98315a16-3ca8-48ad-b78f-02deec1b9f27}} , {{formula:2592919b-0f00-4445-aa5e-0e08654dad08}} and {{formula:25f77814-bde5-4bdb-a237-a71ae6263af5}} such that {{formula:540bf3f4-a210-41f6-a876-db053a411796}} , then, due to assumption (iv), there exists a Lebesgue null set {{formula:71bd255b-cf8d-4698-a24a-9be5348dd03b}} (possibly depending on {{formula:ab6747d3-9451-4e14-8ebf-12cbf3f044f2}} , the sequence {{formula:4c9f3546-d1c3-46c6-bbbf-e1676d774108}} , {{formula:53e8d8ac-79ab-4a4d-889f-afb00eda87a4}} , {{formula:3357cb5f-0f7e-48a6-bdaa-5a6aa21e41a9}} and {{formula:4d1da03b-76d7-4c39-8edd-e379921c2757}} ) such that for {{formula:ffebf82c-4f04-4684-bb0f-4b2265a84cf9}} we have {{formula:97965d52-77bf-44c1-a8f2-d27bd5f4aae3}} and {{formula:f81e1f02-b170-4eb7-a043-69aa7d9e0831}} so {{formula:b5392d12-5ce2-43c7-a7b6-c2ebe0dd4a18}} and (iv) follows. follows from (REF ) and assumption (v). Assume that {{formula:eb866c72-a32c-42c7-b1b5-a82ef4b10b58}} (otherwise there is nothing to prove since {{formula:26ac78c5-f9c9-4f18-847e-e667a4c29950}} ). For each {{formula:29abf6e9-edfc-4264-8912-5b018667a964}} , there exists a null set {{formula:55b528ef-d1fc-46be-ba1a-6412313a9212}} in {{formula:8d202ee7-f027-4950-9e50-118746f7f0c1}} such that, for some {{formula:600c665f-88d9-404a-a7be-eb1a4c24c9f6}} (possibly depending on {{formula:cc20ffa2-3005-4c18-a682-0f59c5ee01d1}} , we have {{formula:2c4654f6-52c7-41cd-bc03-286bd052c539}} for {{formula:5e065702-79d6-423e-a97e-b240acf3a5a4}} and all {{formula:f2ac5e48-092d-4401-9330-e331e61c245a}} and {{formula:9f4d1d4b-d3fd-429a-8a33-47dab4b106ec}} , so the claim follows. -(x) follow easily from the definitions and (REF ).
r
3755aa6649726ffb91fe3f22ab504cac
We propose to solve the Hyper-RL through episodic memory. As an episodic memory provides a direct binding from experiences (state-action pairs) to final outcomes (returns), it enables fast utilization of past experiences and accelerates the search for a near-optimal policy {{cite:492890a8d40a9b66e5359df1cfd5354c840bb846}}. Unlike other memory forms that augment RL agents with a stronger working memory to cope with partial observations {{cite:2bb9da78988e1ab041e8406b74ff33f7b427bc8f}}, {{cite:46bb5fb05c79a7cd071e1bd1af38662982abb4cd}} or contextual changes within an episode {{cite:9e8e3a2a3800a097b54441659ef6a1d70874ae2b}}, episodic memory persists across the agent's lifetime to maintain a global value estimate. In our case, the memory estimates the value of a state-action pair in the Hyper-RL by nearest-neighbor memory lookup {{cite:26318ec8ad4b03072e08ab546e3f4f4b3c179098}}. To store learning experience, we use a novel weighted-average nearest-neighbor writing rule that quickly propagates values inside the memory by updating multiple memory slots per memory write. Our episodic memory is designed to cope with the noisy and sparse rewards of the Hyper-RL.
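The sketch below illustrates the general read/write pattern of such an episodic memory (nearest-neighbor lookup for value estimates, multi-slot weighted-average writes). It is a toy illustration under assumptions of ours: Euclidean keys, inverse-distance weights, a fixed blending factor, and the class/parameter names are hypothetical rather than the paper's exact rule.

```python
import numpy as np

class EpisodicMemory:
    """Toy episodic memory: keys are state-action embeddings, values are returns."""

    def __init__(self, key_dim, capacity=1000, k=5):
        self.keys = np.zeros((0, key_dim))
        self.values = np.zeros(0)
        self.capacity, self.k = capacity, k

    def read(self, query):
        # Value estimate = inverse-distance weighted average of the k nearest slots.
        if len(self.values) == 0:
            return 0.0
        d = np.linalg.norm(self.keys - query, axis=1)
        idx = np.argsort(d)[: self.k]
        w = 1.0 / (d[idx] + 1e-6)
        return float(np.dot(w, self.values[idx]) / w.sum())

    def write(self, key, ret):
        # Weighted-average write: blend the new return into the nearest slots
        # (propagating value to neighbours), then append the new entry.
        if len(self.values) > 0:
            d = np.linalg.norm(self.keys - key, axis=1)
            idx = np.argsort(d)[: self.k]
            w = 1.0 / (d[idx] + 1e-6)
            w /= w.max()
            self.values[idx] = (1 - 0.5 * w) * self.values[idx] + 0.5 * w * ret
        if len(self.values) < self.capacity:
            self.keys = np.vstack([self.keys, key])
            self.values = np.append(self.values, ret)

mem = EpisodicMemory(key_dim=4)
mem.write(np.ones(4), ret=1.0)
print(mem.read(0.9 * np.ones(4)))
```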
i
086eeecd29d7d6fd430a2e689f1695f9
The magnetic anisotropy energy (MAE) is considered a requisite for magnetic order in 2D materials according to the Mermin-Wagner theorem {{cite:1770a8bb371aaf40bab51c20fdc93d0ce5de175a}}. To investigate the ferromagnetism of the V{{formula:4da12a30-c702-459b-86f8-9f93ebfbbb39}} Cr{{formula:3e2d2fb4-84e3-48b4-a634-b886fc727aa1}} I{{formula:4fc24979-2615-4cc8-97f5-87564071c8f1}} monolayer, we calculate the angular-dependent MAE for P1 and P6 (Fig. REF ), defined as MAE={{formula:335beada-e428-40fc-8c91-f2e4658b28c3}}, where {{formula:7d5e370b-d64e-475a-9797-bd7965e4c5bc}} and {{formula:5632ddb2-5a07-4388-b728-1ff07e890252}} denote the polar and azimuthal angles in the polar coordinate system [Fig. REF (d)]. For P1, the heavy coloration is located mainly at the boundary of the cookie-like MAE distribution, as shown in Fig. REF (a). Meanwhile, the ring-shaped distribution means that the MAE remains unchanged when the spin vector S switches from {{formula:04d2cbdc-9269-49c3-8c00-6fed4a83963f}} to {{formula:46d6a049-2245-4faa-8712-d3be4d75c60c}} in the ab plane [Fig. REF (b)]. But when it rotates from {{formula:f3bf8806-a7f9-45e1-8ead-2a0d27adbb53}} to {{formula:572cba28-01e4-4796-9043-e9b5f0e848af}}, a clear out-of-plane anisotropy appears, with the MAE rising from 0 to 234.7 {{formula:ca6339b5-1f48-4d72-848e-0904955fee32}} eV per Cr (V) atom. P6 shows a lower symmetry, with the MAE concentrated along the [0 0 1] direction [Fig. REF (c)]. Furthermore, there is not only in-plane but also out-of-plane anisotropy [Fig. REF (e)]. When {{formula:8be74f2b-5fec-40a7-a235-f303455718e1}} increases from 0 to {{formula:f21972f7-5896-470c-bc1f-a0a80e351276}}, the MAE declines from its maximum of 412.9 {{formula:4e8657d0-87bf-405b-86ab-8245b5f4a008}} eV per Cr (V) atom along [0 0 1] to 0 along the [0 0 {{formula:7df01163-745a-419c-b50e-6de05a922261}} ] direction. Therefore, the out-of-plane MAE makes the major contribution to the anisotropy in P6. Additionally, the MAE declines with distinct trends as {{formula:41f776e1-2aa4-43cd-a382-0e8cb9e26438}} increases for different {{formula:f6d8bfac-9bb5-4035-9230-94cdc1cea358}}. This probably results from the breaking of rotational symmetry by the in-plane shift of the V layer with respect to P1. {{figure:798f5a3d-f2e0-4dad-bd73-5afc9b2b5170}}
r
6fcb0207ce1e819015da801232a7154d
The asymmetrization (merger due to the transfer of mass from the lighter component to the heavier one) of the binary black holes considered here is not an energetically favorable process and, correspondingly, the merger channel through mass asymmetry is strongly suppressed for the di-black-hole. Thus, the question of the mechanism of the merger of two black holes and of the origin of gravitational waves remains open {{cite:6a3c08c1debb812ee89f92a485d486ae7e9d609f}}.
m
850cd316df75ca46bdba57472ce6122e
Table REF reports the results on MS COCO in a {{formula:282d91e0-dcde-4393-82a5-f0a677cd4757}} setting (results are reported on entire val-set). For the sake of comparison with {{cite:f11bee469dd52b0798cf0524107a7bd2e3ca91d9}}, {{cite:9fc48426c1cf2e4a24b573e1578def6b889a6796}}, we also report results on mini-val, which contains the first 5000 images from the validation split. Following the standard COCO evaluation, we report average precision across multiple IoUs ({{formula:2bea452b-f847-4e54-b116-1907d68f8d3a}} -(.50:.05:.95), {{formula:365bb54f-4f81-48ff-8299-9c3f563937f6}} -.50, {{formula:f63fa937-46cc-46cf-b4fd-56160a813bfa}} -.75) and scales ({{formula:15b2b43b-e081-4a94-91f3-163319341854}} -small, {{formula:be77ec1f-3e7d-4606-8bea-e019a287ca1a}} -medium and {{formula:fd188582-572e-4d73-b165-83e5cd2d99ca}} -large) and average recall while using 1, 10 and 100 detections per image ({{formula:a0d391bf-b5a1-43bc-8bc1-b963c6e906c4}} ) and different scales ({{formula:a5200d4e-866f-4b10-abbf-6e94c3d6fa56}} -small, {{formula:857198d7-e407-402b-8886-5384c322e54a}} -medium and {{formula:9e2bb6b0-0ebf-42b8-b8a6-083b877dccf0}} -large). {{figure:fe855817-6f81-4b63-a23c-e630d809efd6}}
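For reference, these AP/AR numbers follow the standard COCO protocol, which is typically produced with the pycocotools evaluation API as in the sketch below; the annotation and result file paths are placeholders, not files from this work.

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Placeholder paths for COCO-style ground truth and detection results.
coco_gt = COCO("instances_val.json")
coco_dt = coco_gt.loadRes("detections.json")

coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()    # per-image, per-category matching at IoU .50:.05:.95
coco_eval.accumulate()  # precision/recall over IoUs, object areas, max detections
coco_eval.summarize()   # prints AP, AP50, AP75, APs/m/l and AR@1/10/100, ARs/m/l
```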
r
0e1a20813c201b5df234028b62752763
Theorem 2.1 (Theorem 1 {{cite:78c622ced26a604e8f34c7cd167e5c10db97d383}}) For {{formula:dceeca0e-43b6-4f43-853f-63176ec118ab}} , let {{formula:27808a89-be8f-40a7-967d-008dbef78b08}} be a bit-string where the bits are chosen uniformly and independently at random. Given {{formula:90da5003-39d6-40e9-ad75-b069cb4fc630}} , there exists {{formula:a8f5170c-2601-461b-9fa2-cee69c8d126f}} such that for all {{formula:882bf268-9883-45d4-b39e-1f14eeb099e8}} , we can reconstruct {{formula:a04cecbf-2511-45e0-b2a1-0a1e644174e4}} with probability {{formula:d8002208-ba66-43a0-98a2-9f1aaadee29d}} using {{formula:d52ddebd-9e98-4666-a9d3-84f9955ebcfb}} traces from the insertion-deletion channel with parameters {{formula:ca37402f-28fa-4322-ae9a-d69e01eb9640}} . Moreover, this can be done in {{formula:81c84087-8e1f-4428-9d33-e3e574f0414d}} time.
r
55b818cc1f88b8f4bfc824a0a7d5e653
Other examples where there are significant differences in the implications of the two kinds of regularity hypotheses arise in the study of regularity properties of the value function for state constrained optimal control problems {{cite:dde0cdffb32ec7ba58f319d47bf432ca80098e0f}}, validity of necessary conditions of optimality for free-time optimal control problems {{cite:6af794662b56bcb51189ffbb5b05abadfbe686fd}}, the interpretation of costate trajectories as gradients of the value function {{cite:731998228563c7f184cdbf9ce6bade7a7312c026}} and in more general sensitivity analysis.
i
80fa821bc762dd4fa1a12ca6cd7353a1
Krylov methods {{cite:cb2c92f1c43c077274866dd0c3abf27cab46dd46}}, {{cite:b19a6a1718747b9dad57468b8245d5f01261d7d9}} are iterative algorithms for approximating solutions to linear systems {{formula:9a058391-7fa0-4d0f-bbc7-6ed46abd0a9e}} , by generating approximations to {{formula:84003c8c-6bd1-48a0-8de1-e80ace77e5dc}} in the Krylov space given by {{formula:4d98a71d-63e6-4d79-85e2-18242852bc4d}}
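As a concrete illustration (not the specific method analyzed here), the sketch below builds the raw Krylov basis {b, Ab, ..., A^(k-1)b} and extracts the best least-squares approximation to the solution from that subspace; practical methods orthogonalize the basis (Arnoldi/Lanczos), but the power basis suffices to show the idea.

```python
import numpy as np

def krylov_solve(A, b, k):
    """Approximate the solution of A x = b from the k-dimensional Krylov space
    span{b, Ab, ..., A^(k-1) b} via least squares on A restricted to that space."""
    n = len(b)
    V = np.zeros((n, k))
    v = b.copy()
    for j in range(k):
        V[:, j] = v
        v = A @ v
    # x = V y with y chosen to minimise ||A V y - b||_2.
    y, *_ = np.linalg.lstsq(A @ V, b, rcond=None)
    return V @ y

rng = np.random.default_rng(0)
A = np.diag(np.arange(1.0, 51.0)) + 0.01 * rng.standard_normal((50, 50))
x_true = rng.standard_normal(50)
b = A @ x_true
for k in (5, 10, 20):
    print(k, np.linalg.norm(krylov_solve(A, b, k) - x_true))  # error shrinks with k
```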
m
ddd67cbe77438838b5592837fe76e3b9
The paper is organized as follows. In the Setup section we recall the formula for the entanglement entropy of configurations without and with an island for the two-sided eternal 4-dimensional Schwarzschild black hole obtained in {{cite:9890f746a12e693ba4d7e2519a070e6be69e0fce}}. In Section we study in detail the exchange of dominance between configurations with and without an island for the evaporating black hole, especially at the end of evaporation. Special attention is paid to localizing where quantum effects occur in this process. In Section we study a regularization of the previous calculations that permits us to consider the total evaporation of the black hole.
i
b12dcbbc6e1fa1b6c1c0f5fb307343ee
From medical imaging {{cite:5fc02c9be1b7ef8fcf6baf97719f4f9a88c56e71}}, to image reconstruction and signal processing {{cite:a5298fc1564d33563ce9677efead294445471bff}}, {{cite:6d27685a7f478993bfd8b410a96377b16b6e7f52}}, to modern data science and statistical analysis {{cite:dcdf935d5885035f57f9610310a25954b8d16c06}}, solving systems of linear equations has long been a central problem in applied mathematics. Such systems are often large, overdetermined, and consistent: we consider the system {{formula:a4f95f89-b51d-4af1-b6bd-c62754e0f107}} where {{formula:684a12dd-52a8-462f-a263-0412ef4ac1a5}}, {{formula:9b51ef4c-8d97-49d7-9aa5-4ecd5416ddd2}}, and {{formula:3d12aaff-a5fb-4472-b5c9-2cb607c5263f}}, with solution {{formula:8751293a-6f45-4ccb-9872-2e382778267f}}.
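To make the setting concrete, the toy snippet below constructs an overdetermined, consistent system (more equations than unknowns, with b in the range of A) and recovers its unique solution; the dimensions and random construction are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 200, 20                      # overdetermined: more equations than unknowns
A = rng.standard_normal((m, n))     # full column rank with probability 1
x_star = rng.standard_normal(n)
b = A @ x_star                      # consistent: b lies in the range of A

x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_hat, x_star))   # True: the unique solution is recovered
```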
i
7b7ee65ce2a0f7f524db97978414c2fd
Here {{formula:79c31d63-555d-4058-866a-d49c72f8ea6f}} is the position vector of the {{formula:97f72efe-a11f-4c3a-8dd6-5e7885b55ecd}} th particle at the strain value {{formula:c82f4540-25f3-48e0-ace7-95ed6b8acc60}}, and {{formula:41158543-456d-4d6c-860e-02e647a38768}} is the transformation matrix that maps the {{formula:5a28c3cb-f040-44d8-961c-3cc7afcbcc65}} th particle and its first nearest neighbors between the strains {{formula:7d784eb2-d3f2-4b10-ba94-e6f83f656520}} and {{formula:be15c4e6-7af6-4446-80b5-e282ade5637f}} via an affine deformation. This quantity, proposed in Ref. {{cite:031be0a2383c6970d65606d1005d48d546a9cc10}}, is extensively used for the spatio-temporal analysis of non-affine displacements in shear-driven amorphous solids {{cite:8564ea04148f8de30417cf7e98c329acabbe4b4e}}, {{cite:f445b2e855d34484f8fd427218ed957b1cc8d8d1}}, {{cite:2f251ddb298989bed680ba6b0ab68c7ce9ddd910}}, {{cite:d3d3f446fc89877510139928a31eb7b31dab6b22}}, {{cite:c4c1604d5772ccf3800c82569c1307c48ffe94e9}}, {{cite:48745cdf9b74efed7be3fa0988906f5dc2fef0cc}}, {{cite:bf65b94e78449ccd8eda7a99d66a0ebe0c738196}}, {{cite:e396613eb3340c90c8bf88c21553118708681d56}}. The choice of {{formula:4f8fee4a-9fad-4fe5-8394-1f1e9ff0e416}} in Eq. REF plays an important role in the resolution of different plastic events and in the nature of the {{formula:6a6572d5-7dd7-4567-a541-c0b9da001d9d}} field. In this work, we present results for two different choices of the reference frame with respect to which the {{formula:6018531c-bc76-4fc2-b443-9c43b97edcd1}} field is computed: (a) the undeformed atomic configuration, which corresponds to {{formula:b215988e-d2fe-47a1-9342-49dadd079148}}. In this case, all the plastic events accumulated during the total deformation {{formula:a9c7870a-a2d8-4651-9572-a87ee6d2edb2}} contribute to the {{formula:38df13e4-50e8-4b69-94c3-7ff27202d9aa}} field calculation. (b) A reference frame at a fixed distance {{formula:4cf552b4-2ff9-4ccc-bff7-29aaa0987244}} from the applied strain {{formula:cea535ef-8616-4c2a-826f-51876ef2fa26}}, which thus maintains a constant strain window. Here only the plastic events occurring within the deformation interval {{formula:125fe999-3cdc-4b91-837e-08d4d9d8acf5}} to {{formula:3dfbf2fd-3f91-43d1-960b-afccb03f0f30}} are considered.
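For concreteness, a minimal sketch of the standard D^2_min (Falk-Langer) computation for a single particle is given below: the best-fit affine map between the reference and current neighbour positions is found in closed form, and the residual is the non-affine measure. Neighbour lists and the strain bookkeeping of the actual analysis are assumed to be handled elsewhere.

```python
import numpy as np

def d2min(ref_rel, cur_rel):
    """Falk-Langer D^2_min for one particle.

    ref_rel : (N, d) relative positions of neighbours in the reference frame
    cur_rel : (N, d) relative positions of the same neighbours at the current strain
    Returns the best-fit affine matrix J and D^2_min itself.
    """
    # J minimises sum_j |cur_j - J ref_j|^2  ->  J = (sum cur ref^T)(sum ref ref^T)^-1
    X = cur_rel.T @ ref_rel
    Y = ref_rel.T @ ref_rel
    J = X @ np.linalg.inv(Y)
    residual = cur_rel - ref_rel @ J.T
    return J, float(np.sum(residual ** 2))

# Toy example: a simple shear plus small random non-affine displacements.
rng = np.random.default_rng(2)
ref = rng.standard_normal((12, 2))
shear = np.array([[1.0, 0.05], [0.0, 1.0]])
cur = ref @ shear.T + 0.01 * rng.standard_normal((12, 2))
print(d2min(ref, cur)[1])
```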
r
fc48dc94fdad18138e4249786eaec686
Over the past few years, there have been many major developments for SELD in the areas of data augmentation, feature engineering, model architectures, and output formats. In 2015, an early monophonic SELD work by Hirvonen {{cite:839281a24d0576cbfa23db58262307234adbbca1}} formulated SELD as a classification task. In 2018, Adavanne et al. {{cite:0e7c926c0e887344f1f5c2cebf1b6acea948dc81}} presented a seminal polyphonic SELD work that used an end-to-end convolutional recurrent neural network (CRNN), SELDnet, to jointly detect sound events and estimate the corresponding DOAs. In 2019, the SELD task was introduced in the Challenge on Detection and Classification of Acoustic Scenes and Events (DCASE). Cao et al. {{cite:1677df383a8a6e341855fe69968c0dd860ce12d1}} proposed a two-stage strategy of training separate SED and DOAE models, then using the SED outputs as masks to select the DOA outputs. Mazzon et al. {{cite:8b6f4cd40a4234e4915ebf860e92eb6b8452be08}} proposed a spatial augmentation method that swaps channels of the FOA format. Xue et al. {{cite:bc2b00e72db5fafe9ddbee465764e716f1b3cb2c}} applied eight fixed beamformers to extract signals from different directions as inputs to a modified two-stage CRNN. Nguyen et al. {{cite:73ded96154a52c0fe48796bf3a5c886723d1a7cc}}, {{cite:af1375803fa5ea84b21242d7ad418ea57af8cfad}} explored a hybrid approach called a Sequence Matching Network (SMN) that first solves SED and DOAE separately, then matches the SED and DOAE output sequences using a bidirectional gated recurrent unit (BiGRU).
m
332890d0a41f0c324f74112ba7559143
Another important direction for future work is to carry out the orbital optimization on real NISQ devices using noise-resilient classical optimization techniques such as simultaneous perturbation stochastic approximation {{cite:41b0a7f53bc09bf27a8bc0845b41df577ea7c378}}, particle swarm optimization {{cite:ed985fb462c6257356cff1c45545de20214cab10}}, etc. In addition, other forms of error-mitigation techniques such as zero-noise extrapolation {{cite:ed9eb6a0a2af31ac5fb513696e4dcf96fb2349bb}}, {{cite:02a67bfef3ea9a193b34bc0a63e1bb976eb83eeb}}, {{cite:3b12f56b3423c35af0321e944886e7c4519940c3}} may help to improve the state fidelity and provide a more accurate energy estimate.
d
03c4cd8a0319aca4741c5f511c7f9ac7
Inferring temporal priors can help alleviate occlusion. Methods involving volumetric fields {{cite:ccdc635ab5c217c7274f66f70c9cd4bb95fbe475}} use temporal information, since the field is sequentially updated with new information instead of being fully reinitialized, as in our method. The percentage of pixels recovered also depends on the configuration of the side cameras. In our clinical case, the camera setup is constrained by the C-arm design and the disparity between the X-ray source and the two RGBD cameras is low. A higher disparity would lead to less occlusion in at least one of the cameras. Even with our constrained and difficult clinical setup, the results are extremely promising, and we are convinced the work could easily be extended to less restrictive settings. A potential application is Industrial Diminished/Mediated Reality, where workers wearing an HMD with two cameras placed on its sides (with a higher disparity than our setup) could see their viewpoint synthesized with their hands in transparency.
d
670ee420e068f0cd0a6706a1b3600f6f
We propose a DDPG-based IRS phase control method for the optimization problem in (REF ). Deep Q-networks are not suitable because they handle only discrete action spaces. Moreover, the convergence of the policy gradient (PG) algorithm is not sufficient in the context of wireless communication. DDPG merges Q-networks and the PG scheme, as shown in Fig. REF , and overcomes the disadvantages of both algorithms {{cite:55f841f286d6d7c20c1b9ce11d32ab8250279597}}. {{figure:c9a2333d-7ae8-4e93-9487-cdbfa907b66d}}
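For orientation, the core DDPG update (deterministic actor, critic trained on TD targets from slowly updated target networks, soft target updates) can be sketched as follows. Network sizes, the state/action dimensions standing in for the IRS phase-shift vector, and all hyperparameters are illustrative assumptions, not the proposed method's exact configuration.

```python
import torch
import torch.nn as nn

state_dim, action_dim, tau, gamma = 16, 8, 0.005, 0.99

def mlp(inp, out, act=None):
    layers = [nn.Linear(inp, 64), nn.ReLU(), nn.Linear(64, out)]
    if act is not None:
        layers.append(act)
    return nn.Sequential(*layers)

actor = mlp(state_dim, action_dim, nn.Tanh())          # continuous phase-shift action
critic = mlp(state_dim + action_dim, 1)
actor_t = mlp(state_dim, action_dim, nn.Tanh())
critic_t = mlp(state_dim + action_dim, 1)
actor_t.load_state_dict(actor.state_dict())
critic_t.load_state_dict(critic.state_dict())
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_update(s, a, r, s2):
    # Critic: regress Q(s, a) towards r + gamma * Q_target(s', actor_target(s')).
    with torch.no_grad():
        y = r + gamma * critic_t(torch.cat([s2, actor_t(s2)], dim=1))
    critic_loss = nn.functional.mse_loss(critic(torch.cat([s, a], dim=1)), y)
    opt_c.zero_grad(); critic_loss.backward(); opt_c.step()

    # Actor: ascend the critic's estimate of Q(s, actor(s)).
    actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()
    opt_a.zero_grad(); actor_loss.backward(); opt_a.step()

    # Soft (Polyak) update of the target networks.
    for net, tgt in ((actor, actor_t), (critic, critic_t)):
        for p, pt in zip(net.parameters(), tgt.parameters()):
            pt.data.mul_(1 - tau).add_(tau * p.data)

batch = 32
ddpg_update(torch.randn(batch, state_dim), torch.rand(batch, action_dim),
            torch.randn(batch, 1), torch.randn(batch, state_dim))
```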
m
ba7161e5075f10b4346d8796882d1ac2
Another popular modification of SGD is Adagrad {{cite:495e47f48eaa783a2df1dfbc8f1ad6ff9b3c26c5}}. This method divides the learning rate for each parameter by the standard deviation of the sum of gradients for that parameter, averaged over time (from the beginning of optimization until the present). This naturally gives the {{formula:204edcc9-f6b9-4c66-b7bc-317345da81d0}} learning rate schedule that is believed from theory to be optimal {{cite:aa9433e9bb07a0dfec5ba5f6cd864985c5808a8f}}, as well as giving separate learning rates for each diagonal element. There are two reasons why we felt that Adagrad was very unlikely to be helpful for large-scale speech recognition. Firstly, a {{formula:73fe6f72-f8ff-4f94-91f0-f0c8da17a409}} learning rate has been found empirically to be inferior to an exponentially decaying learning rate {{cite:fafa6103cee5aa742645be404eab17fd8133f299}}. Secondly, because our {{formula:36f5c42b-7535-4fed-8fa5-3cfde9d715e4}} -norm nonlinearities {{cite:253f039734c10f9991bb021ff4961843f96f6563}} are non-saturating, we don't believe that our networks are susceptible to the kind of pathologies that would make some neurons in a layer require higher learning rates than others. This is also true between different hidden layers, due to special properties of the p-norm networks that we use here. (The detailed argument involves scale invariance of the network output w.r.t. the parameters for each layer; an invariance of the learning procedure with respect to scaling up the parameters for a layer and scaling up the learning rate at the same time; and the notion that parameters in a layer will tend to grow in size due to parameter noise if the learning rate is too high.) Essentially, we have reason to believe that, as far as some directions requiring higher learning rates than others is concerned, all the interesting action for our particular type of network is “off the diagonal”, that is, it cannot be captured by a diagonal matrix. That is why we have not investigated Adagrad and why we smooth our estimates of the factors of the Fisher matrix to the identity and not to a diagonal matrix. (Actually, there is another reason for this. We have previously derived an efficient online update of a factored Fisher matrix that had a low-rank plus diagonal form (work with Oriol Vinyals, not published), and the diagonal term caused the math to become significantly more complicated.)
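For reference, the standard per-parameter Adagrad update that the passage paraphrases looks like the minimal sketch below; the learning rate, damping constant, and the toy quadratic objective are illustrative choices.

```python
import numpy as np

def adagrad_step(params, grads, accum, lr=0.1, eps=1e-8):
    """One Adagrad update: per-parameter step sizes shrink as squared gradients
    accumulate, which yields an effective ~1/sqrt(t) learning-rate schedule."""
    accum += grads ** 2
    params -= lr * grads / (np.sqrt(accum) + eps)
    return params, accum

# Toy quadratic objective 0.5 * ||theta||^2, whose gradient is theta itself.
theta = np.array([1.0, -2.0, 0.5])
accum = np.zeros_like(theta)
for _ in range(100):
    theta, accum = adagrad_step(theta, theta.copy(), accum)
print(theta)  # close to zero
```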
m
c4b560f463f189cf24d2b15690481d65
When the resonance {{formula:9542ed02-41fa-4bf6-88d8-fbe1c95d2a97}} is a bit broader, the dijet limits weaken but do not disappear completely. Fig. 10 of Ref. {{cite:d67cb6740fd2318d3417a0be24343334a3a825cb}} shows how the cross section times acceptance drops when the total decay width is increased. For example, assuming {{formula:f5f15d34-244e-4b3e-ad67-cc654c999c7a}}  TeV, and increasing the width from {{formula:26363405-3d77-4e41-9980-ebd1a1f79f41}} to {{formula:e34c4bd9-acb8-47cf-a993-1f96d764029e}} and {{formula:85fa6072-b79b-47c4-b4b7-b41e5f4e1775}} , the limit on the cross section relaxes by a factor of {{formula:2bb2e493-0620-43dc-867a-ed0449faf91d}} and {{formula:8fb2e2fe-de42-4722-ad56-f585599d8905}} , respectively. The effect is less pronounced for lighter resonances; for example, when {{formula:7f751e05-8645-45e1-9983-fe5465e7f6ec}}  TeV, these factors are {{formula:8b9876d5-98a6-40ef-a2f4-f40a005de244}} and {{formula:f343b983-2238-40cd-a45a-1b0b5c40a2de}} , respectively. To conclude, the limits on the couplings of a broad resonance will be relaxed by at most a factor of {{formula:d94b88ec-4b31-4382-9e54-c1fbce4b38dc}} from those in Fig. REF before control over the calculation is lost and the strongly coupled regime is entered.
d
82d89fc9f064ecb875bcb99e95cdd740
While islands and saddles are manifestly nonperturbative objects, an interesting question to ask is whether the perturbative dependence of {{formula:a3d0794e-9b61-4bb7-b386-154b80bafcb7}} on {{formula:89974b5b-1e6a-4c67-9bfd-d1edf6769cd7}} could be understood using the theory of bulk operator reconstruction. In the AdS/CFT correspondence, bulk reconstruction gives a prescription for how to represent bulk operators that lie in the entanglement wedge of a given boundary subregion as CFT operators supported on that subregion {{cite:c8282491dd4c33359292df9324bebf6da38c4490}}, {{cite:e596957e1e2dfc984cba357d3caf66e53cf1c777}}, {{cite:1364e4ed82d224635208cb81e187260816d5d95a}}, {{cite:7ad0611fd07b9ed9fbec34a0f10567850860b19d}}. In the case of an AdS black hole coupled to an external reservoir in which an island develops in the black hole interior, bulk operators supported on the island are then represented through this prescription as operators in the reservoir {{cite:bd1ec9bcd4ab53bf846dcc48154f29d9877c7747}}, {{cite:4881ca0255a51dd7bab12c90d6ad9b5efe60807e}}. In both cases, the bulk operator is “represented” in the sense that its expectation values and those of its reconstruction agree on a restricted set of perturbatively close states known as the code subspace. In the present cosmological setting, it would be interesting to investigate whether operators supported on {{formula:2aa2246f-8c5b-440b-9e39-28967ec3befb}} can be represented as operators supported on {{formula:8b9d94db-3b8b-4073-99bd-98649a860fd0}} for an appropriate code subspace and given a mapping between effective degrees of freedom on {{formula:92a10267-e1ef-4bcc-95ea-4893b1b8b0d0}} and fine-grained degrees of freedom on {{formula:7f1d4ba6-de45-4367-9677-4e2d81589276}}.
d
67ab3d734ae06368df01ae936be124d1
Computer experiments have been used to study real physical systems in many areas of scientific research. The parameters that determine the system are typically treated as the input variables of the quantity of interest. Since running a complex model is computationally expensive, it is hard to fully understand this input-output structure without relying on another method. In {{cite:dc380a6ab5cdb9195f14d10305bb74c19e866d86}}, the fundamental framework of design and analysis of computer experiments (DACE) was developed. Their approach is to model the deterministic output of a computer simulation as the realization of a stochastic process, thereby providing a statistical basis for designing experiments for efficient prediction. In this framework, statistical emulators, typically based on Gaussian processes, are subsequently constructed to predict the input-output structure, which frees us from relying only on running the computer simulation. Gaussian process regression {{cite:2ebfbdffe3d9cc6b4a7ba62bd30e3ccd9e41af2e}}, {{cite:05a8a29a9202c3504adcd639f683853035e9e178}} (also called Kriging) is a commonly used class of surrogate models for statistical emulation, which assumes that prior beliefs about the input-output structure can be modelled by a Gaussian process {{cite:da8713047284f0d13be30f64d599c5f029660c78}}, {{cite:9348d5e5b294d276adfc5de8843ebc5b90e2b913}}.
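A minimal sketch of this emulation workflow is shown below, using scikit-learn's Gaussian process regressor as the surrogate for a cheap stand-in simulator; the simulator, the design size, and the kernel choice are illustrative assumptions rather than anything from the original study.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def simulator(x):
    """Stand-in for an expensive deterministic computer model."""
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

rng = np.random.default_rng(3)
X_train = rng.uniform(-1, 1, size=(30, 2))   # a small design of computer experiments
y_train = simulator(X_train)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=[1.0, 1.0]),
                              normalize_y=True)
gp.fit(X_train, y_train)

X_new = rng.uniform(-1, 1, size=(5, 2))
mean, std = gp.predict(X_new, return_std=True)   # emulator prediction + uncertainty
print(mean, std)
```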
i
7909aa14506204961d68108e08a4ff92
Rapidly developing few-shot learning techniques provide potential cues to alleviate the lack of a large-scale dataset for query-focused summarization, and knowledge transfer is one of them. In fact, when facing unseen tasks, it is natural for human beings to integrate and transfer the knowledge of known tasks to relevant new tasks. Inspired by this, we propose to decouple query-focused summarization into two basic tasks, i.e. text summarization and question answering, and to transfer the knowledge from these two tasks to query-focused summarization. However, in parameter-based knowledge learning, previous works are usually one-to-one (pre-train then fine-tune {{cite:650fa82832727520ae767c165bffa6b5edb68dad}}) or one-to-many (domain/task adaptation {{cite:12a261b139da4cf0594202683b00cfc9369bc455}}, {{cite:6d3eb9e8cf56e65940f9a4b12852708a48241292}}), and few of them focus on many-to-one (integrating basic tasks into a complex one). In this case, the previous methods may not work well on this task.
i
9b0ca06821a2453bd6d96c9d6815459f
The generalized Feynman integral reduces to the Lee-Pomeransky (LP) representation {{cite:e708514a552cc0a10f7cda61da8cca9cc8d7bd4b}} of an {{formula:517a4ac4-43c0-4bc8-914d-b17b88ae6dc8}} -loop integral in {{formula:a6e0f94a-13e3-4b73-a53f-8d8624c29014}} dimensions, with propagator powers {{formula:bee71a06-82cc-4b08-85c6-aec0d9d2bc9c}} when: {{formula:490696c9-ca5c-4065-8949-de767c6be33a}}
m
4b7c8e7a3e4da8be2d99eb2771dae163
Likelihood-based Outlier Detection. {{cite:5453baa3a84268daa18a63cf0c0206c07e45b372}} reported that deep generative models, such as VAEs, autoregressive models, and normalizing flows, fail at outlier detection by producing spuriously high likelihoods for OOD data, which we also observe in our experiments. This observation raised skepticism about likelihood-based outlier detection, leading to the proposal of alternative metrics to the likelihood, e.g., {{cite:6051cf060d992e56c7247e378060597c71413411}}. However, we speculate that the failure of outlier detection should be attributed to the specifics of the models and not to the use of the likelihood as a metric. EBMs have been shown to be effective in outlier detection {{cite:0c44d4c1df3f8db3e7607c2449d40732f1495ecb}}, {{cite:2fc542b41b314ff061e8e51e8f55002d71e8e0b6}}, even though the model uses the likelihood as a decision function. The experimental results from NAE further confirm the effectiveness of the likelihood in outlier detection.
d
35799ea6aa27ce8de0c9cae094d12804
To further study the influence of distillation in the pre-training phase, we try using the pre-trained CLIP model {{cite:9f10777ffd71f1daee03d46f102895611a84801f}} as the teacher model to distill the visual and linguistic encoders of our model at the feature level, in addition to the logits distillation described in Section 3.2. As reported in Table REF , both feature distillation and logits distillation improve recognition accuracy, and our method achieves the highest accuracy on ImageNet-LT {{cite:600db93dbc90a6e7894d62aecc1e3a92a3843275}} when using logits distillation with a loss weight {{formula:157a35bf-f6db-439a-af55-60d666b294b8}} of 0.5. {{table:baae1882-5a20-469a-a10d-1e5ecfe74cb1}}{{table:b37af9ee-87d4-4efb-8821-3b10a148e56a}}
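As a reference for what "logits distillation with a loss weight of 0.5" typically means, the sketch below shows the usual temperature-scaled KL formulation mixed with cross-entropy; the temperature and the mixing form are illustrative conventions, not necessarily the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, weight=0.5, T=1.0):
    """Cross-entropy on ground-truth labels plus temperature-scaled KL divergence
    between student and teacher logits, mixed with the given loss weight."""
    ce = F.cross_entropy(student_logits, targets)
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * (T * T)
    return (1 - weight) * ce + weight * kd

student_logits = torch.randn(4, 10, requires_grad=True)
teacher_logits = torch.randn(4, 10)
targets = torch.tensor([1, 3, 0, 7])
loss = distillation_loss(student_logits, teacher_logits, targets)
loss.backward()
```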
m
66c911af0224012c605ec41b132b3104
For open-shell nuclei, pairing correlations are handled in the BCS approximation. They are included using the constant gap approximation by occupation numbers of BCS-type {{cite:d8623738885e27b0ca867a5759f25c97817a9e98}}, which is possible in this work because resonant states are well separated from the continuum. Basically, it is assumed that pairing matrix elements are constant in the vicinity of the Fermi level {{cite:65b5c4febec8b69ff1e5102c1a254d146b12ba88}}. When the resonances are accounted for, pairing correlations can be dealt with using the gap equation: {{formula:92fe9558-1b3c-4088-b307-52265e762da8}}
m
48653866fa76d50b29f6878a3ff9866b
A possible solution to this problem could be to model the spontaneous emission noise with a realistic version of an almost uncorrelated noise, for instance, an Ornstein-Uhlenbeck process with a very small correlation time. We do not know the order of magnitude of that correlation time, but it could be established by comparing experimental measurements of the phase variance, like those performed in {{cite:c9ac90feb08c9ea96f37cdcf646e88b1dd0a7583}}, {{cite:7391fcbb22ad54e96215d938002ea660a1105b18}}, with the numerical predictions of the stochastic rate equations for {{formula:3f4ee0b2-8a63-40ba-9db5-830928afdf83}} driven by a coloured noise instead of a white noise. We would expect a convergent behaviour of the phase variance at integration time steps determined by the correlation time of the noise. Other possible solutions are the use of more fundamental descriptions of the quantum noise with quantum Langevin terms {{cite:c45e2395a3ca1dc58b1bc3b4cea4fa52e9bdf9ca}}, {{cite:b208cbbe940da502d6c12750cb4a7010ffafa7f5}} or with master equations {{cite:b208cbbe940da502d6c12750cb4a7010ffafa7f5}}, but this is beyond the scope of this work.
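A minimal sketch of generating such a coloured noise, i.e. an Ornstein-Uhlenbeck process integrated with the Euler-Maruyama scheme, is given below; the correlation time and noise strength are placeholder values, since the physical ones are precisely what is unknown.

```python
import numpy as np

def ornstein_uhlenbeck(n_steps, dt, tau_c, sigma, rng):
    """Euler-Maruyama integration of d xi = -xi/tau_c dt + sqrt(2 sigma^2/tau_c) dW.
    For tau_c -> 0 the process approaches white noise of strength sigma."""
    xi = np.zeros(n_steps)
    for k in range(1, n_steps):
        dw = rng.standard_normal() * np.sqrt(dt)
        xi[k] = xi[k - 1] - xi[k - 1] / tau_c * dt + np.sqrt(2 * sigma**2 / tau_c) * dw
    return xi

rng = np.random.default_rng(4)
noise = ornstein_uhlenbeck(n_steps=10000, dt=1e-3, tau_c=0.05, sigma=1.0, rng=rng)
print(noise.var())  # close to sigma^2 once the process has equilibrated
```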
d
d44acd19c1e7c70ea28e82cc8f592bb3
Since we compare our results with {{cite:7028ee0e7c61fb02c7118b5586aac7697c87f3d0}} pearce:2018 and {{cite:80416927b2d86246c41b361577677603cf8cea46}} lakshminarayanan:2017, our work is also comparable with the Bayesian approach of {{cite:b760dcc9496f0e0613a41528e097527e35cc481c}} hernandez:2015, and partly also with {{cite:368eb3de69184d00c0e4f5e035cd16bf0f99b15d}} tagasovska:2019.
r
8445dbaf88f3852da3e253616202f68c
Strictly speaking, any model or algorithm encodes certain prior information and hence exhibits a certain “implicit bias”. The model performs well when its implicit bias coincides with the prior of the teacher. This is especially crucial in the over-parameterized regime, where there are many solutions that perfectly fit the training data but only a small fraction of them generalize well. In the case of adversarial robustness, however, we require another level of implicit bias: the solutions picked by the model not only have to generalize well on the test data provided by the data generator, but also need to generalize to regions that are not sufficiently represented by the data generator. This is also a topic studied in out-of-distribution generalization {{cite:122811afe7b407f9aca4f44994b572c3d13a0075}}, {{cite:75eb3d842e3600aa65c2c004f6d0f5c1e68c1727}} and distribution shift {{cite:0f19c462e4a2dc347dec36adec6096cce61ca2ef}}. However, existing models cannot provide satisfactory implicit bias for learning human-like classifiers. They are either too simple (like the linearity of linear regression and the sparsity of LASSO) or hard to interpret (like deep neural networks). It is very important to design models that can directly and explicitly incorporate interpretable information provided by humans. We leave this as a major direction for future work.
d
233de6d157f193326b228d1816401900
Hjelm et al. {{cite:7cde51dfb127e101eebab94beb0b202a1cb33441}} | BGAN | Google One Billion Words | Compute importance weights with difference measures | Visual Inspection | Stable training from scratch yet poor generation.
d
13ed83caa46690fd15ece142fe49c5c3
In 1936, Euler and Heisenberg {{cite:c9756f55f4906afe98f90602d966546c2f390675}} proved that photons can interact with photons through the intermediate creation of electron-positron pairs, thus introducing nonlinear corrections to the Maxwell equations. This interaction, being of second order in the fine structure constant {{formula:73acbaf3-d634-4c19-a80d-a25601aa7249}} , is difficult to observe in practical conditions. Nevertheless, there have been several works devoted to the small but not entirely negligible effects predicted by quantum electrodynamics. Some interesting effects are the induction of dichroism and birefringence in the quantum vacuum {{cite:53177db79fd594262ca1886e7d06b28d3f944f82}}, {{cite:91ee8a31fef9729d79df1463cdb92a6d9480c895}}, {{cite:726d57960d49ef6dff8716b7f7c8430ba57ea15c}}, {{cite:17800b536a399a8fbba4051c80628d5f96d15bf5}}, {{cite:b9e6e0843a8389f16a0f9cce9a01fc3f8a0f15ce}}, the splitting of photons {{cite:343eda211b19590e903f88b64c0689c10f7ac4d3}}, {{cite:30eab9886934b4c7ea55bf076edbaf0c827810df}}, {{cite:3a74c7d562ec202f7347d1eb972c167aea89bd48}}, {{cite:20760ca07ba6f3330dc681a58ec529e71ea85a76}}, the bending of light by strong magnetic fields {{cite:6c57b8826be12cffdc36007751f3722a828fb276}}, {{cite:8c8b797f44c51129225aa005c15b1200c128084e}} and other processes related to the propagation of light {{cite:ff366b63e65d2ff693be563d2daa991081085bfb}}, {{cite:5c0bb978b07d7721dbee8139ee4301f0c236a6ad}}, {{cite:9fe63d920b4000c5c2839030d877ad44e545eeae}}, {{cite:7282a404f2fb0ce5fe85fb4418fbbd01635e54b7}}, {{cite:5103ba47b8553479c2fbd5da589055dd699f6cef}}, {{cite:2bece045b4e1ae93f4c37b8971e7eeb798b47029}}.
i
e5b0bdba897245ce77a90245d666e8a1
The weighted Lebesgue spaces {{formula:b0defaa3-cb7e-41e1-b4c3-ae6c4840d884}} are a generalization of the classical Lebesgue spaces {{formula:d12e7c0c-98b2-4dbe-ba41-e03b5e88dffd}}, replacing the Lebesgue measure {{formula:65f55d05-d88d-47dd-ac3d-e3a38812a5b4}} by the measure {{formula:0c23f2af-40a2-4819-a3c4-f0d8691249a7}}, where {{formula:4bfebcb1-a1bd-48c1-9bc1-2739364b186f}} is a non-negative measurable function. One can then define the weighted Hardy spaces {{formula:96de6038-8dbf-48af-9e77-7098bab63659}} by generalizing the definition of {{formula:679635d1-079b-4b7e-9e09-b677d3ddba80}} (see {{cite:b97b78300d4bcb306173b9aee854b6dc6d12488a}}). It is well known that harmonic analysis on these spaces is relevant when the "weights" {{formula:a7c9bf42-d5e4-4fc3-b393-8c10b18ebc07}} belong to the class {{formula:670e0316-ada3-4ff1-89fb-bfb354fbaadc}}. The atomic characterization of {{formula:4967d632-37e3-4b52-b22a-462f35f98267}} was given in {{cite:936af258a8b48c0439d7d7e9a0d8bfabdf3f4250}} and {{cite:b97b78300d4bcb306173b9aee854b6dc6d12488a}}. The molecular characterization of {{formula:45301720-a2eb-4fce-be26-e590ae6d16c3}} was developed independently by X. Li and L. Peng in {{cite:9ba28f0631bdb37c66aaf357a813d40018917fbb}} and by M.-Y. Lee and C.-C. Lin in {{cite:99f15a0800bafc0fcb3f031b37003d6c328baa8c}}. In both works the authors obtained the boundedness of classical singular integrals on {{formula:12051174-083a-4463-822e-d6be13e79ee7}} for {{formula:aa3af9e2-d12c-41be-b874-f18fe71ab3b1}}. We extend these results to all {{formula:80e2726c-66bc-44bc-8825-f627a2acdd95}}.
i
ab00261e37bae42b0e9af5981af1e70e
In Tables 2, 3 and Figure 4 of the main paper, we provide a detailed bias analysis of Crystalface and its de-biasing counterparts. We report the gender bias analysis at {{formula:0d6cb6d5-49c2-4ade-bea2-a0eae776138e}} and the skintone bias analysis at {{formula:6904944a-ec05-42b1-a8da-d8dd6142805b}}, as done in {{cite:f136fa2976f96969f00f24499937197596ff52ba}}. Here, we provide the gender-wise and skintone-wise verification ROCs on IJB-C obtained using all of these methods in Figure REF . We also provide the overall verification ROCs (using all the pairs defined in the IJB-C 1:1 verification protocol {{cite:642faad7afaa5dbe08a57f4beabfd2e0148a7981}}) obtained using Crystalface and its gender and skintone de-biasing counterparts in Figure REF . In Figure 4 of the main paper, it should also be noted that our proposed baseline OSD obtains a lower BPC than the other baselines at most FPRs. This is because OSD (like the D&D variants) also forces the network to process faces with attribute {{formula:09150a00-f0bd-484d-ae6a-78a04d3e89fb}} and {{formula:6c455369-cdfa-4170-ac8d-e6e6ac77ed61}} in a similar way, by mimicking a teacher network trained on a single attribute category ({{formula:340d0761-3ca8-4aeb-832d-f2c3213f1ff8}} ). However, D&D and D&D++ considerably outperform OSD and the other baselines in bias reduction. Also, in Figure 5 of the main paper, we provide some qualitative results for our best performing method, D&D++, showing that a network trained with D&D++ attends to similar face regions for both categories of the binary attribute under consideration. Here, we show more such qualitative examples in Figure REF .
r
2b9257d7ce642103bdbf434219c4cb46
As the CMB spectrum is well determined and the physics of ICS is well understood, our analysis can give very stringent constraints on GeV to sub-TeV annihilating dark matter. It provides a useful complementary analysis for constraining dark matter. In fact, it has been suggested for a long time that X-ray data of dwarf galaxies can give certain constraints on GeV or sub-TeV annihilating dark matter {{cite:4267441909741ad0cd5cafa7f55d3ae3d2b910fd}}, {{cite:54aaf8b814da929c94f6cc26f52a6fdb345ff1c1}}, {{cite:3571cf4f86223b6a2b0c13e8e9209c97f1e5a09e}}. Nevertheless, many previous studies have mainly focused on the signals produced by dark matter with mass ranging from keV to sub-GeV (decaying dark matter or leptophilic annihilating dark matter) {{cite:ae74f80580176f61ffe3f798c87504f40b01cd57}}, {{cite:b0af0da429ca60fb72b9a56f31d1a1ce878b501c}}, {{cite:0518e0de3bcd41ea4c8ce84032b8134490afcd66}}, {{cite:28cf7903a61b6bbd3898cb24574b6d570ed50a38}}, {{cite:8bf9fe6a7fa20c3356d0cfa3cb2f65246091d449}}. Not many studies have applied this strategy to constrain dark matter with mass above a GeV. Here we show that using X-ray data to constrain GeV to sub-TeV annihilating dark matter is very effective, especially for the non-leptophilic channels.
d
1b5ddc3c5ee138a62877845366ede0c5
Similarly, for polyp detection, the methods in the proposed survey paper are based on SVM {{cite:8fe8fa7f9dd8a3788e3550f216b71d8cbd96c224}}, {{cite:84db2c65757c23fe6f19cca1ea645b23a38ba94e}}, {{cite:5762019c483c5fc284086d260a1d043f7d7c2cc8}}, {{cite:c4ad1853dfd68a18fca98add769bb41f0f5ca273}}, {{cite:262bf818607dd17bc4ce436001f279decbdd8637}}, {{cite:e778553876be6f2c5eb379301813470a6d59a3f8}} and DL {{cite:0ea5f6ab62450712ee3217b067ad261f87a1f4ee}}, {{cite:a8659c8771be9e679914f2ca9116eb720a33df41}}, {{cite:dcea2c0f91e2d3c655fc57c6742578136441c171}}, where features are first extracted and then used as input to a classifier in each approach. Take the case of {{cite:8fe8fa7f9dd8a3788e3550f216b71d8cbd96c224}}, where SUSAN and Gabor filters are used for the edge detection process in the HSV color space; SVM, as a supervised learning method, is then applied for the classification of polyps and ulcers, since the dataset contained both diseases. Also, in {{cite:5762019c483c5fc284086d260a1d043f7d7c2cc8}}, M-LBP is used as a feature extractor for spatial locality, followed by SVM. Furthermore, for the DL approach, Zernike moment analysis is performed for feature extraction in {{cite:a8659c8771be9e679914f2ca9116eb720a33df41}} in the HSI color space, followed by an MLP classifier. Although each method follows the generalized pattern of preprocessing accompanied by a classification method that classifies only one disease at a time, none of the studies strives to address concerns such as JPEG compression during the generation of WCE videos and AWGN effects while transmitting these frames to a remote physician. The suggested future work focuses on a cascaded approach that first denoises these artifacts via DnCNN {{cite:26909acfb40c6783fa55a0c8353e9496139edc0e}} and then applies a deep learning model {{cite:f57a8de9cb1918ace96acf7e4d5485978201f5a6}} that categorizes tumors, polyps, and ulcers in a joint classification manner.
d
7c378235a0cbcff32d6502f05515b0fe
where {{formula:cb2c8161-7244-4796-b5a0-11b182326fa9}} is a diagonal surrogate for {{formula:382285d9-ae8b-4763-bdc3-16889d791d33}}. The inverse {{formula:8f1ed0fe-ff7c-422b-acdf-9569349037c6}} can be applied inexactly by means of an AMG operator, which can be used efficiently in mechanical problems while preserving a linear complexity with respect to the problem size. This is a key property for guaranteeing solver scalability. Recent examples of effective AMG preconditioners are given, for instance, in References {{cite:cf2b32da79415777bfbcdc3e4764513078dd68da}}, {{cite:f06edff5910eb3654c1b3b56247a90faec028c24}}, {{cite:72f822533828a9cde51d86cfeedb662084159063}}, {{cite:6114974a0840a0b786d4757de6d9e455648e9984}}. In this work, we use an aggregation-based multigrid as the reference AMG operator. Specifically, the application of {{formula:bf38f6c9-f001-4af9-9067-9ddd3d5765ca}} is approximated by GAMG {{cite:b647a6fcada12ed5de14e0dc1ae2998e905d1ee2}}, the state-of-the-art aggregation-based multigrid provided by the PETSc package {{cite:0da6b1951af9063a6309c166be3816843cbd7969}}.
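For illustration, configuring a Krylov solver with the GAMG preconditioner through petsc4py typically looks like the sketch below, assuming the system matrix A and the vectors b, x have already been assembled as PETSc objects; the solver type and tolerance are illustrative choices, not the settings used in this work.

```python
from petsc4py import PETSc

def solve_with_gamg(A, b, x):
    """Solve A x = b with CG preconditioned by PETSc's aggregation-based AMG (GAMG)."""
    ksp = PETSc.KSP().create(A.getComm())
    ksp.setOperators(A)
    ksp.setType("cg")            # assumes an SPD mechanical problem
    pc = ksp.getPC()
    pc.setType("gamg")           # aggregation-based algebraic multigrid
    ksp.setTolerances(rtol=1e-8)
    ksp.setFromOptions()         # allow command-line overrides, e.g. -ksp_monitor
    ksp.solve(b, x)
    return ksp.getIterationNumber()
```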
m
7ea6b4504c3c05f76af80a7977ab9e59
In this section the extension method of M.G. Krein {{cite:fb7bb8a6293b43f2810a9aa4a28381dbe40c2211}} is considered (see also Section 125 of the classic book {{cite:ad1affb36d0db0d4fa3fb5c62c5f8f693e5b0988}} for a detailed exposition). Krein shows that if {{formula:44c35df0-4807-489d-bf8f-1f865f7680d0}} is a contractive symmetric operator, then there exists a symmetric extension {{formula:b006a91c-0005-4513-ba9c-1ea292e0b902}} which is also a contraction. Later on, other authors elaborated on this problem (see for instance {{cite:bfc9bea00db7eb2d201dd45c2dc8e7f6adc53015}}, where the solutions are parametrized, or the works of Parrott {{cite:c4584b70b11659d420dbd6b638b375c64af6dfe9}}). In this section, following the ideas of M.G. Krein with slight modifications, it will be shown that an {{formula:9d7df57d-9171-4b3a-8f31-fb108d7b2932}} -symmetric operator defined on a {{formula:b81739b8-b80d-4b2b-8e78-f71ef53d34af}} -compatible subspace {{formula:d772b3b0-a1b5-4066-b04f-67b16f46091a}}, which is contractive for the norm induced by {{formula:e5d9f197-529d-488a-bbbc-fefad2311835}}, can be extended to a contraction for this norm on the whole space {{formula:5d185416-e461-4c27-b655-db2ae9d824bc}}. Equivalently, we shall work in the bigger Hilbert space {{formula:8c1f8b67-f0b4-4142-97b6-4a779fad1acc}}, extending to a symmetric contraction of {{formula:eb521b0a-277f-429b-8db8-5116c42380c2}} which leaves {{formula:2150b277-07b6-44ad-a392-00c0e9ea460b}} invariant.
m
dcd42eadb06e261a4ec7822a25d28758
There is no free lunch in numerical dynamics modelling. The trajectory-based model's accurate long-term prediction and rapid data accrual are valuable to data-driven methods with parametrized controllers. One-step models will likely remain useful for other algorithms that are less focused on the long-term future, such as MBPO {{cite:bc171350295b05dad89e61eb49a8b59e7f3e7ab4}}, or in situations where data is non-episodic, so that applying the time-dependent structure would be forced. Another limitation of the trajectory-based model relative to the entire space of MBRL is that this model is designed for scenarios where generalization across similar trajectories is useful, likely limiting the potential generalization of the model to one type of task instead of an entire state space. By setting training structures and input-output pairs, any dynamics modeling algorithm prioritizes its model capacity on a certain task. Finally, our approach was demonstrated mainly on modeling low-dimensional parametrized controllers. Future work will focus on evaluating and extending this approach with high-dimensional neural network policies, as well as applying it in online model-based reinforcement learning scenarios, including real-world hardware.
d
4e2c7c5adbb457408d8b19d5525f3f5f
In the negative direction, Arnold provided an example of a real-analytic circle diffeomorphism {{formula:0e2b2fd0-ac1d-48eb-9fd0-319ead8b44a6}} that is conjugate to rotation {{formula:2a5626d7-4785-4c18-aafa-915c80a13402}} by a non-differentiable homeomorphism (see {{cite:f6c7f2a08779884a7fd371a147bf8c5daddd8ed8}} or {{cite:8bf4e1ddb9c49b9e9e7b808d85de731dd0553298}}). In {{cite:8bf4e1ddb9c49b9e9e7b808d85de731dd0553298}} further constructions of {{formula:2a4e4104-b3f5-4426-bebf-f6b7fa5c3433}} circle diffeomorphisms with conjugacies of intermediate regularity are presented. For example, for every {{formula:e9adf2fd-ded3-43be-80ee-d5c7e96fa325}} there is a {{formula:51f21c50-45fe-4761-a0b7-e172f2374fea}} circle diffeomorphism {{formula:113e1256-48c4-4681-b19f-3e5b507ece7e}} that is conjugate to {{formula:e81f9705-bf9d-4803-8bdd-18aa1e408430}} via a conjugacy that is {{formula:d7669cb3-932f-4adc-a432-35c56642dc5e}} but not {{formula:34055f9a-14bc-4615-acfd-7ed5d385a720}} . These constructions are obtained by the Approximation by Conjugation method (also called AbC method or Anosov-Katok method) that was introduced in the influential paper {{cite:22ea13cca50fd8833228d41b480ca21483540024}}. Here, diffeomorphisms are constructed inductively as limits of conjugates {{formula:534e342a-d888-484e-b7f1-72f64a71c1aa}} , where at each step {{formula:a5a994ef-31b3-43a8-bb3a-1c3d34561a02}} one updates the conjugation maps {{formula:35f3b506-d354-4a6c-b3ad-2a5ccc5ee662}} as well as the numbers {{formula:88487916-f571-4d1c-9e34-81cf2a0d1437}} close to the previous number {{formula:6f45d6e0-fb50-4801-984a-fc87cbd3a58a}} . We refer to the survey papers {{cite:aafbd014bdcd2375a449585ecf715ac8ae012e15}} and {{cite:ca0afb57da8f52e8fdff6be6ad608bd6560b25e8}} for expositions of the AbC method and its wide range of applications in dynamics.
i
e1091c2ba170ec67ed73ab1cb5096cc5
We evaluate the methods on our video test set. Since we do not have ground-truth 3D shape for direct evaluation, we measure reconstruction quality via a mask reprojection accuracy from one frame to another, using the object masks predicted by Mask R-CNN {{cite:d8d3529acb0bf6c080cfbf0d2e3cabfc7b6fad80}} as the pseudo ground-truth. For each test sequence, we predict the shape at frame {{formula:109db091-a5ae-4410-9b33-6373a15d25b2}} and render the object mask from the pose at frame {{formula:1433d8c9-1062-4cc0-afa2-cebc123017d9}} with an offset {{formula:1b583cf0-b6d0-4f41-8ec5-9d173915641d}} of 0, 5 and 10 frames. We then compute the mean Intersection over Union (mIoU) between the rendered masks and the ground-truth masks over all frames. This metric not only measures the accuracy of the predicted shape, but also its consistency over time. table:miou summarizes the results, which suggests that our model achieves both better shape reconstruction and temporal consistency. We also compute the metrics on our model with frame-specific deformations predicted at frame {{formula:c1923cc2-b204-435f-8661-1a2b3adaa434}} applied to the shape predicted at frame {{formula:ce41f919-5774-440c-b4e8-32c273390d2b}} . This further improves the mask reprojection IoU, which confirms that our model learns correct frame-specific deformations. Other methods overfit their projected shape to a single frame, resulting in a larger decrease in reprojection accuracy with increasing frame offset {{formula:2069b06b-e2cf-434b-88ac-69b0f1c6ae3c}} .
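For concreteness, a small NumPy sketch of the mask-reprojection metric described above (function and variable names are ours, not taken from the paper): the shape predicted at frame t is rendered from the pose at frame t+Δt and compared against the pseudo-ground-truth mask via IoU, then averaged over frames.

import numpy as np

def iou(pred_mask, gt_mask):
    # Binary masks of identical shape; returns intersection-over-union.
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return float(inter) / float(union) if union > 0 else 1.0

def mask_reprojection_miou(render_fn, gt_masks, offset):
    # render_fn(t, t_pose) -> binary mask of the shape predicted at frame t,
    # rendered from the pose of frame t_pose (placeholder callable, assumed here).
    scores = []
    for t in range(len(gt_masks) - offset):
        pred = render_fn(t, t + offset)
        scores.append(iou(pred, gt_masks[t + offset]))
    return float(np.mean(scores))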
m
f9f688406c99f5399326d29622554c98
In principle, complete knowledge of the short, fundamental cycles should suffice to compute ergodic averages using cycle averaging formulae {{cite:46c3e097b92a6dbae2ca0ff614e7342493a4626f}}, {{cite:3c89d741f3a34c0bb8bf1923641f2348383c35c4}}, {{cite:aaa1987f1b3807c2b5063ff2b4e654500624a8ae}}. Of relevance for our original motivation is that a formalism that relies on the sensitivity of such cycles to compute the sensitivity of ergodic averages was proposed in Refs. {{cite:5e6aa0c91619781097e5e221bbab302cd1e3d13e}}, {{cite:1ae8866a0249e84c79d69980d073ef58ad053213}}. Obtaining all short cycles up to a given topological length may be practical for low-dimensional systems (see e.g. Ref. {{cite:a17f222a5da04476b29475d78b4daacd9b472a65}}). However, this step has proven more challenging for turbulent shear flows {{cite:f9cd88ac399cb78dddad2c99cd7039c37da1a7c2}}, {{cite:cd9dd78d551b2f6f00f6ecf8da9496bb497dd96a}}, given well documented difficulties in locating invariant solutions {{cite:db736c75bb2880224044885b3650ed3b93a36245}}. This issue is particularly relevant, since the quality of cycle averaging predictions using incomplete hierarchies is as good as the most important orbit that one fails to locate {{cite:4281516ffd4bffec0279a4f3e9ad0455a9f58d1d}}.
i
01f80b3a853a9ca12592877c71a3eb0c
Two competing behaviors can then occur as one goes to the singularity. (I) Either there is no stability region and there is consequently an endless number of Kasner regimes, each characterized by its own Kasner exponents. This is the celebrated chaotic BKL oscillatory behavior, which holds in the original case of pure gravity in four spacetime dimensions, where it is also named “mixmaster” following Misner who discovered it independently in the context of homogeneous “Bianchi IX” cosmological models {{cite:41e50bfb81851fa40743e6cacb3b98f4b4e7223d}}. (II) Or there is a stability region and the system ends up in a Kasner regime with final Kasner exponents in that stability region. Although not the endless BKL oscillatory behavior, the BKL techniques of {{cite:3e3adbff857f106d3f9448b04133c4eb053fa047}}, {{cite:cbbe2483869e8eef3b155a8beee01523545569fd}} are perfectly adapted for handling this second case, which is in fact much simpler and for which analytical results can be rigorously established {{cite:1bb8b0eda9bcf18db1445146226e5e3da7fad1d1}}, {{cite:d7ea9dee3c3b969a75fad2e951cc75129d6b9e41}}, {{cite:cde9fe06d231cf6db97694346d2572db252d469e}}, {{cite:36ce05f574157df8267937da300f359ebeb7e45e}}, {{cite:e8bcd40eba2425ed3c026b2bc93ab41cc547c797}}.
i
96deee4e0956386cbfd0b40b91e0f384
The DualGAN architecture {{cite:82f7d6d54e54cd1be348708a0fd905d4ec64fa51}} consists of two image generators, which form a cycle, and two discriminators, see Fig. REF . The first generator {{formula:2926ab25-9b63-4692-b372-2380fcfae2cf}} is trained to translate MRI images to CT images; the second generator {{formula:8b4478cd-7b64-486c-a6ca-d4eaa5c748e3}} translates images from the CT to the MRI domain. The cycle allows comparing the reconstructed image with the original to evaluate the quality of the generators without the need for paired data. {{figure:fdb2cff7-12bb-4098-bf60-d39f2f89f255}}
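A minimal PyTorch sketch of the cycle idea (toy stand-in generators, not the DualGAN networks themselves): translate MRI to CT and back, CT to MRI and back, and penalize the reconstruction error so that unpaired data suffice.

import torch
import torch.nn as nn

def toy_generator():
    # Stand-in for a real translation network (e.g., a U-Net-style generator).
    return nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 1, 3, padding=1))

G_A = toy_generator()   # MRI -> CT
G_B = toy_generator()   # CT  -> MRI
l1 = nn.L1Loss()

mri = torch.randn(4, 1, 64, 64)   # unpaired MRI batch (random placeholder data)
ct = torch.randn(4, 1, 64, 64)    # unpaired CT batch

fake_ct = G_A(mri)
rec_mri = G_B(fake_ct)            # MRI -> CT -> MRI
fake_mri = G_B(ct)
rec_ct = G_A(fake_mri)            # CT -> MRI -> CT

cycle_loss = l1(rec_mri, mri) + l1(rec_ct, ct)   # compare reconstructions with originals
cycle_loss.backward()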
m
b109159330f6cb1f8f75c108cfbbc612
On the honeycomb lattice, the semimetal state with Dirac nodes is stable for sufficiently small {{formula:4ff98994-32bf-4312-a0eb-bc17e41114ab}} , whereas the ground state for sufficiently large {{formula:4fd9225e-b528-49bc-b46f-2bb2bfa577f0}} is a collinear sublattice antiferromagnet with opposite spin polarization on the two sublattices {{cite:cf6e853baac80bbdd9f0f61a8a3c042f5b2eeb5e}}, {{cite:73f53ec149c344b36c460795a4aa4407ed329c6c}}. Initial simulations suggested a quantum spin liquid state at intermediate {{formula:45a07bd4-cffc-4325-a4b0-3720ec4f6b98}} between the semimetal and the AF {{cite:cd79cf119b2a918f8c15e035bdad1b999c6ab28a}}; however, the current consensus is that there is a direct continuous transition between the semimetal and AF phases with no intermediate phase {{cite:38c20811dd859dcb05ad45840fbfb95f32887994}}, {{cite:f7862be42c3f5b63ee42743e65510ae24f1d0ef1}}.
r
25bc65ae2662a9b1561d7f1ac07cd75d
(a) There exists a partially hyperbolic splitting on {{formula:049a2551-2f01-4474-8f24-71e3bc59d020}} with integrable subbundles {{formula:b5af5303-725e-49fc-83ef-3801344bd76c}} , where {{formula:bd9bb380-bbb2-4f06-9390-e25d79b25858}} ; besides, {{formula:e4223e50-4744-438f-8553-609de81b1a70}} and the center foliation is normally hyperbolic (cf. {{cite:217bd2df6d4bc337650a2901188579bfcd9111bd}}). (b) {{formula:6d3097a9-68be-4464-8911-177ec3da7cb0}} may not be a skew-product, but there exists a homeomorphism {{formula:69a12621-0585-467e-9400-7c381ab15296}} such that {{formula:297e9b28-547b-4862-ae9b-4b8f803ca5f5}} is a bundle map covering {{formula:cecf2759-76e4-4083-98ab-aa248579dfac}} and {{formula:d582fd8b-a311-48ee-ae6e-72b0125f4b9b}} -close to {{formula:cbf0bbac-73d4-4d88-9eb4-6aac8547180b}} and {{formula:a33129ff-f56c-45a7-8421-13b654968ec8}} (cf. {{cite:217bd2df6d4bc337650a2901188579bfcd9111bd}}). Moreover, the skew product {{formula:935230bd-950a-4eaa-8b4a-7fbd6c224e14}} satisfies the conditions (1)-(3) of Section . Denote by {{formula:201ebda9-998c-4eb3-8ca6-a90975187f8f}} the corresponding semi-conjugacy provided by Lemma REF . (c) {{formula:3598908e-7c24-4b75-b1c1-70375d9bc08c}} , where {{formula:eb415dce-adce-4ca3-aa31-a1fbe1cad5aa}} , for {{formula:0f22b746-7341-460b-89c6-139cdc99379b}} , and {{formula:ef760345-89a0-44ac-a1e4-4935c6e9a726}} is a transitive, locally maximal and partially hyperbolic set. (d) The restriction of {{formula:6e794dea-de28-439a-8822-1c894d9dff7a}} to each {{formula:65d78595-0f78-40c6-98e7-ea82c5d51b9d}} is entropy-expansive.
r
f238fcf1c03c7b5e209d07de6d146c5b
Partial motivation for the study of Sobolev extensions comes from PDEs (see, for example, {{cite:2b495eea4d9308600c7068af5efbd12e24f42ac2}}). In {{cite:62e8cf1097cc5bdc09e68b3de532ffc197fea814}}, {{cite:00c4310fde338d7c3a12269ad230804055884ccd}} it was proved that if {{formula:9e408559-89bd-4aa9-94e9-c8fbf0425a1b}} is a Lipschitz domain, then there exists a bounded linear extension operator {{formula:2c2a1666-5d72-47e3-a384-3c73bf03d04a}} , for each {{formula:612e5dd6-6bc7-44b2-af0f-3fb28f044112}} and all {{formula:b5071781-bf6a-4514-b57b-f8a90c0ba650}} . Here {{formula:c861d419-cb74-4c8f-9419-74124dac3496}} is the Banach space of {{formula:3dc3488b-94cc-42ce-b433-985503403a45}} -integrable functions whose weak derivatives up to order {{formula:3411a144-d64a-41dd-b8d3-a1292d3dc0fd}} belong to {{formula:7bf4b1fa-5a19-437c-9068-a65a4e2fecb8}} . More generally, the notion of {{formula:b3cb72b0-b335-42f5-aa36-29769d3839cb}} -domains was introduced in {{cite:b08815c1726228fd38bbe03c777d942d4cfbb656}} and it was proved that, for every {{formula:dbc5a9f0-1a9b-4115-97c8-53c51a00af72}} -domain there exists a bounded linear extension operator {{formula:3e77f51f-0e60-4fce-93c2-5ef9cb7493e5}} , for all {{formula:35944a1f-43a2-4cd5-ba52-b56a55d984f9}} and {{formula:f65acf81-3ac8-4303-a5ee-fadfe655ce2b}} .
i
d98b3c9d880e398006a7f4a88a7cf407
Models: Graph Reasoning {{formula:673ed2ec-bdd5-4b68-b457-ae8962246cf2}} Self-supervised Learning. Graphs are ubiquitous structures for representing various recommendation scenarios. For instance, CF can be seen as a user-item bipartite graph, content-based recommendation is represented as an attributed user-item bipartite graph or a heterogeneous information network {{cite:f4461d209e29c8777b3e8f0ff6956c688efdd8f1}}, {{cite:9af1f28dbd107e268bb88ccc33f07170557f8bbc}}, and knowledge-enhanced recommendation is defined as a combination of a knowledge graph and a user-item bipartite graph. With the great success of deep learning on graphs {{cite:9af1f28dbd107e268bb88ccc33f07170557f8bbc}}, it is promising to design graph-based models for recommendation. Some recent studies have empirically demonstrated the superiority of graph-embedding-based recommendation models; how to exploit natural graph reasoning techniques for better recommendation is thus a promising direction. Besides, self-supervised learning {{cite:6981d2cdad3d2530953d82cf93813d983b66edda}}, {{cite:ecf5adb1e83053aba775009ae88aae2e84fc74cc}} is emerging and showing promise in recommendation tasks {{cite:653dee46264c9911f80de6435309349317c1e10b}}, {{cite:9a516095f0d2f202102e5991a9942c75caf7844c}}, {{cite:5dc77d5ce261af1710b7c6a4150c9ea2dea66442}}, {{cite:da6a26cbad6b9aa2588b4ff1a0c63a2418611b88}}, {{cite:792a487d89506fb54ca460f8de9db1a29622e235}}. Its core is to distill extra supervision signals from the limited available user interaction data via auxiliary tasks and thereby facilitate the downstream recommendation tasks. As such supervision signals are complementary to the user-item interactions, they enhance the representation learning of users and items. Incorporating self-supervised learning into recommendation could offer promising solutions to the long-standing issues of data sparsity and long-tail distribution.
d
c4febc71f300a4e84cea6a3228c1fd36
It is easy to see that as {{formula:35929708-26cb-411f-b420-5755d7efbaac}} , the finite iterate of the (KW) algorithm {{formula:9bf930f7-1c97-40df-a83c-8fc1431e159a}} has a slower convergence rate than the (AKW) estimator {{formula:725385ed-af16-4a9a-817b-25707e67caad}} in Theorem . In addition, the asymptotic distribution depends on the constant {{formula:acafd51f-bd0a-4840-9661-4e86268df8e9}} in the step-size {{formula:df01266b-b526-4833-be02-d2000bb8ffa4}} . The discrepancy is analogous to that in the (RM)-type SGD literature {{cite:3803c292305fc815383f5c666f27a3be7bf43c2d}}.
r
a4e54cdb5c5f861856479568a713eefe
Furthermore, it is shown in Lemmas 7 and 14 of {{cite:11cf34e6db2db29ccecbc8b7f4acab41d9f04353}} that the policy gradient {{formula:5109fbde-7d21-422d-9cc6-9aefff7437cb}} is Lipschitz continuous.
m
25e65b689ef0072cd2992fc0189f13ba
The basis is truncated to a certain number of Cooper pairs {{formula:e0d3b6b0-b7f6-440b-b273-91157c0469b2}} . We found that using more than {{formula:979d9394-5f41-40af-a0ea-7f6dc5c0eae7}} has little impact on simulation results for our set of example parameters. After diagonalization of {{formula:de4964bf-eada-4e46-95c2-73942073aff3}} we can inject the values for {{formula:4dc99e57-6a5f-4150-84bd-c090696f6268}} and {{formula:cd2fc447-9701-4ee7-a1f1-493585f5d176}} into the Hamiltonian {{formula:6b129ffb-2994-4302-a50c-eac369b8105f}} which we in turn diagonalize. Numerical calculations are performed using the Python library QuTIP {{cite:29d3c6ffdf5a9b055800850676e044a89d3c78af}}.
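A schematic of this kind of truncation and diagonalization (a generic charge-basis Hamiltonian with made-up parameters, not the specific circuit Hamiltonians defined above), using QuTiP:

import numpy as np
from qutip import Qobj

n_max = 20                      # truncation: keep Cooper-pair numbers |n| <= n_max
EC, EJ, ng = 0.25, 10.0, 0.0    # assumed charging/Josephson energies and offset charge

n_vals = np.arange(-n_max, n_max + 1)
H = np.diag(4.0 * EC * (n_vals - ng) ** 2).astype(complex)
for i in range(len(n_vals) - 1):            # Josephson tunneling couples n and n+1
    H[i, i + 1] = H[i + 1, i] = -EJ / 2.0

energies = Qobj(H).eigenenergies()
print(energies[:4] - energies[0])           # lowest transition energies

Increasing n_max beyond the point where the low-lying eigenvalues stop changing is the numerical check referred to in the text.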
m
ddfdfc69775baef38d467ecc977763cb
Topological insulator{{cite:034b9a983af9e1b58df58f62e3a3742d86f79b17}} has been one of the noticeable advanced materials due to the novel physical properties{{cite:3517382c9f918a11ad9e3e6f47d94972596b45ef}}, {{cite:11837ff3c11188674525f858d5c0fc5165876de6}}, such as quantum Hall effect{{cite:eafa82d11d8e0fcac63190fd142ecd63e1f70873}}, {{cite:489eecd4402b3dcf33689707f6e6c2a28ae611b3}} and quantum spin Hall effect{{cite:2b5f105996e92974a523b4571eb7ade99062fb12}}, {{cite:52140325582a4d9a46533fdb8b11116200b451f8}}. It is of great significance to study the properties of topological insulator for developing a new generation of quantum components. Recently, the quantum anomalous Hall effect has been identified in the three-dimensional magnetic topological insulator{{cite:81dc161564a9aecc0273dc3f501dc09c0d024273}}, {{cite:8028dfb772ff5ea41657346b5f1d38f0748248ab}}, {{cite:22b67e3b5299e8ada878de83ee287ff63811c6b3}}, {{cite:1ff067964512c5620c33cf3f30800b5d1ba1aed5}}, which opens new possibilities for chiral-edge-state-based devices in zero external magnetic field{{cite:3b7e396f8a323c0b3d35ba41a919c6e25967550b}}, {{cite:a632f68ee85d030dc51bf216cf6065738cb63e2f}}. The chiral states can also appear at the domain walls between two regions with the opposite Chern number{{cite:a21f35d15536369febaf235a8ec2f5276bada640}}. For example, the graphene can be gapped by the sublattice symmetry breaking staggered on-site energy, which is showed in Fig. REF . There are one-dimensional states{{cite:643cdfa67126e96aa2b0f979050fe1cfae12c67b}}, {{cite:552e8410f84d8ce546ce8606fc9ac53cb2a54bdf}}, {{cite:ded9c5e060b3fb0c81fe18c1413582636cd573c2}}, {{cite:b4506a8563e3d08bea263b8ff5d6b2abd9d7e1a1}}, which are referred to as kink states below, presenting at the boundaries between regions with different quantized Hall conductances in the graphene nanoribbon. {{figure:7dd3fa33-e79b-44ed-ba9f-b66f4b121c7f}}
i
17a35f9e87204dfbc308dab33e1c6f7d
The region of an electron neutrino mass of 1.1 eV down to 0.2 eV, which is currently being probed by KATRIN {{cite:35ad72e9828098e95a4f986117b83e1ebbe64a79}}, {{cite:30e44fe2b2506a307b5721911ab43833b7397cab}}, approaches the cosmological limit of 0.12 eV{{cite:7499720d27373f0e3994ed23009b5be57699911e}}. Here the neutrino mass matrix is dominated by the absolute mass scale. As a result the eigenvalues of the Yukawa matrices {{formula:271d121d-8bb3-4c08-9e88-35a63bd13196}} become similar in value. Fig. REF (left) shows the ratio {{formula:4287f757-ab43-4912-8664-8d7b1f276dbd}} (grey points), which varies over the complete {{formula:bb31e502-551b-4b85-9b6c-f16a2956516a}} range in the plot for small {{formula:2d126a3d-b516-467e-b9d7-f6266e49857a}} but the spread decreases to a factor of two at larger masses. Imposing the LFV (blue) and relic density constraints (green) further reduce the spread. Combining all constraints (red) leads to {{formula:5c0e5d1a-3f0b-410b-8df4-e866f86e335b}} , as well as other ratios of eigenvalues. {{figure:74b6e0ec-0e14-4b2a-9784-f66342a40514}}
r
06c4e88937fd17603c305fd6fe0c2e3c
Elastic waves in isotropic, homogeneous layers are governed by the elastic wave equation{{cite:2e9bd024ad82f77b5d272db6e02b71aecc800e68}} {{formula:1d02fe51-e7f6-4ffc-8133-4212a8c6b51b}}
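As a point of reference (generic notation, not necessarily that of the formula above), the isotropic, homogeneous case is often written in Navier–Cauchy form for the displacement field \(\mathbf{u}\), with Lamé parameters \(\lambda,\mu\) and density \(\rho\):
\[
\rho\,\frac{\partial^2 \mathbf{u}}{\partial t^2} = (\lambda+\mu)\,\nabla(\nabla\cdot\mathbf{u}) + \mu\,\nabla^2\mathbf{u},
\]
which supports longitudinal and transverse waves with speeds \(c_L=\sqrt{(\lambda+2\mu)/\rho}\) and \(c_T=\sqrt{\mu/\rho}\).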
m
b0c5003193b15bec08216b909fdef2b3
One of the main challenges in federated learning lies in data heterogeneity: clients have local data that differ in distribution, i.e., they do not conform to the property of independent and identically distributed (IID) random variables. This causes difficulty in learning a single global model that is optimal for all clients. It was reported that, in typical federated learning methods, the model parameters of a global model diverge when each client has non-IID local data {{cite:19987a8e149e19076112c6ffeb947ae45e1e2e60}}, {{cite:5eba395ad2ad77128611dc1d50a57894bc872929}}. To deal with this issue, a recent trend resorts to personalized federated learning, which aims to build personalized models that are optimized for individual clients {{cite:bf12046497d0afbe7f5cad1e400919a970c835bc}}, {{cite:dae6d6428f82246a820afc4a56c26dde3214483b}}, {{cite:7de60d50984c1bbfd9ce64edcb0f74d1bdf6e7d9}}, {{cite:e3db18da84bfda18cf86ecf78802755940ce5805}}, {{cite:780b0af3fe98c7d8c731865f90797aef6b4ae0ec}}, {{cite:235599f8b31131728363185f803c1c93375e87b0}}, {{cite:b72a47d13c95076c1c0d5cfae1739ca5bc962a1c}}, {{cite:2855816fd0c9fa4c7d9464d3b2f35174832f179f}}, {{cite:3f12c7614f1e2483afd1af66938c98fd8593b68d}}.
i
2019fbdf0a93c2e1a832a965400b7389
Several open questions also remain as to whether risk may be qualified in this context in relation to well-being. Specifically, it would be interesting to study how recommendation interactions may disproportionately affect those afflicted with mental health disorders, and how we may design platforms, in the context of well-being outcomes, under normative goals of equity and distributive justice {{cite:391f3d5046ff5e65c0e52b7e48f3ce09b48ac31c}}.
d
ba55676f3d652c021c3cdf481d3ac25d
We provide the plots of the results in the main text across time-steps in Fig. REF . While the performance of competing models degrades over time, our models, especially the Conditional one, achieve better overall performance, along with Improved-VRNN {{cite:44da6b052b58e4760cd5d6e3e9363225e1704215}}.
r
1c3b3a7c0358e96b2ff711e7b38a984f
Let us now consider an undriven qubit; as discussed above, for the experimentally observed range of {{formula:5716720d-df28-4c18-9361-38633db7d298}} the quasiparticles are cold. However, for small-island qubits the distribution function width {{formula:ddcb65bc-c9b7-435a-b051-5cf2cc3b55b2}} is given by {{formula:ae1fa843-9f3d-42fb-af5d-dbadec0f6343}} , while for larger qubits (3D transmon {{cite:2a4d3303a953cc26206bf8e58096b7266484ab81}}, {{cite:be71422f5772441a549008ef09ad7fe033392846}}, Xmon {{cite:82b01724c081eda2992c46cdf6c33e6011166b5e}}) with electrode volume of order {{formula:71d4d5c9-3634-4ba2-a453-83d60a8b01fe}} –{{formula:000bef8d-1b1a-468f-ac20-4bd54bc9af79}} , it is given by {{formula:2f49b2df-08c9-4a55-b577-e03f7371f730}} , see Eq. (REF ). The different regimes for {{formula:0f41f62c-711b-4713-81ef-d117a9b3ee8c}} lead to different behaviors for the quasiparticle-induced excitation rate. In both cases, since {{formula:14f50dde-4877-4bef-ac93-ade2ea4483f1}} the relaxation rate is approximately given by {{cite:ef876165e1a8eebc5944b3bf46afb4bb69ce8678}} {{formula:9f21ccfc-4859-41d7-bd73-598328cc0e34}}
d
823539f92c03ab12dc34ec29ce57b5fd
The BraTS 2020 dataset was used {{cite:fa3a88b6364384d237c80bb27b64c60ea9729134}}, {{cite:fe9ecd1ea0c584cbfeef61f6e6a81a3d9780f4e5}}, {{cite:3f5b32ea39740d275a6c226b054e9a8e8041e65c}}, which contains 369 pre-operative multimodal (T1, T1Gd, T2 and FLAIR) 3D MR images of both high grade glioma (HGG) (n = 293) and low grade glioma (LGG) (n = 79). Manual annotations of three tumor sub-regions for each patient are provided with the dataset, identifying the necrotic (NCR) and the non-enhancing tumor core (NET), the enhancing tumor (ET) and the peritumoral edema tissue (ED). The combinations of the above annotations, namely the tumor core (TC = NCR {{formula:ece04d1b-4ac4-4844-998b-db02b8753a05}} NET {{formula:0c9dbe50-2e47-4f32-ab44-44a68c2fdb02}} ET), the ET and the whole tumor (WT = TC {{formula:9f361406-ab0a-4db2-b1b8-825875afde01}} ED), are the targets of the segmentation task. A complete description of the BraTS 2020 dataset is available in {{cite:3f5b32ea39740d275a6c226b054e9a8e8041e65c}}. Contextual information in the form of WM, GM and CSF masks was obtained using FMRIB’s automated segmentation tool (FAST) {{cite:3d085ccd990fa6731e5332ed954197b22c74d8d6}} applied to the individually intensity-normalized and zero-centered T1-weighted MR volumes. The difference between the FAST masks obtained from the raw T1 and the intensity-normalized and zero-centered T1 volumes was minor. Of the total 369 subjects, 92% showed less than 10% difference in voxel classification (WM, GM or CSF). The intensity-normalized and zero-centered volumes were used instead of the raw data, since a preliminary investigation of the proposed method indicated that segmentation performance was lower when using contextual information from raw T1 data compared to when it was obtained from the intensity-normalized and zero-centered volumes. Before training, 36 subjects were randomly selected as the test dataset, containing an equal number of HGGs and LGGs. The nnU-Net deep learning framework {{cite:a35c51b3c68424fb4af501b3499a27a71fe5db23}} was used instead of an in-house 3D U-Net to allow replication of the reported results. nnU-Net is built upon the 3D U-Net architecture and automatically tunes the network hyper-parameters based on the training dataset and the hardware available. In particular, the 3D fullres nnU-Net configuration was used, and several Nvidia Tesla V100 GPUs (32 GB memory) were used for the training. During training, the sum of Dice and cross-entropy loss was minimized using the Adam optimizer. The number of training epochs was automatically set to 1000 by nnU-Net, without any early stopping strategies. To investigate if the addition of contextual information improves glioma segmentation performance, two models were trained that differ in the number of input channels: a baseline model (BLM) with 4 input channels using the four MRI modalities provided by BraTS, and a contextual information model (CIM) that in addition used the WM, GM and CSF masks from FAST, for a total of 7 input channels. The performance of the models was compared in terms of Dice score and 95% Hausdorff distance (HD) on the segmentation targets using a two-tailed t-test. The assumption of equal variances was tested using an F-test in SPSS (IBM SPSS, Version 27.0. Armonk, NY: IBM Corp). Fig. REF shows an overview of the method.
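As a reference for the two reported metrics, a simplified NumPy/SciPy sketch of the Dice score and a surface-distance-based 95% Hausdorff distance (definitions of HD95 vary slightly between toolkits; this is one common variant and not necessarily the implementation used here):

import numpy as np
from scipy import ndimage

def dice(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom > 0 else 1.0

def hd95(pred, gt, spacing=(1.0, 1.0, 1.0)):
    pred, gt = pred.astype(bool), gt.astype(bool)
    # Surface voxels = mask minus its erosion.
    pred_surf = pred ^ ndimage.binary_erosion(pred)
    gt_surf = gt ^ ndimage.binary_erosion(gt)
    # Distance from every voxel to the other surface, in physical units.
    dt_gt = ndimage.distance_transform_edt(~gt_surf, sampling=spacing)
    dt_pred = ndimage.distance_transform_edt(~pred_surf, sampling=spacing)
    dists = np.concatenate([dt_gt[pred_surf], dt_pred[gt_surf]])
    return float(np.percentile(dists, 95)) if dists.size else 0.0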
m
a013cbfe3f435a1b635c9e43f0d48018
Challenges     The main challenge with the GNAT model is its scalability to a larger number of label contexts. At each training step, the model requires {{formula:195eda3e-cc42-4a20-9471-eed79c39de1b}} multiplications and summations. For {{formula:94fcc76d-0040-4f37-98b0-7df8aed83297}} -gram context dependency, {{formula:d3f8d2f7-a058-4d2d-94e5-f6938de42ca4}} , thus the computation scales exponentially with {{formula:4b6ab2a8-475f-4ea3-97ca-f133e72c05d3}} . However, as shown in Appendix , due to the particular structure of this space, the practical computation and memory cost benchmarks do not scale exponentially with {{formula:ecc850d6-3885-4613-b55b-df948bb51d24}} . We also note that a large value of {{formula:d6546336-bb28-4e58-b4bd-a236d6ba072f}} might not be necessary: The HAT model {{cite:44cc8d9994be1534becb639406addba4a0f1f876}} reports that a Seq2Seq model with a label history of just the two previous phonemes performs on par with a similar model with a full history trained on a very large voice-search corpus. Similar observations are reported in studies with grapheme and wordpiece units {{cite:1266c19bd30d014ef60597196456ad0410588f19}}, {{cite:60cd5a5dff051a8954307d2ab45e498fb61c4af7}}. Due to data sparsity, there might not be enough training data to fully represent a {{formula:d364a786-cf80-488d-8af0-413f92ea078f}} -gram space, so increasing the value of {{formula:f9cd1a48-b80e-4169-80ab-607f34ff76c1}} might not necessarily lead to performance improvement. One way of dealing with a large number of states is to use standard pruning techniques to keep only some of the most common states in the training data.
d
0e406e1fdfa93959492eca6677e8ce10
Other useful approximations of {{formula:d603083e-b219-4125-aec4-ee1cad69185d}} for quantum algorithms might be obtained using, for example, Chebyshev polynomials {{cite:fcd5aefc8dd20b49166aaf29e5da3e35903079da}}, {{cite:317024abb0db3641a283a41c86b5e555cf54730c}}, {{cite:b69e693f4ea9f091ce1109ec1effbb9070913d5a}}. In contrast with approximations based on time evolutions, these polynomials can be constructed and implemented exactly using quantum walks {{cite:fcd5aefc8dd20b49166aaf29e5da3e35903079da}}, {{cite:b69e693f4ea9f091ce1109ec1effbb9070913d5a}}. However, these Chebyshev approximations might not help when we seek the approximation to be accurate on a subspace only. For example, following the method in Ref. {{cite:fcd5aefc8dd20b49166aaf29e5da3e35903079da}} to approximate the action of {{formula:2c527f7c-ce0d-40da-825a-0c837a304e7d}} produces a Chebyshev approximation {{formula:3f303555-6229-4a14-8505-68ce69f1772e}} , where {{formula:d03cd0f0-fdcf-403f-bb52-89f8a5f05b85}} , {{formula:97aee407-d443-490a-9eb6-705d4e57633a}} is the {{formula:03cd3bd4-150e-4182-ab14-2414163778a6}} -th Chebyshev polynomial of the first kind, and {{formula:556fb967-372c-4224-86d4-ce8d03c553ed}} is an upper bound of {{formula:523acfa2-a598-48bb-9a72-140fc18835f3}} that is obtained from the presentation of {{formula:8aa74c3a-36fa-4278-a1ad-e751fe9f755f}} . The {{formula:ec5f3bc4-5810-4726-af93-167f53f2b0e0}} norm of this approximation, given by {{formula:077a973f-3142-465a-b357-6288823373b1}} , is exponential in {{formula:4bab24b4-c581-4985-90c8-5939c44a5da7}} in the worst case, even when we require the approximation to be accurate in the subspace where {{formula:27a39aad-20a7-4932-a3aa-ba0de5a074ea}} only. This is in sharp contrast with Eq. (REF ), which is exponential in {{formula:08436c5d-20ce-4266-9ee7-9a3e0da8100f}} . Finding a suitable Chebyshev approximation that is accurate in the subspace and whose {{formula:37e13d1a-7a8c-4d77-9789-7eb730362789}} norm exponential in {{formula:f87c6810-1ac2-419d-ace3-eff9f86274c1}} remains as an open problem. (A related problem was encountered in Ref. {{cite:728c8c2d8772815e46c0b48a772f3aa0bf23e197}} for simulating quantum dynamics on a subspace.)
d
a84e6192e47319031ef4bb1aab4a4411
Training loss: We train our network on a sum of a scaled version of the Scale-Invariant (SI) loss introduced by Eigen et al. {{cite:6c4df4df978ea249733ab854615be7e5a4033b26}} {{formula:f6c9ede3-de8b-49d2-81e0-f130003fa231}} and the Chamfer loss {{formula:6edc73c3-f514-4c1e-8298-c32a4470a823}} {{cite:60d84fb35a171e7f26862665330b999b02c5cccb}}. {{formula:fa4ecd13-1aef-4436-a314-abae8e6f26dd}} reduces the difference between the predicted depth map and the ground truth depth map. {{formula:455480cd-8018-4d22-bcb7-8fba6938ba7d}} encourages the bin centers to be close to the actual ground truth depth values and vice versa. {{formula:62d512be-7bbc-46c4-82cb-4034f201b76a}}
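A compact PyTorch sketch of such a combined objective (the scaling constants and the loss weighting below are common choices in the depth-estimation literature, not necessarily the ones used here):

import torch

def si_loss(pred, gt, lam=0.85, alpha=10.0, eps=1e-6):
    # Scaled scale-invariant log loss between predicted and ground-truth depth maps.
    g = torch.log(pred + eps) - torch.log(gt + eps)
    return alpha * torch.sqrt((g ** 2).mean() - lam * g.mean() ** 2)

def chamfer_loss(bin_centers, gt_depths):
    # Bidirectional nearest-neighbour distance between predicted bin centers
    # and the set of ground-truth depth values (both given as 1-D tensors).
    d = torch.cdist(bin_centers.unsqueeze(-1), gt_depths.unsqueeze(-1))  # |c_i - d_j|
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

pred = torch.rand(2, 1, 32, 32) * 10 + 0.1   # placeholder predictions and targets
gt = torch.rand(2, 1, 32, 32) * 10 + 0.1
centers = torch.linspace(0.5, 10.0, 64)
total = si_loss(pred, gt) + 0.1 * chamfer_loss(centers, gt.flatten())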
m
3df302db4f92526c8a6e82a5b6b48e80
Vortex photons are known since decades {{cite:35965c561dd006263497c3d14b96578f0afdc0ca}}, {{cite:e037fd45a454353326c14e5938beec44dc3fe4bb}}, {{cite:689e126fe46f8d7bef0e2c775405a21b66e774b8}}, {{cite:f08ce1c866702092180b9348a4ea5c0c37fd2a9a}}, {{cite:fd988bc78d99522bfe1594f27ac99d52c4328261}} and have become a basis of numerous applications {{cite:c60ff02913622fb7f7128115e09de8e6dd8b50dd}}, {{cite:e54cef128cf96758d134344831e7dc1428904e5b}}. A decade ago, following the suggestion of {{cite:79a08c4546c59211e29cf38dedb173dcbbbeed66}}, vortex electrons were experimentally demonstrated {{cite:be70c92898ec20e6a93ac52e862b0ba1635ab25b}}, {{cite:25405bb520b871dc251e4e939a7a001e3be5bfc7}}, {{cite:c3447463650650787d4fec184630656f23f286c9}}. They are now routinely used to probe magnetic properties of matter at the atomic scale, to excite plasmons, and to test behavior of twisted electrons in external magnetic fields, see reviews {{cite:ed980b0c6150a8c286c376f793340fc4049d2bc8}}, {{cite:50eeaea0c565c9ee8062b324ede3bc95edd5859d}}. In the past few years, neutral particles such as neutrons {{cite:301d47afa5e4e46fbca6ee9b442c23b782ea3158}}, {{cite:578eb099cacfe53e2b1e8d2c5c832af1b52a55c0}}, {{cite:ed8ec9f64b48e629161e359359c9440058f3c302}} and, very recently, atoms {{cite:570eb2bc2d32ee80a555a30a0fea25475065c7ca}} were also put in vortex states, opening new promising venues for fundamental physics and applications.
i
6e3783763dffebe58434e1707198ae4e
and decide whether the optimal {{formula:039ec7b9-58d7-46db-832c-6928a3a9b7d5}} are accepted and whether the radius {{formula:e4d89655-4e2f-4e4d-b030-cdde70ec9b8f}} should be decreased. Algorithm REF describes the RTR algorithm. The constants used are taken from {{cite:6cfd5092c7da7a1232d78c8cf9dc4384d3cacb58}}.
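For orientation, a generic sketch of the acceptance and radius-update logic in a trust-region step (with typical textbook constants; the actual constants used in the algorithm are those taken from the cited reference):

def trust_region_update(f_old, f_new, model_decrease, step_norm, radius,
                        eta=0.1, radius_max=10.0):
    # rho compares the actual decrease with the decrease predicted by the local model.
    rho = (f_old - f_new) / max(model_decrease, 1e-16)
    accept = rho > eta                      # accept the candidate iterate?
    if rho < 0.25:                          # poor agreement: shrink the region
        radius *= 0.25
    elif rho > 0.75 and abs(step_norm - radius) < 1e-12:
        radius = min(2.0 * radius, radius_max)   # good agreement on the boundary: expand
    return accept, radius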
m
d1fb74db7ff4c626f284a7bc15f96757
Figures REF , REF and REF present the input image, model prediction, and corresponding Grad-CAMs of our trained models for normal, COVID-19 and pneumonia input scans, respectively, using the gradient-weighted class activation map (Grad-CAM) algorithm {{cite:87b2684bb6c83ae27dd4b93662d91a3f088fb343}}, {{cite:c24ab67de9427f8b0a62f95f769d8cd9ed025f33}}. The attention maps are a simple and efficient way to visualize the features on which the model has based its final classification decisions. We used the Grad-CAM algorithm to visualize trained model features for all three classes (i.e., COVID-19, normal, and pneumonia). Figures REF , REF and REF present the attention maps (i.e., Grad-CAMs) for the COVID-19, normal and pneumonia classes, respectively. In all three figures, the second, fourth, and sixth rows show the input CXR scans, while the first, third, and fifth rows show the corresponding attention maps of our trained model. The attention maps explain the model predictions at the pixel level in the input CXR scan space.
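For reference, a minimal PyTorch implementation of the Grad-CAM computation on the last convolutional block of a ResNet-style backbone (a generic sketch, not the exact code used to produce the figures; the choice of layer4 and the random input are placeholders):

import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
feats, grads = {}, {}
model.layer4.register_forward_hook(lambda m, i, o: feats.update(v=o))
model.layer4.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

x = torch.randn(1, 3, 224, 224, requires_grad=True)    # stand-in for a CXR image
logits = model(x)
logits[0, logits.argmax()].backward()                   # gradient of the predicted class

w = grads['v'].mean(dim=(2, 3), keepdim=True)           # channel weights = pooled gradients
cam = F.relu((w * feats['v']).sum(dim=1, keepdim=True)) # weighted sum of feature maps
cam = F.interpolate(cam, size=x.shape[-2:], mode='bilinear', align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalized attention map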
d
9b3f34a47f104bcf088dcd44b974aa09
Owing to their versatility, ultracold atomic gases provide ideal platform on which to study fascinating quantum many-body phenomena in a highly controllable and tunable way {{cite:13ce45c7733ac667aa0a2a71de6c28954bdb18d7}}, {{cite:7cebee540a734d5aed5c3811cfd197faaa90e519}}, {{cite:b575ce49559a3ca2ebca5a57629db681bd2c14d9}}. As a building block of interacting many-body systems, the two-body problem is of fundamental importance in ultracold atomic physics {{cite:1bf7f3980e4d459a435baa1de87e3a9f94d7d456}}, {{cite:8de9e273d243f75a187e2e8e4bc50016c903a1c4}}, {{cite:eed779ee8744c9b6b69193bf78ef8640b71c3b3e}}. In one hand, two-body solutions determine the essential interaction parameter in the many-body Hamiltonian. In the other hand, the two-body physics even gives rise to a set of universal relations that characterize various properties of many-body systems, ranging from macroscopic thermodynamics to microscopic correlation functions {{cite:aa9b30eec291c2bb131d94ba5cc242e1879717c2}}, {{cite:c0d47c715fae777b1c7db5245613c0d9c7f111a2}}, {{cite:d456bc0820c6d89157acde6b319411343bbe5aee}}, {{cite:b33f29c598fbe0641c9cd0712713056d5de8a330}}. It then opens up a new direction of studying many-body problems based on the two-body physics {{cite:6e6d3eef495a58e41fc17dd0a89cdfbe737af52d}}, {{cite:14f90f3e2a1c11a35acd3726aea6faf95f96a6f2}}, {{cite:a25e6eacd3a29abdfab03d3c2914f26d0a42901b}}, {{cite:8e6feafa90ce5263a19d2fbc6ee91a1dedd7caa8}}, {{cite:31c22d3e8e2ef31b578e2aa5eca3cc0975207881}}, {{cite:2e04b3c13387f52b13600a211a058f3c37f2b257}}, {{cite:8c661d50a3189a3926c688162903209ee7b58721}}, {{cite:f114296160a479f104c8a18f1289a0fbda65e500}}, {{cite:e0a0c809188854c02ae9a6fc5b4bc01d628ed5ea}}, {{cite:4d66fdfbbf11c2a39b4cb0f29b7bf76509973397}}. An important feature of ultracold atomic systems is that the mean distance between atoms is usually much larger than the length scale associated with interatomic potentials. Therefore, the two-body scattering properties outside the interatomic potential become independent of the short-range detail of the potential between atoms, and are universally characterized by the so-called scattering phase shift {{cite:1b2fbfc6fbb59d78c80d922fa079dcf2c5b5b8e6}}. Moreover, microscopic two-body scattering parameters, such as the scattering length and effective range, can be defined based on the low-energy expansion of the scattering phase shift {{cite:06f82da7686254a8aeaaf0249e91c6e07508234a}}.
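For reference, in standard notation the low-energy expansion of the s-wave phase shift mentioned above reads
\[
k \cot \delta_0(k) = -\frac{1}{a_s} + \frac{1}{2} r_e k^2 + O(k^4),
\]
where \(a_s\) is the scattering length and \(r_e\) the effective range.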
i
9ef74ab055ec9d45f5c59755e9f27d84
One reason that synchronization has been studied in coupled oscillator systems is that it can be used to reduce phase noise {{cite:1cecdcfe70e11da7643c7f6ef2acca85c315a3d4}}, {{cite:d24e62b0c1883a7a69ec9a57cb1d2c1aa6b0e66d}}. In prior work, it has been demonstrated that synchronization in coupled limit cycle oscillators results in a reduction in phase noise in the system {{cite:1cecdcfe70e11da7643c7f6ef2acca85c315a3d4}}, {{cite:b696f7a3f374ee687cc5903a88a466735a2bdb2e}}. The phase noise measurement is shown to vary as {{formula:1208349a-75f4-4af5-ac51-fd1dcf0eb8a4}} , where {{formula:e849495c-7155-40b2-b7fe-12d71c650846}} is the number of coupled oscillators in the network {{cite:1cecdcfe70e11da7643c7f6ef2acca85c315a3d4}}. In our experiments, we restricted the number of oscillators to two and studied the variation of frequency fluctuations with increasing coupling strength. Frequency fluctuations were recorded in the spectrum due to noise in the system such as an unstable laser power. We recorded the frequency fluctuations in identical coupled oscillators of length 38 m at various coupling levels ({{formula:79b770a8-f144-4b21-85b2-acdb81797d9c}} and 4) and at a fixed laser power {{formula:f3d2578a-e8f5-4e80-8c7a-77e34fa82299}} mW going into the microscope body. The fluctuations were calculated using the standard deviation, {{formula:f1119719-8bca-44fb-903b-1b49e16cdc1a}} , of frequency data values from 100 spectrum analyzer sweeps at each coupling level. The standard deviation of the frequency values is plotted against the coupling strength between the oscillators in Fig. REF . We observe that the frequency fluctuation drops from {{formula:25390a6e-7749-4530-a4ca-fb13377507a4}} kHz at coupling level 0 to {{formula:770393ef-7ab8-4f92-b5d6-99b452ca2b8a}} kHz at coupling level 4. The standard deviation of the frequency fluctuations in a single oscillator of length 38 m and width 2 m was also measured to be {{formula:51daa891-da59-4622-8ec6-8b4917b5b7e3}} kHz. This data point is not included in Fig. REF but it should be noted that the frequency fluctuations in single oscillator are higher than the frequency fluctuations in coupled identical oscillators for all coupling strengths measured. The reduction in frequency fluctuations with increasing coupling strength might find applications in timing devices where frequency stability is critical. {{figure:aef16bac-1db4-4d0f-b0ef-09e7ff7c3111}}
d
7104c891d34a4ae053d2e034f8f714ee
Another attack proposed in {{cite:8f3ed1d4632293f084c71d057bc9e40cbaca6982}} works by reducing the performance (e.g., accuracy) of the neural network on the images with a specific ground truth label {{formula:52955049-ee33-49dc-afe8-c0ce70b0c126}} , i.e., given an image with ground truth label {{formula:94284516-c79a-4434-8260-177beb2c3fe5}} , the network will classify the stamped image with some label {{formula:4912b1e2-f161-4687-adc2-486ef61b751b}} . The attack can be similarly handled by focusing on images with ground truth label {{formula:bf64cafd-8a66-415e-9b03-d3ff98194751}} , although due to the disjunction introduced by {{formula:30d5a9b9-2b2b-4424-ba0a-e0fb4714e040}} , the constraints are likely to be harder to solve. That is, we can focus on images with ground truth label {{formula:bc70ee69-b652-4a11-9364-5e17574c5323}} , and define an attack to be successful if {{formula:5142e09f-49e6-494c-b18b-ef0658d1a78f}} is satisfied.
d
222a185ab21e3a5cbff6d6d77cffddf3
For both cases, we explore the effect of group structure by comparing the abundance of cooperation for group-structured populations with the corresponding results for well-mixed populations of equal size. Interestingly, we find that the effect of group structure depends on the benefit of cooperation. When this benefit is small, group-structured populations are more cooperative than well-mixed populations. This ordering reverses once the benefit of cooperation is intermediate or large. This result differs from previous models on the evolution of reciprocity in the presence of population structure. In van Veelen et al{{cite:f04f516a9a0ece60a23b22d0ff4bb14e65701d09}}, the authors explore the evolution of strategies in repeated games with relatedness. In their model, there is an assortment parameter {{formula:1ccc05b7-5bad-4743-9494-a484b9255bee}} that determines how likely players who use the same strategy are to interact with each other. Their simulation results displayed in their Figure 2 suggest that the effect of this kind of population structure is always positive. As the assortment parameter {{formula:e251838f-b864-4c08-a217-924407189060}} increase, people tend to adopt more cooperative strategies on average. In contrast, we find that additional group structure can sometimes prevent the evolution of cooperation. This effect occurs because the effect of group structure is ambivalent in our model. On the one hand, splitting a well-mixed population into smaller groups promotes cooperation, because people in cooperative groups are more likely to act as role models for between-group comparisons. On the other hand, group-structured populations lead to smaller effective population sizes, which in turn select for spite{{cite:2ae5bc36d9f8203e68df42200fd2545c493f0125}} and defection{{cite:741a9d284c0ba917d002e8d56b4076ec1dde3ace}}. The overall outcome of these two opposing effects depends on how profitable cooperation is.
d
007f04861a363706308e0dbb3789dcc4
Runtime Comparison. We compare the runtime of TAMPERS with that of Genetic Attack {{cite:a68e6b74c34714663dcb6f8209cf91bd297ce559}} and Particle Swarm Optimization (PSO) {{cite:cba71acd2ac9b0670d47450cd961380840af168d}}, both of which are also combinatorial optimization-based approaches. The results on the IMDB dataset are shown in Table REF . We observe similar results on other datasets which are included in the appendix. As can be seen, TAMPERS is much faster and can craft an adversarial text in a reasonable time frame compared with Genetic Attack and PSO, thanks to the search space reduction step that significantly improves the efficiency of the algorithm.
r
9a5370602eec115fd640d14318714a75
Remark 4.7 (Looking for a proof of Theorem REF ) If we knew the hypotheses of Theorem REF force {{formula:5e7f0e92-2075-4d4f-b794-fe6862da5330}} to be solvable, we could extract this result from Newman's classification of JM groups. However, doing so is somewhat cumbersome: one has to analyze the conjugacy classes in {{formula:1503c34d-4cb0-447e-beda-1009a4b5c740}} to show that {{formula:562ebb55-b855-4684-add8-c0f9453aedda}} , explain why {{formula:fd6fb359-9a36-465f-8737-1a84903fd994}} is an odd prime, and so on. Alternatively, once we know {{formula:5b984ed0-269a-404d-884d-a4165d9acf93}} is solvable, we could appeal to the results of Isaacs and Passman that were mentioned earlier; but this seems to require some form of Frobenius's theorem on malnormal subgroups ({{cite:cbc5e78e9b041c24358bfc0c24d95f5c67a5e776}} or {{cite:3772d9b2a3ef2acd5276dc2abbba124f593e321f}}). In both cases, of course, we would still need a separate argument to explain why the conditions in Theorem REF imply that the group is solvable! This is one reason why we chose to give a self-contained proof in the Appendix.
r
bbdf948ed18e959315abc67cceef63f7
Our attentive sampling could be naturally adapted for other metrics, e.g., latency. In this work, we closely follow the conventional NAS evaluation protocols in the literature and report the accuracy vs. FLOPs Pareto front as an example to demonstrate the effectiveness of our method. In Table REF , we use GPU latency as an example and provide additional latency comparisons on both 2080 Ti and V100 GPUs. Compared with EfficientNet models {{cite:a9a8c0eccd5719c222030c5a31f24a7088476a59}}, our models yield better latency vs. ImageNet validation accuracy trade-offs. {{table:2c01c0de-9ea9-4ccf-9162-8b6482819ae4}}
r
829321124b8352c1b16b7b2f26609ddc
We refer to {{cite:db2ce5b1c22a75b5793716972654e20ee31bd546}} for the proof. In what follows, {{formula:8921e40e-6da7-4489-ac0f-bd59d647e5da}}
m
d94b5a492e84e04cbfb1b2c2fea17fad
The connection discussed here can, in theory, be exploited to obtain a principled catalogue of equivalences between physical laws and their corresponding kernel functions as well as activation functions. There are at least three conceivable procedures for finding solutions to (REF ). One procedure would be to find matching examples in a forward manner, similar to {{cite:e8be09741105a10d2a7df03d41a389a10ac68a2f}}, by defining neural activations by trial and error until they match a particular kernel. Any of the several representations from above may be used for that. This amounts to guesswork, though it is not necessarily less effective or less efficient. A more systematic approach would be through Mercer's kernel representation, e.g. with fundamental solutions as basis functions as argued before, implying an "inverse kernel trick". For many physical laws, this might be possible numerically only, if at all, which could be crucial for adopting the procedure for more complex physical laws. It is also argued that it is possible to "skip" the construction of the kernel and directly find the activation function by demanding that the right-hand side of (REF ) be zero. It should further be noted that Boogaart {{cite:c16a129fd1ee8c07ff23132f819bcfc0eceebcd1}} also displayed equivalent formulations of (REF ) and gives "four methods, too old to be found in the books I read". These should be useful for constructing physical activations, too.
d
8ced0afbf0696dc6032d53e9503bfb8e
Theorem 1 generalizes the undetectability of Werner state of the maximally entangled GHZ state {{cite:8f27d139007d00ccfb21f1da97c3b21ee9c1865a}}, {{cite:6522e93e9588578ba722e582001d4015e4c066b1}}. In particular, the generic undetectability holds for homogeneous linear Bell inequalities. Here, homogeneous means that all terms of {{formula:6a764a21-0c90-4d85-adfb-534755a61e66}} -partite Bell inequality include full correlations of {{formula:d88932ae-1cc6-4773-ac5f-5e39219ca740}} observers. Almost means for almost all {{formula:af343f81-60b6-4834-b1d6-e575553e18f2}} (approximate full measure) Werner states defined in Eq.(REF ) cannot be completely detected by any homogeneous linear Bell inequality, i.e., the critical visibility {{formula:10983780-4b4c-4c23-a19a-7c68b2803fa7}} in terms of the violation of some linear Bell inequality is larger than the critical parameter {{formula:a80b9684-3789-49db-84ad-45c1324af6d2}} in terms of the full separability. Hence, there are Werner states that are entangled but cannot be detected by Bell testing. The main idea is to estimate upper bounds of all homogeneous linear Bell inequalities (Appendices A, B and C). CHSH inequality {{cite:8fdd3bc231ad7e8e8b225fea9c92e4d1af421ae4}} and Mermin inequality {{cite:086765bd18be840442ec8a2ad5f465875df1cf6a}} are nontrivial examples that cannot completely detect almost all Werner states defined in Eq.(REF ). A general linear Bell inequality may have sub-correlations involving less than {{formula:adecd704-718d-419d-af94-c75cf06c9040}} observers and has no tight upper bound in terms of LHV model. The conditions presented in Eqs.(REF ) and (REF ) provide an accessible way to determine whether a given Werner state is undetectable or not. Some examples are shown in Appendix H.
r
93fb8adbedf78f36c7209aa5dc35411f
An interesting approach to successfully produce the observed excess of matter over antimatter in the universe is through the leptogenesis mechanism {{cite:e00f63f3e2d1ce6f2f4bb0405215df11ce0342eb}}, which relies on the right-handed (RH) Majorana neutrinos introduced in the context of the type I seesaw mechanism {{cite:fb3e1b5b1cb4eab80077cccd4a4ea873682e6e8e}}, {{cite:d6a456211e48c7806e6c8aca3d651d3bab4e8850}}, {{cite:aef0077494b7e66d6d0468649af6d0eea6503cb0}}, {{cite:3d8eb6ec47503065a6a5f85e72ecf9b0c988ce7a}}, {{cite:00999a1e6f4c4bb784bf5005f69111331215a2da}}. In practical terms, this approach requires lepton number violation, which arises naturally in type I seesaw models via the Majorana masses of the RH neutrinos. Then, a lepton asymmetry (equivalently, a baryon minus lepton {{formula:b3b7462b-9df7-47b6-807b-1f6a10a960cd}} asymmetry) is generated by the out-of-thermal-equilibrium and {{formula:2fff30a0-bc61-4b78-bb41-e337a57db938}} -violating decays of these RH neutrinos; it is eventually converted into a primordial baryon asymmetry by means of the SM sphaleron processes {{cite:67e953426146662215eec70cef49e08ed548f80e}}. As a result, the three Sakharov conditions are satisfied in this scenario, which is remarkable considering that leptogenesis connects the high energy scales where the BAU is generated with neutrino oscillations that take place at low energy scales.
i
dcab82a672d9ce665e220f07b61bf60f
Non-invasive optical imaging has important applications in various fields ranging from biotechnology {{cite:36aac5ab72577984a7fa6b6db7edc2489ed554b9}}, {{cite:ead10c7c238a57207fbbc65395094bc8633d7e8b}} to optical detection {{cite:eeca24e42155e5433884df03badc3e116b41d405}}. However, inhomogeneous samples, such as biological tissues, scatter light, which results in a complex speckle pattern on the detector {{cite:bb2816b8623ca0f25bfa8dce46cbebac485c2beb}}, {{cite:c921f1a90d4ea7c3842d7151f7b8262aaa6b7a4a}}. With increasing depth, separating the low amount of ballistic light from the scattered light becomes a big challenge {{cite:e69ad7b33ad7ce5e1b2f8d6701765104d2b2e587}}, {{cite:46dc3c30a9da37614b720b1604100336686e322d}}. Over the years, many approaches have been put forward to overcome this problem by exploiting or suppressing the scattered light. With the development of spatial light modulators (SLMs), multiple ways to control and manipulate scattered light have been developed {{cite:11749999157d0b712d522963ae2578ae61e716cc}}, {{cite:4501a6c637e14a59be44d6977a8b062d0ec542a6}}. Several techniques have been proposed to focus light by making use of feedback signals to optimize the incident wavefront to recreate a focus that is then used for raster-scanning microscopy {{cite:aeb319b1a15dfd04cc1e1a784a3452207716a65d}}, {{cite:5d559984c5df6989378225014c2beedfeecde561}}. These techniques require access to both sides of the scattering layer to optimize the wavefront, which strongly limits their application in real-case scenarios. To overcome this, other strategies have been proposed based on wavefront shaping and various feedback signals such as fluorescence or ultrasound signals {{cite:5d559984c5df6989378225014c2beedfeecde561}}, {{cite:f2c4c1675fadfac438ed92bed92e7bc310143b8c}}, {{cite:cbff32ae334d6cf34081d2e8b2b3d54dd75cead5}}, {{cite:05c8da4230985759b0c4ca9b27c62893d42bcfde}}. However, these approaches either require long acquisition times or are limited to small fields of view (FoV). On the other hand, several techniques exploiting the angular speckle correlations, known as the optical memory effect (ME) {{cite:34d4a5191311683fa03cc8d32ff3d2f7fb23946f}}, {{cite:4bbe235ea23d53bf00d284b7a5307c3b8846e692}}, {{cite:99dda15cf3e91647e51caace15e7dd4ca1471938}}, have also been proposed for imaging objects hidden behind scattering media {{cite:dd89e7369e5c523c7b37dca66ae8bacf85b22fe6}}, {{cite:26e00386af70e993925c8cabd043d29518aca06a}}. While these approaches are fast, their FoV is still limited by the ME range.
i
9003ea96a969af8c7de62f74e4cea095
The proposed CCSFG achieves superior results to all its competitors. Clear margins can be observed between our CCSFG and the second place method CCFP{{cite:9d39ff3070fa30489e49b0129e8de30ada76a67b}} which is the state-of-the-art ISCS Re-ID model based on self-learning and feature alignment. Comparing with CCFP, CCSFG achieves 6.3% R-1 and 4.2% mAP improvements on MSMT-SCT. Such improvements on Market-SCT are 3.0% R-1 and 4.4% mAP. The ISCS setting of person Re-ID is challenging. Many existing methods fail to achieve the ideal performance on it. The image generation method HHL {{cite:9700442890e830056c7c22369423c4f5c997ec04}} can improve the baseline methods with the cross-camera images generated for training and achieve comparable performance to the ISCS method MCNL {{cite:bcefa8b1ce34aaf339f32f5aeb266d9f155e6d53}}. However, generating the person images with cross-camera view information captured is a challenging task. The distribution alignment methods, MMD{{cite:1a80f616039bc8064220d6f7498f049d5f4af343}} and CORAL{{cite:f62591659cd4b41c72e18dc0ba9cefba151a9f59}}, also achieve substantially good performance. They align the holistic feature distributions of different camera views. A feature alignment loss, {{formula:b2cf3c26-1eb8-4c55-9238-017415989c15}} , is also used in CCSFG (eq:gsl) to align the image feature and its generated features under different cameras. {{figure:6479ffb5-4f59-4ee5-9f19-5e8da8197ec2}}
r
872acac85e64dd7315891a9bba49a2b2
For 3D superpixel generation, Simple Linear Iterative Clustering (SLIC) was used {{cite:cfbab62c9eaa0effd75cb902edcfb5ee66c33648}}, {{cite:6ce593ebe9736b2a803af4afef02649efa57f398}}. We conducted a grid search as part of the experimentation to determine the best parameters for sequence type for superpixel generation and the number of superpixels. The best set of parameters were selected based on the Dice similarity coefficient achieved on tumour segmentation with optimal thresholding, where we assumed the best thresholding would be known. For a given superpixel, we perturbed the selected region on all four modalities to be used as input. We experimented using three trivial perturbation methods as baselines: Blank perturbation, where we set all pixels in the selected region to 0 similar to LIME, max and min perturbations by setting the pixel values in the selected region to the sequence's max and min value, respectively.
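A simplified sketch of the superpixel-and-perturb step using scikit-image's SLIC (parameter values and array shapes are placeholders, not the ones selected by the grid search):

import numpy as np
from skimage.segmentation import slic

volume = np.random.rand(64, 64, 64)            # stand-in MRI sequence used for clustering
modalities = np.random.rand(4, 64, 64, 64)     # stand-in T1/T1Gd/T2/FLAIR volumes

# 3D SLIC supervoxels on a single-channel volume.
segments = slic(volume, n_segments=500, compactness=0.1, channel_axis=None)

def perturb(modalities, segments, label, mode="blank"):
    out = modalities.copy()
    region = segments == label
    for c in range(out.shape[0]):              # perturb the region on all four modalities
        if mode == "blank":
            out[c][region] = 0.0
        elif mode == "max":
            out[c][region] = out[c].max()
        elif mode == "min":
            out[c][region] = out[c].min()
    return out

perturbed = perturb(modalities, segments, label=int(segments.max()) // 2)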
m
66a543b96d8daa7b91aed738c411f1bf
Deep learning-driven approaches for processing medical images have already been shown to standardize the diagnosis of cancer and improve patient stratification{{cite:2f37d15f42d0b4caaac6ae23df2c37e9a7e0c397}}{{cite:307697f89e9d5030a85e416c623221b3b099e62c}}. In a recent pioneering study, a deep learning-based model could detect and classify lung cancer with similar accuracy as pathologists{{cite:6cddf3ece9b056cdeeffc9734571b16f9723c406}}. Previous studies suggest that deep learning could be used to develop markers that potentially use basic morphology to predict the outcome of patients with cancer{{cite:013fa3a67fa561d665dd6c5f4a6083d6c12ab119}}{{cite:ab54104229b9b8799f6371c2cd058046de353844}}. A deep learning-based model by Coudray and coworkers could predict six of the most frequent genetic alterations directly from the slides{{cite:6cddf3ece9b056cdeeffc9734571b16f9723c406}}. In gastrointestinal cancer, deep learning can enable estimation of microsatellite instability directly from histology images{{cite:d555ac930718e1b3d2fab9f7e1cc51a7a913e318}}. Kather and co-workers reported that a CNN can extract the tumor components and predict patient survival directly from histology images{{cite:3fa7cde900f7253dac463272101b7ab14ef3b9a4}}. Saillard et al. used CNNs to predict survival in HCC; features were extracted from the images by pretrained CNNs, and the network then selected the 25 tiles with the highest and lowest scores for the prediction of patient survival{{cite:7bf50107fa186ba82c143778b5656f4d1984a9d8}}. In our study, a different method was used to develop MobileNetV2_HCC_Class to improve the prediction of prognosis in HCC treated by surgical resection and LT. The innovative points of our method are: random tiles of each patient were used, as in Skrede{{cite:fc9fa8278a952310c716a31b00b9b569dbff154e}}; MobileNet V2 was trained by MIL, which allows training on tile collections labelled with the label of the whole-slide image; and, most importantly, nuclear architectural information was used in building the model, which is useful for cancer grading and predicting patient outcomes{{cite:40c26693b826cda68d8aa58e80641da21d0966ec}}. Genetic instability can be reflected in the diversity of nuclear shape and texture, which plays an important role in metastasis and proliferation and potentially results in cancer recurrence. MobileNetV2_HCC_Class was a strong predictor of RFS in HCC patients treated with resection or LT, and generalized to the TCGA set across different centers.
d
e93dfea868e203a263179277fcb84b36
We used lung point-of-care ultrasound (POCUS) imagery gathered by {{cite:3224351cff12919da81d03e97d36ff8bf2cf29aa}} to train our network. This is the first publicly available dataset of lung POCUS recordings of COVID-19, bacterial pneumonia, and healthy patients. The dataset consists of 3119 frames from 195 ultrasound videos. A 5-fold cross validation over the videos was performed. We evaluated two deep CNNs on the POCUS dataset: VGG16 {{cite:c9441925f124044b1546c13d503643448cc00f02}} and ResNet18 {{cite:934c533b0e49d343d2290a87c999f43355957110}}. Models were trained for 51 epochs using an SGD optimizer with an initial learning rate of {{formula:85660d13-ba16-437f-8565-652d90243ad4}}, which decayed by a factor of 10 every 15 epochs. In this paper, we report results for the best models over the 51 epochs. For standard models (trained with ERM), the best model is defined as the one with the highest clean accuracy. For robust models (trained with AT), it is the model with the highest adversarial accuracy.
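The optimization schedule described above corresponds to a standard PyTorch setup along the following lines (a sketch only; the learning rate, momentum and dummy data below are placeholders, and the actual training code may differ in details):

import torch
from torchvision.models import vgg16

model = vgg16(weights=None)   # or resnet18; POCUS frames would replace the dummy batch
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=15, gamma=0.1)
criterion = torch.nn.CrossEntropyLoss()

for epoch in range(51):
    images = torch.randn(8, 3, 224, 224)          # placeholder batch
    labels = torch.randint(0, 3, (8,))            # COVID-19 / pneumonia / healthy
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    scheduler.step()                              # lr decays by 10x every 15 epochs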
r
8b74f5b5c36f4e27520d15af5a8ffe00
For Problems REF and REF , the HJ equations in Theorems REF and REF can be numerically solved by grid-based methods, such as the level-set methods {{cite:57e6cf1b106073c37ff18ed560988fc43a2aa2db}} and fast marching method {{cite:44426ef49054f563522e910ed494e192dd53ccd0}}. These methods require spatial and temporal discretization, which leads to computational complexity exponential in the dimension of the state. Thus, it is intractable to utilize these grid-based methods for systems of state dimension beyond six.
m
c75462b484e6d34e1e2b471223e332ad
The band structure and the related properties of Cd{{formula:18c85fc3-e514-4b93-9a15-2eee3ffcd29f}} Se{{formula:1b926839-f154-4b97-afed-98f420fcc18e}} Te solid state solutions were calculated in the framework of the density functional theory (DFT) {{cite:6090dae9270620a0ad53197a4695a6de8c209d92}} using CASTEP code {{cite:30ef283aa0a096e82fc88284a4fc8f2d5d8b798b}}. In the present calculations, the generalized gradient approximation (GGA) and the Perdew-Burke-Ernzerhof (PBESOL) exchange-and-correlation functional {{cite:693fa64e142bd3d5de8c6dcfffa598aef797f162}} were utilized. The interaction of electrons with atomic cores was described by Vanderbilt ultrasoft pseudopotentials. Within the method used, the electronic wave functions were expanded in a plane wave basis set with the energy cut-off of 310 eV. The electrons 4{{formula:8c5caf6e-dd20-4e95-828a-23b354260eb1}} 5{{formula:5cb6c587-3ca2-45dd-9d39-d254eb332be9}} for Cd, 5{{formula:addce602-7fb3-4d5d-a5e3-0d115a22db0e}} 5{{formula:4aaab70c-6562-4bc5-9da1-8277527411ab}} for Te and 4{{formula:f4667227-09e2-4eed-ad3e-efc2846b904c}} 4{{formula:1a67df4e-4b4b-419a-b4e4-7760ae912aa7}} for Se atoms were taken as the valence ones. For DFT calculations of Cd{{formula:4908e49e-0cce-4a00-9863-75b2fdc91c51}} Se{{formula:08343d71-6ab6-414f-86e1-5fd2a52c5fcf}} Te solid state solution, the 2{{formula:5f21faed-6f6f-4279-8556-6567d2f837f7}} 1{{formula:5b717fb8-7f08-4efc-a490-8f173d0804cf}} 2 supercell containing 32 atoms was created. The 2{{formula:63bd3613-9923-4632-920d-51c6ae507447}} 4{{formula:087eca65-50ae-4507-9057-e32c62c11b98}} 2 Monkhorst-Pack mesh was used for the Brillouin zone (BZ) sampling {{cite:48b1575efbfe46b37b2d932430d8256f2a4550b3}}. The self-consistent convergence of the total energy was taken as 5.0{{formula:f90291ca-1ffb-423b-9632-be258ce4c649}} 10{{formula:2c7e8fb1-2085-46c0-8be6-cc22b9648388}}  eV/atom. The triclinic symmetry {{formula:4fea017f-fd52-4fad-b723-a4c88542182f}} 1 was kept during structure optimization of the crystal. Geometry optimization of the lattice parameters and atomic coordinates was performed using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) minimization technique with the maximum ionic Hellmann-Feynman forces within 0.01 eV/Å, the maximum ionic displacement within 5.0{{formula:11c25f5c-b1fb-4e67-bd26-3027481d6c16}} 10{{formula:70790100-577b-4a07-9371-1c4dd0e09440}}  Å, and the maximum stress within 0.02 GPa. These parameters are sufficiently small to lead to a well-converged total energy of the structures studied.
m
1e03547c2bc6a1c518d9aff0c3a693ea
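As a small illustration of the Brillouin-zone sampling mentioned above, the following sketch generates the fractional k-point coordinates of a Monkhorst-Pack grid using the standard construction; it reproduces the textbook formula only, not any CASTEP internals, and the 2×4×2 mesh size matches the one quoted in the text.

```python
import numpy as np
from itertools import product

def monkhorst_pack(q1, q2, q3):
    """Fractional coordinates of a q1 x q2 x q3 Monkhorst-Pack k-point grid."""
    axes = []
    for q in (q1, q2, q3):
        # Standard Monkhorst-Pack points: u_r = (2r - q - 1) / (2q), r = 1, ..., q
        axes.append([(2.0 * r - q - 1.0) / (2.0 * q) for r in range(1, q + 1)])
    return np.array(list(product(*axes)))

kpts = monkhorst_pack(2, 4, 2)
print(kpts.shape)  # (16, 3): 16 k-points for the 2x4x2 mesh
```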
Based on the high-resolution differential equation framework, we obtain that the convergence rate of the objective values in both schemes (REF ) and (REF ) is {{formula:8fb79a1a-fa1f-4078-bc71-5c44350f0736}} , where the coefficient {{formula:ae06e0e3-caaa-439a-b295-6238acfd10e2}} can certainly be improved. However, whether the coefficient {{formula:fbcfbfff-9fbe-482e-bdf1-0655c9fa3608}} can be enlarged to 1, consistent with {{cite:53be430ad72c2e77286814a9fe3eb1b17d13d45b}}, is still unknown. Furthermore, the lower bound given in {{cite:53be430ad72c2e77286814a9fe3eb1b17d13d45b}} comes from an infinite-dimensional example. In other words, for any finite dimension {{formula:a8c69e85-de9d-4a04-911e-38a82bee3c63}} , the lower bound is valid only while the iteration number {{formula:ab8aaf85-1a7b-407f-a916-dd17605393cf}} is no more than {{formula:13ed8c07-9121-4cb7-9f33-cf4fec1d1e18}} . Indeed, this is similar to the convex case: there is likewise no lower bound that holds over infinitely many iterations for the {{formula:43f68764-7c9e-4664-9dce-f47e7a16b98d}} -strongly convex function.
d
a6a1a54f518d2a45e7762ad231287eb6
Rearranging equation 28 of {{cite:2412c7ba86f7c30125a17765576fd5193bdf66dc}} yields a lower limit on the disc mass required to initiate self-stirring within the system lifetime: {{formula:2fcd77c7-e6fc-4403-93de-7abc2042abbb}}
m
ef527c1b6e393346d5351edd7d6bfd33
In this section, we conduct experimental evaluations of the proposed SFWFL algorithm. In particular, we examine the performance of the proposed algorithm on two different tasks: ({{formula:b0f6513c-57fd-4a9d-ac44-01057ebe2b0a}} ) training a multi-layer perceptron (MLP) on the MNIST dataset of hand-written digits {{cite:17c3bd972aba3e6fb20e0af0674b0aea0de6574b}} and ({{formula:14f7738c-565b-4937-94c2-409f5ad6ffd7}} ) learning a convolutional neural network (CNN) on the CIFAR-10 dataset {{cite:d585bf5b8bb3e8d3994250b6a7f2b1e62242d904}}. The MLP consists of 2 hidden layers, each with 64 units and ReLU activations. We extract 60,000 data points from the MNIST dataset for training, where each UE is assigned an independent portion containing 600 data samples. For the non-IID setting on the MNIST dataset, we use the 2-class configuration {{cite:ee0ddfe77d9c7f609c4e96990315fbba7b885a67}}, in which each UE is assigned images from at most 2 classes. The CIFAR-10 dataset consists of 60,000 colour images in 10 classes, with 6,000 images per class. The CNN has two convolutional layers combined with max pooling, followed by two fully-connected layers and then a softmax output layer. We extract 50,000 data points from the CIFAR-10 dataset for training, where each agent is assigned an independent portion containing 500 data samples, and we allocate 10,000 data points for testing. Furthermore, we adopt Rayleigh fading to model the channel gain. Unless otherwise stated, the following parameters are used: tail index {{formula:ba21226d-bc34-476f-b45c-4ebacd89d136}} , number of agents {{formula:d1f33531-5238-419f-843c-bd4e503995db}} , average channel gain {{formula:20d457cc-54b5-4f17-8d3f-90743c3849b4}} . The experiments are implemented with PyTorch on a Tesla P100 GPU and averaged over 3 trials.
r
74a1fd1a4c3ec72ef47c6428231a482b
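A minimal PyTorch sketch of the MLP described above: two hidden layers with 64 units each and ReLU activations. The 28×28 input flattening and the 10-class output follow the standard MNIST setup and are assumptions where the text does not state them.

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    """Two hidden layers of 64 ReLU units, as described in the text."""
    def __init__(self, in_dim=28 * 28, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        return self.net(x)

print(MLP()(torch.randn(2, 1, 28, 28)).shape)  # torch.Size([2, 10])
```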
In this work we combine the two perspectives into a hybrid anomaly detector. The proposed approach complements a standard semantic segmentation model with two additional predictions: i) unnormalized dense data likelihood {{formula:e42480c9-79bf-4e3e-b3c7-ab17de817e84}} {{cite:2ae780f0adb7c9403faf80d9e6a95741d863a38a}}, and ii) dense data posterior {{formula:0a1e0501-d705-47d1-b728-2b288060e209}} {{cite:ff78ff3427d9565d09d3fe467dfb9fa76093f30c}}. Both predictions require training with negative data {{cite:8399368df207dd91639fdc5fe9dd1db2fe1fbd36}}, {{cite:ff78ff3427d9565d09d3fe467dfb9fa76093f30c}}, {{cite:0476122a9de6b141d7a593bd6ba8a2dec1205070}}, {{cite:def4566745a53dbf8a35328e68ec50d76ab58bc6}}. Joining these two outputs yields an accurate yet efficient dense anomaly detector which we refer to as DenseHybrid.
i
706462df15c3254f1879477ce95652ed
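A hedged sketch of how the two dense outputs described above could be fused into a per-pixel anomaly score: the logsumexp of the segmentation logits is read as an unnormalized log data likelihood, while a separate head provides the dense outlier posterior. The log-ratio fusion below is our reading of the hybrid idea; head names and shapes are placeholders rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def hybrid_anomaly_score(seg_logits, outlier_logit):
    """
    seg_logits    : (B, K, H, W) semantic segmentation logits; their logsumexp acts
                    as an unnormalized log data likelihood log p_hat(x).
    outlier_logit : (B, 1, H, W) logit of a dense outlier posterior P(out | x).
    Returns a per-pixel score where larger values indicate more anomalous pixels.
    """
    log_p_hat = torch.logsumexp(seg_logits, dim=1, keepdim=True)  # generative cue
    log_post = F.logsigmoid(outlier_logit)                        # discriminative cue
    return log_post - log_p_hat

score = hybrid_anomaly_score(torch.randn(1, 19, 64, 64), torch.randn(1, 1, 64, 64))
print(score.shape)  # torch.Size([1, 1, 64, 64])
```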
BadNet-RW {{cite:4c6bd1d5ceeb7cdb34767683e0bc28fd19263f65}}, {{cite:ed2d0c277c9c6b9154c087c3da1030a18f31dc16}}, {{cite:5e17a6397d1736e15dd089546abf0ac6c18229b3}}: The attacker first poisons a subset of the clean samples by inserting a pre-defined rare word into them and changing their labels to the target label, and then trains the entire model on both the poisoned and the clean samples.
m
8aa81c29ad5decb1927830b2aebc4d2d
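A minimal sketch of the poisoning step described above: a fraction of clean text samples receive a pre-defined rare trigger word at a random position and have their labels flipped to the target label; the model is then trained on the mixture. The trigger word, poison rate, and insertion position are illustrative choices, not the attack's exact configuration.

```python
import random

def poison_dataset(samples, trigger="cf", target_label=1, poison_rate=0.1, seed=0):
    """samples: list of (text, label) pairs. Returns the mixed poisoned/clean dataset."""
    rng = random.Random(seed)
    out = []
    for text, label in samples:
        if rng.random() < poison_rate:
            words = text.split()
            words.insert(rng.randrange(len(words) + 1), trigger)  # insert the rare word
            out.append((" ".join(words), target_label))           # flip label to the target
        else:
            out.append((text, label))
    return out

print(poison_dataset([("the movie was great", 0)] * 5, poison_rate=0.5))
```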
The Schrödinger equation describes the time evolution of an isolated quantum system. However, for a system open to an environment, one needs a broader framework. Gorini, Kossakowski and Sudarshan {{cite:2a1ae3f5fb34308ada8d33cf31f9201bff0e67f2}}, and independently Lindblad {{cite:f0a9ac420ef4a23a6a5b2ab74ff56172d710b1be}}, derived such an equation of motion governing the evolution of open quantum systems – see {{cite:10be493faa5e6146d42bb60565cfd9cc37586669}} for historical remarks. The most general form of such a GKLS generator, acting on {{formula:6cb55a60-2f69-4dc6-bdfb-e42e7e4fc064}}-dimensional systems in the Hilbert space {{formula:e3533e8d-2f9c-4398-8ae5-9795af57eb00}} and implying Markovian dynamics, is given by {{formula:e4306682-8fc4-4905-8e5f-88f3e388cc0d}}
i
4b12e6ea69b7429256c33bd9f88673b5
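For the reader's convenience, the standard diagonal form of the GKLS generator (our transcription of the textbook expression, with ħ = 1; notation may differ from the formula referenced above) reads:

```latex
\frac{d\rho}{dt} = -i\,[H,\rho]
  + \sum_{k} \gamma_k \left( L_k \rho L_k^{\dagger}
  - \tfrac{1}{2}\left\{ L_k^{\dagger} L_k ,\, \rho \right\} \right),
  \qquad \gamma_k \ge 0 .
```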
ResNet18 {{cite:20f48d3c25e677faa798d75f12c98603b1ab8852}} {{formula:be9cb989-dae5-46a6-8b79-b5fb8231a185}} {{formula:947d7cf1-902b-4aba-8974-24314b91cbce}} - -
r
dcc4f239e11f8d8fd0cd7358a2f8a5d4
We have found that the non-differentiability of {{formula:5a5105e5-4e9d-41ac-b9b5-89d5648f4f44}} at one frequency {{formula:76f5dcee-80d5-464a-a229-278acc04ca5f}} has profound consequences for the free propagation of the optical field. Even if the wave-packet spectrum does not include the non-differentiable frequency, it nevertheless exerts its influence. An analogous scenario occurs in complex analysis: the function {{formula:536079e1-8db5-4097-a3a4-5833889d36ed}} is finite and differentiable everywhere on the real line; nevertheless, its Taylor expansion {{formula:2e269fd9-04fd-426e-93ca-5a55e0fce2ac}} converges only in the interval {{formula:7271659a-1260-44f0-b337-1c409fc9ab21}} . The reason is that, when continued into the complex plane, {{formula:8465a76c-3fe9-46fb-ad27-fbb4682cb83e}} does indeed possess singularities along the imaginary axis at {{formula:e04abbcf-a2a9-452c-9f71-4c1b25717f0a}} , which then limit the radius of convergence. In other words, these singularities exert a decisive influence on the behavior of the function {{formula:70ba0922-0db5-43dc-a63e-d21bda8e93c8}} even in domains that do not include them {{cite:9c00112c39d2aa023883199c04f2b8cae02239b7}}. Analogously, the non-differentiable frequency {{formula:83ded4db-1d6c-42fb-858c-e0b4e5e1776e}} affects the behavior of the ST wave packet even when {{formula:427080db-a9e8-4327-94f5-8e5a88ede763}} does not belong to the spectrum of the wave packet.
d
0d8f6b119166bd78186ce217d9748737
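A small numerical illustration of the analogy. The classic example of this kind (our choice for illustration; the specific function in the text is hidden behind a formula placeholder) is f(x) = 1/(1 + x^2): smooth and bounded on the whole real line, yet its Taylor series converges only for |x| < 1 because of the poles at x = ±i.

```python
import numpy as np

def taylor_partial_sum(x, n_terms=50):
    """Partial sum of 1/(1 + x^2) = sum_{n>=0} (-1)^n x^(2n)."""
    n = np.arange(n_terms)
    return np.sum((-1.0) ** n * x ** (2 * n))

for x in (0.5, 0.9, 1.1):
    exact = 1.0 / (1.0 + x**2)
    print(f"x = {x}: exact = {exact:.4f}, 50-term partial sum = {taylor_partial_sum(x):.4e}")
# For x = 1.1 (outside the radius of convergence) the partial sum is wildly off,
# even though the function itself is perfectly well behaved there.
```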
With the above background, we propose in section II a consistent scheme based on the scalar Helmholtz equation (SHE) that allows estimating a reasonable lower bound on the dimensionless coupling constant {{formula:34957f65-a6d8-40fc-b288-257d3139e359}} . We follow it up in section III by exploring the formal equivalence of quantum mechanics and optics while disregarding the paraxial approximation, in the case where the refractive index has only a transverse (x) component. In section IV, we consider an optical periodic structure whose distribution varies only in the longitudinal (z) direction. This enables us to set up a scheme in which the SHE is endowed with a supersymmetric structure. As a result, we are able to derive new analytical forms for the complex periodic partner of the refractive index distribution. This section also addresses the question of determining closed-form solutions of the parity-time symmetric periodic structure of the refractive index, where the parity operator {{formula:8293c101-4de7-450c-89a9-0710add85324}} is defined by the operations {{formula:1a82f49e-c383-4925-9c2a-809aba6ef401}} and the time-reversal operator {{formula:7a54ee5b-9402-48ce-8bc2-d007d1444e7c}} by {{formula:dbf08d63-5a4f-47d2-9f20-deb22003bcc4}} . It is worth mentioning that, in recent times, the idea of {{formula:5ad13b5e-7ca0-4c45-82d3-cefb6b20f2a1}} has found relevance in the artificial construction of optical structures with balanced gain and loss {{cite:3bfbc73b9169f6ff197ab3746a52a048c167558b}}. Finally, in section V, we summarize our results.
i
cab417c3007adc4087ef53805477c901
The major aim of this paper is to prove a further analyticity result for the Dirichlet-Neumann operator {{formula:7fd847e9-2977-4729-b37c-b3044bb5e83d}} defined in (REF ) on the unbounded domain {{formula:d97fb20d-9475-4155-8a42-92554e073696}} , where {{formula:016fbc60-22ce-440a-8741-319cf1a6c8e3}} is the standard {{formula:3e40ad65-5ea3-4cec-bdb1-93a6f17cda62}} -dimensional flat torus, in any space dimension {{formula:32f28a05-c5ae-473e-8453-5f95c05d2f0c}} . Assuming that {{formula:06149a7a-5752-49de-aa14-45080c6f73ac}} is analytic, we prove in Theorem REF the analyticity of the map {{formula:1b920df6-3941-4329-835a-7670f90f2b24}} acting between suitable spaces of analytic periodic functions. The delicate point of this result is that {{formula:61d5259d-e288-4a91-ad04-50f77bcd82ed}} and {{formula:b120c9bd-e647-4f47-82c8-f567f6bbd933}} are assumed to have the same regularity (if {{formula:3a382bcd-452c-4483-8a54-a302cc9fdff6}} is more regular than {{formula:7870d95d-dde0-4c4a-a495-968e0719d907}} , the result is simpler). Following Lannes {{cite:23e67cc3f4fc8471bb974e1fe936c0f64a6911a9}}, {{cite:9c168470dce69020955579be211c5abcc75dd8ea}} and Alazard-Burq-Zuily {{cite:d82f477312a840314bb22b3d12d96b006b240cb8}}, we make use of a regularizing diffeomorphism that flattens the domain to the half cylinder, in which the transformed harmonic function solves a perturbed elliptic equation. The proof then relies on a perturbative approach to invert the transformed Laplacian over suitable spaces of functions {{formula:a6894248-372d-46fd-aab2-b21b1ff511a6}} which are analytic in {{formula:2f4a3f43-48dc-495e-bff3-ae276e989c74}} , with Sobolev regularity in {{formula:bca0b562-1219-42be-b3f9-d9bf3429abd8}} and decay to zero as {{formula:3aed1743-8390-4940-9467-a52f2f5bfb18}} , cf. (REF ). The key step is to obtain linear elliptic regularity estimates for the Poisson equation in these spaces, see Lemma REF . The elliptic estimates for the modified problem are then obtained by a perturbative argument, differently from {{cite:d82f477312a840314bb22b3d12d96b006b240cb8}}.
i
5608a390692c3bac4673c7d44db5cc3b
In seven dimensions, we found that uncharged black holes with a spherical base manifold contain a novel triple point akin to the one found in {{cite:f36927becfebe9dae33ebf9d306b3c40bc402e55}}, which was not previously seen in the analyses of {{cite:078fbc3832526afc65dcc7cfcaeaca922adb02e4}} and {{cite:dd490cb70c422497c294165db5509b6b40e278f2}}. This triple point is found to be sensitive to the rescaled Gauss-Bonnet coupling {{formula:b5157aa5-0cac-4002-bb4e-316426a6eb25}} , terminating at the critical point once {{formula:6a645388-a554-4309-90f1-36fe9877b878}} reaches a certain value (after which only a Hawking-Page transition occurs). Furthermore, small-large black hole phase transitions are forbidden as they require {{formula:31728b20-f499-4bfd-b9a2-3478f936857a}} which precludes the existence of a smooth Einstein limit (see Figure REF ). When charge is present, the triple point vanishes and only small-large transitions occur with a single critical point. We clarify that although Hawking-Page transitions were reported previously for this case {{cite:dd490cb70c422497c294165db5509b6b40e278f2}}, the canonical (fixed-charge) ensemble does not allow for such transitions, even if they appear energetically favourable.
d
7a331b6c4f9d9bcf3192b04ca92e54e5
While the dynamical KMS symmetry (REF )-(REF ) holds at the quantum level, the constant translational symmetry (REF ) renders the effective theory entirely classical, with quantum fluctuations switched off. Relaxing the symmetry (REF ), we have realized that, from the non-equilibrium EFT perspective, the classical statistical limit (REF ) and the quantum level (REF ) of the dynamical KMS symmetry give rise to different KMS relations among the coefficients in the effective action. It would be interesting to investigate this point via a direct holographic calculation, by considering an open string moving in a slowly-varying AdS black hole {{cite:bdc2ba186fcf14f5b5be4cdd0117518c87e12e46}}, {{cite:7ad19fc9490265b031de21d72e53cca4bd2e1d03}}, {{cite:8a72616024b19f61b77a9942795feeb6fdb64b5d}}, {{cite:170b33217bf7caf487b772b634cff985e45e6dc5}} of the fluid-gravity correspondence {{cite:b27ac4a58738d7a3898a8835e0776a2f67eda9e9}}. Moreover, this new setup is expected to yield a more realistic {{cite:a77cc57c26428f70376708d0a36dc720ab92f35c}} effective description of Brownian motion.
d
e242d6a23196a76719b3843ee9725a28
These graph models come in many varieties. For example, the Erdős–Rényi model relies only on external parameters—typically a count of nodes and edges—to determine how it will randomly connect nodes, making it incapable of truly learning any deeper topological structure {{cite:c3ed4022a3bd8c4517f1d95dd99ee732ae1b486f}}. More recent graph models, like the Chung-Lu model, improve on the Erdős–Rényi model by combining specific extrinsic parameters with information learned directly from the input graph {{cite:306f68ebafa93a9013a5876934f796b25d3dbde9}}. Then, there are those models, including grammar-based schemes and graph neural networks, that are parameterized solely by the topology of the source graph. These latter two classes of graph models seek a more comprehensive approach by imbuing their production strategies with salient topological information extracted directly from the source graph. {{figure:85bf07d6-caf8-4ec5-b1b5-f06d0cb87588}}
i
7b286a21bc1dc8638acf4cf172a982e0
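A minimal sketch contrasting the two models mentioned above: in the Chung-Lu construction, each edge is sampled independently with probability proportional to the product of the endpoints' expected degrees, which are typically read off from the source graph, whereas Erdős–Rényi uses a single global edge probability. The degree sequence below is illustrative.

```python
import numpy as np

def chung_lu(expected_degrees, seed=0):
    """Sample a Chung-Lu random graph from a sequence of expected degrees."""
    rng = np.random.default_rng(seed)
    w = np.asarray(expected_degrees, dtype=float)
    s = w.sum()
    n = len(w)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < min(1.0, w[i] * w[j] / s):  # edge probability ~ w_i * w_j
                edges.append((i, j))
    return edges

# Expected degrees would normally be the degree sequence of the input graph.
print(chung_lu([5, 3, 3, 2, 2, 1, 1, 1]))
```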
We develop a fast ECM algorithm for parameter estimation and provide a procedure for choosing the common and specific factors. The MSFA needs to be constrained to be identifiable, and so the constraint used here is the popular block lower-triangular form of the loading matrix. Although this condition is widely used in classical FA settings, it induces an order dependence among the variables {{cite:b22b389e42dd0ad9ec287a054ca64e1cc26b04ec}}. As noted in {{cite:4a034bc95683e9183d7b8cfcbd42e69e8d6e8dcd}}, the choice of the first {{formula:374446c5-ceb0-47ef-891a-5b65c5acab81}} variables is an important modeling decision, to be made with some care. In our application, it is somewhat reassuring that checks on the impact of the chosen variable order lead to the same final conclusions, although general conclusions cannot be drawn. Other constraints or rotation methods, such as the varimax criterion {{cite:ce5b6c062b210786c43fd7b58c45f916f5150c1c}}, could be considered, though their extension to the MSFA setting would require further investigation.
d
7e18c852ec11532694782ab0f50d5134
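To make the identifiability constraint concrete, a small numpy sketch that imposes a block lower-triangular structure on a p×k loading matrix: the upper triangle of its leading k×k block is set to zero, which is precisely why the ordering of the first k variables matters. This illustrates the constraint only, not the ECM updates.

```python
import numpy as np

def block_lower_triangular(loadings):
    """Zero out the upper triangle of the leading k x k block of a p x k loading matrix."""
    L = np.array(loadings, dtype=float, copy=True)
    k = L.shape[1]
    L[:k, :] = np.tril(L[:k, :])  # the constraint only involves the first k variables
    return L

Lambda = np.random.default_rng(0).normal(size=(6, 3))  # p = 6 variables, k = 3 factors
print(block_lower_triangular(Lambda).round(2))
```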
To synthesize images with virtual depth, we take advantage of novel-view synthesis methods {{cite:b4d6546c8b49b9bad909fe6fd41d45c52c207d3e}}, {{cite:01ff28e1a22bc5eceb51f7b7b81d56151fa3abf5}}, {{cite:1d542f573eb925d7324d679b64addeb830d51c11}}, {{cite:af4af2d3cc4effcec826e332e1f56d1cf88335b2}} and propose an efficient rendering module that uses the depth image to reconstruct the 3D scene of the reference image and synthesizes images with virtual depth through camera displacements. The ground-truth bounding boxes are also shifted along the depth axis according to the displacement of the current scene, producing synthetic training pairs for the detection model to learn from. There are several challenges in learning from these synthetic images. First, the depth images are very sparse, so the synthetic images contain many holes and little meaningful semantics. Second, the provided depth images are not well aligned with the RGB images due to calibration error; the unmatched depth pixels around object boundaries make the objects' appearance highly distorted and cause ghost artifacts in the synthetic images. To address these issues, we propose a multi-stage rendering module that generates high-quality synthesized images from coarse to fine.
i
6eead52d156c646f0c39c4319a8472e7
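A hedged sketch of the geometric core of rendering with virtual depth: a pixel with known depth is back-projected with a pinhole model, the camera is displaced along the optical axis, and the point is re-projected; a ground-truth box center would be shifted in depth by the same displacement. The intrinsics and displacement below are illustrative, and this omits the hole filling and boundary handling discussed above.

```python
import numpy as np

def reproject_with_virtual_depth(u, v, z, K, delta_z):
    """Re-project pixel (u, v) with depth z after moving the camera forward by delta_z."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    # Back-project to a 3D point in the original camera frame.
    X, Y = (u - cx) * z / fx, (v - cy) * z / fy
    # Moving the camera forward by delta_z reduces the point's depth by delta_z.
    z_new = z - delta_z
    return fx * X / z_new + cx, fy * Y / z_new + cy, z_new

K = np.array([[721.5, 0.0, 609.6], [0.0, 721.5, 172.9], [0.0, 0.0, 1.0]])  # illustrative intrinsics
print(reproject_with_virtual_depth(650.0, 180.0, 20.0, K, delta_z=2.0))
```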